# Image Completion via Dual-Path Cooperative Filtering
###### Abstract
Given the recent advances with image-generating algorithms, deep image completion methods have made significant progress. However, state-of-the-art methods typically provide poor cross-scene generalization, and generated masked areas often contain blurry artifacts. Predictive filtering is a method for restoring images, which predicts the most effective kernels based on the input scene. Motivated by this approach, we address image completion as a filtering problem. Deep feature-level semantic filtering is introduced to fill in missing information, while preserving local structure and generating visually realistic content. In particular, a Dual-path Cooperative Filtering (DCF) model is proposed, where one path predicts dynamic kernels, and the other path extracts multi-level features by using Fast Fourier Convolution to yield semantically coherent reconstructions. Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-the-art methods.
Pourya Shamsolmoali\({}^{1}\), Masoumeh Zareapoor\({}^{2}\), Eric Granger\({}^{3}\)\({}^{1}\)Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, China
\({}^{2}\)School of Automation, Shanghai Jiao Tong University, China
\({}^{3}\)Lab. d'imagerie, de vision et d'intelligence artificielle, Dept. of Systems Eng., ETS, Canada Image Completion, Image Inpainting, Deep Learning.
## 1 Introduction
The objective of image completion (inpainting) is to recover images by reconstructing missing regions. Images with inpainted details must be visually and semantically consistent; robust generation is therefore required of inpainting methods. Generative adversarial networks (GANs) [2, 18] or auto-encoder networks [16, 20, 21] are generally used in current state-of-the-art models [10, 11, 19] to perform image completion. In these models, the input image is encoded into a latent space, which is then decoded to generate a new image. The quality of inpainting is entirely dependent on the data and training approach, since the procedure ignores priors (for example, smoothness among nearby pixels or features). It should be noted that, unlike image generation, image inpainting has its own unique challenges. First, image inpainting requires that the completed images be clean, high-quality, and natural. These constraints separate image completion from synthesis tasks, which focus only on naturalness. Second, missing regions may appear in different forms, and the backgrounds could be from various scenes. Given these constraints, it is important for an inpainting method to have a strong capacity to generalize across missing regions. Recent generative networks have made substantial progress in image completion, but they still have a long way to go before they can address the aforementioned problems.
For instance, RFRNet [7] uses feature reasoning on an auto-encoder architecture for the task of image inpainting. As shown in Fig. 1, RFRNet produces some artifacts in output images. JPGNet and MISF [5, 8] were proposed to address the problems of generative inpainting [7, 12, 15] by reducing artifacts using image-level predictive filtering. Indeed, image-level predictive filtering reconstructs pixels from their neighbors, with filtering kernels computed adaptively based on the inputs. JPGNet is therefore able to retrieve the local structure while eliminating artifacts. As seen in Fig. 1, JPGNet smooths artifacts more effectively than RFRNet. However, many details may be lost, and the actual structures are not reconstructed. LaMa [19] is a recent image inpainting approach that uses Fast Fourier Convolution (FFC) [3] inside its ResNet-based LaMa-Fourier model to address the limited receptive field available for producing repeated patterns in the missing areas. Previously, researchers struggled with global self-attention [22] and its computational complexity, and were still unable to recover repeated man-made structures as effectively as LaMa. Nonetheless, as the missing regions get bigger and pass the object boundary, LaMa creates faded structures.
Figure 1: Examples of an image completed with our DCF model compared to baseline methods on the Paris dataset. DCF generates high-fidelity and more realistic images.
In [12], the authors adopt LaMa as the base network and capture various types of missing information by utilizing additional types of masks. They use more damaged images in the training phase to improve robustness; however, such a training strategy is unproductive. Transformer-based approaches [20, 23] have recently attracted considerable interest, although structures can only be estimated within a low-resolution coarse image and good textures cannot be produced beyond that point. Recent diffusion-based inpainting models [13, 17] have pushed the limits of generative models by using image information to sample the unmasked areas, or by using a score-based formulation to generate unconditional inpainted images; however, these approaches are not efficient in real-world applications.
To address this problem, we introduce a new neural network architecture that is motivated by the adaptability of predictive filtering and uses a large receptive field to produce repeating patterns. In particular, this paper makes two key contributions. First, semantic filtering is introduced to fill the missing image regions by expanding image-level filtering into feature-level filtering. Second, a Dual-path Cooperative Filtering (DCF) model is introduced that integrates two semantically connected networks - a kernel prediction network and a semantic image filtering network - to enhance image details.
The semantic filtering network supplies multi-level features to the kernel prediction network, while the kernel prediction network provides dynamic kernels to the semantic filtering network. In addition, for efficient reuse of high-frequency features, FFC [3] residual blocks are utilized in the semantic filtering network to better synthesize the missing regions of an image, leading to improved performance on textures and structures. By linearly integrating neighboring pixels or features, DCF is capable of reconstructing them with a smooth prior across neighbors. DCF therefore utilizes both semantic and pixel-level filling for accurate inpainting. As shown in Fig. 1, the proposed model produces high-fidelity and realistic images. Furthermore, in comparison with existing methods, our technique involves a dual-path network with a dynamic convolutional operation that modifies the convolution parameters based on different inputs, allowing for strong generalization. A comprehensive set of experiments conducted on three challenging benchmark datasets (CelebA-HQ [6], Places2 [24], and Paris StreetView [4]) shows that our proposed method yields better qualitative and quantitative results than state-of-the-art methods.
## 2 Methodology
Predictive filtering is a popular method for restoring images that is often used for image denoising tasks [14]. We define image completion as pixel-wise predictive filtering:
\[I_{c}=I_{m}\vartriangle T, \tag{1}\]
in which \(I_{c}\in\mathbb{R}^{(H\times W\times 3)}\) represents a complete image, and \(I_{m}\in\mathbb{R}^{(H\times W\times 3)}\) denotes the input image with missing regions from the ground truth image \(I_{gr}\in\mathbb{R}^{(H\times W\times 3)}\). The tensor \(T\in\mathbb{R}^{(H\times W\times N^{2})}\) has \(HW\) kernels for filtering each pixel, and the pixel-wise filtering operation is indicated by \({}^{\prime}\vartriangle^{\prime}\). Rather than using image-level filtering, we perform dual-path feature-level filtering to provide more contextual information. Our idea is that, even if a large portion of the image is destroyed, semantic information can be maintained. To accomplish semantic filtering, we initially use an auto-encoder network in which the encoder extracts features of the damaged image \(I_{m}\), and the decoder maps the extracted features to the complete image \(I_{c}\). Therefore, the encoder can be defined by:
\[f_{L}=\rho(I_{m})=\rho_{L}(...\rho_{l}(...\rho_{2}(\rho_{1}(I_{m})))), \tag{2}\]
in which \(\rho(.)\) denotes the encoder and \(f_{l}\) represents the feature taken from the \(l^{th}\) layer, \(f_{l}=\rho_{l}(f_{l-1})\); in particular, \(f_{L}\) is the output of the last layer of \(\rho(.)\).
In our encoder network, to create remarkable textures and semantic structures within the missing image regions, we adopt Fast Fourier Convolutional Residual Blocks (FFC-Res) [19]. The FFC-Res shown in Fig. 2 (b) has two FFC layers. The channel-wise Fast Fourier Transform (FFT) [1] is the core of the FFC layer [3] to provide a whole image-wide receptive field. As shown in Fig. 2 (c), the FFC layer divides channels into two branches: a) a local branch, which utilizes standard convolutions to capture spatial information, and b) a global branch, which employs a Spectral Transform module to analyze global structure and capture long-range context.
Figure 2: Overview of the proposed architecture. (a) Our proposed DCF inpainting network with (b) FFC residual block to have a larger receptive field. (c) and (d) show the architecture of the FFC and Spectral Transform layers, respectively.
Outputs of the local and global branches are then combined. Two Fourier Units (FU) are used by the Spectral Transform layer (Fig. 2 (d)) in order to capture both global and semi-global features. The FU on the left captures the global context, while the Local Fourier Unit on the right takes in one-fourth of the channels and focuses on semi-global image information. In an FU, the spatial structure is decomposed into image frequencies using a Real FFT2D operation, a convolution is applied in the frequency domain, and the structure is finally recovered via an Inverse FFT2D operation. Based on this encoder, the decoder of our network is defined as:
\[I_{c}=\rho^{-1}(f_{L}), \tag{3}\]
in which \(\rho^{-1}(.)\) denotes the decoder. Then, similar to image-level filtering, we perform semantic filtering on extracted features according to:
\[\hat{f}_{l}[r]=\sum_{s\in\mathcal{N}_{\kappa}}T_{\kappa}^{l}[s-r]f_{l}[s], \tag{4}\]
in which \(r\) and \(s\) denote pixel coordinates, and \(\mathcal{N}_{\kappa}\) consists of the \(N^{2}\) closest pixels. \(T_{\kappa}^{l}\) signifies the kernel for filtering the \(\kappa^{th}\) element of \(f_{l}\) through its neighbors \(\mathcal{N}_{\kappa}\), and the matrix \(T_{l}\) gathers all of these element-wise kernels. Following this, Eq. (2) is modified by substituting \(f_{l}\) with \(\hat{f}_{l}\). In addition, we use a predictive network to predict the kernels, so that they adapt to different scenes.
\[T_{l}=\varphi_{l}(I_{m}), \tag{5}\]
in which \(\varphi_{l}(.)\) denotes the predictive network that generates \(T_{l}\). In Fig. 2 (a) and Table 1, we illustrate our image completion network, which consists of \(\rho(.)\), \(\rho^{-1}(.)\), and \(\varphi_{l}(.)\). The proposed network is trained using the \(L_{1}\) loss, perceptual loss, adversarial loss, and style loss, as in predictive filtering.
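To make the dual-path operation concrete, the following PyTorch-style sketch illustrates the three ingredients above: a Fourier Unit as used inside an FFC block, a kernel-prediction head for Eq. (5), and the pixel-wise application of the predicted kernels from Eq. (4). It is a minimal illustration under our own assumptions about layer sizes and kernel normalization, not the authors' released implementation.

```python
# Minimal sketch of the dual-path model (Eqs. 1, 4, 5): a kernel-prediction path
# and a feature path with an FFC-style Fourier Unit.  Layer sizes, the softmax
# normalization of the kernels, and the tiny predictor are illustrative
# assumptions, not the released DCF code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FourierUnit(nn.Module):
    """Convolution in the frequency domain, the core of an FFC global branch."""

    def __init__(self, channels):
        super().__init__()
        # Real FFT yields complex coefficients; stack real/imag as 2*C channels.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")          # (B, C, H, W//2+1), complex
        spec = torch.cat([spec.real, spec.imag], dim=1)  # (B, 2C, H, W//2+1)
        spec = F.relu(self.conv(spec))
        real, imag = spec.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


class KernelPredictor(nn.Module):
    """Predict an N x N filtering kernel per pixel from the damaged image (Eq. 5)."""

    def __init__(self, in_ch=3, n=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, n * n, 3, padding=1),
        )

    def forward(self, i_m):
        # Softmax over the N^2 taps keeps each per-pixel kernel normalized
        # (a design choice for this sketch only).
        return torch.softmax(self.net(i_m), dim=1)        # (B, N^2, H, W)


def pixelwise_filter(feat, kernels, n=3):
    """Eq. (4): f_hat[r] = sum over neighbors s of T[s - r] * f[s]."""
    b, c, h, w = feat.shape                               # kernels must share H, W
    patches = F.unfold(feat, kernel_size=n, padding=n // 2)
    patches = patches.view(b, c, n * n, h, w)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)    # (B, C, H, W)


# Usage sketch: filter encoder features of a damaged image with predicted kernels.
i_m = torch.rand(1, 3, 256, 256)                          # damaged input I_m
feat = torch.rand(1, 64, 256, 256)                        # stand-in encoder features f_l
kernels = KernelPredictor()(i_m)
filtered = pixelwise_filter(feat + FourierUnit(64)(feat), kernels)
```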
## 3 Experiments
In this section, the performance of our DCF model is compared to state-of-the-art methods on the image completion task. Experiments are carried out on three datasets, CelebA-HQ [6], Places2 [24], and Paris StreetView [4], with images at \(256\times 256\) resolution. For all datasets, we use the standard training and testing splits. In both training and testing, we use the diverse irregular masks (20%-40% of the image occupied by holes) provided by PConv [9] as well as regular center masks. The code is provided at _DCF_.
**Performance Measures:** The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Frechet inception distance (FID) are used as the evaluation metrics.
### Implementation Details
Our proposed model's framework is shown in Table 1.
**Loss functions.** We follow [15] and train the networks using four loss functions, including \(L_{1}\) loss (\(\ell_{1}\)), adversarial loss (\(\ell_{A}\)), style loss (\(\ell_{S}\)), and perceptual loss (\(\ell_{P}\)), to obtain images with excellent fidelity in terms of quality as well as semantic levels. Therefore, we can write the reconstruction loss (\(\ell_{R}\)) as:
\[\ell_{R}=\lambda_{1}\ell_{1}+\lambda_{a}\ell_{A}+\lambda_{p}\ell_{P}+\lambda_{s}\ell_{S}. \tag{6}\]
| Layer | Feature extracting network: In. | Feature extracting network: Out/size | Predicting network: In. | Predicting network: Out/size |
|---|---|---|---|---|
| conv(7,3,64) | \(I_{m}\) | \(f_{1}\) / 256 | \(I_{m}\) | \(e_{1}\) / 256 |
| conv(4,64,128) | \(f_{1}\) | \(f_{2}\) / 128 | \(e_{1}\) | \(e_{2}\) / 128 |
| pooling | \(f_{2}\) | \(f_{2}\) / 64 | \(e_{2}\) | \(e_{2}\) / 64 |
| conv(4,128,256) | \(f_{2}\) | \(f_{3}\) / 64 | \([f_{2}^{\prime},e_{2}^{\prime}]\) | \(e_{3}\) / 64 |

Table 1: Layer configuration of the feature extracting and kernel predicting networks.
in which \(\lambda_{1}=1\), \(\lambda_{a}=\lambda_{p}=0.1\), and \(\lambda_{s}=250\). More details on the loss functions can be found in [15].
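As a small illustration of Eq. (6), the weighted combination could be written as below. The adversarial, perceptual, and style terms are assumed to be supplied as callables following [15]; only the \(L_{1}\) term and the weights quoted above are spelled out here.

```python
# Sketch of the combined objective in Eq. (6).  adv_loss, perc_loss, and
# style_loss are assumed to be provided elsewhere (e.g. following [15]);
# only the weighting from the text is shown.
def reconstruction_loss(pred, target, adv_loss, perc_loss, style_loss,
                        lam_1=1.0, lam_a=0.1, lam_p=0.1, lam_s=250.0):
    l1 = (pred - target).abs().mean()
    return (lam_1 * l1
            + lam_a * adv_loss(pred)
            + lam_p * perc_loss(pred, target)
            + lam_s * style_loss(pred, target))
```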
**Training setting.** We use Adam as the optimizer with a learning rate of \(10^{-4}\) and the standard values for its other hyperparameters. The network is trained for 500k iterations with a batch size of 8. The experiments are conducted on the same machine with two RTX-3090 GPUs.
### Comparisons to the Baselines
**Qualitative Results.** The proposed DCF model is compared to relevant baselines such as RFRNet [7], JPGNet [5], and LaMa [19]. Fig. 3 and Fig. 4 show the results for the Places2 and CelebA-HQ datasets, respectively. In comparison to JPGNet, our model preserves recurrent textures substantially better, as shown in Fig. 3. Since JPGNet lacks attention-related modules, high-frequency features cannot be successfully utilized due to its limited receptive field. Using FFC modules, our model expands the receptive field and successfully projects source textures onto newly generated structures. Furthermore, our model generates superior object boundaries and structural detail compared to LaMa. Large missing regions spanning wide pixel ranges prevent LaMa from hallucinating adequate structural information, whereas ours uses the advantages of the coarse-to-fine generator to produce more precise objects with better boundaries. Fig. 4 shows more qualitative evidence. When tested on facial images, RFRNet and LaMa produce faded forehead hair, showing that these models are not robust enough. The results of our model, nevertheless, have more realistic textures and plausible structures, such as forehead shape and fine-grained hair.
**Quantitative Results.** On three datasets, we compare our proposed model with other inpainting models. The results shown in Table 2 lead to the following conclusions: 1) Compared to other approaches, our method outperforms them in terms of PSNR, SSIM, and FID scores for most datasets and mask types. Specifically, we achieve 9% higher PSNR than RFRNet on the Places2 dataset with irregular masks, indicating that our model has advantages over existing methods. 2) We observe similar results when analyzing the FID. On the CelebA-HQ dataset, our method achieves a relative FID reduction of 2.5% compared to LaMa under the center mask. This result indicates our method's remarkable success in perceptual restoration. 3) The consistent advantages across several datasets and mask types illustrate that our model is highly generalizable.
## 4 Conclusion
Dual-path Cooperative Filtering (DCF) was proposed in this paper for high-fidelity image inpainting. A predictive network is proposed for predictive filtering at both the image level and the deep feature level. In particular, image-level filtering is used to recover details, whereas deep feature-level filtering is used to complete semantic information. Moreover, FFC residual blocks are adopted in the feature-level filtering path to recover semantic information, resulting in high-fidelity outputs. The experimental results demonstrate that our model outperforms state-of-the-art inpainting approaches.
#### Acknowledgments
This research was supported in part by NSFC China. The corresponding author is Masoumeh Zareapoor.
| Metric | Method | CelebA-HQ Irregular | CelebA-HQ Center | Places2 Irregular | Places2 Center | Paris StreetView Irregular | Paris StreetView Center |
|---|---|---|---|---|---|---|---|
| PSNR↑ | RFRNet [7] | 26.63 | 21.32 | 22.58 | 18.27 | 23.81 | 19.26 |
| | JPGNet [5] | 25.54 | 22.71 | 23.93 | 19.22 | 24.79 | 20.63 |
| | TFill [23] | 26.84 | 23.65 | 24.32 | 20.49 | 25.46 | 21.85 |
| | LaMa [19] | 27.31 | 24.18 | **25.27** | 21.67 | 25.84 | 22.59 |
| | GLaMa [12] | 28.17 | 25.13 | 25.08 | 21.83 | 26.23 | 22.87 |
| | DCF (ours) | **28.34** | **25.62** | 25.19 | **22.30** | **26.57** | **23.41** |
| SSIM↑ | RFRNet [7] | 0.934 | 0.912 | 0.819 | 0.801 | 0.862 | 0.849 |
| | JPGNet [5] | 0.927 | 0.904 | 0.825 | 0.812 | 0.873 | 0.857 |
| | TFill [23] | 0.933 | 0.907 | 0.826 | 0.814 | 0.870 | 0.857 |
| | LaMa [19] | 0.939 | 0.911 | 0.829 | 0.816 | 0.871 | 0.856 |
| | GLaMa [12] | 0.941 | 0.925 | **0.833** | 0.817 | 0.872 | 0.858 |
| | DCF (ours) | **0.943** | **0.928** | 0.832 | **0.819** | **0.876** | **0.861** |
| FID↓ | RFRNet [7] | 17.07 | 17.83 | 15.56 | 16.47 | 40.23 | 41.08 |
| | JPGNet [5] | 13.92 | 15.71 | 15.14 | 16.23 | 37.61 | 39.24 |
| | TFill [23] | 13.18 | 13.87 | 15.48 | 16.24 | 33.29 | 34.41 |
| | LaMa [19] | 11.28 | 12.95 | 14.73 | 15.46 | 32.30 | 33.26 |
| | GLaMa [12] | 11.21 | 12.91 | 14.70 | 15.35 | 32.12 | 33.07 |
| | DCF w.o. Sem-Fil | 14.34 | 15.24 | 17.56 | 18.11 | 42.57 | 44.38 |
| | DCF w.o. FFC | 13.52 | 14.26 | 15.83 | 16.98 | 40.54 | 41.62 |
| | DCF (ours) | **11.13** | **12.63** | **14.52** | **15.09** | **31.96** | **32.85** |
Table 2: Ablation study and quantitative comparison of our proposed and state-of-the-art methods on center and free-form masked images from the CelebA-HQ, Places2, and Paris StreetView datasets.
# High Sensitivity Beamformed Observations of the Crab Pulsar's Radio Emission
###### Abstract
We analyzed four epochs of beamformed EVN data of the Crab Pulsar at \(1658.49\rm\,MHz\). With the high sensitivity resulting from resolving out the Crab Nebula, we are able to detect even the faint high-frequency components in the folded profile. We also detect a total of \(65951\) giant pulses, which we use to investigate the rates, fluence, phase, and arrival time distributions. We find that for the main pulse component, our giant pulses represent about 80% of the total flux. This suggests we have a nearly complete giant pulse energy distribution, although it is not obvious how the observed distribution could be extended to cover the remaining 20% of the flux without invoking large numbers of faint bursts for every rotation. Looking at the difference in arrival time between subsequent bursts in single rotations, we confirm that the likelihood of finding giant pulses close to each other is increased beyond that expected for randomly occurring bursts - some giant pulses consist of causally related microbursts, with typical separations of \(\sim 30\rm\ \mu s\) - but also find evidence that at separations \(\gtrsim\!100\rm\ \mu s\) the likelihood of finding another giant pulse is suppressed. In addition, our high sensitivity enabled us to detect weak echo features in the brightest pulses (at \(\sim\!0.4\%\) of the peak giant pulse flux), which are delayed by up to \(\sim\!300\rm\ \mu s\).
Rebecca Lin and Marten H. van Kerkwijk

Pulsars (1306) -- Radio bursts (1339) -- Very long baseline interferometry (1769)

## 1 Introduction
Investigation of the emission from the Crab Pulsar is complicated by propagation effects along the line of sight, especially at lower frequencies, \(\lesssim 2\ \mathrm{GHz}\). While dispersion can be removed using coherent de-dispersion (either during recording, or afterwards with baseband data), scattering effects are difficult to remove. This includes echoes due to propagation in the Crab Nebula itself, which sometimes are bright and obvious (Backer et al., 2000; Lyne et al., 2001), but can also be quite faint (Driessen et al., 2019), making it difficult to disentangle them from microbursts without having a good pulse sample to look for repeating structure.
Another complication in studying the emission of the Crab Pulsar is the radio-bright nebula in which the pulsar resides. This contributes noise and hence many previous studies relied on long integrations to observe both the weaker pulse components and echoes in the average profile. But the contribution to the noise can be reduced by resolving the nebula, using large dishes or arrays, such as the VLA, Arecibo, and Westerbork (Moffett & Hankins, 1996; Cordes et al., 2004; Karuppusamy et al., 2010; Lewandowska et al., 2022).
In this paper, we use the European VLBI Network (EVN) to resolve out the Crab Nebula and obtain high sensitivity data. In Section 2, we describe our observations and data reduction, and in Section 3, we present the resulting pulse profiles and the components that are detectable at our high sensitivity. We turn to an analysis of GPs in Section 4, investigating their rates, fluence, phase, and arrival time distributions, as well as weak echoes seen in the brightest GPs. We summarize our findings in Section 5.
## 2 Observations and Data Reduction
We analyze observations of the Crab Pulsar taken by the EVN, projects EK036 A-D, at four epochs between 2015 Oct and 2017 May (see Table 1). Throughout these observations, calibrator sources were also observed resulting in breaks in our data. While many dishes participated in these observations, for our analysis we only use telescope data that had relatively clean signals across the frequency range of \(1594.49-1722.49\ \mathrm{MHz}\) in both circular polarizations. At each single dish, real-sampled data were recorded in either 2 bit MARK 5B or VDIF format1, covering the frequency range in either eight contiguous \(16\ \mathrm{MHz}\) wide bands or four contiguous \(32\ \mathrm{MHz}\) wide bands.
Footnote 1: For specifications of MARK5B and VDIF, see [https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/](https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/) and [https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf](https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf), respectively.
For these datasets, single dish data were processed and then combined coherently to form a tied-array beam as described in Lin et al. (2023). The resulting RFI-removed, normalized, de-dispersed (using dispersion measures (DMs) listed in Table 1), parallactic angle corrected, and phased baseband data were squared to form intensity data. As in Lin et al. (2023), we estimate the system equivalent flux density (SEFD) for the phased EVN array as \((S_{\text{CN}}+\langle S_{\text{tel}}\rangle)/N_{\text{tel}}\approx 140-160\ \mathrm{ Jy}\), where \(S_{\text{CN}}\approx 833\ \mathrm{Jy}\) is the SEFD of the Crab Nebula at our observing frequency (Bietenholz et al., 1997), \(\langle S_{\text{tel}}\rangle\simeq 300\ \mathrm{Jy}\) is the average nominal SEFD of the telescopes2 and \(N_{\text{tel}}=7\ \mathrm{or}\ 8\) is the number of telescopes used. By combining the single dishes into a synthesized beam, we resolve out the radio-bright Crab Nebula and increase our sensitivity, thus allowing us to investigate the weaker radio emission of the Crab Pulsar.
Footnote 2: [http://old.evlbi.org/cgi-bin/EVNcalc](http://old.evlbi.org/cgi-bin/EVNcalc).
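As a quick check of the quoted sensitivity, the SEFD estimate above is simple arithmetic; the short sketch below, using only the values from the text, reproduces the \(\approx 140-160\ \mathrm{Jy}\) range.

```python
# Order-of-magnitude check of the phased-array SEFD quoted above:
# SEFD = (S_CN + <S_tel>) / N_tel, with S_CN = 833 Jy and <S_tel> ~= 300 Jy.
s_cn, s_tel_avg = 833.0, 300.0
for n_tel in (8, 7):
    print(n_tel, "telescopes:", round((s_cn + s_tel_avg) / n_tel), "Jy")
# -> 8 telescopes: 142 Jy; 7 telescopes: 162 Jy, i.e. the ~140-160 Jy range.
```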
Table 1: Observation and Giant Pulse Log, listing for each epoch the observation code and date, the exposure time \(t_{\mathrm{exp}}\) (h), the telescopes used, the DM, and the numbers of giant pulses detected.
## 3 Pulse Profiles
For each of the phased EVN datasets, we create folded pulse profiles using polyco files generated with tempo2(Hobbs and Edwards, 2012) from the monthly Jodrell Bank Crab Pulsar ephemerides3(Lyne et al., 1993) and DM from Table 1. We averaged over all frequencies and used \(512\) phase bins, rotating in phase such that the MP is at phase \(0\). We show the resulting profiles in Figure 1, with each profile scaled to its maximum to ease comparison. With our high sensitivity, we can see all five pulse components expected from the multifrequency overview of Hankins et al. (2015), corresponding to the LFC, MP, IP, HFC1 and HFC2 (with the latter two detected at \(\sim\!1.66\ \mathrm{GHz}\) for the first time).
Footnote 3: [http://www.jb.man.ac.uk/~pulsar/crab.html](http://www.jb.man.ac.uk/~pulsar/crab.html).
We fit the pulse components in the EK036 datasets with five Gaussians to look for possible changes, both between our epochs and relative to the compilation from Hankins et al. (2015). Our fitted parameters are presented in Table 2, together with the values inferred from Hankins et al. (2015). One sees that the results for our four observations are all consistent. At \(1.4\ \mathrm{GHz}\), Lyne et al. (2013) found that the separations between the MP and IP and between the MP and LFC increase at a rate of \(0\fdg 5\pm 0\fdg 2\) per century and \(11\arcdeg\pm 2\arcdeg\) per century, respectively. Using these rates, we expect pulse phase changes for the IP and LFC of \(\sim\!0\fdg 008\) and \(\sim\!0\fdg 17\), respectively, which are not detectable within our uncertainties.
Comparing with Hankins et al. (2015), we find good agreement in pulse phase for all components (though now we do need to take into account the drift in pulse phase). We noticed, however, that while the widths of our LFC, HFC1 and HFC2 are consistent with those given by Hankins et al. (2015), the widths of the MP and IP seem smaller, even if they are still within the nominal, rather large uncertainties of Hankins et al. (2015). Looking in more detail at their Figure 3 with measurements, one sees considerable scatter for the MP and IP, even though those strong, narrow peaks should be the easiest to measure. This might suggest that some profiles were slightly smeared (e.g., because the data were not dedispersed to exactly the right DM, which is known to vary for the Crab Pulsar, or because of changes in scattering timescale at lower frequencies, see McKee et al., 2018). For a comparison with recent data, we estimated widths from the \(2-4\) and \(4-6\ \mathrm{GHz}\) pulse profiles in Figure 1 of Lewandowska et al. (2022), which were taken using the VLA in D configuration to resolve out the Crab Nebula and thus have high signal-to-noise ratio; we find these are all consistent with ours.
Figure 1: Folded pulse profile of the Crab Pulsar at \(1658.49\ \mathrm{MHz}\) from EK036 observations in \(512\) phase bins centered on the MP. At this frequency, 5 components: LFC, MP, IP, HFC1 and HFC2 are visible. In the left panel, the profiles are normalized to their peak MP component. As the HFC1 and HFC2 components (indicated by arrows) are very faint, we show the grey region of the left panel zoomed in by a factor of \(15\) in the right panel, with vertical lines marking the peak of these components.
At lower frequencies, the pulse profiles often show echo features (e.g., Driessen et al., 2019). At our frequencies, those are expected to be too weak at delays where they might be seen in the folded pulse profile, and indeed we see none. However, at frequencies like ours, echoes can still be seen in individual pulses. For instance, at \(1.4\;\mathrm{GHz}\), Crossley et al. (2004) saw that individual bright pulses all had an echo delayed by \(\sim\!50\;\mathrm{\mu s}\) (which had no counterpart at \(4.9\;\mathrm{GHz}\)). From aligning GPs before stacking them in our datasets, Lin et al. (2023) also saw hints of echo features within \(\sim\!25\;\mathrm{\mu s}\) of the peaks of GPs in EK036 B and D. In Section 4.6, we confirm echoes in our data using a more careful analysis, finding that for EK036 D faint echoes are visible out to \(\sim\!300\;\mathrm{\mu s}\).
## 4 Giant Pulses
### Search
In Lin et al. (2023), we searched for GPs by flagging peaks above \(8\sigma\) in a \(16\;\mathrm{\mu s}\) wide running average of the intensity time stream. While we reliably found GPs, the long time window meant we could not distinguish between bursts arriving in quick succession within that time window. Hence, the previous technique was unsuitable for one of our goals, of measuring arrival time differences between bursts, including between the microbursts that GPs sometimes are composed of. Below, we describe a revised technique, which allows us to more reliably identify multiple bursts (see Figure 2). Unsurprisingly, with our new technique we detected more multiple bursts than we had previously, as can be seen by comparing the numbers listed in Section 6.3 of Lin et al. (2023) with those in Table 3.
For every pulsar period in the EK036 dataset, we take \(2.0\;\mathrm{ms}\) snippets of baseband data centered at the MP and
| Pulse Comp. | Obs./Ref. | Amplitude (%) | Pulse Phase (deg.) | FWHM (deg.) |
|---|---|---|---|---|
| LFC | A | 3.6(3) | \(-38.0(3)\) | 7.5(6) |
| | B | 3.35(17) | \(-37.67(19)\) | 7.7(4) |
| | C | 3.7(2) | \(-37.2(3)\) | 7.7(6) |
| | D | 3.9(2) | \(-37.8(2)\) | 8.1(5) |
| | H15 | \(\dots\) | \(-35.78(14)\) | 7.2(12) |
| MP | A | | | 2.786(11) |
| | B | | | 2.708(7) |
| | C | | | 2.756(11) |
| | D | | | 2.836(9) |
| | H15 | | | 3.9(11) |
| IP | A | 15.2(4) | 145.38(4) | 3.48(10) |
| | B | 15.2(2) | 145.28(3) | 3.59(7) |
| | C | 15.3(4) | 145.25(4) | 3.46(10) |
| | D | 14.4(3) | 145.28(4) | 3.59(8) |
| | H15 | \(\dots\) | 145.25(4) | 5.4(11) |
| HFC1 | A | 0.58(13) | 203(3) | 28(7) |
| | B | 0.88(9) | 198.4(13) | 25(3) |
| | C | 0.68(12) | 194(3) | 34(7) |
| | D | 0.94(11) | 196.2(15) | 36(5) |
| | H15 | \(\dots\) | 198.2(8) | 25(5) |
| HFC2 | A | 1.5(2) | 259.7(8) | 11.8(19) |
| | B | 1.19(14) | 259.2(7) | 11.7(16) |
| | C | 1.23(19) | 257.7(9) | 12(2) |
| | D | 1.51(15) | 259.8(7) | 14.8(16) |
| | H15 | \(\dots\) | 259.1(4) | 11.6(12) |

Note. – Amplitudes and phases are relative to the MP. H15 refers to Hankins et al. (2015), and the corresponding values are from evaluating the fits presented in their Tables 2 and 3 at our central observing frequency of \(1658.49\;\mathrm{MHz}\). The phases for the LFC and IP have been extrapolated to MJD 57607 (midway between EK036 A and D) using \(d\phi/dt\) values from Lyne et al. (2013). Numbers in parentheses are \(1\sigma\) uncertainties in the last digit.

Table 2: Properties of the Pulse Profile Components.
Figure 2: Sample MP pulse rotations with GPs as detected by our algorithm (see Section 4.1 for details), shown at a time resolution of \(1.25\;\mathrm{\mu s}\). _Top_: Single pulse with scattering tail. _Middle_: Two pulses, each with their own scattering tail. _Bottom_: A profile showing the difficulties inherent in classifying pulses: our algorithm found three pulses, but if another algorithm were to classify this as two or four pulses, that would also seem reasonable.
IP component phase windows (roughly \(2\) times the size of the pulse component determined from the folded pulse profile) and create pulse intensity stacks for each component4. We average these stacks across the eight frequency bands and bin over 10 time samples, or \(0.625~{}\mu\)s, a value chosen to be large enough for a reliable GP detection yet well below the scattering timescale of \(\sim\)\(5~{}\mu\)s during these observations (Lin et al., 2023). To detect GPs, we first subtract the off-pulse region (determined from the \(0.5~{}\mathrm{ms}\) region on either side of each pulse stack), then filter with a uniform filter of size \(5\) (\(3.125~{}\mu\)s), and finally record all samples above a detection threshold of \(5\sigma\).
Footnote 4: We only search for GPs inside these windows since Lin et al. (2023) found none outside for the same dataset.
To turn these sets of above-the-noise locations into detections of individual GPs, we use the following three-step process5. First, we connect detections within \(8\) samples (\(5~{}\mu\)s, i.e., of order the scattering time), since those are likely related. Second, we remove detections spanning \(4\) samples (\(2.5~{}\mu\)s) or less, since these are likely spurious. Third, we increase the width of a detection by \(4\) samples (\(2.5~{}\mu\)s) on either side, mostly to ensure that if we integrate over the mask, we will capture most of the flux independent of pulse strength. With this procedure, the minimum final pulse width is \(8.125~{}\mu\)s, slightly larger than the scattering timescale, and we confidently detect pulses above a threshold of \(\sim\)\(0.15~{}\mathrm{kJy}~{}\mu\)s. The brightest GP we detect has a fluence of \(\sim 560~{}\mathrm{kJy}~{}\mu\)s. With our relatively high initial detection threshold, we do not find any GPs outside our pulse windows, suggesting that we have no false detections in our sample. Nevertheless, as can be seen from the overall pulse statistics in Table 1, we find many GPs, about \(2-3\) per second or about one for every dozen pulsar rotations.
Footnote 5: Using the binary_closing, binary_opening and binary_dilation functions, respectively, from scipy’s multidimensional image processing functions (Virtanen et al., 2020).
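A minimal sketch of this three-step search, using the scipy.ndimage routines named in the footnote, is given below. The variable names, the off-pulse noise normalization, and the exact structuring-element sizes are our own illustrative choices; the thresholds and bin widths follow the text.

```python
# Minimal sketch of the three-step GP search on one band-averaged intensity
# snippet (Section 4.1).  Names, the off-pulse normalization, and the exact
# structuring-element sizes are illustrative; thresholds and bin widths follow
# the text.
import numpy as np
from scipy.ndimage import (uniform_filter1d, binary_closing,
                           binary_opening, binary_dilation)


def find_giant_pulses(intensity, off_pulse, bin_samples=10,
                      filter_size=5, threshold=5.0):
    """Return a boolean mask of GP samples at 0.625 us resolution."""
    def rebin(x):  # average groups of `bin_samples` raw samples
        return x[:x.size // bin_samples * bin_samples].reshape(
            -1, bin_samples).mean(axis=1)

    binned, off = rebin(intensity), rebin(off_pulse)
    snr = (binned - off.mean()) / off.std()
    # Running mean over 5 bins (3.125 us), then a 5 sigma threshold.
    mask = uniform_filter1d(snr, size=filter_size) > threshold
    # 1) connect detections within ~8 bins (~ the 5 us scattering time),
    # 2) drop detections spanning <= 4 bins,
    # 3) widen each detection by 4 bins on either side.
    mask = binary_closing(mask, structure=np.ones(8, dtype=bool))
    mask = binary_opening(mask, structure=np.ones(5, dtype=bool))
    mask = binary_dilation(mask, structure=np.ones(9, dtype=bool))
    return mask
```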
In some pulse rotations, we detect more than one distinct GP, where "distinct" means that the pulse is separated by at least \(5~{}\mu\)s (roughly the scattering timescale) from another pulse at our detection threshold. Here, we note that whether a GP is detected as single or multiple depends on the detection threshold: a GP classified as a single one at our threshold might be classified as separated at a higher threshold if it has two bright peaks with some flux in between (e.g., because the scattering tail of the first peak overlaps with the start of the next one, or a weaker burst fills in the space in between). This dependence on detection threshold may explain why Bhat et al. (2008) found no pulses wider than \(10~{}\mu\)s, as they took a high detection cutoff of \(3~{}\mathrm{kJy}~{}\mu\)s. This kind of arbitrariness seems unavoidable given the variety in pulse shapes that we see; it often is a rather subjective decision what to take as a single burst. To give a sense, we show in Figure 2 an example of a pulse rotation with a single burst as well as two examples of rotations with multiple bursts. In Section 4.5, we estimate the fraction of multiple bursts that is causally related from the statistics of pulse separations.
### Rates
With the high sensitivity of the phased EVN array, we detected a total of \(65951\) GPs over \(7.32~{}\mathrm{hr}\), implying an average detection rate of \(2.5~{}\mathrm{s}^{-1}\). From Table 1, one sees that the rates are not the same for each epoch. Comparable detection rates are seen for both MP and IP GPs in EK036 A and C, but those are about a factor \(2\) smaller than the rates for EK036 B and D (which are comparable to each other).
Similar changes in detection rate were found for bright pulses by Lundgren et al. (1995) at \(800~{}\mathrm{MHz}\), Bera & Chengalur (2019) at \(1330~{}\mathrm{MHz}\), and by Kazantsev et al. (2019) at \(111~{}\mathrm{MHz}\). Lundgren et al. (1995) suggests that almost
Figure 3: GP pulse detection rates in each EK036 observation. Times when the telescope was not observing the Crab Pulsar are shaded grey. The MP (blue) and IP (orange) detection rates appear to scale together and are relatively constant across each observation.
certainly, these are due to changes in the scattering screen, which are known to cause changes in the scattering time on similar timescales and are expected to cause changes in magnification as well. To verify that there are no variations at shorter timescales, we calculated rates at roughly \(5\,\mathrm{min}\) intervals. As can be seen in Figure 3, we find that in a given epoch, the rates are indeed steady.
### Fluences
The fluence distribution of the Crab Pulsar's GPs is typically described by power-law approximations to the reverse cumulative distribution,
\[N_{\mathrm{GP}}(E>E_{0})=CE_{0}^{\alpha}, \tag{1}\]
where \(\alpha\) is the power-law index, \(C\) a proportionality constant, and \(E_{0}\) the GP fluence such that \(N_{\mathrm{GP}}(E>E_{0})\) is the occurrence rate of GPs above \(E_{0}\). For our data, one sees in Figure 4, that for all observations the distributions indeed appear power-law like at high fluence, with \(\alpha\approx-2.0\) and \(-1.6\) for MP and IP, respectively. These values are roughly consistent with values found at similar frequencies: e.g., Popov & Stappers (2007) find \(-1.7\) to \(-3.2\) for MP GPs and \(-1.6\) for IP GPs at \(1197\,\mathrm{MHz}\), and Majid et al. (2011) finds \(\alpha=-1.9\) for the combined MP and IP distribution at \(1664\,\mathrm{MHz}\).
However, as noted by Hankins et al. (2015) already, the power-law indices show large scatter and should be taken as roughly indicative only, showing, e.g., that at higher frequencies, very bright pulses are relatively rare. Indeed, in our data, like in more sensitive previous studies (e.g., Lundgren et al., 1995; Popov & Stappers, 2007; Bhat et al., 2008; Karuppusamy et al., 2010), the fluence distribution clearly flattens at lower fluences. At the very low end, this is because our detection method misses more pulses, but the changes above \(\sim 0.2\,\mathrm{kJy}\,\mathrm{\mu s}\) are real. This turnover may at least partially explain why a variety of power-law indices was found previously, as the measured index will depend on what part of the fluence distribution is fit (which will depend also on the magnification by scattering), as well as why for very high fluences, well away from the turn-over, the power-law index seems fairly stable (Bera & Chengalur, 2019).
Comparing the distributions for the different epochs, one sees that they are very similar except for a shift left or right in the figure. This confirms that the differences in rates seen between the epochs are due to differences in magnification caused by scintillation (and not due to the Crab Pulsar varying the rate at which pulses are emitted, which would, to first order, shift the distributions up and down).
As the fluence distributions looked roughly parabolic in log-log space, we also show cumulative log-normal distributions in Figure 4, of the form,
\[N_{\mathrm{GP}}(E>E_{0})=\frac{A}{2}\left[\mathrm{erfc}\left(\frac{\ln E_{0}- \mu}{\sigma\sqrt{2}}\right)\right], \tag{2}\]
where \(A\) is a scale factor, \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\ln E_{0}\), and \(\mathrm{erfc}\) is the complementary error function. One sees that these describe the observed cumulative distributions quite well.
Figure 4: Reverse cumulative GP fluence distribution showing the occurrence rates of GPs. For comparison, power-law distributions (solid black lines) and log-normal distributions (dashed black line) are shown, with indices \(\alpha\) and widths \(\sigma\) as listed in the legend.
If the intrinsic distributions were log-normal, it would imply that especially for the MP, most of the flux is already captured and that the total rate of GPs is not much larger than our detection rate. For the log-normal distribution shown in Figure 4, for the MP, \(A=2.7\ \mathrm{s}^{-1}\) and the mean GP fluence is \(\langle E\rangle=\exp(\mu+\frac{1}{2}\sigma^{2})=1.2\ \mathrm{kJy\,\mu s}\) and only 1.5% of the total flux is below \(0.15\ \mathrm{kJy\,\mu s}\), while for the IP, \(A=1.6\ \mathrm{s}^{-1}\) and \(\langle E\rangle=0.24\ \mathrm{kJy\,\mu s}\), and 13% of the flux is below.
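For concreteness, the reverse cumulative rate and the two model forms of Eqs. (1) and (2) can be evaluated as in the sketch below; the fluence array and the parameter values are placeholders rather than the measured ones.

```python
# Sketch of the reverse cumulative fluence distribution and the model forms of
# Eqs. (1) and (2).  The fluence array and parameter values are placeholders.
import numpy as np
from scipy.special import erfc


def reverse_cumulative_rate(fluences, t_obs_s):
    """Occurrence rate N(E > E0), in s^-1, evaluated at each observed fluence."""
    e0 = np.sort(fluences)
    n_above = np.arange(e0.size, 0, -1)     # number of pulses at or above E0
    return e0, n_above / t_obs_s


def powerlaw_rate(e0, c, alpha):            # Eq. (1)
    return c * e0 ** alpha


def lognormal_rate(e0, a, mu, sigma):       # Eq. (2)
    return 0.5 * a * erfc((np.log(e0) - mu) / (sigma * np.sqrt(2.0)))


def mean_fluence(mu, sigma):                # <E> = exp(mu + sigma^2 / 2)
    return np.exp(mu + 0.5 * sigma ** 2)
```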
We can verify whether our MP GPs account for most of the flux by calculating pulse profiles with and without removing pulse rotations where GPs are detected. As can be seen in Figure 5, significant flux remains in both MP and IP. For the MP, even though the remaining signal is brighter in epochs B and D, the fraction is lower: about 18% in B and D, in comparison with 23% in A and C. This again can be understood if the larger detection rate is due to an overall magnification: a larger fraction of the pulses - and hence of the total flux - is detected.
Our result is similar to (but more constraining than) that of Majid et al. (2011), who showed that at least \(54\%\) of the overall pulsed energy flux of the Crab Pulsar is emitted in the form of GPs. But it is in contrast to what is seen by Abbate et al. (2020) for PSR J1823\(-\)3021A, where the detected GPs make up only a small fraction of the integrated pulse emission (\(4\%\) and \(2\%\) for their C1 and C2 components, respectively), and by Geyer et al. (2021) for PSR J0540\(-\)6919, where the detected GPs only make up \(7\%\) of the total flux. This might indicate a difference in the emission process. As these authors noted, however, a larger population of undetected GPs may still be hidden below their detection threshold.
For our observations, for both MP and IP, the residual flux is much larger than expected based on the log-normal distribution, thus indicating that the true fluence distribution has more pulses at low fluence (many more for the IP); if additional pulses were also emitted in rotations in which we do not detect any, their typical fluence would be the residual flux integrated over one cycle, which is \(\sim 25\ \mathrm{Jy\,\mu s}\) for MP and a little less for IP. This is well below our detection limit, so consistent in that sense, but from the distributions shown in Figure 4, one would expect a much smaller rate than once per pulse period at \(25\ \mathrm{Jy\,\mu s}\). This might suggest that there are even more but typically fainter bursts (note that it cannot be fainter bursts accompanying the GPs we already detect, since we excluded the full rotations in calculating the residual
Figure 5: Mean and median MP and IP pulse profiles obtained using all pulse rotations (in blue and orange, respectively) and using only those in which no GPs were detected (green and red, respectively) in \(6.25\ \mathrm{\mu s}\) bins. Note that because the noise in an individual profile is not normally distributed, but rather follows a \(\chi_{k}^{2}\) distribution, the median is slightly below zero in the off-pulse region, by \((1-2/9k)^{3}-1\simeq-6/9k\simeq-0.0002\) of the SEFD of \(\sim\!150\ \mathrm{Jy}\) (Section 2), or \(\sim\!-0.03\ \mathrm{Jy}\) given \(k=3200\) degrees of freedom (complex dedispersed timestream squared, averaged over 2 polarizations, 8 bands, and 100 time bins).
emission), or that there is some steady underlying emission. It would be worthwhile to test this with more sensitive future observations.
### Pulse Phases
Defining the time of arrival of a GP as the time when an increase in flux is first detected, the longitude windows where MP and IP GPs occur have total widths of \(\sim 680\)\(\mu\)s and \(860\)\(\mu\)s (or \(\sim\!7\fdg 3\) and \(\sim\!9\fdg 2\)), respectively (averaged over the four epochs). As can be seen in Figure 6, the majority of GPs occur within much narrower windows: the root-mean-square deviations around the mean arrival phases are \(\sim\!100\)\(\mu\)s and \(\sim\!130\)\(\mu\)s (or \(\sim\!1\fdg 1\) and \(\sim\!1\fdg 4\)), respectively. The number distribution is roughly Gaussian, with a slightly negative skewness (i.e., a longer tail toward earlier phases and thus with a mode towards later phases). This was also observed by Majid et al. (2011) at a similar frequency of \(1664\)\(\mathrm{MHz}\). In EK036 D, a few MP pulses are detected beyond the range found in the other epochs. As we will discuss in Section 4.6, these "outlier" detections are due to echoes (hence, they are omitted in our determinations of widths above).
In Figure 6, we also show the flux distributions as a function of pulse phase, including the median flux of the GPs detected in any given phase bin. One sees no obvious variation, i.e., no hint of, e.g., brighter pulses having an intrinsically narrower phase distribution. This suggests that only the probability of seeing a pulse depends on pulse phase. In our earlier work on these data, where we studied how the pulse spectra and their correlations are affected by scattering (Lin et al., 2023), we concluded that we resolved the regions from which the nanoshots that comprise individual GPs are emitted, and that this is most easily understood if the emitting plasma is ejected highly relativistically, with \(\gamma\simeq 10^{4}\) (as was already suggested by Bij et al., 2021). If so, the emission would be beamed to angles much smaller than the width of the phase windows, and the range of phases over which we observe GPs would reflect the range of angles over which plasma is ejected.
### Arrival Times
Several studies (e.g., Karuppusamy et al., 2010; Majid et al., 2011) have found that GPs in different rotations are not correlated, and that there is no correlation between MP and IP GPs, but that instead the distribution of the time delays between successive GPs follows an exponential distribution, as expected for a Poissonian process. Within a given cycle, though, multiple correlated microbursts can occur (Sallmen et al., 1999; Hankins and Eilek, 2007).
With our high sensitivity, we can investigate this in more detail. In Table 3 we show the number of rotations in which we detect multiple MP or IP bursts (i.e., double, triple etc.), as well as the number expected (listed only where larger than 0) for the case where all events are independent,
\[N_{n}=p_{n}N_{r}=\begin{pmatrix}N_{\mathrm{p}}\\ n\end{pmatrix}\left(\frac{1}{N_{r}}\right)^{n}\left(1-\frac{1}{N_{r}}\right)^{ N_{\mathrm{p}}-n}N_{r}, \tag{3}\]
where \(p_{n}\) is the probability of a given rotation to have \(n\) bursts (assuming a binomial distribution), \(N_{r}\) is the total number of rotations observed, and \(N_{\mathrm{p}}\) is the total number of bursts found (and where for numerical values we inserted numbers from Table 1: \(N_{\mathrm{p}}=N_{\mathrm{MP}}\) or \(N_{\mathrm{IP}}\) and \(N_{r}=t_{\mathrm{exp}}/P_{\mathrm{Crab}}\), where \(P_{\mathrm{Crab}}=33.7\)\(\mathrm{ms}\) is the rotation period of the pulsar). One sees that we detect significantly more multiples than expected by chance6, i.e., some of the detected pulses are composed of multiple, causally related microbursts.
Footnote 6: In Lin et al. (2023), we wrongly concluded the multiples were consistent with arising by chance. Sadly, we used incorrect estimates of \(N_{n}\).
In principle, one could estimate the number of independent bursts, \(N_{\mathrm{p}}^{\mathrm{ind}}\), in each epoch by subtracting from \(N_{\mathrm{p}}\) the excess pulses from Table 3, but this would not be quite correct since the excess would be relative to estimates made using the total number of observed pulses \(N_{\mathrm{p}}\), not the (lower) number of independent pulses \(N_{\mathrm{p}}^{\mathrm{ind}}\). One could iterate, but an easier, unbiased estimate of \(N_{\mathrm{p}}^{\mathrm{ind}}\) can be made using the observed fraction of rotations in which we do not see any bursts, which should equal \(N_{0}/N_{r}=p_{0}=\left(1-1/N_{r}\right)^{N_{\mathrm{p}}^{\mathrm{ind}}}\). Solving for \(N_{\mathrm{p}}^{\mathrm{ind}}\), we find that \(N_{\mathrm{p}}^{\mathrm{ind}}=fN_{\mathrm{p}}\) with fractions \(f\) that are consistent between all epochs, at \(91.8\pm 0.2\) and \(95.2\pm 0.5\)% for MP and IP, respectively. Hence, about 8 and 5% of the detected MP and IP pulses, respectively, are extra components. Or, as fractions of independent MP and IP pulses, \((6,1,0.12)\) and \((4,0.3,0.0)\%\), respectively, are causally related double, triple, or quadruple microbursts.
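The expected counts in Table 3 and the independent-pulse fraction follow directly from Eq. (3) and from the fraction of rotations without any detected burst; a small sketch is given below, with the observed totals (which come from Table 1) treated as hypothetical inputs.

```python
# Sketch of the burst-multiplicity statistics of Section 4.5: Eq. (3) for the
# expected number of rotations with n independent bursts, and the estimate of
# the number of independent pulses from the fraction of empty rotations.
# The inputs (n_pulses, n_rotations, n_empty) would come from Table 1.
import numpy as np
from scipy.stats import binom


def expected_rotations_with_n(n, n_pulses, n_rotations):
    """N_n for independent, randomly occurring bursts (Eq. 3)."""
    return n_rotations * binom.pmf(n, n_pulses, 1.0 / n_rotations)


def independent_pulses(n_empty, n_rotations):
    """Solve N_0 / N_r = (1 - 1/N_r)**N_ind for N_ind."""
    return np.log(n_empty / n_rotations) / np.log(1.0 - 1.0 / n_rotations)
```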
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Observation & \multicolumn{5}{c}{MP} & \multicolumn{3}{c}{IP} \\ Code & 2 & 3 & 4 & 5 & 6 & 2 & 3 & 4 \\ \hline EK036 A & 1820(599) & 200(12) & 24 & 0 & 0 & 144(17) & 4 & 2 \\ EK036 B & 1431(611) & 170(18) & 22 & 3 & 1 & 237(43) & 16 & 2 \\ EK036 C & 611(213) & 67 (4) & 6 & 0 & 0 & 54( 7) & 4 & 0 \\ EK036 D & 934(395) & 117(10) & 23 & 6 & 1 & 116(19) & 9 & 0 \\ \hline \end{tabular} Note. – Numbers in parentheses are those expected if bursts occur randomly; for that case, one does not expect to find any rotations with 4 or more MP bursts or 3 or more IP bursts. Note that our GP detection method does not differentiate between microbursts and echoes, which becomes important for a few very bright pulses in EK036 D, for which echoes were present. In addition, we are not able to distinguish microbursts that occur very close together in time. The numbers of detections differ from Lin et al. (2023) because a different, more robust, search algorithm is implemented here (see Section 4.1).
\end{table}
Table 3: Number of Rotations with Multiple Bursts.
To investigate the distributions further, we show histograms of the time delay between pulses in Figure 7. Overdrawn are expectations for randomly arriving, independent pulses. We constructed these by bootstrapping, where we repeatedly reassign new random pulse cycles to our observed sets of pulses, and then recalculate the time delay distributions. Note that in our bootstraps, we do not randomize pulse phase, so that the observed phase distribution is correctly reflected in the time delays.
Figure 6: MP GP and IP GP fluence and count distributions as a function of pulse phase for each EK036 observation. We used pulse phase bins of \(0.1\%\) and fluence bins of \(0.1\ \mathrm{dex}\). The light purple line in the fluence panels shows the median for bins with more than \(2\) detected pulses.
One sees that, as a function of pulse cycle (right column panels for MP and IP GPs in Figure 7), the observed histograms follow the expected exponential distribution (although the observed counts are slightly lower than the expected ones because not all pulses are independent, as is implicitly assumed in the bootstraps).
For the time delays between pulses that occur in the same cycle (left column panels for MP and IP GPs in Figure 7), the observed distributions are very different from those expected for randomly occurring bursts. One sees a large peak at short delays, representing the excess microbursts from Table 3, following a roughly exponential distribution with a mean time between bursts of \(\sim 30\;\mu\)s. Intriguingly, at somewhat larger time differences, there seem to be fewer bursts than expected for independent events. This suggests that while a given detection has an enhanced probability of being in a group of causally related microbursts, the occurrence of a burst also suppresses the likelihood of another, independent, burst being produced in the same rotation. Thus, our results confirm that GPs are often composed of multiple microbursts, and they indicate that another, independent GP is less likely to occur right after.
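The bootstrap comparison described above can be sketched as follows; this is an illustrative reconstruction (uniform random cycles, caller-supplied bin edges), not the analysis code used for this work.

```
import numpy as np

def delay_histograms(cycles, phases, bins, P=0.0337, n_boot=1000, seed=0):
    """Observed histogram of delays between successive GPs, plus the mean
    histogram of bootstraps in which the pulse cycles are randomized while
    the pulse phases are kept, preserving the observed phase distribution."""
    rng = np.random.default_rng(seed)

    def delays(cyc):
        t = np.sort((cyc + phases) * P)   # arrival times in seconds
        return np.diff(t)

    obs, _ = np.histogram(delays(cycles), bins=bins)

    boot = np.zeros((n_boot, len(bins) - 1))
    n_rot = int(cycles.max()) + 1
    for i in range(n_boot):
        rand_cycles = rng.integers(0, n_rot, size=len(phases))
        boot[i], _ = np.histogram(delays(rand_cycles), bins=bins)
    return obs, boot.mean(axis=0)
```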
### Scattering Features
In Figure 6, one sees that in EK036 D, several MP GPs were detected at pulse phases quite far from the median phase. To investigate this, we looked at the arrival times of all GPs detected in EK036 D (see left panel of Figure 8). We found that the outliers occurred in two pulse rotations, which turned out to contain the brightest GPs in EK036 D. Looking at the pulse profiles of these brightest GPs, one sees that they are very similar (see right panels of Figure 8). In fact, closer
Figure 7: Time delays between successive GPs for the MP (in blue) and IP (in orange) components for each EK036 observation. On the left MP and IP columns, time delays within a pulse rotation are shown with bins of \(10\;\mu\)s and \(20\;\mu\)s for the MP and IP respectively; the low counts in the first bin reflect the minimum separation of \(8.75\;\mu\)s between detected pulses. On the right MP and IP columns, time delays in pulse rotations are shown with bins of \(1\) rotation and \(4\) rotations for the MP and IP respectively. The red lines show the average time delay histograms for \(1000\) bootstrap iterations, in which we randomized the rotation in which a pulse was seen (but not the phase, to keep the observed phase distribution).
examination reveals that all of the brightest GPs detected in EK036 D show similar pulse profiles. This implies that the pulses far from the median pulse phase arrive late because they are actually weak echoes of the main burst, with amplitudes down to \(\sim 0.4\%\) of the peak flux and delays up to \(\sim 300~{}\mu\)s.
In Figure 9, we show singular value decomposition (SVD) approximations of the average MP GP profile for each epoch (for the IP, too few bright pulses were available). This was created from MP GP rotations with peak intensities greater than \(200~{}\mathrm{Jy}\) and seemingly single peaks, aligned using time offsets found by correlation with a reference pulse. To avoid giving too much weight to the brightest pulses, and thus risking that remaining substructure enters the average profile, we normalized each rotation by the intensity at the correlation maximum before doing the SVD. One sees that all profiles are fairly sharply peaked, but sit on top of a base, which has the expected asymmetric part extending to later time due to scattering, as well as a more symmetric component, likely resulting from the collective effect of faint microbursts. Comparing the epochs, one sees that for EK036 A-C, the profile dropoff is relatively smooth and becomes undetectable after \(\sim\!200~{}\mu\)s, while in EK036 D, the tail is much longer, extending to \(\sim\!400~{}\mu\)s, and is much more bumpy.
Almost certainly, all bumps are echoes, including those at shorter delay in EK036 B (more clearly seen in the linear-scale plots in Lin et al. 2023). Indeed, looking carefully at the stack of profiles in Figure 9, one sees that the echoes in EK036 D drift in time, moving slightly further away from the MP during the observation, with perhaps even a hint that echoes further away from the main bursts drift faster than those closer in. (Note that this stack is not completely linear in time, although given that the GP detection rate is roughly constant throughout, it is not far off.) This change in time is expected for echoes off a structure with changing distance from the line of sight, and indeed has been seen for a very prominent echo by Backer et al. (2000); Lyne et al. (2001). Overall, our observations suggest echoes are common, as also concluded from daily monitoring at \(600~{}\mathrm{MHz}\) by Serafin-Nadeau et al. (2023, in prep.).
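A minimal sketch of the profile construction described above (alignment by cross-correlation with a reference pulse, normalization at the peak, and a low-rank SVD approximation) is given below; the array layout and the choice of reference pulse are illustrative assumptions.

```
import numpy as np

def svd_profile(pulse_stack, ref_index=0, rank=1):
    """pulse_stack: (n_rotations, n_time) array of bright MP GP rotations.
    Returns a low-rank SVD approximation of the average aligned profile."""
    ref = pulse_stack[ref_index]
    aligned = []
    for prof in pulse_stack:
        corr = np.correlate(prof, ref, mode="full")
        shift = corr.argmax() - (len(ref) - 1)   # lag of best alignment
        rolled = np.roll(prof, -shift)
        aligned.append(rolled / rolled.max())    # normalize near the peak
    aligned = np.array(aligned)

    u, s, vt = np.linalg.svd(aligned, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx.mean(axis=0)
```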
Figure 8: _Left_: MP GPs and IP GPs detected in the EK036 D data. The gray shaded regions indicate when the telescope was not observing the Crab Pulsar and the black vertical lines mark our MP GP and IP GP windows. In the inset, we show two pulse rotations containing the brightest GPs “A” and “B”, in red and orange respectively. _Right, Top_: Waterfalls of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution and \(1~{}\mathrm{MHz}\) frequency resolution. _Right, Bottom_: Pulse profile of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution scaled to the peak of each pulse. Pulses “A” and “B” show similar features and we conclude that during the EK036 D observations, weak echoes were present at large delays.
## 5 Summary of Conclusions
The fine time resolution and high sensitivity in our beam-formed EVN data allowed us to confidently detect \(65951\) GPs with fluences above \(\sim 150\ \mathrm{Jy\ \mu s}\) over a short period of \(7.32\mathrm{hr}\). Within each of our four observations, we found that the GP detection rates are fairly constant, but that between epochs they differ by a factor of \(\sim\!2\). Similar changes were seen previously, and were suggested by Lundgren et al. (1995) to reflect changes in overall magnification of the scattering screens along the line of sight.
The changes in magnification are consistent with the pulse fluence distributions, which are power-law like at high fluence, but with a flattening at lower fluences; the distributions from the different epochs can be shifted to each other with a change in fluence scale. We noted that the fluence distributions are similar to what is expected for log-normal distributions, but found that the residual signals seen in the GP phase windows after removing the GPs we detected were larger than expected if the log-normal distribution continued also below our detection limit. Nevertheless, it suggests that with only somewhat more sensitive observations, it should be possible to get a fairly complete sampling of all GPs that contribute to the average flux, at least for the MP component.
Analyzing the pulse phase distributions, we confirm previous observations showing that the majority of GPs occur within very narrow phase windows. Furthermore, we observe no significant variations in the median flux distributions as a function of pulse phase. This suggests that it is the probability of observing a pulse that depends on pulse phase, not its energy, implying that the angle within which a pulse is emitted is much narrower than the rotational phase window, as expected if the plasma causing them is travelling highly relativistically (Bij et al., 2021; Lin et al., 2023).
With our high detection rates, we were able to investigate the distribution of time delays between successive bursts within the same pulse rotation. We detect a larger number than expected if all bursts were due to a Poissonian process, and infer that \(\sim\!5\%\) of bursts come in groups of 2 or 3 causally related microbursts, with a typical separation in time of \(\sim\!30\ \mu\)s.
Additionally, our high sensitivity revealed weak echo features for individual bright pulses, which drift slightly but
Figure 9: _Line plots_: SVD approximation of the MP pulse profile for all observations. In EK036 B, echoes are seen close to the profile’s peak (see Lin et al., 2023 for more details). The profile for EK036 D shows multiple weak echoes up to \(\sim\!300\ \mu\)s. _Image_: The MP pulse stack for EK036 D, using a logarithmic colour scale to bring out faint features. Each pulse is aligned by correlating with the rotation with the brightest pulse in EK036 D (which appears to be a simple single microburst) and then normalized by the intensity at time \(0\) (the black dashed line). The echoes appear to move out over time, as one can see by comparing the location of the most prominent faint echo with the dashed white vertical line near it (time is increasing both upwards and to the right in this image).
significantly even over our timescales of just a few hours. We infer that echo events are not rare.
Given our findings, we believe even more sensitive follow-up studies of the Crab Pulsar would be very useful. This would be possible using more small dishes (spaced sufficiently far apart that the Crab Nebula is well-resolved) and by recording a larger bandwidth.
## Acknowledgements
We thank the anonymous referee for their comments, which improved the clarity of this manuscript. We thank the Toronto Scintillometry group, and in particular Nikhil Mahajan, for useful discussion on GP statistics. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium (Loken et al., 2010; Ponce et al., 2019). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. M.Hv.K. is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via discovery and accelerator grants, and by a Killam Fellowship.
The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: EK036 A-D.
astropy (Astropy Collaboration et al., 2013, 2018, 2022), Baseband (Van Kerkwijk et al., 2020), CALC10 (Ryan & Vandenberg, 1980), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), pulsarbat (Mahajan & Lin, 2023), scipy (Virtanen et al., 2020), tempo2 (Hobbs & Edwards, 2012).
| ```
We analysed four epochs of EVN data of the Crab Pulsar, in which resolving out the Crab Nebula yields high sensitivity, sufficient to detect faint high-frequency components in the folded profile. We also detect 65951 giant pulses, which we use to investigate their rates, fluences, phases, and arrival-time distributions. For the main-pulse component, our giant pulses account for about 80% of the total flux. This suggests that we are sampling nearly the complete giant-pulse energy distribution, although extending that distribution to cover the remaining 20% may require a large population of faint bursts. Looking at the arrival-time differences of bursts within single rotations, the probability of finding giant pulses at close separations is enhanced relative to random burst occurrence. Some giant pulses … |
2304.00050 | kNN-Res: Residual Neural Network with kNN-Graph coherence for point
cloud registration | In this paper, we present a residual neural network-based method for point
set registration that preserves the topological structure of the target point
set. Similar to coherent point drift (CPD), the registration (alignment)
problem is viewed as the movement of data points sampled from a target
distribution along a regularized displacement vector field. While the coherence
constraint in CPD is stated in terms of local motion coherence, the proposed
regularization term relies on a global smoothness constraint as a proxy for
preserving local topology. This makes CPD less flexible when the deformation is
locally rigid but globally non-rigid as in the case of multiple objects and
articulate pose registration. A Jacobian-based cost function and
geometric-aware statistical distances are proposed to mitigate these issues.
The latter allows for measuring misalignment between the target and the
reference. The justification for the k-Nearest Neighbour(kNN) graph
preservation of target data, when the Jacobian cost is used, is also provided.
Further, to tackle the registration of high-dimensional point sets, a constant
time stochastic approximation of the Jacobian cost is introduced. The proposed
method is illustrated on several 2-dimensional toy examples and tested on
high-dimensional flow Cytometry datasets where the task is to align two
distributions of cells whilst preserving the kNN-graph in order to preserve the
biological signal of the transformed data. The implementation of the proposed
approach is available at https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/
under the MIT license. | Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky | 2023-03-31T18:06:26 | http://arxiv.org/abs/2304.00050v2 | # kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration
###### Abstract
In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid as in the case of multiple objects and articulate pose registration. A Jacobian-based cost function and geometric-aware statistical distances are proposed to mitigate these issues. The latter allows for measuring misalignment between the target and the reference. The justification for the k-Nearest Neighbour(kNN) graph preservation of target data, when the Jacobian cost is used, is also provided. Further, to tackle the registration of high-dimensional point sets, a constant time stochastic approximation of the Jacobian cost is introduced. The proposed method is illustrated on several 2-dimensional toy examples and tested on high-dimensional flow Cytometry datasets where the task is to align two distributions of cells
whilst preserving the kNN-graph in order to preserve the biological signal of the transformed data. The implementation of the proposed approach is available at [https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/](https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/) under the MIT license.
## 1 Introduction
Point set registration is a widely studied problem in the field of computer vision but also arises in other fields e.g. bioinformatics as is discussed below. The problem involves aligning a deformed target set of \(d\)-dimensional points to another reference point set by applying a constrained transformation. This alignment allows for improved comparison and analysis of the two sets of points and is used in a variety of fields including object tracking, body shape modeling, human pose estimation, and removal of batch effects in biological data. [1, 2, 3, 4, 5]
Point set registration techniques are typically categorized based on two main properties, first, whether the technique is a correspondence-based or a correspondence-free technique, and second, whether the estimated transformation is rigid or non-rigid. Correspondence-based techniques require the availability of correspondence information (e.g. labels) between the two point sets, while correspondence-free, sometimes called simultaneous pose and correspondence registration, does not require such information and therefore is considered a significantly more difficult problem. Rigid registration techniques are also generally simpler. A rigid transformation is an isometric transformation that preserves the pairwise distance between points and such transformation is typically modeled as a combination of rotation and translation. Several rigid registration techniques have been proposed in [6, 7, 8, 9, 10, 11, 12, 13, 14]. Assuming the transformation is rigid, however, makes the types of deformations that could be handled quite limited. Non-rigid transformations allow for more flexibility; however, this makes the problem ill-posed as there are an infinite number of transformations that could align two point sets, thus, non-rigid registration techniques employ additional constraints.
### Problem Formulation
In this section, we formulate the alignment problem. Inspired by CPD [15], we view an alignment method as finding a map \(\phi\) that transforms data points sampled from an underlying distribution \(Q\) to distribution \(P\) in such a way that preserves the topological structure of data sampled from \(Q\). This is an ill-posed density estimation problem; therefore, we require the additional desideratum that \(\phi\) be as simple as possible. In this context, we call a map \(\phi\) simple if it is close to the identity transformation. Importantly, this could be visualized as data points sampled from \(Q\) moving along a regularized displacement vector field \(F\).
More formally, we denote two sets of \(d\)-dimensional vectors (points), a reference
point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\), generated by probability distributions \(P\) and \(Q\) respectively. Additionally, a \(k\)-Nearest Neighbour (kNN) graph is associated with (or constructed from) the set \(\mathbf{T}\), which must be preserved after transformation. A kNN graph for set \(\mathbf{T}\) is a directed graph such that there is an edge from node \(i\) to \(j\) if and only if \(\mathbf{y}_{j}\) is among \(\mathbf{y}_{i}\)'s \(k\) most similar items in \(\mathbf{T}\) under some similarity measure \(\rho\).
Thus, the goal of an alignment method, given the sets \(\mathbf{R}\) and \(\mathbf{T}\) in a matrix form of \(X\in\mathbf{R}^{n\times d}\) and \(Y\in\mathbf{R}^{m\times d}\) respectively, is finding a transformation \(\phi\) parameterized by \(\theta\) such that:
\[\hat{\theta}=\arg\min_{\theta}D(\phi(Y;\theta),X) \tag{1}\]
subject to the constraint:
\[\texttt{kNN}_{g}(\phi(Y;\theta))=\texttt{kNN}_{g}(Y) \tag{2}\]
where \(D\) is a statistical distance that measures the difference between two probability distributions.
### Limitations of existing approaches
A classic example of such a constraint is found in CPD [15] and its extensions [16, 17, 18]. CPD uses a Gaussian Mixture Model to induce a displacement field from the target to source points and uses local motion coherence to constrain the field such that nearby target points move together. CPD achieves this, however, via a global smoothing constraint, which makes it locally inflexible and therefore unsuitable for articulated deformations in 3D human data, scenes with multiple objects, and biological data [19].
In this work, we introduce a Jacobian orthogonality loss and show that it is a sufficient condition for preserving the kNN graph of the data. Jacobian orthogonality is introduced as a penalty \(|\mathbf{J}_{\mathbf{x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}|\), where \(\mathbf{J}_{\mathbf{x}}\) is the Jacobian matrix at a point \(\mathbf{x}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. The penalty has been proposed in other contexts as well, such as unsupervised disentanglement [20] and medical image registration [21, 22].
In [21], the finite difference method is employed to compute the Jacobian penalty for the B-splines warping transformation, and mutual information of corresponding voxel intensities is used as the similarity measure. Instead of using finite difference for the Jacobian penalty, which produces a numerical approximation of first-order derivatives, the authors of [22] derive an analytical derivative specific to the multidimensional B-splines case. Such approaches however are limited to low dimensions by the nature of the transformations used, the way in which the Jacobian penalty is computed, and their proposed similarity measures.
### Contributions
To address these limitations, we use Hutchinson's estimator [20, 23] for fast computation of the Jacobian loss for high-dimensional point clouds, a scalable residual neural network (ResNet) [24] architecture as our warping transformation, and geometry-aware statistical distances. The choice of a ResNet with identity block \(\phi(x)=x+\delta(x)\) is natural since, similar to CPD, we view alignment as a regularized movement of data points along a displacement vector field, which in our case is simply \(\phi(x)-x=\delta(x)\). It is also worth mentioning that ResNets can learn the identity mapping more easily. Further discussion on this choice is given in section 2.2. Moment-matching ResNet (MM-Res) [5] uses a similar ResNet architecture with an RBF-kernel maximum-mean discrepancy as its similarity measure [25, 26]; however, no topological constraints are provided to preserve the topological structure of the transformed data nor to limit the nature of the learned transformation, as shown in Figure 1. Additionally, while maximum-mean discrepancy is a geometry-aware distance, we address some of its limitations by incorporating Sinkhorn divergence into our framework [27].
Figure 1: Stanford Bunny example showing the effect of the Jacobian penalty on the learned transformation.
To elaborate further, we first start by defining Maximum Mean Discrepancy (MMD):
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{3}\]
where \(\alpha,\beta\in M_{1}^{+}(X)\) are unit-mass positive empirical distributions on a feature space \(X\), \(\zeta=\alpha-\beta\), and \(k(\mathbf{x},\mathbf{y})\) is a kernel function. MM-Res uses an RBF kernel, which is suitable for high-dimensional Euclidean feature spaces (e.g. to represent \(X\subset\mathbb{R}^{n}\)) and keeps training complexity low as it scales up to large batches. Nonetheless, such a kernel blinds the model to details smaller than its standard deviation, and the network's gradient suffers from the well-known vanishing gradient problem. One simple solution is to decrease the standard deviation of the kernel; however, this introduces another issue, namely, the target points will not be properly attracted to source points [28]. In practice, this makes such a framework incapable of learning simple deformations with sizable translations, as we show in section 4. Optimal transport (OT) losses do not typically suffer from this issue and produce more stable gradients; however, such losses require solving computationally costly linear programs. A well-known efficient approximation of the OT problem is entropic regularized \(OT_{\epsilon}\)[29], which, for \(\epsilon>0\), is defined as:
\[\texttt{OT}_{\epsilon}(\alpha,\beta):=\min_{\pi_{1}=\alpha,\pi_{2}=\beta}\int _{X^{2}}C(\mathbf{x},\mathbf{y})d\pi+\epsilon\texttt{KL}(\pi|\alpha\times\beta) \tag{4}\]
where \(C\) is a cost function (typically the Euclidean distance), \((\pi_{1},\pi_{2})\) denotes the two marginals of the coupling measure \(\pi\in M_{1}^{+}\) and KL is the KL-divergence. The solution for this formulation could be efficiently computed using the Sinkhorn algorithm as long as \(\epsilon>0\). It is clear that by setting \(\epsilon\) to 0, this minimization problem reduces back to standard OT. Sinkhorn divergence combines the advantages of MMD and OT and is defined as:
\[S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{5}\]
The authors of [29] show that:
\[\lim_{\epsilon\to 0}S_{\epsilon}(\alpha,\beta)=\texttt{OT}(\alpha,\beta) \tag{6}\]
and
\[\lim_{\epsilon\rightarrow\infty}S_{\epsilon}(\alpha,\beta)=\frac{1}{2} \texttt{MMD}_{-C}^{2}(\alpha,\beta) \tag{7}\]
where \(C\) is the kernel used by MMD.
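To make these losses concrete, the following is a minimal, self-contained PyTorch sketch of a Gaussian MMD estimate and a log-domain Sinkhorn evaluation of \(S_{\epsilon}\) from samples. It assumes uniform weights, a fixed number of iterations, and reports only the transport-cost part of \(\texttt{OT}_{\epsilon}\), so it is an illustration rather than the implementation used in this work (in practice, libraries such as GeomLoss provide efficient versions of both losses).

```
import math
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between samples x (n, d) and y (m, d),
    using a Gaussian RBF kernel of bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def entropic_ot(x, y, eps=0.01, n_iter=100):
    """Transport cost of the entropic OT plan between uniform empirical
    measures on x and y (squared-Euclidean cost), via log-domain Sinkhorn."""
    n, m = x.shape[0], y.shape[0]
    C = torch.cdist(x, y) ** 2
    log_a = torch.full((n, 1), -math.log(n))
    log_b = torch.full((1, m), -math.log(m))
    f = torch.zeros(n, 1)
    g = torch.zeros(1, m)
    for _ in range(n_iter):
        f = -eps * torch.logsumexp((g - C) / eps + log_b, dim=1, keepdim=True)
        g = -eps * torch.logsumexp((f - C) / eps + log_a, dim=0, keepdim=True)
    pi = torch.exp((f + g - C) / eps + log_a + log_b)   # transport plan
    return (pi * C).sum()

def sinkhorn_divergence(x, y, eps=0.01, n_iter=100):
    """S_eps(x, y) = OT_eps(x, y) - 0.5 * (OT_eps(x, x) + OT_eps(y, y))."""
    return (entropic_ot(x, y, eps, n_iter)
            - 0.5 * (entropic_ot(x, x, eps, n_iter)
                     + entropic_ot(y, y, eps, n_iter)))
```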
In the following section, we review other related methods.
### Other related work
Several point cloud registration approaches have been proposed. Thin plate spline functions-based techniques preserve the local topology via local rigidity on the surface of a deformed shape; however, these approaches are not scalable
to large datasets and are typically limited to 3-dimensional point clouds [30, 31, 32, 33, 34, 35]. To address these limitations, a deep learning paradigm for point cloud registration has been adopted. Deep learning-based approaches can be divided into two categories, namely, features-based, and end-to-end learning. In features-based methods, a neural network is used as a feature extraction. By developing sophisticated network architectures or loss functions, they aim to estimate robust correspondences by the learned distinctive feature [30, 36, 37, 38]. While feature-based learning typically involves elaborate pipelines with various steps such as feature extraction, correspondence estimation, and registration, end-to-end learning methods combine various steps in one objective and try to solve the registration problem directly by converting it to a regression problem [39, 40]. For example, [39] employs a key point detection method while simultaneously estimating relative pose.
Another class of methods is Graph Matching techniques, which are quadratic assignment problems (QAP) [40]. The main challenge for such methods is finding efficient approximate methods to the otherwise NP-hard QAP. Congruent Sets Gaussian Mixture (CSGM) [41] uses a linear program to solve the graph-matching problem and apply it to solve the cross-source point cloud registration task. Another approach is a high-order graph [42] that uses an integer projection algorithm to optimize the objective function in the integer domain. Finally, Factorized Graph Matching (FGM) method [43] factorizes the large pairwise affinity matrix into some smaller matrices. Then, the graph-matching problem is solved with a simple path following optimization algorithm.
## 2 Proposed model
### Methodology
In our case, we parametrize the transformation \(\phi\) as a residual neural network and formulate the optimization problem as:
\[\mathcal{L}(\theta)=\mathcal{L}_{1}+\lambda\mathcal{L}_{2} \tag{8}\]
where \(\mathcal{L}_{1}\) is the alignment loss \(D(\phi(Y;\theta),X)\), \(\lambda\) is a hyperparameter, and \(\mathcal{L}_{2}\) is the topology-preserving loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{y}\in\mathbf{T}}|\mathbf{J}_{\mathbf{y}}^{\top} \mathbf{J}_{\mathbf{y}}-\mathbf{I}_{d}| \tag{9}\]
where \(\mathbf{J}_{\mathbf{y}}\) is the Jacobian matrix at point \(\mathbf{y}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. In section 2.4 we prove that the orthogonality of the Jacobian matrix is indeed a sufficient condition for preserving the kNN graph of the data. We use two statistical distances, namely, Sinkhorn divergence and maximum mean discrepancy. Sinkhorn divergence is a computationally efficient approximation of the Wasserstein distance in high dimensions and converges to the maximum mean discrepancy.
\[\mathcal{L}_{1}(\theta)=S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{10}\]
where \(\texttt{OT}_{\epsilon}\) is the optimal transport with \(\mathcal{L}_{2}\)-norm cost, and \(\alpha\) and \(\beta\) are measures over the reference and target distributions respectively. The measures \(\alpha\) and \(\beta\) are unknown and are only accessible via the samples \(\mathbf{R}\) and \(\mathbf{T}\) respectively. Although \(S_{\epsilon}(\alpha,\beta)\) interpolates to MMD as \(\epsilon\) goes to infinity, we still maintain an efficient standalone MMD distance for data where MMD performs better than the Wasserstein distance, and therefore there is no need for the interpolation overhead. Specifically, we use Gaussian-based MMD:
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{11}\]
### Architecture
We use a simple ResNet identity block with a skip connection as our transformation, where the output dimension is equal to the input dimension and the output is calculated as \(\phi(\mathbf{y};\theta)=\mathbf{y}+\delta(\mathbf{y};\theta)\), where \(\delta\) is a standard multi-layer perceptron (MLP) with LeakyReLU activation functions and \(\theta\) represents the trainable weights of the network.

The ResNet identity block is chosen for two reasons. First, biasing \(\theta\) to have small values via weight decay, or initializing the output layer using a distribution with mean zero and a small standard deviation, minimizes the contribution of \(\delta(\mathbf{y};\theta)\) to the final transformation, which makes \(\phi(\mathbf{y};\theta)\) close to the identity by design. Second, since we take a similar approach to CPD by viewing the alignment transformation as a regularized movement of data points along a displacement vector field \(F\), the identity block is mathematically convenient: a displacement vector is the difference between the final position \(\phi(\mathbf{y};\theta)\) (transformed point) and the initial position (data point) \(\mathbf{y}\), so that \(F(\mathbf{y})=\phi(\mathbf{y};\theta)-\mathbf{y}=\delta(\mathbf{y};\theta)\); we therefore only need to model \(\delta(\mathbf{y};\theta)\) rather than \(\phi(\mathbf{y};\theta)-\mathbf{y}\) absent the skip connection.
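A minimal PyTorch sketch of such an identity block is shown below; the depth, the hidden width, and the near-zero initialization of the output layer are illustrative choices consistent with the description above.

```
import torch.nn as nn

class KNNResBlock(nn.Module):
    """phi(y) = y + delta(y), with delta a small LeakyReLU MLP whose output
    layer is initialized near zero so the map starts close to the identity."""
    def __init__(self, dim, hidden=50):
        super().__init__()
        self.delta = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, dim),
        )
        nn.init.normal_(self.delta[-1].weight, std=1e-4)
        nn.init.zeros_(self.delta[-1].bias)

    def forward(self, y):
        # Displacement field: F(y) = phi(y) - y = delta(y).
        return y + self.delta(y)
```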
### Orthogonal Jacobian preserves kNN graph
In this section, we show that the orthogonality of the Jacobian matrix evaluated at data points is a sufficient condition for preserving the kNN graph of the data. A vector-valued function \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) preserves the kNN graph of data points \(X\in\mathbb{R}^{n}\) if, for every two points \(\mathbf{v}\) and \(\mathbf{w}\) that are in some small \(\epsilon\)-neighborhood of \(\mathbf{u}\), the following holds:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}<||\mathbf{u}-\mathbf{w}||_{2}^{2} \rightarrow||F(\mathbf{u})-F(\mathbf{v})||_{2}^{2}<||F(\mathbf{u})-F( \mathbf{w})||_{2}^{2}, \tag{12}\]
where \(||\cdot||_{2}^{2}\) is the squared Euclidean distance. Without loss of generality, we choose two points \(\mathbf{w}\), \(\mathbf{v}\) that lie in an \(\epsilon\)-neighborhood of point \(\mathbf{u}\) and linearize the vector field \(F\) around point \(\mathbf{u}\) such that:
\[F(\mathbf{x};\mathbf{u})\approx F(\mathbf{u})+\mathbf{J}_{\mathbf{u}}(\mathbf{x }-\mathbf{u}), \tag{13}\]
where \(\mathbf{J}_{\mathbf{u}}\) is the Jacobian matrix evaluated at point \(\mathbf{u}\).
The squared distance of \(\mathbf{u}\) and \(\mathbf{v}\) is:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}=(\mathbf{u}-\mathbf{v})^{\top}(\mathbf{u}- \mathbf{v})=\sum_{i}^{n}\left(\mathbf{u}_{i}-\mathbf{v}_{i}\right)^{2} \tag{14}\]
Similarly, the squared distance between \(F(\mathbf{u};\mathbf{u})\) and \(F(\mathbf{v};\mathbf{u})\) computes as follows
\[\begin{split}||F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u})||_{2}^{2}&=(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))^{\top}(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))\\ &=(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u})\\ &=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}-\mathbf{u})\end{split}\]
The last step follows from the orthogonality of \(\mathbf{J}_{\mathbf{u}}\) i.e. \((\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}=\mathbf{I})\)
### Jacobian Loss Via Finite Difference
Given a vector-valued function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) and a data batch \(\mathbf{X}\in\mathbb{R}^{m\times d}\), the Jacobian \(\mathbf{J}_{\mathbf{X}}\) of \(F\) at the points \(\mathbf{X}\) is an \(\mathbb{R}^{m\times d\times d}\) tensor. It is possible to compute \(\mathbf{J}_{\mathbf{X}}\) analytically using autodifferentiation modules; however, such computation is highly inefficient, so we use a numerical approximation.
Given a \(d\)-dimensional vector \(\mathbf{x}=[x_{1},...,x_{d}]\), the partial first derivative of \(F\) with respect to \(x_{i}\) is:
\[\frac{\partial F}{\partial x_{i}}=\lim_{\epsilon\to 0}\frac{F(\mathbf{x}+ \epsilon e_{i})-F(\mathbf{x})}{\epsilon}, \tag{15}\]
where \(e_{i}\) is a standard basis vector (i.e. only the \(i\)th component equals 1 and the rest are zero). This could be approximated numerically using a small \(\epsilon\). The Jacobian matrix \(\mathbf{J}_{\mathbf{x}}\) is simply \([\frac{\partial F}{\partial x_{1}},...,\frac{\partial F}{\partial x_{d}}]\). To ensure the orthogonality of the Jacobian at \(\mathbf{X}\), we minimize the following loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{x}\in\mathbf{X}}|\mathbf{J}_{\mathbf{ x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}| \tag{16}\]
This process could be computed efficiently in a few lines of code as indicated in algorithm 1.
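Algorithm 1 is not reproduced here; the following is a minimal PyTorch sketch of this finite-difference penalty, where the function name, the generic argument \(F\) (the full transformation), and the elementwise mean-absolute reduction of Eq. (16) are our own choices rather than the authors'.

```
import torch

def jacobian_orthogonality_loss(F, x, epsilon=1e-3):
    """Finite-difference estimate of mean(|J_x^T J_x - I_d|) for a map
    F: R^d -> R^d, evaluated at a batch x of shape (m, d)."""
    m, d = x.shape
    eye = torch.eye(d, device=x.device)
    Fx = F(x)
    # i-th column of the Jacobian: (F(x + eps * e_i) - F(x)) / eps
    cols = [(F(x + epsilon * eye[i]) - Fx) / epsilon for i in range(d)]
    J = torch.stack(cols, dim=2)          # (m, d, d), J[:, :, i] = dF/dx_i
    JtJ = J.transpose(1, 2) @ J           # per-sample J^T J
    return (JtJ - eye).abs().mean()
```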
### Training
The training process (algorithm 2) takes advantage of two sets of \(d\)-dimensional vectors (points), a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\). First, we sample points from \(\mathbf{R}\) and \(\mathbf{T}\) and create two matrices \(\mathbf{X}\) and \(\mathbf{Y}\). We feed \(\mathbf{Y}\) to the model and obtain \(\hat{\mathbf{Y}}\). Under the GMM assumption, we compute the GMM posterior probability as a similarity matrix and estimate \(\mathcal{L}_{1}\) as the negative log-likelihood. For the Sinkhorn divergence approach, we compute equation (10). We use the SoftkNN operator to construct the kNN graph for both the input \(\mathbf{Y}\) and the output \(\hat{\mathbf{Y}}\) and compute \(\mathcal{L}_{2}\) as the mean squared error between the two. Finally, we use backpropagation by minimizing the loss \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\) until convergence.
### Stochastic Approximation of Orthogonal Jacobian Loss
Using finite differences to compute the Jacobian for low-dimensional point clouds is efficient; however, the computational cost increases linearly with the dimension of the data. Thus, an approximate estimate with constant computational cost is introduced.
Given a vector-valued function \(F\), and a sample \(\mathbf{x}\), we would like to minimize the following:
\[\mathcal{L}_{\mathbf{J}}(F)=|\mathbf{J}^{\top}\mathbf{J}\circ(1-\mathbf{I})| _{2}=\sum_{i\neq j}\frac{\partial F_{i}}{\partial x_{j}}\frac{\partial F_{j}} {\partial x_{i}} \tag{17}\]
Following [20, 23], the Hutchinson's estimator of \(\mathcal{L}_{\mathbf{J}}(F)\) can be approximated as such:
\[\mathcal{L}_{\mathbf{J}}(F)=\texttt{Var}_{r}(r_{\epsilon}^{\top}(\mathbf{J}^{ \top}\mathbf{J})r_{\epsilon})=\texttt{Var}_{r}((\mathbf{J}r_{\epsilon})^{\top }(\mathbf{J}r_{\epsilon})) \tag{18}\]
where \(r_{\epsilon}\) denotes a scaled Rademacher vector (each entry is either \(-\epsilon\) or \(+\epsilon\) with equal probability) where \(\epsilon>0\) is a hyperparameter that controls the granularity of the first directional derivative estimate and \(\texttt{Var}_{r}\) is the variance. It
is worth noting that this does not guarantee orthonormality, only orthogonality. In practice, however, we find that such an estimator produces comparable results to the standard finite difference method and could be efficiently implemented in Pytorch as shown in algorithm 3.
```
Input:  point sets R and T, blurring factor σ, step size ε,
        regularisation λ, and batch size b
Output: trained model

while (X, Y) ∈ (R, T), until convergence do    # sample mini-batches of size b from R and T
    φ(Y) ← Y + δ(Y)
    if loss == "sinkhorn" then
        L1 = S(X, φ(Y); σ^2)
    else
        L1 = MMD(X, φ(Y); σ^2)
    J_Y[i, :] = (δ(Y + ε e_i) - δ(Y)) / ε      # finite differences, for each dimension i
    L2 = (1/m) Σ_y |J_y^T J_y - I_d|
    L  = L1 + λ L2
    Minimize(L)                                 # backpropagation step
```
**Algorithm 2** Training kNN-Res
### Parameters Selection
The proposed model has three main hyperparameters, namely: \(\sigma\), \(\epsilon\), and \(\lambda\). In the case of Sinkhorn divergence, \(\sigma>0\) is the blur (interpolation) parameter between OT and MMD, with a default value of \(0.01\) for datasets that lie in the first quadrant of the unit hypercube (minmax normalized data). Decreasing \(\sigma\) has the effect of solving for an exact OT, which typically produces very accurate registration; however, this comes at the cost of slower convergence. In the cases where it is more advantageous to use MMD, \(\sigma\) represents the standard deviation of the Gaussian kernel. \(\epsilon>0\) represents the finite difference step size and controls the radius of topology preservation around each point. It is worth noting that a large epsilon value that covers all data tends to produce globally isomorphic transformations. \(\lambda>0\) is simply a regularization parameter that prioritizes regularization over alignment and is typically less than \(0.01\). An additional hyperparameter \(k\) is introduced when using the stochastic approximation of Jacobian orthogonality for high-dimensional data. This hyperparameter determines the number of Rademacher vectors sampled to estimate the Jacobian orthogonality penalty. Generally, a large \(k\) tends to produce a more accurate estimation; in practice, however, \(k=5\) seems to be a sufficient number for the datasets we experimented with.
```
import torch

def stochastic_orth_jacobian(G, z, k=5, epsilon=0.01):
    '''
    Input G: function to compute the Jacobian penalty for.
    Input z: (batch_size, d) input to G that the Jacobian is
             computed w.r.t.
    Input k: number of directions to sample (default 5).
    Input epsilon: finite-difference step size (default 0.01).
    Output: scalar Hutchinson-style estimate of the off-diagonal
            Jacobian penalty (Eq. 18).
    '''
    # r: Rademacher random vectors (entries -1 or +1 with equal probability)
    r = torch.randint(0, 2, size=(k, *z.size()), device=z.device).to(z.dtype)
    r[r == 0] = -1
    vs = epsilon * r
    diffs = [G(z + v) - G(z) for v in vs]
    # sfd: stochastic finite differences along the sampled directions
    sfd = torch.stack(diffs) / epsilon
    loss = torch.var(sfd, dim=0).max()
    return loss
```
**Algorithm 3** PyTorch code for the Hutchinson approximation of the Jacobian off-diagonal elements at data points \(z\).
## 3 Experiments
In this section, we provide experimental results on several datasets, namely, Chui-Rangarajan synthesized dataset used in [31, 44, 45], and single-cell RNA data used in [5]. The Chui-Rangarajan synthesized dataset is comprised of two shapes; a fish shape, and a Chinese character shape. Each shape is subjected to 5 increasing levels of deformations using an RBF kernel, and each deformation contains 100 different samples. The samples are generated using different RBF coefficients which are sampled from a zero-mean normal distribution with standard deviation \(\sigma\), whereby increasing \(\sigma\) leads to generally larger deformation.
### Results on 2D Data
We use the root-mean-squared error (RMSE) between the transformed data \(\hat{y_{i}}\) and the ground truth \(y_{i}\) available from the Chui-Rangarajan synthesized dataset: \(error=\sqrt{\frac{1}{m}\sum_{i=0}^{m}{(\hat{y_{i}}-y_{i})^{2}}}\).
It is important to note that such ground-truth correspondence is absent during training time and is only available during test time. Figures 2 and 3 show the initial point set distributions and their corresponding aligned versions for the Chinese character and the fish examples respectively. We also report results for our kNN-Res, MM-Res [5], CPD [15], TRS-RPM [31], RPM-LNS [45], and GMMREG [32] over 5 deformation levels and 100 samples per level. Figures 4a and 4b show results for the tested models for the Chinese character and Fish datasets respectively. We notice that after a certain level of
non-rigid deformation, MM-Res is unable to converge. For our kNN-Res, we set \(\epsilon=.005,\lambda=10^{-5},\sigma=.001\) and number of hidden units = 50. We start with a relatively high learning rate (0.01) for ADAM [46] optimizer and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and minimum learning rate of \(5\times 10^{-5}\). Qualitatively, the grid-warp representations from the second column in figures 2 and 3 indicate that our estimated transformations are, at least visually, "simple" and "coherent". Furthermore, to quantitatively assess neighborhood preservation we use the hamming loss \(\mathcal{L}_{H}\) to estimate the difference between the kNN graph before and after transformation:
\[\mathcal{L}_{H}=\sum_{i=0}^{m}\sum_{j=0}^{m}I(\hat{p}_{i,j}^{k}\neq p_{i,j}^{k})\]
where \(p_{i,j}^{k}\) is the \(i\),\(j\) element of the k-NN graph matrix before transformation, \(\hat{p}_{i,j}^{k}\) is the corresponding element after transformation, and \(I\) is the indicator function. Figures 5b and 5a show the difference in neighborhood preservation between MM-Res and our kNN-Res for the Chinese character, and Fish datasets respectively for three different levels of deformations.
Moreover, despite the additional topology regularization term, our kNN-Res generally incurred smaller alignment errors and was able to converge under large deformation levels.
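The neighborhood-preservation comparison based on the Hamming loss above can be sketched as follows (building the kNN graphs with scikit-learn); the helper name and the default \(k\) are ours.

```
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_hamming_loss(Y, Y_hat, k=5):
    """Count the entries in which the kNN-graph adjacency matrices of the
    original points Y and the transformed points Y_hat disagree."""
    P = kneighbors_graph(Y, k, mode="connectivity").toarray()
    P_hat = kneighbors_graph(Y_hat, k, mode="connectivity").toarray()
    return int(np.sum(P != P_hat))
```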
### Results on High-Dimensional CyTOF Data
Cytometry by time of flight (CyTOF) provides the means for the quantification of multiple cellular components; however, it is susceptible to the so-called batch effect problem, where systematic non-biological variations during
Figure 2: The Chinese character deformation example: Top row represents original and deformed sets, Mid row represents the vector field, and Bottom row is the final alignment.
the measuring process result in a distributional shift of otherwise similar samples. This effect breaks the intra-comparability of samples, which is a crucial component of downstream tasks such as disease diagnosis, and typically requires the intervention of human experts to remove these batch effects. The CyTOF dataset used in our experiments was curated by the Yale New Haven Hospital. There are two patients, and two conditions were measured on two different days. All eight samples have 25 markers each representing a separate dimension ('CD45', 'CD19', 'CD127', 'CD4', 'CD8a', 'CD20', 'CD25', 'CD278', 'TNFa', 'Tim3', 'CD27', 'CD14', 'CCR7', 'CD28', 'CD152', 'FOXP3', 'CD45RO', 'INFg', 'CD223', 'GzB', 'CD3', 'CD274', 'HLADR', 'PD1', 'CD11b'), and a range of cells (points) between 1800 and 5000 cells per sample. The split is done such that samples collected on day 1 are the target, and samples collected on day 2 are
Figure 5: The figures show Hamming loss for the following levels of deformations: (left) level 1, (mid) level 2, (right) level 3.
Figure 6: The blue and red dots represent 1st and 2nd principal components of reference (patient #2 on day 2) and the target samples (patient #2 on day 1) correspondingly.
the reference, resulting in four alignment experiments.
We follow the exact preprocessing steps described in [5]. To adjust the dynamic range of samples, a standard pre-processing step for CyTOF data is applying a log transformation [47]. Additionally, CyTOF data typically contains a large number of zero values (40%) due to instrumental instability, which are not considered biological signals. Thus, a denoising autoencoder (DAE) is used to remove these zero-values [48]. The encoder of the DAE is comprised of two fully-connected layers with ReLU activation functions. The decoder (output) is a single linear layer with no activation function. All layers of the DAE have the same number of neurons as the dimensionality of the data. Next, each cell is multiplied by an independent Bernoulli random vector with probability \(p=0.8\), and the DAE is trained to reconstruct the original cell using an MSE loss. Furthermore, the DAE is optimized via RMSprop and weight decay regularization. The zero values in both reference and target are then removed using the trained DAE. Finally, each feature in both target and reference samples is independently standardized to have zero mean and unit variance. For our kNN-Res, we set \(\epsilon=0.05,\lambda=0.1,\sigma=0.04\), \(k=5\) for Hutchinson's estimator, and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer and use a reduce-on-plateau scheduler with a reduction factor of 0.7, and a minimum learning rate of \(5\times 10^{-5}\). Figure 6 shows the first two principal components of data before and after alignment using two kNN-Res models with different lambdas. Although the two samples appear less aligned when using a large \(\lambda\), this comes with the benefit of preserving the topology of the data, as shown by the learned transformation in figure 7 where points (cells) are moved in a coherent way.
This becomes clearer when looking at the marginals in figure 13 in the appendix. In this experiment, we trained five models with five different
Figure 7: Point set transformation(alignment) of patient #2 sample on day 1 and day 2, shown in space of 1st and 2nd principal components.
lambdas ranging from 0 to 1. It is clear that having a small \(\lambda\) favors alignment over faithfulness to the original distribution, however, increasing \(\lambda\) preserves the shape of the original data after transformation, which is desirable in biological settings. For results of other experiments see Appendix.
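A minimal sketch of the denoising autoencoder used in this zero-removal step is given below; the architecture, Bernoulli corruption, MSE loss, and RMSprop optimizer follow the description above, while the number of epochs, learning rate, and amount of weight decay are placeholder assumptions.

```
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Two fully-connected ReLU encoder layers and a single linear decoder,
    all with width equal to the data dimensionality."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        self.decoder = nn.Linear(dim, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(model, cells, epochs=50, lr=1e-3, weight_decay=1e-5, p_keep=0.8):
    opt = torch.optim.RMSprop(model.parameters(), lr=lr, weight_decay=weight_decay)
    mse = nn.MSELoss()
    for _ in range(epochs):
        mask = torch.bernoulli(torch.full_like(cells, p_keep))  # Bernoulli corruption
        loss = mse(model(cells * mask), cells)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```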
## 4 Discussion
### Implications
Point-set registration methods are typically used for problems in computer vision to align point clouds produced by either stereo vision or by Light Detection and Ranging devices (e.g. Velodyne scanner) for instance to stitch scenes and align objects. These datasets are usually of 2 or 3 dimensions and hence the methods had limited exposure to high-dimensional datasets. Biological data, on the other hand, is usually of high dimension and hence methods from point-set registration do not directly translate to biological data. The proposed method in this study was tested on a 25-dimensional CyTOF dataset. However, in flow and mass cytometry, data could easily go beyond 50 dimensions (markers). For instance, methods that combine protein marker detection with unbiased transcriptome profiling of single cells provide an even higher number of markers. These methods show that multimodal data analysis can achieve a more detailed characterization of cellular phenotypes than transcriptome measurements alone [49, 50] and hence recently gained significant traction. Unfortunately, these methods require more sophisticated batch normalization algorithms, since manual gating and normalization using marginal distributions become infeasible. It is worth mentioning that even though the experts are making sure that the marginal distributions are aligned, there is still no guarantee that the samples are aligned in the higher dimensional space. Moreover, the alignment might result in such nonlinear and non-smooth transformations that break biological relationships or introduce non-existing biological variabilities. The proposed method mitigates these issues and guarantees smooth transformations.
### Limitations
It is clear from the last step of the proof that the orthogonal Jacobian is too strong a condition for preserving the kNN graph:
\[(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u})=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}- \mathbf{u}) \tag{19}\]
The objective is satisfied by preserving inequality and not equality. In other words, it is only necessary and sufficient for \(\mathbf{J}\) to preserve the kNN graph if the following holds:
\[\mathbf{u}^{\top}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{v}\rightarrow\mathbf{ u}^{\top}\mathbf{J}^{\top}\mathbf{J}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{J} ^{\top}\mathbf{J}\mathbf{v} \tag{20}\]
or
\[\langle\mathbf{u},\mathbf{u}\rangle\leq\langle\mathbf{v},\mathbf{v}\rangle \rightarrow\langle\mathbf{J}\mathbf{u},\mathbf{J}\mathbf{u}\rangle\leq \langle\mathbf{J}\mathbf{v},\mathbf{J}\mathbf{v}\rangle \tag{21}\]
Having strict equality puts a limitation on the kind of transformations the model is capable of learning. Furthermore, even if the deformation could theoretically be expressed, such a penalty makes convergence unnecessarily slower. On the empirical side, we only have a limited number of experiments to test the proposed method. More experimentation and ablation are required to better understand the limits of our current approach and to learn how it fares on a wider selection of real-world data such as RNA-Seq.
### Future Work
An important future direction is incorporating local or partial matching using modified alignment losses such as Gromov-Wasserstein distance. This should lead to a much more robust solution than global matching, especially in the case of outliers and missing objects. We also posit that solving point set registration under topological constraints such as preserving the kNN graph is naturally extendable to dimensionality reduction.
## 5 Conclusion
This paper presents a simple, scalable framework for point cloud registration. At its core, it consists of three components, namely (a) residual neural network with identity blocks as a parametrized displacement field, (b) Jacobian penalty as a topology-preserving loss, and (c) Sinkhorn Divergence as a sample-based, geometry-aware statistical distance. Additionally, by incorporating Hutchinson's estimator for the Jacobian loss, we show that our model is easily extensible to high dimensions with constant complexity. Furthermore, we offer both qualitative and quantitative analysis for synthetic and CyTOF datasets showing the flexibility and applicability of our model in multiple domains.
| In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. As in coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible in cases such as multiple objects and articulated pose registration. To address these issues, a Jacobian-based cost function and geometry-aware statistical distances are proposed; the latter are used to measure the misalignment between the target and the reference … |
2305.19628 | On the origin of the evolution of the halo occupation distribution | We use the TNG300 magneto-hydrodynamic simulation and mock catalogues built
using subhalo abundance matching (SHAM) to study the origin of the redshift
evolution of the halo occupation distribution (HOD). We analyse stellar-mass
selected galaxy samples with fixed number densities, spanning the redshift
range $0 \le z \le 3$. We measure their halo occupation functions and fit the
HOD parameters to study their evolution over cosmic time. The TNG300 galaxy
population strongly depends on the baryonic physics implemented in the
simulation. In contrast, the galaxy population predicted by a basic SHAM model
without scatter is a direct result of the cosmology of the dark matter
simulation. We find that the HOD evolution is similar for both models and is
consistent with a previous study of the HOD evolution in semi-analytical
models. Specifically, this is the case for the ratio between the characteristic
halo masses for hosting central and satellite galaxies. The only HOD parameter
whose evolution varies across models is $\sigma_{\rm logM}$, which contains
information about the stellar mass-halo mass relation of the galaxies and does
not strongly impact galaxy clustering. We also demonstrate that the dependence
on the specific values of the cosmological parameters is small. We conclude
that the cosmology of the galaxy sample, i.e. the cosmological hierarchical
growth of structure, and not the baryonic physics prescriptions, governs the
evolution of the HOD for stellar mass-selected samples. These results have
important implications for populating simulated lightcones with galaxies and
can facilitate the interpretation of clustering data at different redshifts. | Sergio Contreras, Idit Zehavi | 2023-05-31T07:54:14 | http://arxiv.org/abs/2305.19628v2 | # On the origin of the evolution of the halo occupation distribution
###### Abstract
We use the TNG300 magneto-hydrodynamic simulation and mock catalogues built using subhalo abundance matching (SHAM) to study the origin of the redshift evolution of the halo occupation distribution (HOD). We analyse stellar-mass selected galaxy samples with fixed number densities, spanning the redshift range \(0\leq z\leq 3\). We measure their halo occupation functions and fit the HOD parameters to study their evolution over cosmic time. The TNG300 galaxy population strongly depends on the baryonic physics implemented in the simulation. In contrast, the galaxy population predicted by a basic SHAM model without scatter is a direct result of the cosmology of the dark matter simulation. We find that the HOD evolution is similar for both models and is consistent with a previous study of the HOD evolution in semi-analytical models. Specifically, this is the case for the ratio between the characteristic halo masses for hosting central and satellite galaxies. The only HOD parameter whose evolution varies across models is \(\sigma_{\rm logM}\), which contains information about the stellar mass-halo mass relation of the galaxies and does not strongly impact galaxy clustering. We also demonstrate that the dependence on the specific values of the cosmological parameters is small. We conclude that the cosmology of the galaxy sample, i.e. the cosmological hierarchical growth of structure, and not the baryonic physics prescriptions, governs the evolution of the HOD for stellar mass-selected samples. These results have important implications for populating simulated lightcones with galaxies and can facilitate the interpretation of clustering data at different redshifts.
keywords: galaxies: evolution - galaxies: formation - galaxies: haloes - galaxies: statistics - cosmology: theory - large-scale structure of universe
## 1 Introduction
In the standard picture of hierarchical structure formation, galaxies reside in dark matter haloes. The formation and evolution of the haloes are dominated by gravity, with the haloes growing by accretion and mergers. The formation of the galaxies and their relation to the dark matter haloes is more complex and depends on the detailed physical processes, leading to the varied observed galaxy properties. As the haloes merge and evolve, they will often host more than one galaxy (since galaxy merging is a slower process). The evolution of the galaxies may thus be impacted both by the baryonic physics and by the merger history of their host haloes.
One of the most useful and popular ways to characterise the dark matter halo-galaxy connection is by measuring the average number of galaxies that populate haloes as a function of halo mass, which provides the basis for the halo occupation distribution (HOD) framework (Jing et al., 1998; Benson et al., 2000; Peacock and Smith, 2000; Berlind et al., 2003; Zheng et al., 2005, 2007; Guo et al., 2015). The HOD formalism has been widely used to interpret clustering data (e.g., Zehavi et al., 2011; Guo et al., 2015), to characterise different galaxy populations (Contreras et al., 2013; Yuan et al., 2022), to create mock galaxy catalogues (e.g., Grieb et al., 2016), to examine galaxy assembly bias effects (e.g., Zehavi et al., 2018; Salcedo et al., 2022) or even constrain cosmological parameters (e.g., Cacciato et al., 2013; More et al., 2015; Zhai et al., 2019; Miyatake et al., 2022; Yuan et al., 2022; Zhai et al., 2023).
The HOD model's strengths as a technique for creating mock galaxy catalogues include its ability to reproduce realistic galaxy clustering, its flexibility, and its computational efficiency. Populating a dark matter simulation with galaxies, over large cosmological volumes, takes mere seconds and requires only the position and mass of dark matter haloes. Some HOD improvements, such as velocity bias (Guo et al., 2015) or assembly bias (Hearin et al., 2016; Xu et al., 2021), may also necessitate the simulation's dark matter particles or additional halo properties (see Yuan et al., 2022; Yuan et al., 2022 for the latest developments on HOD modelling). These requirements are significantly smaller than those of other techniques, such as subhalo abundance matching (SHAM, Vale and Ostriker, 2006; Conroy et al., 2006; Guo and White, 2014; Contreras et al., 2015; Kulier and Ostriker, 2015; Chaves-Montero et al., 2016; Lehmann et al., 2017; Contreras et al., 2021, 2021; Favole et al., 2022; Contreras et al., 2023, 20) or semi-analytical models of galaxy formation (SAMs, e.g., Kauffmann et al., 1993; Cole et al., 1994; Bower et al., 2006; Lagos et al., 2008; Benson, 2010, 2012; Jiang et al., 2014; Croton et al., 2016; Lagos et al., 2018; Stevens et al., 2018; Henriques et al., 2020), which require higher resolution simulations, subhalo property computation, and, in most cases, halo merger trees. In turn, these requirements are smaller than those of
hydrodynamic simulations, which are arguably the most advanced way we have today to model galaxies on cosmological volumes.
In hydrodynamic simulations (such as EAGLE, Schaye et al. 2015; Illustris, Vogelsberger et al. 2014b; Magneticum, Dolag et al. 2016; HorizonAGN, Dubois et al. 2014; Simba, Dave et al. 2019; IllustrisTNG, Nelson et al. 2018; Pillepich et al. 2019 and MillenniumTNG, Pakmor et al. 2022), dark matter particles are modelled alongside baryonic particles/cells. These simulations are then able to directly reproduce the interaction between dark matter and baryons and provide unique opportunities to study, in detail, the evolution of galaxies in a cosmological context. The downside of these simulations is their high computational cost, which can be an order of magnitude larger than that of dark matter-only simulations. Hence, hydrodynamic simulations and SAMs are often used to enhance the accuracy of other, more practical, approaches for modelling galaxies, such as HODs and SHAMs.
Our work follows that of Contreras et al. (2017; C17 hereafter), where we studied the evolution of the HODs of stellar mass-selected samples from two different semi-analytic models generated from the same dark matter simulation. In the SAMs, the haloes from the N-body simulations are populated with galaxies using analytical prescriptions for the baryonic processes. Following the dark matter merger trees, galaxies merge and evolve as new stars form and previous generations of stars change. The evolution of the HOD is characterised by fitting a parametric form to the HODs at different redshifts and studying the evolution of the fitting parameters. C17 present a simple evolution model for each of the HOD parameters. This evolution can be used to populate lightcones constructed from N-Body simulations (e.g., Smith et al. 2017, 2022) or for modelling clustering data at different redshifts. Although the HODs describing the two SAMs exhibit some differences, the evolution of HOD parameters followed a similar pattern. These findings may suggest that the evolution of the HOD is governed by cosmology and not galaxy formation physics.
In this paper, we explore the evolution of the HOD of stellar mass-selected samples for two distinct galaxy population models: a state-of-the-art hydrodynamical simulation, the TNG300, whose galaxy population strongly depends on the baryonic processes of the model, and a basic SHAM model without scatter. In the SHAM model, each subhalo in the simulation is assigned a stellar mass by assuming a monotonic relation to a subhalo property (\(V_{\rm peak}\) in our case), such that the subhalo with the highest value of \(V_{\rm peak}\) has the largest stellar mass and so on (see § 2.2 for more details). Since we construct our galaxy samples based on a fixed number density, the galaxy samples produced by the SHAM model do not depend on any galaxy formation physics, but rather on the simulation's cosmology. We find that the HODs evolve nearly identically in both models, indicating that the evolution is determined by the cosmological hierarchical accretion picture and not by the galaxy formation physics. Having a universal way in which the HOD parameters evolve, independent of the galaxy formation model assumed, justifies some of the ansatzes assumed today when constructing simulated lightcone galaxy catalogues.
This paper is organised as follows. The simulations and galaxy population models used in this study are described in § 2. The evolution of the HOD in each of these models is presented in § 3 and subsequently analysed in § 4. We conclude in § 5. Appendix A presents results for the evolution of the HOD in the EAGLE hydrodynamical simulation. Appendix B examines the dependence on the values of the cosmological parameters. Unless otherwise stated, the standard units in this paper are \(h^{-1}{\rm M}_{\odot}\) for masses, \(h^{-1}{\rm Mpc}\) for distances, \({\rm km/s}\) for the velocities, and all logarithm values are in base 10.
## 2 Models of Galaxy Clustering
In this section, we describe the galaxy population models employed in the construction and characterization of our galaxy samples. In § 2.1, we introduce the TNG300 cosmological magneto-hydrodynamic simulation, as well as its dark matter-only counterpart, TNG300-Dark. In § 2.2, we present the subhalo abundance matching method employed to populate the TNG300-Dark. In § 2.3, we describe briefly the halo occupation distribution framework, used to characterise the different galaxy samples. Finally, in § 2.4, we specify how we select and compare the galaxies from TNG300 and the SHAM mock.
### 2.1 The TNG300
In this work, we use galaxy samples from the Illustris-TNG300 simulation (hereafter TNG300). This simulation is part of "The Next Generation" Illustris Simulation suite of magneto-hydrodynamic cosmological simulations (IllustrisTNG; Nelson et al. 2018; Springel et al. 2018a; Marinacci et al. 2018; Pillepich et al. 2018b; Naiman et al. 2018), the successor of the original Illustris simulation (Vogelsberger et al. 2014b,a; Genel et al. 2014; Sijacki et al. 2015). TNG300 is one of the largest publicly available high-resolution hydrodynamic simulations1. The simulated volume is a periodic box of \(205\,h^{-1}{\rm Mpc}\) (\(\sim 300\) Mpc) on a side. The number of gas cells and dark matter particles is \(2500^{3}\) each, implying a baryonic mass resolution of \(7.44\times 10^{6}\,h^{-1}{\rm M}_{\odot}\) and a dark matter particle mass of \(3.98\times 10^{7}\,h^{-1}{\rm M}_{\odot}\). The cosmological parameters assumed in the simulations, \(\Omega_{\rm M}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(\sigma_{8}=0.8159\), \(n_{\rm s}=0.9667\) and \(h=0.6774\), are consistent with recent Planck values (Planck Collaboration et al. 2016).
Footnote 1: [https://www.tng-project.org/](https://www.tng-project.org/)
TNG300 was run using the AREPO code (Springel 2010) and features a number of enhancements over its predecessor, the Illustris simulation, including: an updated kinetic AGN feedback model for low accretion rates (Weinberger et al. 2017); an improved parameterization of galactic winds (Pillepich et al. 2018a); and inclusion of magnetic fields based on ideal magneto-hydrodynamics (Pakmor et al. 2011; Pakmor & Springel 2013; Pakmor et al. 2014). The free parameters of the model were calibrated to ensure that the simulation agrees with a number of observations: (i) the stellar mass function, (ii) the stellar-to-halo mass relation, (iii) the total gas mass contained within the virial radius (\(r_{500}\)) of massive groups, (iv) the stellar mass - stellar size relation and the black hole-galaxy mass relation at \(z=0\), and (v) the overall shape of the cosmic star formation rate density up to \(z\sim 10\). The TNG simulations also successfully reproduce many other observables not directly employed in the calibration process (e.g., Springel et al. 2018a; Pillepich et al. 2018b; Springel et al. 2018b; Vogelsberger et al. 2020).
To identify (sub)haloes/galaxies, a friends-of-friends group finder (FOF; Davis et al. 1985) is first used to identify the haloes (with linking length 0.2), within which gravitationally bound substructures are then located and hierarchically characterised using the SUBFIND algorithm (Springel et al. 2001). The SUBFIND catalogue contains both central and satellite subhaloes, with the position of the centrals coinciding with the FOF centres (defined as the minimum of the gravitational potential).
We use as well the dark matter-only version of TNG300, which we refer to as TNG300-Dark. This simulation has the same initial conditions, volume, cosmology, number of outputs and number of dark matter particles as its hydrodynamic counterpart, but with a mass
per dark matter particle of \(4.27\times 10^{7}\)\(h^{-1}\)M\({}_{\odot}\). We also utilize the merger trees of the simulation, run with the SUBLINK algorithm (Rodriguez-Gomez et al., 2015), to compute \(V_{\rm peak}\) for the subhaloes, needed for constructing the subhalo abundance matching catalogue.
### 2.2 The subhalo abundance matching
SubHalo Abundance Matching (SHAM; Vale and Ostriker, 2006; Conroy et al., 2006) is an empirical method for populating the subhaloes of a \(N\)-body simulation with galaxies. In its most fundamental form, SHAM assumes a monotonic mapping between the (sub)halo mass of the central galaxies and their stellar mass or luminosity. Recent implementations of SHAM incorporate scatter and satellite galaxies by utilizing subhalo properties at infall or the maximum values throughout their assembly history. These modifications are necessary in order to obtain realistic clustering predictions.
One of the main advantages of the SHAM approach is its computational efficiency and simplicity. In contrast to HOD models, which have between five and ten free parameters, standard SHAM models use a single free parameter, the scatter between the subhalo property used and the stellar mass, in the majority of their implementations. Additionally, SHAM predicts galaxy clustering in rough accordance with hydrodynamical simulations and reproduces some, but not all, of their galaxy assembly bias signal (Chaves-Montero et al., 2016; see also Contreras et al., 2021, 2022). At the same time, due to the necessity of identifying the subhaloes, the resolution of the N-body simulation required to run a SHAM is greater than that needed for an HOD, which only requires a halo catalogue. Furthermore, SHAM models typically require some subhalo properties that are not always provided by the N-Body codes and need to be computed from the subhalo merger trees, such as the peak halo mass (\(M_{\rm peak}\)), the peak maximum circular velocity (\(V_{\rm peak}\)) or the infall subhalo mass (\(M_{\rm infall}\)).
In this paper, we create SHAM mocks with the subhalo property \(V_{\rm peak}\) using the TNG300-Dark simulation. \(V_{\rm peak}\) is defined as the peak value of \(V_{\rm max}\) over the subhalo's evolution, where \(V_{\rm max}\) is the maximum circular velocity (\(V_{\rm max}\equiv\max(\sqrt{GM(<r)/r})\)). \(V_{\rm peak}\) has been widely used as the primary SHAM property and has been shown to provide a tighter relation to the stellar mass of galaxies than other properties (see also the discussion in Campbell et al., 2018). For simplicity, we do not introduce scatter between the subhalo property and the stellar mass. We use the stellar mass function of the TNG300 to assign stellar masses to the subhaloes. As we select galaxies based on number density, and use a SHAM without scatter, the choice of the stellar mass function has no impact on the results.
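As a rough illustration of this matching step, the sketch below rank-orders subhaloes by \(V_{\rm peak}\) and reads off stellar masses from a tabulated cumulative stellar mass function. It is a minimal sketch under stated assumptions: the array names, the interpolation, and the tabulated mass function are illustrative placeholders rather than the actual TNG300 data interface.

```python
import numpy as np

def sham_no_scatter(vpeak, log_mstar_grid, cum_nd_grid, box_volume):
    """Assign stellar masses to subhaloes by abundance matching on V_peak (no scatter).

    vpeak          : peak maximum circular velocity of each subhalo [km/s]
    log_mstar_grid : log10 stellar masses tabulating the cumulative mass function
    cum_nd_grid    : cumulative number density n(>M*) at those masses [h^3 Mpc^-3],
                     sorted in increasing order (i.e. decreasing stellar mass)
    box_volume     : simulation volume [h^-3 Mpc^3], e.g. 205.0**3 for TNG300-Dark
    """
    vpeak = np.asarray(vpeak, dtype=float)
    order = np.argsort(vpeak)[::-1]           # subhaloes from highest to lowest V_peak
    ranks = np.empty(vpeak.size, dtype=int)
    ranks[order] = np.arange(vpeak.size)      # rank 0 = largest V_peak
    n_of_rank = (ranks + 1) / box_volume      # cumulative abundance of each subhalo
    # invert the cumulative stellar mass function: n(>M*) -> log10 M*
    return np.interp(n_of_rank, cum_nd_grid, log_mstar_grid)
```

With this construction, a sample at a fixed number density \(n\) simply consists of the \(n\,V_{\rm box}\) subhaloes with the highest \(V_{\rm peak}\).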
We chose a SHAM without scatter created from the dark matter-only simulation since such a model is not influenced by galaxy formation physics and results purely from the input cosmology of the N-body simulation. This is in direct contrast to the case of a hydrodynamic simulation, where baryons are carefully modelled to create realistic galaxy population samples. For completeness, we also tested a SHAM model with scatter, which is in the middle of these two extremes, where the scatter is added to better mimic the properties of TNG300. However, as the results from this model were almost identical to the other two models, we chose not to include them here for the sake of clarity of presentation.
### 2.3 The halo occupation distribution
#### 2.3.1 HOD modelling
The HOD formalism describes the relationship between galaxies and haloes in terms of the probability distribution that a halo of virial mass \(M_{\rm h}\) contains \(N\) galaxies of a particular type, as well as the spatial and velocity distributions of galaxies within haloes. The fundamental component is the halo occupation function, \(\langle N(M_{\rm h})\rangle\), which represents the mean number of galaxies as a function of halo mass. This approach has the advantage of not requiring assumptions about the physical processes that drive galaxy formation and can be empirically derived from observations.
Standard applications typically assume a cosmology and a parameterized form for the halo occupation functions, which are motivated by the predictions of SAMs and hydrodynamics simulations (e.g., Zheng et al., 2005). The HOD parameters are then constrained using measurements of galaxy clustering from large surveys. This method essentially converts galaxy clustering measurements into a physical relation between the galaxies and dark matter halos, paving the way for comprehensive tests of galaxy formation models.
An important application of this approach is the generation of simulated galaxy catalogues by populating dark matter haloes in N-body simulations with galaxies that reproduce the desired clustering. This method has gained popularity due to its low computational cost and high performance (e.g., Manera et al., 2015; Zheng and Guo, 2016; Yuan et al., 2022). The halo occupation function is typically provided at a specific redshift or over a narrow redshift interval. To generate a mock galaxy catalogue over a wide range of redshifts (e.g., lightcone), an HOD model with a dependence on redshift may be needed. In C17, we presented a novel approach to describe the HOD as a function of redshift. There, we studied the HOD evolution for stellar-mass selected galaxies since \(z=3\), for two different SAMs. Even though the HODs of those two models were different, the evolution of their HODs was similar. A simplified version of our model was later used by Smith et al. (2017, 2022) to populate simulated lightcones built from N-body simulations to create more realistic galaxy catalogues.
#### 2.3.2 HOD parameterization
To parameterize the HOD, it is useful to distinguish between central galaxies, i.e. the primary galaxy at the centre of the halo, and the additional satellite galaxies, and to consider the contributions of each separately (Kravtsov et al., 2004; Zheng et al., 2005). By definition, a dark matter halo cannot contain more than one central galaxy, but there is no upper limit on the number of satellites. Furthermore, for samples defined by properties that scale with halo mass, such as stellar mass or luminosity, a halo is typically populated first by a central galaxy, followed by additional satellite galaxies (although there can be haloes populated by only satellite galaxies in a given sample; see e.g., Jimenez et al., 2019).
The traditional shape for the HOD is a smooth transition from zero to one galaxy for the central galaxies and a transition from zero to a power law for the satellites. The 5-parameter model introduced by Zheng et al. (2005) (see also Zheng et al., 2007; Zehavi et al., 2011) is one of the most widely used parameterizations as it describes well samples of galaxies brighter than a given luminosity or more massive than a given stellar mass. We use this form of the halo occupation function in this work to describe the TNG300 and the SHAM mocks.
The mean occupation function of the central galaxies is described as a step-like function with a softened cutoff profile, to account for the dispersion between the stellar mass (or luminosity) and halo mass. It
has the following form:
\[\langle N_{\rm cen}(M_{\rm h})\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{ \log M_{\rm h}-\log M_{\rm min}}{\sigma_{\log M}}\right)\right]\,, \tag{1}\]
where \(M_{\rm h}\) is the host halo mass and \({\rm erf}(x)\) is the error function,
\[{\rm erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}{\rm d}t\,. \tag{2}\]
The parameter \(M_{\rm min}\) characterizes the minimum mass for hosting a central galaxy above a given threshold, or more specifically, the halo mass at which half the haloes are occupied by a central galaxy (i.e., \(\langle N_{\rm cen}(M_{\rm min})\rangle=0.5\)). The second parameter \(\sigma_{\rm logM}\) represents the "sharpness" (width) of the transition from zero to one galaxy per halo. The value of \(\sigma_{\rm logM}\) indicates the amount of scatter between stellar mass and halo mass.
The occupation function for satellite galaxies is modelled as:
\[\langle N_{\rm sat}(M_{\rm h})\rangle=\left(\frac{M_{\rm h}-M_{\rm cut}}{M_{1} ^{*}}\right)^{\alpha}\,, \tag{3}\]
with \(M_{\rm h}>M_{\rm cut}\), representing a power-law shape with a smooth cutoff at low halo masses. In this equation, \(\alpha\) is the slope of the power law, which is typically close to one, \(M_{\rm cut}\) is the satellite cutoff mass scale (i.e., the minimum mass of haloes hosting satellites), and \(M_{1}^{*}\) is the normalisation of the power law. A related parameter, \(M_{1}=M_{\rm cut}+M_{1}^{*}\), is often used to characterise the halo mass scale for hosting satellite galaxies above a given threshold. Specifically, it measures the average halo mass for hosting one satellite galaxy (i.e., \(\langle N_{\rm sat}(M_{1})\rangle=1\)). In what follows, for clarity, we show results for \(M_{1}\). Nonetheless, \(M_{1}^{*}\gg M_{\rm cut}\), so \(M_{1}\sim M_{1}^{*}\). We have verified that all trends identified for \(M_{1}\) also hold for \(M_{1}^{*}\).
The occupation functions for centrals and satellites can be fitted independently with this definition, with the total number of galaxies given by their sum:
\[\langle N_{\rm gal}(M_{\rm h})\rangle=\langle N_{\rm cen}(M_{\rm h})\rangle+ \langle N_{\rm sat}(M_{\rm h})\rangle. \tag{4}\]
Figure 1 depicts a schematic representation of the shape of the HOD illustrating which features are sensitive to these five parameters.
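For reference, Eqs. 1-4 translate directly into code; the following is a minimal sketch (working in \(\log_{10}\) halo masses), not the fitting code used for the results in this paper.

```python
import numpy as np
from scipy.special import erf

def mean_ncen(log_mh, log_mmin, sigma_logm):
    """Mean central occupation, Eq. (1): softened step function."""
    return 0.5 * (1.0 + erf((log_mh - log_mmin) / sigma_logm))

def mean_nsat(log_mh, log_m1_star, log_mcut, alpha):
    """Mean satellite occupation, Eq. (3); zero below the cutoff mass M_cut."""
    mh = 10.0 ** np.asarray(log_mh, dtype=float)
    excess = np.clip(mh - 10.0 ** log_mcut, 0.0, None)
    return (excess / 10.0 ** log_m1_star) ** alpha

def mean_ngal(log_mh, log_mmin, sigma_logm, log_m1_star, log_mcut, alpha):
    """Total mean occupation, Eq. (4): centrals plus satellites."""
    return (mean_ncen(log_mh, log_mmin, sigma_logm)
            + mean_nsat(log_mh, log_m1_star, log_mcut, alpha))
```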
We note that often a variant of these expressions is used, such that the cutoff profile for the central galaxies occupation is applied also to the satellite occupation, assuming (using our notation) that the total number of galaxies is given by \(\langle N_{\rm cen}\rangle(1+\langle N_{\rm sat}\rangle)\). In that case, the fitting of the HOD cannot be done separately for centrals and satellites (because of the \(\langle N_{\rm cen}\rangle\langle N_{\rm sat}\rangle\) term). Hence, assuming this form results in a more complex procedure to determine the best-fitting values of the parameters and ultimately gives poorer constraints, particularly for \(M_{\rm cut}\). Furthermore, Jimenez et al. (2019) show that satellite galaxies from a stellar mass-selected sample sometimes populate haloes whose central galaxies are not part of that sample. Assuming this form does not allow one to account for such cases, and thus might bias the results. For these reasons, we choose to proceed with the formalism as detailed above in Eqs. 2-4. We caution that one must be careful when comparing results obtained with different notations.
### 2.4 Galaxy samples
We study stellar-mass selected galaxy samples corresponding to four different number densities and seven different redshifts. To construct these samples, we choose, at each epoch, the most massive galaxies corresponding to the following number densities: 0.0316, 0.01, 0.00316 and 0.001 \(h^{3}\)Mpc\({}^{-3}\).
Figure 1: A schematic depiction of the standard 5-parameter form of the halo occupation function, which gives the mean number of galaxies per halo as a function of the host halo mass (based on Fig. 1 of C17). The solid blue line represents the occupation function for all galaxies, which can be further separated into the contributions of central galaxies (red dashed line) and satellite galaxies (red dotted line). As a reference we show an abundance of \(\langle N_{\rm gal}(M_{h})\rangle=1\) as a horizontal grey dotted line; this will be shown in all subsequent HOD plots. The halo occupation function of central galaxies exhibits a gradual transition from zero to one galaxy per halo, which is well described by two parameters: \(M_{\rm min}\), the halo mass at which half of the haloes are populated by a central galaxy, and \(\sigma_{\rm logM}\), which characterises the smoothness of this transition. The satellites occupation function is described by a transition from zero galaxies to a power law with three parameters: \(M_{\rm cut}\), the minimum halo mass for hosting satellites, \(M_{1}\), the mass at which there is, on average, one satellite galaxy per halo, and the power-law slope \(\alpha\). See text for more details and the explicit equations.
Figure 2: Cumulative stellar mass functions for galaxies in the TNG300 simulation. The coloured lines represent different redshifts as labelled. The dashed horizontal lines denote the number density samples adopted in this work (the values are marked at the upper right of each line). The galaxies selected for a given number density and redshift are those to the right of the intersection with their associated dashed line.
At \(z=0\), these correspond to stellar mass thresholds of \(6.05\times 10^{8}\), \(6.47\times 10^{9}\), \(2.19\times 10^{10}\) and \(4.54\times 10^{10}\)\(h^{-1}\)M\({}_{\odot}\), respectively. The stellar mass of a galaxy in the TNG300 is defined as the sum of the masses of all stellar particles within twice the half-mass radius. We remind the reader that, since we are using the same stellar mass function for both the hydrodynamical and SHAM models, they will share the same cut for each number density.
Fig. 2 shows the cumulative stellar mass functions for the 7 redshifts used in this work, \(z=0,\ 0.5,\ 1.0,\ 1.5,\ 2.0,\ 2.5\ \&\ 3.0\). The horizontal dashed lines correspond to the four different number densities. The galaxies included in each sample are the ones to the right (more massive) of the intersection of these horizontal lines and the cumulative stellar mass functions. We chose these cuts to facilitate the comparison with C17, where we analyzed galaxies from semi-analytical models selected in a similar fashion.
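Selecting a sample at a fixed number density then amounts to keeping the \(n\,V_{\rm box}\) most massive galaxies at each snapshot; a minimal sketch of this step, with an illustrative stellar mass array, is shown below (the default box side of 205 \(h^{-1}\)Mpc corresponds to TNG300).

```python
import numpy as np

def select_by_number_density(mstar, number_density, box_side=205.0):
    """Indices of the n*V most massive galaxies at one snapshot.

    mstar          : stellar masses of all galaxies [h^-1 Msun]
    number_density : target number density [h^3 Mpc^-3]
    box_side       : box side length [h^-1 Mpc]; 205 for TNG300
    """
    n_keep = int(round(number_density * box_side ** 3))
    return np.argsort(mstar)[::-1][:n_keep]
```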
In order to facilitate the comparison of the HOD evolution for the different models, it is also necessary to correct the halo mass function of TNG300. Due to baryonic effects, the TNG300's halo mass function is not identical to that of TNG300-Dark, on which the SHAM mock was run. The cumulative halo mass functions for these two simulations are shown in Fig. 3, for the different redshifts. To facilitate the comparison, we recalibrate the halo mass function of TNG300 to match that of the dark matter-only simulation. This is done by measuring the difference in halo mass between the simulations for each cumulative abundance, and then applying this difference to the TNG300's haloes. This step is particularly helpful for interpreting the evolution of the HOD parameters that represent masses (such as \(M_{\rm min}\), \(M_{1}\), and \(M_{\rm cut}\)), given that the differences between the halo mass functions are not constant with redshift. All TNG300 results presented in this paper incorporate this correction.
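This recalibration is itself an abundance-matching step between the two halo mass functions. A schematic version, assuming hypothetical arrays of \(\log_{10}M_{\rm 200,crit}\) for all haloes in each (equal-volume) box, could look like the sketch below; it assigns to each TNG300 halo the TNG300-Dark mass at the same cumulative abundance, which is equivalent to applying the mass offset measured at that abundance.

```python
import numpy as np

def recalibrate_halo_masses(log_m_hydro, log_m_dmo):
    """Map TNG300 halo masses onto TNG300-Dark masses at fixed cumulative abundance."""
    log_m_hydro = np.asarray(log_m_hydro, dtype=float)
    idx_desc = np.argsort(log_m_hydro)[::-1]           # hydro haloes, most massive first
    dmo_sorted = np.sort(np.asarray(log_m_dmo, dtype=float))[::-1]
    ranks = np.arange(1, idx_desc.size + 1)            # cumulative rank of each hydro halo
    dmo_ranks = np.arange(1, dmo_sorted.size + 1)
    matched = np.interp(ranks, dmo_ranks, dmo_sorted)  # DMO mass at the same abundance
    out = np.empty_like(log_m_hydro)
    out[idx_desc] = matched
    return out
```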
Figure 4: The HODs in the TNG300 simulation (top panel) and for a mock galaxy sample run with a SHAM model (middle panel), for stellar-mass selected galaxies corresponding to a number density of 0.0316 h\({}^{3}\)Mpc\({}^{-3}\). The different lines correspond to different redshifts spanning z = 0 to 3, as labelled. To facilitate the comparison between the models, we show in the bottom panel the HODs for both the TNG300 and the SHAM mock at z = 0, 1, 2 and 3.
Figure 3: Cumulative halo mass functions for TNG300 (solid lines) and TNG300-Dark (dashed lines). The coloured lines correspond to different redshifts as labelled. Halo mass is defined as the total mass enclosed in a sphere whose mean density is 200 times the critical density of the Universe (also known as M\({}_{\rm 200,crit}\)). To compare the two samples, we calibrate the halo masses of the TNG300 by matching the halo mass functions (see § 2.4 for more details).
## 3 The evolution of the HOD
We compute the halo occupation functions in the TNG300 simulation and for the SHAM model for the four number density samples previously stated and at the seven redshift outputs between z=0 and z=3. Please note that we are here directly measuring the average halo occupation as a function of halo mass, as opposed to inferring it from the correlation function, as is typical in galaxy clustering studies.
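Measured this way, the occupation function is simply the per-halo galaxy count averaged in bins of halo mass. The sketch below illustrates the idea; the arrays of halo masses, halo IDs, host IDs of the selected galaxies, and the central/satellite flag are illustrative placeholders for the actual catalogue columns.

```python
import numpy as np

def measure_hod(halo_log_mass, halo_ids, gal_host_ids, gal_is_central, bin_edges):
    """Mean central and satellite occupations as a function of log10 halo mass."""
    counts_cen = np.zeros(len(halo_ids))
    counts_sat = np.zeros(len(halo_ids))
    index = {hid: i for i, hid in enumerate(halo_ids)}
    for hid, is_cen in zip(gal_host_ids, gal_is_central):
        if is_cen:
            counts_cen[index[hid]] += 1
        else:
            counts_sat[index[hid]] += 1
    which = np.digitize(halo_log_mass, bin_edges) - 1
    ncen, nsat = [], []
    for b in range(len(bin_edges) - 1):
        sel = which == b
        ncen.append(counts_cen[sel].mean() if sel.any() else np.nan)
        nsat.append(counts_sat[sel].mean() if sel.any() else np.nan)
    return np.array(ncen), np.array(nsat)
```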
In Fig. 4, we show the HODs for the galaxy samples with a number density of \(n=0.0316\)\(h^{3}\)Mpc\({}^{-3}\) at the seven redshift outputs between \(z=0\) and \(z=3\). The top and middle panels show the HODs for the TNG300 and the SHAM model, respectively, while the bottom panel compares the HODs of both models for a smaller set of redshifts. The evolution of the HOD in both models appears similar. The overall trend is a shift of the halo occupation function toward larger halo masses with decreasing redshift (increase in time). In more detail, for both models, the threshold for hosting central galaxies (at the lower occupation region at low halo masses), increases monotonically with time from \(z=3\) to \(z\sim 1\). From that point until \(z=0\), the shift in halo mass diminishes with the threshold for starting to host galaxies remaining similar. In contrast, the satellite occupation appears to continuously shift with decreasing redshift. These results are consistent with the findings of C17 in semi-analytic galaxy formation models.
To gain a better understanding of the evolution of HODs, we fit the halo occupation functions using the 5-parameter functional form described in § 2.3.2 and analyse the evolution of those parameters. We fit the central and satellite occupations independently, and assume a constant error per bin for the fitting. In previous works (e.g., Contreras et al., 2013, C17), we tested using different weights on each of the points of the HOD (such as weighting by the abundance of haloes or the effective bias in each mass bin), finding that this tends to overemphasize a specific part of the HOD, resulting in significant discrepancies at high halo masses. We estimate the error on the HOD fitting parameters by normalizing the constant bin errors such that the best fit has a reduced chi-square of one (i.e., \(\chi^{2}_{\rm min}\)/d.o.f = 1).
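A stripped-down version of this fitting step for the central occupation is sketched below: constant per-bin errors are assumed, and the parameter covariance is rescaled so that the reduced chi-square of the best fit is one, which matches the default rescaling behaviour of `scipy.optimize.curve_fit`. The measured arrays and the starting values are placeholders; the satellite occupation of Eq. (3) can be fitted in the same way.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cen_model(log_mh, log_mmin, sigma_logm):
    # Eq. (1): softened step function for the central occupation
    return 0.5 * (1.0 + erf((log_mh - log_mmin) / sigma_logm))

def fit_centrals(log_mh_bins, measured_ncen, p0=(12.0, 0.3)):
    """Fit (log M_min, sigma_logM) with implicit constant errors per bin."""
    popt, pcov = curve_fit(cen_model, log_mh_bins, measured_ncen, p0=p0)
    perr = np.sqrt(np.diag(pcov))   # scaled such that chi^2_min / d.o.f. = 1
    return popt, perr
```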
Fig. 5 presents the values of the best-fitting parameters of the HOD, \(M_{\rm min}\), \(M_{1}\), \(\sigma_{\rm logM}\), \(\alpha\) and \(M_{\rm cut}\), as a function of redshift. The solid lines show the values for the TNG300 while the SHAM predictions are shown as dashed lines. The different colours represent different number densities as labelled. We do not show the prediction for the lowest number density for the satellite HOD parameters, since the number of haloes with satellite galaxies at high redshift was too low to do a proper fit.
While there are some differences between the values of the parameters for the TNG300 and the SHAM at different redshifts, overall there is a good agreement between the models for all but one parameter, \(\sigma_{\rm logM}\). While the SHAM technique is known for being able to reproduce the galaxy clustering (and therefore, the HOD) of complex galaxy samples such as those of a hydrodynamic simulation (e.g., Chaves-Montero et al., 2016; Contreras et al., 2021c), it is surprising that even in its most basic form (without scatter), the model is in good agreement with the TNG300 predictions. We remind the reader that the SHAM model does not incorporate any baryonic processes and that the properties of the resulting galaxy population depend solely on the gravitational growth of the subhaloes in the simulation. This in turn depends on the cosmological model corresponding to the dark matter-only simulation used.
Figure 5: Redshift evolution of the 5 fitting parameters of the HOD, corresponding to the TNG300 simulation (solid lines) and the SHAM mock (dashed lines). From top to bottom, the properties shown in each panel are \(M_{\rm min}\), \(M_{1}\), \(\sigma_{\rm logM}\), \(\alpha\) and \(M_{\rm cut}\). Different colours represent different number density samples, as labelled. For the lowest number density, \(n=0.001\)\(h^{3}\)Mpc\({}^{-3}\), we only show the parameters corresponding to the centrals occupation (\(M_{\rm min}\) and \(\sigma_{\rm logM}\)), since the constraints on the satellites occupation parameters are poor at higher redshifts (due to the limited amount of haloes with satellite galaxies). Error bars represent the standard deviation from the fitted parameter value.

The four parameters other than \(\sigma_{\rm logM}\) evolve as follows:

* The characteristic halo mass for hosting a central galaxy, \(M_{\rm min}\), increases linearly (in logarithmic scale) from \(z=3\) to \(z\sim 1-0.5\) and then remains constant until \(z=0\).
* The characteristic halo mass for hosting a satellite galaxy, \(M_{1}\), increases linearly (in logarithmic scale) from \(z=3\) to \(z=0\).
* The power-law slope of the satellites occupation, \(\alpha\), is roughly constant from \(z=3\) to \(z\sim 1-0.5\), and then increases linearly until \(z=0\). There are some differences in the behaviour of \(\alpha\) in the TNG300 and the SHAM, which are however not that significant given the level of noise in this parameter.
* The satellites cutoff mass scale, namely the minimum mass of haloes hosting satellites, \(M_{\rm cut}\), increases linearly (in logarithmic scale) from \(z=3\) to \(z\sim 1-0.5\), and then stays constant until \(z=0\) (the same as \(M_{\rm min}\)).
Again, for these HOD parameters, the halo occupations in the TNG300 hydrodynamic simulation and in the SHAM implementation exhibit the same evolution trends. This is the main result of this work, indicating that the evolution of these parameters is the same independent of the galaxy formation physics. These trends agree with those found by C17 studying the evolution of HOD parameters in two distinct SAMs.
To further assess the robustness of our results, we also examine the evolution of these parameters in the EAGLE hydrodynamical simulation (Schaye et al., 2015; Crain et al., 2015), as presented in Appendix A. EAGLE has a smaller volume but a higher resolution than the TNG300, and it was executed with an SPH code rather than a moving-mesh code like the one used in the TNG300. We find the same evolutionary trends as the ones observed for the TNG300, the SHAM model, and the SAMs. We have additionally studied the evolution of the HOD in TNG300 when selecting galaxies by \(r\)-band magnitude instead of stellar mass, again finding similar evolutionary trends. We refer the reader to § 3.4 of C17 for a more in-depth analysis of the evolution of these parameters and a simple parameterization of the evolution of the HOD parameters that can be used in the construction of galaxy samples or the interpretation of clustering data at different redshifts.
As for \(\sigma_{\rm logM}\), this property measures the scatter between the halo mass and stellar mass of a galaxy sample. The prediction of a SHAM without scatter will only measure the dispersion between the halo mass and \(V_{\rm peak}\), which is significantly smaller than the expected scatter between stellar mass and halo mass of a fully physical galaxy formation model. As concluded from previous work (e.g., C17), this parameter should be the one that best captures the physics of galaxy formation of a galaxy sample. However, as demonstrated for example in Zehavi et al. (2011), this parameter, along with \(M_{\rm cut}\), is the most weakly constrained by galaxy clustering. Hence, it is not required to model \(\sigma_{\rm logM}\) perfectly when creating mocks that attempt to reproduce realistic galaxy clustering. Nonetheless, the values appear relatively constant with redshift, which makes sense given that we do not anticipate significant evolution of the stellar mass-halo mass relation. This is one of the foundations of the SHAM model (see Contreras et al., 2015 for further discussion).
## 4 Origin of the HOD evolution
In § 3 we studied the evolution of the HOD in the TNG300 hydrodynamical simulation and in a SHAM applied to the dark matter-only simulation, finding that the evolution of the HOD parameters is largely the same. Since no galaxy formation physics is included in our SHAM implementation and it lacks any free parameter that attempts to reproduce the impact of baryonic physics (such as a scatter in the \(V_{\rm peak}\)-stellar mass relation, modifying the dynamical friction model, etc.), it appears that the evolution is independent of galaxy formation physics. This is further corroborated by the overall agreement with results from the EAGLE hydrodynamical simulation (Appendix A) and SAMs applied to the Millennium Simulation (C17). This leads us to conclude that the evolution of the HOD is instead dominated by the cosmological model and the hierarchical growth of structure.
We would still like to discern which aspect of the cosmological picture shapes the evolution of the HOD parameters. One possibility is that, at least for the parameters that represent halo masses (such as \(M_{\rm min}\), \(M_{1}\), and \(M_{\rm cut}\)), the evolution arises from the typical growth of haloes. To determine this, we examined the evolution of these parameters as peak height (\(\nu(M,z)=\delta_{\rm c}/\sigma(M,z)\)) values rather than halo masses (not shown here). We found that the changes in peak height were greater than when expressing these parameters in mass units, indicating that the evolution is not (solely) due to how structures grow.
Another factor that can potentially influence how HODs evolve is the values of the cosmological parameters. This is a plausible explanation of the agreement since the TNG300-Dark simulation (on which we run the SHAM mock) and the TNG300 simulation share the same cosmology. Moreover, the growth of dark matter haloes is affected by cosmology. A strong dependence of the HOD evolution on any cosmological parameter could imply that we could constrain cosmology based on the HOD evolution we infer from galaxy surveys. However, when examining the evolution of the HOD in SHAM mocks built on simulations with different cosmologies, we find only small changes in the evolution of the parameters. The details of this analysis are presented in Appendix B for eight cosmological parameters. This indicates that the specific flavour of the cosmological model also does not strongly influence the evolution of the HOD.
Since the details of the cosmological model do not have a significant impact on how the HOD evolves, we deduce that this evolution is governed by the hierarchical way in which haloes and subhalos (and therefore galaxies) form and evolve in the \(\Lambda\)CDM model. This becomes more apparent when we examine the evolution of the ratio of the two halo mass parameters \(M_{1}\) and \(M_{\rm min}\), which is frequently
employed to characterise a galaxy population (e.g., Zehavi et al., 2011; Coupon et al., 2012; Guo & White, 2014; Skibba et al., 2015). This ratio represents the mass range over which a halo hosts only a central galaxy from the sample before hosting additional satellite galaxies, giving rise to the "plateau" in the halo occupation function (Fig. 1; see also Watson et al., 2011). Fig. 6 shows the evolution of this ratio for the TNG300 and our SHAM model, where the value of \(M_{1}/M_{\rm min}\) roughly plateaus at high redshift and then increases with time, toward the present.

Figure 6: Redshift evolution of the ratio of the two characteristic halo mass parameters of the HOD, \(M_{1}\) and \(M_{\rm min}\). The predictions from the TNG300 simulation are shown as solid lines, while the dashed lines denote the results from the SHAM mock. The different colours represent different number densities as labelled.
C17 explored the change in this ratio when assuming alternative "non-hierarchical" evolution models for the galaxies. The models tested were a passive evolution model (e.g., Seo et al., 2008), where galaxies no longer form or merge, a tracking evolution, in which galaxies no longer form but continue to merge, and a descendant clustering selection (Padilla et al., 2010) where galaxies are selected based on the evolution of halo clustering (see Fig. 11 in C17 and discussion thereof). All these models show significantly different evolution for \(M_{1}/M_{\rm min}\), with the ratio decreasing toward lower redshifts, in contrast to the evolution found in our SHAM mocks and the TNG300. The Guo et al. (2013) SAM used in C17 also exhibits the same type of evolution as the models presented in this work. 2
Footnote 2: We note that the Gonzalez-Perez et al. (2014) SAM additionally used in C17 showed some variation in the evolution of \(M_{1}/M_{\rm min}\). This is likely due to the different (sub)halo identification algorithms it employs, compared to Guo et al. (2013), TNG300 and TNG300-Dark, which use the FOF and SUBFIND algorithms (see § 2.1 for more details).
We conclude that the evolution of the HOD is independent of galaxy formation physics, or the specifics of the cosmological model. Any galaxy population that grows hierarchically, such as stellar mass selected galaxies, in a \(\Lambda\)CDM (or similar) framework should exhibit similar evolutionary trends to the ones found in this work.
## 5 Conclusion
In this paper, we look at the evolution of the HOD of stellar mass-selected galaxies from two different models: a magneto-hydrodynamic simulation, the TNG300, and a SHAM mock built from the dark matter-only simulation without any baryonic physics implemented. We characterise the cosmic evolution by fitting the HODs at different redshifts with the standard 5-parameter parametric form (Zheng et al., 2005). Our main findings are as follows:
* The HODs for the TNG300 and the SHAM models are similar at all redshifts and number densities, exhibiting a similar evolution of the halo mass parameters. The one standout is \(\sigma_{\rm logM}\), capturing the width of the transition from zero to one galaxy per halo, which varies between the models.
* The values of \(\sigma_{\rm logM}\) are different for the TNG300 and the SHAM model. This parameter is related to the scatter between halo mass and stellar mass and expected to be dependent on the galaxy formation physics model. At the same time, this parameter has little effect on galaxy clustering and thus it is not always essential to define its value or its evolution with high precision.
* The evolution of the HOD is also similar to that measured in the EAGLE hydrodynamical simulation, and for a M\({}_{\rm r}\) magnitude limited sample in the TNG300 simulation. The evolution of the parameters (other than \(\sigma_{\rm logM}\)) is also similar to that of semi-analytical models of galaxy formation, as explored in C17.
* The evolution of the HOD is largely insensitive to variations of the cosmological parameters, with only \(\sigma_{8}\) and \(w_{0}\) somewhat impacting the shape.
* The values and evolution of the \(M_{1}/M_{\rm min}\) ratio are similar for the TNG300 and the SHAM model. They are also in agreement with the ones found by C17 when analysing a SAM with the same (sub)halo identification algorithm, but different to that found when assuming alternative galaxy evolution models (such as passive evolution).
Based on these results, it appears that the physics of galaxy formation has little impact on the evolution of the HOD for stellar mass-selected samples. Given that the HOD and galaxy clustering of a SHAM model without scatter or any free parameter only depend on the cosmological model assumed in the dark matter simulation on which it is based, we can conclude that the cosmological framework dominates the HOD evolution for this type of galaxies. By cosmological framework here we specifically refer to the hierarchical building of haloes and galaxies, as we have also demonstrated that the values of the cosmological parameters have little impact on the HOD evolution.
The way the HOD parameters evolve in the SHAM model is a strong indication of how consistent and good the model is when populating galaxies at different redshifts, and the potential it has for creating mock galaxy catalogues (given sufficient resolution to follow the subhaloes). Furthermore, our results provide an important simplification to the process of creating mock galaxy catalogues over a large redshift range. They lend significant support for some of the ansatzes accepted today when generating mock galaxy catalogues on simulated lightcones, namely that the HOD evolution model is robust and needn't change based on the assumed galaxy formation model. This robustness, in turn, can facilitate the HOD interpretation of clustering measurements at different redshifts from upcoming large galaxy surveys.
We clarify that the results presented here and subsequent conclusions have only been investigated for galaxy samples selected by stellar mass (and luminosity), which grow hierarchically. The HOD of galaxies selected, for example, by star formation rate may not follow the same pattern. We note that the extension of our work to such selections is not trivial, as it requires a somewhat more complex HOD model as well as a non-trivial extension of the SHAM methodology (S. Ortega Martinez, in prep.), and we reserve this for future work.
## Data Availability
The IllustrisTNG simulations, including TNG300, are publicly available and accessible at www.tng-project.org/data (Nelson et al., 2019). The data underlying this article will be shared on reasonable request to the corresponding author.
## Acknowledgements
We thank Nelson Padilla, Celeste Artale, Carlton Baugh, Peder Norberg, Shaun Cole and Alex Smith for useful comments and discussions. We acknowledge the hospitality of the ICC at Durham University and the helpful conversations with many of its members. SC acknowledges the support of the "Juan de la Cierva Incorporacion" fellowship (IJC2020-045705-I). IZ was partially supported by a CWRU ACES+ Opportunity Grant. The authors also acknowledge the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (RES-AECT-2020-3-0014) | ```
TNG300 magneto-hydrodynamicシミュレーションとSHAMを用いたmockカタログを用いて、ハロのoccuppation分布の redshift演化の起源を研究します。私たちは、 redshift範囲 0 ≤ z ≤ 3 にわたる固有の質量選択銀河サンプルを分析し、そのハロoccuppation functionを測定しました。これらのハロoccuppation functionを HOD パラメータにフィットさせ、宇宙の時間経過における進化を研究しました。TNG300 の銀河系集団は、シミュレーションに実装されたbaryonic physics に強く依存しています。対照的に、散乱がない基本的な SHAM モデルによって予測される銀河系集団は、ダークマターシミュレーションの kosmology に直接帰属しています。両モデルの HOD 進化は類似しており、半解析的モデルにおける HOD 進化の研究と一致しています。特に、中心銀河とsatellit銀河の宿主質 |
2309.16816 | PROSE: Predicting Operators and Symbolic Expressions using Multimodal
Transformers | Approximating nonlinear differential equations using a neural network
provides a robust and efficient tool for various scientific computing tasks,
including real-time predictions, inverse problems, optimal controls, and
surrogate modeling. Previous works have focused on embedding dynamical systems
into networks through two approaches: learning a single solution operator
(i.e., the mapping from input parametrized functions to solutions) or learning
the governing system of equations (i.e., the constitutive model relative to the
state variables). Both of these approaches yield different representations for
the same underlying data or function. Additionally, observing that families of
differential equations often share key characteristics, we seek one network
representation across a wide range of equations. Our method, called Predicting
Operators and Symbolic Expressions (PROSE), learns maps from multimodal inputs
to multimodal outputs, capable of generating both numerical predictions and
mathematical equations. By using a transformer structure and a feature fusion
approach, our network can simultaneously embed sets of solution operators for
various parametric differential equations using a single trained network.
Detailed experiments demonstrate that the network benefits from its multimodal
nature, resulting in improved prediction accuracy and better generalization.
The network is shown to be able to handle noise in the data and errors in the
symbolic representation, including noisy numerical values, model
misspecification, and erroneous addition or deletion of terms. PROSE provides a
new neural network framework for differential equations which allows for more
flexibility and generality in learning operators and governing equations from
data. | Yuxuan Liu, Zecheng Zhang, Hayden Schaeffer | 2023-09-28T19:46:07 | http://arxiv.org/abs/2309.16816v1 | # PROSE: Predicting Operators and Symbolic Expressions using Multimodal Transformers
###### Abstract
Approximating nonlinear differential equations using a neural network provides a robust and efficient tool for various scientific computing tasks, including real-time predictions, inverse problems, optimal controls, and surrogate modeling. Previous works have focused on embedding dynamical systems into networks through two approaches: learning a single solution operator (i.e., the mapping from input parametrized functions to solutions) or learning the governing system of equations (i.e., the constitutive model relative to the state variables). Both of these approaches yield different representations for the same underlying data or function. Additionally, observing that families of differential equations often share key characteristics, we seek one network representation across a wide range of equations. Our method, called **P**redicting **O**perators and **S**ymbolic **E**xpressions (PROSE), learns maps from multimodal inputs to multimodal outputs, capable of generating both numerical predictions and mathematical equations. By using a transformer structure and a feature fusion approach, our network can simultaneously embed sets of solution operators for various parametric differential equations using a single trained network. Detailed experiments demonstrate that the network benefits from its multimodal nature, resulting in improved prediction accuracy and better generalization. The network is shown to be able to handle noise in the data and errors in the symbolic representation, including noisy numerical values, model misspecification, and erroneous addition or deletion of terms. PROSE provides a new neural network framework for differential equations which allows for more flexibility and generality in learning operators and governing equations from data.
## 1 Introduction
Differential equations are important tools for understanding and studying nonlinear physical phenomena and time-series dynamics. They are necessary for a multitude of modern scientific and engineering applications, including stability analysis, state variable prediction, structural optimization, and design. Consider parametric ordinary differential equations (ODEs), i.e. differential equations whose initial conditions and coefficients are parameterized by functions with inputs from some distribution. We can denote the system by \(\frac{d\boldsymbol{u}}{dt}=f\left(\boldsymbol{u};a_{s}(t)\right)\), where \(\boldsymbol{u}(t)\in\mathbb{R}^{d}\) are states, and \(a_{s}(t)\) is the parametric function with input parameter \(s\). For example, \(a_{s}(t)\) could be an additive forcing term where \(s\) follows a normal distribution. The goal of computational methods for parametric ODEs is to evaluate the solution given a new parametric function, often with the need to generalize to larger parameter distributions, i.e. out-of-distribution predictions.
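As a toy instance of such a parametric family, one could take a damped oscillator driven by a parametrized forcing \(a_{s}(t)=s\sin t\), with \(s\) drawn from a normal distribution, and tabulate solutions at query times. The sketch below is purely illustrative: the specific equation, parameter distribution, and numbers are assumptions chosen for the example, not part of the PROSE setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

def forcing(t, s):
    return s * np.sin(t)                      # parametric input function a_s(t)

def rhs(t, u, s):
    # du/dt = f(u; a_s(t)) for an illustrative damped, forced oscillator
    x, v = u
    return [v, -0.5 * v - x + forcing(t, s)]

def sample_solutions(n_samples=64, seed=0):
    t_eval = np.linspace(0.0, 10.0, 101)      # query locations in time
    rng = np.random.default_rng(seed)
    s_values = rng.normal(loc=1.0, scale=0.25, size=n_samples)   # parameter distribution
    solutions = []
    for s in s_values:
        sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0=[1.0, 0.0],
                        t_eval=t_eval, args=(s,))
        solutions.append(sol.y.T)             # (len(t_eval), 2): states at query times
    return s_values, np.stack(solutions)
```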
Recently, _operator learning_ has been used to encode the operator that maps input functions \(a_{s}(-)\) to the solution \(\boldsymbol{u}(-;a_{s}(-))\) through a deep network, whose evaluation is more cost-efficient than fully simulating the differential equations [11, 33, 35, 40, 68]. An advantage of operator
learning compared to conventional networks is that the resulting approximation captures the mapping between functions, rather than being limited to fixed-size vectors. This flexibility enables a broader range of downstream tasks to be undertaken, especially in multi-query settings. However, operator learning is limited to training solutions for an individual differential equation. In particular, current operator learning methods do not benefit from observations of similar systems and, once trained, do not generalize to new differential equations.
**Problem Statement.** We consider the problem of encoding multiple ODEs and parametric functions, for use in generalized prediction and model discovery. Specifically, we are given \(N\) ODEs \(f_{j}\), and parametric functions \(a_{s}^{j}(t)\), with the goal of constructing a single neural network to both identify the system and the operator from parametric functions \(a_{s}^{j}(-)\) to solutions. Consider a family of differential equations indexed by \(j=1,\cdots,N\), with the form \(\frac{d\mathbf{u}}{dt}=f_{j}\left(\mathbf{u};a_{s}^{j}(t)\right)\), where the solutions are denoted by \(\mathbf{u}_{j}(-;a_{s}^{j}(-))\). The solution operator \(G^{j}\) encodes the solution's dependence on \(a_{s}^{j}\) and corresponds to the \(j^{\text{th}}\) ODE. When utilizing standard operator learning, it becomes necessary to train separate deep networks for each of the \(N\) equations. That approach can quickly become impractical and inefficient, especially in the context of most nonlinear scientific problems.
This work introduces a multimodal framework for simultaneously encoding multiple operators for use in predicting states at query locations and discovering the governing model that represents the equations of motion describing the data. For data prediction, a novel transformer-based approach which we call _multi-operator learning_ is employed. This entails training the network to learn the solution operator across a set of distinct parametric dynamical systems. In other words, the network learns a single operator \(\bar{G}\) that represents the family of mappings \(\left\{G^{1},\cdots,G^{N}\right\}\) by leveraging shared characteristics among their features. This should also allow the network to predict new operators that share commonalities with those from the family of operators used in training, i.e. generalize to new operators. During testing or prediction, the governing equations (i.e. the mathematical equations defining the dynamics of dependent variables for a given data sequence) are not known, so the algorithm also produces a symbolic expression using a generative model. In other words, the network learns a syntax for representing and articulating differential equations. In this way, the approach yields a network capable of evaluating dependent variables at query locations over wide parameter sets and also "writes" the mathematical differential
equation associated to the data. This can be viewed as a large language model for differential equations.

Figure 1: **PROSE network illustration. The inputs and outputs (predictions) are multi-modal, each including numerical values (data) and symbolic expressions (governing equations). Here we include just the third term in the governing equations for simpler visualization.**
**Main Contributions.** The Predicting Operators and Symbolic Expressions (PROSE) framework introduces a new approach to learning differential equations from data. The key components of the architecture are illustrated in Figure 1. The main contributions and novelty are summarized below.
* PROSE is the first method to generate both the governing system and an operator network from multiple distinct ODEs. It is one of the first multi-operator learning approaches.
* PROSE incorporates a new modality through a fusion structure. Unlike text modality or labels, the symbolic expression can accurately generate the system solution.
* The network architecture introduces new structural elements, including a fusion transformer that connects the data and embedded symbols.
* We demonstrate accuracy in generating valid ODEs (validity of \(>\!99.9\%\) on in-distribution tests and \(>\!97.89\%\) on out-of-distribution predictions), showing that PROSE can generate new ODEs from data.
## 2 Related Works
PROSE is both a multi-operator learning and a model discovery approach. We summarize these two distinct research areas in this section.
**Operator Learning.** Operator learning [10, 11, 33, 40, 68, 35] studies neural network approximations to an operator \(G:U\to V\), where \(U\) and \(V\) are function spaces. This approach holds significant relevance in various mathematical problems, including the solution of parametric PDEs [28, 5], control of dynamical systems [36, 67], and multi-fidelity modeling [1, 42, 69]. Operator learning has gained substantial popularity within the mathematical and scientific machine learning community, with applications in engineering domains [45]. Currently, methods for neural operators focus on constructing a single operator, e.g. learning the map from the initial conditions or parameters of a physical system to the solution at a terminal time.
In [10, 11], the authors extended the universal approximation theory from function approximation [13, 24, 3] to operators. This work paved the way for the modern development of deep neural operator learning (DON) as seen in [40, 41, 35]. Building upon the principles of [11], [68] further expanded this approach by constructing operator networks that remain invariant to the input/output function discretizations. Noisy operator learning and optimization are studied in [35]. Another operator-learning approach is the Fourier neural operator (FNO) [61, 33], which uses Fourier transforms and their inverses in approximating operators through kernel integral approximations. Comparative analysis can be found in [41, 68].
The multi-input-output network (MioNet) [23] extends operator learning to handle multiple input/output parametric functions within the single operator framework. Recently, the In-Context Operator Network (ICON) [65] was developed for multi-operator learning using data and equation labels (one-hot encoding) as prompts and a test label during inference. This was later extended to include multimodal inputs by allowing captions which are embedded into the input
sequence using a pre-trained language model [66]. Multi-operator learning has significant challenges, especially when encoding the operators or when addressing out-of-distribution problems (i.e. those that extend beyond the training dataset).
**Learning Governing Equations.** Learning mathematical models from observations of dynamical systems is an essential scientific task, resulting in the ability to analyze relations between variables and obtain a deeper understanding of the laws of nature. In the works [54, 6], the authors introduced a symbolic regression approach for learning constitutive equations and other physically relevant equations from time-series data. The SINDy algorithm, introduced in [7], utilizes a dictionary of candidate features that often includes polynomials and trigonometric functions. They developed an iterative thresholding method to obtain a sparse model, with the goal of achieving a parsimonious representation of the relationships among potential model terms. SINDy has found applications in a wide range of problems and formulations, as demonstrated in [55, 21, 25, 43, 48, 21]. Sparse optimization techniques for learning partial differential equations were developed in [49] for spatio-temporal data. This approach incorporates differential operators into the dictionary, and the governing equation is trained using the LASSO method. The \(\ell^{1}\)-based approaches offer statistical guarantees with respect to the error bounds and equation recovery rates. These methods have been further refined and extended in subsequent works, including [50, 51, 38, 52, 33]. In [12], the Physics-Informed Neural Network with Sparse Regression (PINN-SR) method for discovering PDE models demonstrated that the equation learning paradigm can be leveraged within the PINNs [26, 30, 47] framework to train models from scarce data. The operator inference technique [46] approximates high-dimensional differential equations by first reducing the data-dimension to a small set of variables and training a lower-dimensional ODE model using a least-squares fit over polynomial features. This is particularly advantageous when dealing with high-dimensional data and when the original differential equations are inaccessible.
## 3 Methodology
The main ingredients of PROSE include symbol embedding, transformers, and multimodal inputs and outputs. We summarize these key elements in this section.
**Transformers.** A transformer is an attention-driven mechanism that excels at capturing longer-term dependencies in data [4, 14, 60]. The vanilla transformer uses a self-attention architecture [2, 63], enabling it to capture intricate relationships within lengthy time series data. Specifically, let us denote the input time series data as \(X\in\mathbb{R}^{n\times d}\), where \(n\) is the number of time steps and \(d\) is the dimension of each element in the time series. Self-attention first computes the projections: query \(Q=XW^{Q}\), key \(K=XW^{K}\) and value \(V=XW^{V}\), where \(W^{Q}\in\mathbb{R}^{d\times d_{k}}\), \(W^{K}\in\mathbb{R}^{d\times d_{k}}\), and \(W^{V}\in\mathbb{R}^{d\times d_{v}}\). It then outputs the context \(C\in\mathbb{R}^{n\times d_{v}}\) via \(C=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V\), where the softmax function is calculated over all entries of each row. Self-attention discovers relationships among various elements within a time sequence. Predictions often depend on multiple data sources, making it crucial to understand the interactions and encode various time series data (see Section 3 for details). This self-attention idea has driven the development of the cross-attention mechanism [39, 59, 32]. Given two input time series data \(X,Y\), cross-attention computes the query, key, and value as \(Q=XW^{Q}\), \(K=YW^{K}\), and \(V=YW^{V}\). In the case where \(Y\) represents the output of a decoder and \(X\) represents the output of an encoder, the cross-attention, which directs its focus from \(X\) to \(Y\), is commonly referred to as encoder-decoder attention [60]. Encoder-decoder attention serves as a crucial component within autoregressive models [20, 32, 60].
The autoregressive model operates by making predictions for a time series iteratively, one step at a time. To achieve this, it utilizes the previous step's generated output as additional input for the subsequent prediction. This approach has demonstrated the capacity for mitigating accumulation errors [19], which makes it desirable for longer-time predictions.
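For concreteness, the attention computations above can be sketched in a few lines of NumPy; setting the key/value source equal to the query source gives self-attention, while a different source gives cross-attention (encoder-decoder attention). All array sizes and weight initializations below are illustrative placeholders, not values from the PROSE implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Y, W_Q, W_K, W_V):
    """Scaled dot-product attention with queries built from X and keys/values built from Y.

    Self-attention corresponds to Y = X; encoder-decoder (cross-) attention uses a different Y.
    """
    Q, K, V = X @ W_Q, Y @ W_K, Y @ W_V          # (n, d_k), (m, d_k), (m, d_v)
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # (n, m) similarity matrix
    return softmax(scores) @ V                   # context C, shape (n, d_v)

# toy usage: n = 64 time steps of a 3-dimensional trajectory
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
W_Q, W_K = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
W_V = rng.normal(size=(3, 16))
C = attention(X, X, W_Q, W_K, W_V)               # self-attention context, shape (64, 16)
```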
**Multimodal Machine Learning.** Multimodal machine learning (MML) trains models using data from heterogeneous sources [32, 39, 56, 58, 64]. Of major interest in this topic are methods for the fusion of data from multiple modalities, the exploration of their interplay, and the development of corresponding models and algorithms. For instance, consider the field of visual-language reasoning [56, 58, 31], where the utilization of visual content, such as images or videos, with the semantics of language [58] associated with these visual elements, such as captions or descriptions, leads to the development of models with richer information [31]. Another illustrative example is that of AI robots, which use multimodal sensors, including cameras, radar systems, and ultrasounds, to perceive their environment and make decisions [37, 18]. In mathematical applications, researchers employ multiscale mathematical models [17], where each modality is essentially characterized by varying levels of accuracy, to train a single model capable of predicting multiscale differential equations effectively.
**Operator Learning Structure.** The authors in [11] established a universal approximation theory for continuous operators, denoted by \(G\). Particularly, they showed that the neural operator \(G_{\theta}(u)(t)=\sum_{k=1}^{K}b_{k}(t)p_{k}(\hat{u})\) can approximate \(G(u)(t)\) for \(t\) in the output function domain (under certain conditions). Here \(p(\cdot)\) and \(b(\cdot)\) are neural networks which are called the branch and trunk [40], and \(\hat{u}\) is a discretized approximation to the input function \(u\). In our applications, these input functions \(u\) correspond to ODE solutions sampled in the input intervals, and the output functions are solutions over larger intervals. Based on the output-discretization invariance property of the network [41, 68], the output of the operator network can be queried at arbitrary timepoints, allowing predictions of the solution at any location.
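The branch-trunk construction \(G_{\theta}(u)(t)=\sum_{k}b_{k}(t)p_{k}(\hat{u})\) can be illustrated with a short sketch; the randomly initialized MLPs, layer sizes, and example trajectory below are assumptions for illustration only (no training is performed), not the operator networks used in PROSE.

```python
import numpy as np

def mlp(sizes, rng):
    """Small random MLP with tanh activations (illustrative stand-in for the branch/trunk nets)."""
    params = [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

rng = np.random.default_rng(0)
K = 32                           # number of basis terms
branch = mlp([64, 64, K], rng)   # p(û): acts on the discretized input function û (64 samples)
trunk = mlp([1, 64, K], rng)     # b(t): acts on the scalar query locations t

u_hat = np.sin(np.linspace(0, 2, 64))[None, :]   # one discretized input trajectory on [0, 2]
t_query = np.linspace(2, 6, 100)[:, None]        # query times in the output interval [2, 6]

# G_theta(u)(t) = sum_k b_k(t) p_k(û): inner product of trunk and branch features
G = trunk(t_query) @ branch(u_hat).T             # shape (100, 1): prediction at each query time
```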
**Equation Encoding via Polish Notation.** Mathematical expressions can be encoded as trees with operations and functions as nodes, and constants and variables as leaves [22, 34]. For instance, the expression \(\cos(1.5x_{1})+x_{2}^{2}-2.6\) can be represented by such a tree.
Trees provide natural evaluation orders, eliminating the need to use parentheses or spaces. Under some additional restrictions (e.g. \(1+2+3\) should be processed as \(1+(2+3)\), \(-1\times x\) is equivalent to \(-x\)), there is a one-to-one correspondence between trees and mathematical expressions. For these reasons, trees provide an unambiguous way of encoding equations. While there are existing tree2tree methods [16, 57], they are usually slower than seq2seq methods at training and inference time. The preorder traversal is a consistent way of mapping trees to sequences, and the resulting sequences are known as Polish or Prefix notation, which is used in our equation encoder. For the above expression \(\cos(1.5x_{1})+x_{2}^{2}-2.6\), its Polish notation is given by the sequence \([\,+\ \texttt{cos}\ \times\ 1.5\ x_{1}\ -\ \texttt{pow}\ x_{2}\ 2\ 2.6\,]\). Operations such as \(\texttt{cos}\) are treated as single words and are not further tokenized, but they are trainable. In comparison to LaTeX representations of mathematical expressions, Polish notations have shorter lengths, simpler syntax, and are often more consistent. Note that in [22, 34], binary trees of depth-3 are used to generate symbolic approximations directly for the solution of a single differential equation.
Following [9, 15, 29], to have a reasonable vocabulary size, floating point numbers are represented in their base-10 notations, each consisting of three components: sign, mantissa, and exponent, which are treated as words with trainable embedding. For example, if the length of the mantissa is chosen to be 3, then \(2.6=+1\cdot 260\cdot 10^{-2}\) is represented as [+ 260 E-2]. For vector-valued functions, a dimension-separation token is used, i.e. \(\mathbf{f}=(f_{1},f_{2})\) is represented as "\(f_{1}\mid f_{2}\)". Similar to [9, 15, 29], our vocabulary is also of order \(10^{4}\) words.
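As a concrete illustration of this encoding, the sketch below maps an expression tree to its Polish (prefix) notation via a preorder traversal and tokenizes a float into sign/mantissa/exponent words. The helper names and the chosen mantissa length are illustrative and not the exact tokenizer used by PROSE.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Expression-tree node: operations/functions as internal nodes, constants/variables as leaves."""
    value: str
    children: list = field(default_factory=list)

def to_polish(node):
    """Preorder traversal of the tree, i.e. Polish (prefix) notation."""
    out = [node.value]
    for child in node.children:
        out += to_polish(child)
    return out

def encode_float(x, mantissa_len=3):
    """Tokenize a float as sign, mantissa, and exponent words, e.g. 2.6 -> ['+', '260', 'E-2']."""
    sign = '+' if x >= 0 else '-'
    m, e = f"{abs(x):.{mantissa_len - 1}e}".split('e')
    mantissa = m.replace('.', '')[:mantissa_len]
    exponent = int(e) - (mantissa_len - 1)
    return [sign, mantissa, f"E{exponent}"]

# cos(1.5 x1) + x2^2 - 2.6 as a tree, then as a prefix sequence
expr = Node('+', [
    Node('cos', [Node('*', [Node('1.5'), Node('x1')])]),
    Node('-', [Node('pow', [Node('x2'), Node('2')]), Node('2.6')]),
])
print(to_polish(expr))    # ['+', 'cos', '*', '1.5', 'x1', '-', 'pow', 'x2', '2', '2.6']
print(encode_float(2.6))  # ['+', '260', 'E-2']
```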
### Model Overview
Our network uses hierarchical attention for feature processing and fusion, and two transformer decoders for two downstream tasks. Figure 2 provides an overview of the architecture. The PROSE architecture contains five main components trained end-to-end: data encoder, symbol encoder, feature fusion, data decoder, and symbol decoder.
**Encoders.** Two separate transformer encoders are used to obtain domain-specific features. Given numerical data inputs and symbolic equation guesses (possibly empty or erroneous), the data encoder and symbol encoder first separately perform feature aggregation using self-attention. For a data input sequence \(\mathbf{u}(t_{0}),\cdots,\mathbf{u}(t_{n})\), each element \(\mathbf{u}(t_{i})\), together with its time variable \(t_{i}\), goes through a linear layer to form the Data Feature (purple feature sequence in Figure 2). PROSE then uses self-attention to further process the Data Feature, where the time variables \(t_{i}\) serve as the positional encoding.
The symbolic input (in Polish notation) is a standard word sequence, which can be directly processed with self-attention layers. The word embedding (for operations, sign, mantissa, etc.) is randomly initialized and trainable. Sinusoidal positional encoding [60] is used for the symbol encoder.
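For reference, the standard sinusoidal positional encoding of [60] can be computed as below; the sequence length and embedding width are arbitrary illustrative values, and this is only meant to show what is added to the symbol encoder's word embeddings.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pos = np.arange(seq_len)[:, None]                   # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]                # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)   # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# e.g. a Polish-notation token sequence of length 20 embedded in a 128-dimensional space
pe = sinusoidal_positional_encoding(20, 128)            # added to the trainable word embeddings
```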
Figure 2: **PROSE architecture and the workflow. Data Input and Symbol Input are embedded into Data Feature and Symbol Feature respectively before encoding and fusion through Feature Fusion. PROSE uses Cross-Attention to construct the operator (upper-right structure) from Fused Data Feature, and evaluate it at Query Locations. PROSE generates symbolic expressions in the lower-right portion autoregressively. Attention blocks are displayed in Appendix C.**
**Feature Fusion.** Hierarchical attention (multi-stream to one-stream) is used in this model for feature fusion. Separately-processed data and symbol features are concatenated into a feature sequence, and further processed through self-attention layers where modality interaction occurs. Following [27], a learnable modality-type embedding is added to the fused features, explicitly signaling to the model which parts of the sequence are from which modality. Positional encoding is not needed since it is already included in the individual encoders.
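A minimal sketch of this fusion step: add a per-modality type embedding, concatenate the two feature sequences, and run self-attention over the result so the modalities can interact. The single fusion layer, feature width, and random weights below are simplifications for illustration; the actual model stacks several multi-head layers.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 128                                          # shared feature width (illustrative)
data_feat = rng.normal(size=(64, d))             # encoded data features
symb_feat = rng.normal(size=(30, d))             # encoded symbol features

# learnable modality-type embeddings (one per modality) so the model can tell the parts apart
type_emb = rng.normal(size=(2, d))
fused_in = np.concatenate([data_feat + type_emb[0], symb_feat + type_emb[1]], axis=0)

# one self-attention pass over the concatenated sequence: modality interaction happens here
W_Q, W_K, W_V = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
Q, K, V = fused_in @ W_Q, fused_in @ W_K, fused_in @ W_V
fused_out = softmax(Q @ K.T / np.sqrt(d)) @ V    # shape (94, d); later split back per modality
```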
**Data Decoder.** The data decoder constructs the operator via the cross-attention mechanism, establishing a link between the input-encoded time sequence (fused data features) and the output functions. The query locations, representing the independent variables of these output functions, serve as the evaluation points. Importantly, these query locations operate independently of each other, meaning that assessing the operator at one point, \(t_{i}\), does not impact the evaluation of the operator at another point, \(t_{j}\). As a result, the time and space complexity scales linearly with the number of query locations. In addition, since the evaluation points are independent of the network generation, this resembles the philosophy of the branch and trunk nets, see Operator Learning Structure in Section 3.
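The independence of the query locations is easy to see in a cross-attention sketch: each embedded query time attends to the fused data features on its own row, so the cost grows linearly with the number of queries. Shapes, projections, and the output head below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n_ctx = 128, 64
fused_data = rng.normal(size=(n_ctx, d))            # fused data features (keys/values)

t_query = np.linspace(2.0, 6.0, 100)[:, None]       # query locations in the output interval
W_t = rng.normal(size=(1, d))                       # embed the scalar times into feature space
queries = t_query @ W_t                             # (100, d)

W_Q, W_K, W_V = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
ctx = softmax((queries @ W_Q) @ (fused_data @ W_K).T / np.sqrt(d)) @ (fused_data @ W_V)

# each row of ctx depends only on its own query time, so query points are evaluated independently
u_pred = ctx @ rng.normal(size=(d, 3))              # project to a 3-dimensional state prediction
```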
**Symbol Decoder.** The symbol decoder is a standard encoder-decoder transformer, where the fused symbol feature is the context for generation. The output equation is produced using an autoregressive approach [19, 60]: it starts with the start-of-sentence token and proceeds iteratively, generating each term of the equation based on prior predictions, until it encounters the end-of-sentence token for that specific equation. During evaluation time, greedy search (iterative selection of symbol with maximum probability) is used for efficient symbol generation. While beam search [62] can be used to improve the performance (e.g. percentage of valid expression outputs), we empirically find that greedy search is sufficient for obtaining valid mathematical expressions using the Polish notation formulation.
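Greedy autoregressive decoding of the symbolic output can be summarized as follows; the vocabulary and the `next_token_logits` placeholder stand in for the transformer symbol decoder and are not PROSE's actual components.

```python
import numpy as np

VOCAB = ["<sos>", "<eos>", "+", "*", "cos", "pow", "x1", "x2", "1.5", "2", "2.6"]
rng = np.random.default_rng(0)

def next_token_logits(prefix, context):
    """Placeholder for the transformer decoder conditioned on the fused symbol features."""
    return rng.normal(size=len(VOCAB))

def greedy_decode(context, max_len=32):
    tokens = ["<sos>"]
    for _ in range(max_len):
        logits = next_token_logits(tokens, context)
        tokens.append(VOCAB[int(np.argmax(logits))])  # greedy: pick the most probable symbol
        if tokens[-1] == "<eos>":                     # stop at the end-of-sentence token
            break
    return tokens[1:]

generated = greedy_decode(context=None)               # a Polish-notation token sequence
```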
## 4 Experiments
We detail the numerical experiments and studies in this section. We created a dataset of 15 distinct multi-dimensional nonlinear ODEs. To verify the performance of the PROSE approach, we conduct four case studies (Table 2) with different symbolic and data inputs (Table 1). Additionally, in the ablation study, we confirm that the inclusion of symbolic equation information enhances the accuracy of the data prediction. Hyperparameters and experimental conditions can be found in Appendix A.
**Dataset.** The dataset is created from a dictionary of 15 distinct ODEs with varying dimensions: twelve 3D systems, two 4D systems, and one 5D system. To generate samples, we uniformly sample the coefficients of each term in the ODEs from the range \([F-0.1F,F+0.1F]\), where \(F\) represents the value of interest. We refer to Appendix B for the details.
The goal is to accurately predict the solutions of ODEs at future timepoints only using observations of a few points along one trajectory. We do not assume knowledge of the governing equation and thus the equations are also trained using the PROSE approach. The operator's input function is the values along the trajectories, discretized using a 64-point uniform mesh in the interval \([0,2]\). The target operator maps this input function to the ODE solution in the interval \([2,6]\). To assess PROSE's performance under noisy conditions, we introduce 2% Gaussian noise directly to the data samples.
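The data-generation recipe can be sketched with SciPy as below. The Lorenz-like right-hand side and its nominal coefficients are placeholders for one of the 15 ODE families, and the exact way the 2% noise is applied is an assumption; the coefficient range \([F-0.1F,F+0.1F]\), the 64-point input mesh on \([0,2]\), and the output interval \([2,6]\) follow the description above.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def sample_coeff(F, rel=0.1):
    """Sample a coefficient uniformly from [F - rel*F, F + rel*F]."""
    return rng.uniform(F - rel * F, F + rel * F)

# illustrative 3D system standing in for one of the ODE families in the dictionary
sigma, rho, beta = sample_coeff(10.0), sample_coeff(28.0), sample_coeff(8.0 / 3.0)
def rhs(t, u):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_in = np.linspace(0.0, 2.0, 64)          # 64-point uniform input mesh on [0, 2]
t_out = np.linspace(2.0, 6.0, 129)[1:]    # query locations in the target interval (2, 6]
u0 = rng.uniform(-1.0, 1.0, size=3)       # one sampled initial condition

sol = solve_ivp(rhs, (0.0, 6.0), u0, t_eval=np.concatenate([t_in, t_out]), rtol=1e-6, atol=1e-9)
u_in, u_target = sol.y[:, :64].T, sol.y[:, 64:].T

u_in_noisy = u_in + 0.02 * np.abs(u_in) * rng.standard_normal(u_in.shape)  # ~2% noise on the data
```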
The training dataset contains 512K examples, where 20 initial conditions are sampled to generate solution curves for each randomly generated system. The validation dataset contains 25.6K examples, where 4 initial conditions are sampled for each ODE system. The testing dataset contains 102,400 examples, where 4 initial conditions are sampled for each ODE system. The training dataset and the testing dataset contain the same number of ODE systems. In terms of practical applications, given test cases with unknown models, we are free to continue to augment the training and validation sets with any ODE, thus the dataset can be made arbitrarily large.
To test the performance of the equation prediction, we corrupt the input equation by randomly replacing, deleting, and adding terms. The terminologies and settings are found in Table 1.
**Evaluation Metrics.** As PROSE predicts the operator and learns the equation, we present three metrics to evaluate the model performance for solution and equation learning. For data prediction, the relative \(L^{2}\) error is reported. For the expression outputs (symbolic sequences in Polish notation), a decoding algorithm is used to transform the sequences into trees representing functions. The percentage of outputs that can be transformed into valid mathematical expressions is reported. Valid expressions (which approximate the velocity maps of ODE systems) are evaluated at 50 points in \(\mathbb{R}^{d}\) where each coordinate is uniformly sampled in \([-5,5]\) (i.e. a Monte Carlo estimate) and the relative \(L^{2}\) error is reported. Here \(d\) is the dimension of the ODE system. More specifically, suppose \(f(\mathbf{u})\) and \(\hat{f}(\mathbf{u})\) are true and PROSE-generated ODE velocity maps, we report the average relative \(L^{2}\) error computed at sampled points: \(\frac{\|f-\hat{f}\|_{2}}{\|f\|_{2}}\).
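Both metrics are straightforward to compute; the sketch below shows the relative \(L^{2}\) error and the Monte Carlo expression error at 50 points sampled uniformly in \([-5,5]^{d}\). The example velocity maps and the way the per-point errors are aggregated are illustrative choices.

```python
import numpy as np

def relative_l2(pred, target):
    return np.linalg.norm(pred - target) / np.linalg.norm(target)

def expression_error(f_true, f_pred, dim, n_pts=50, low=-5.0, high=5.0, seed=0):
    """Monte Carlo relative L2 error between the true and predicted ODE velocity maps."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(low, high, size=(n_pts, dim))        # 50 sample points in [-5, 5]^d
    F_true = np.stack([f_true(u) for u in U])
    F_pred = np.stack([f_pred(u) for u in U])
    return relative_l2(F_pred, F_true)

# placeholder velocity maps purely for illustration
f_true = lambda u: np.array([u[1], -u[0], u[0] * u[1]])
f_pred = lambda u: np.array([1.01 * u[1], -0.99 * u[0], u[0] * u[1]])
print(expression_error(f_true, f_pred, dim=3))
```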
### Results
We observe in Table 2 that all experiments, even those corrupted by noise or random terms, achieve low relative prediction errors (\(<\!5.7\%\)). The data prediction error decreases as more reliable symbolic information is provided, i.e. from \(5.7\%\) when the equations are "Unknown" to \(2.94\%\) when they are "Known". Note that in the case where the equations are "Known", we expect the equations to behave more like labels for the dataset. Moreover, the low expression error (\(<\!2.1\%\)) shows PROSE's ability to correct and predict accurate equations, even when erroneous ODE equations are provided.
**Data vs. Equation Prediction.** We present the results of 10K testing samples in the "Unknown (3D)" experiment in Table 3. We see that the data prediction (whose features are influenced by the symbolic terms) is more accurate than using the learned governing equation directly. This shows the value of constructing a data prediction component rather than only relying on the learned governing equation. However, the predicted equations can be further refined using optimization techniques, typically the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, where the predicted expression parameters can be used as a close initial guess.
| Experiments (Expression Type) | Data-Noise | Unknown Coefficients | Term Deletion | Term Addition | # ODEs |
| --- | --- | --- | --- | --- | --- |
| Known | ✓ | ✗ | ✗ | ✗ | 12 |
| Skeleton | ✓ | ✓ | ✗ | ✗ | 12 |
| Unknown (3D) | ✓ | ✓ | ✓ | ✓ | 12 |
| Unknown (Multi-D) | ✓ | ✓ | ✓ | ✓ | 15 |

Table 1: **Experiment settings.** Data-noise: additive noise on data. Unknown coefficients: replace the input equation coefficients with placeholders. Term deletion: omit a term in the target equation with 15% chance. Term addition: add an erroneous term with 15% chance. For the last test, all data inputs are padded to the maximum equation dimension. “Unknown expressions” means that the coefficients are unknown and there are terms added and removed.
**Out-of-distribution Case Study.** We study our model's ability to generalize beyond the training distribution. Specifically, we test on datasets whose parameters are sampled from a large interval \([F-\lambda F,F+\lambda F]\), where \(F\) represents a value of interest. We choose \(\lambda=0.15,0.20\), which are greater than the training setting \(\lambda=0.10\). The results are shown in Table 4. This shows that the approach can be used for prediction even in the case where the parameter values were not observed during training time.
**Ablation Study.** Since the model is multimodal in both inputs and outputs, we investigate the performance gains by using the equation embedding in the construction of the features. In particular, we compare the performance of the full PROSE model with multimodal input/output (as shown in Figure 2) and the PROSE model with only the data modality (i.e. no symbol encoder/decoder or fusion structures).
The comparison tests are run using varying numbers of input sensors. For consistency, noise on the data samples is not included in this test, although the symbolic inputs do have unknown coefficients and terms added/removed.
| Parameter Sample Relative Range \(\lambda\) | Relative Prediction Errors (%) | Relative Expression Error (%) | Percentage of Valid Expression Outputs (%) |
| --- | --- | --- | --- |
| 0.10 | 3.43, 4.63 | 2.11 | 99.95 |
| 0.15 | 3.89, 5.71 | 3.21 | 99.44 |
| 0.20 | 4.94, 7.66 | 4.83 | 97.89 |

Table 4: **Out-of-distribution Testing Performance.** Relative prediction errors are reported for intervals \([2,4]\) and \([2,6]\), respectively.
| Experiments (Expression Type) | Relative Prediction Errors (%) | Relative Expression Error (%) | Percentage of Valid Expressions (%) |
| --- | --- | --- | --- |
| Known | 2.74, 2.94 | 0.00 | 100.00 |
| Skeleton | 3.39, 4.59 | 2.10 | 99.98 |
| Unknown (3D) | 3.43, 4.63 | 2.11 | 99.95 |
| Unknown (Multi-D) | 3.95, 5.66 | 1.88 | 99.94 |

Table 2: **Performance of the model trained with different input expression types.** The two relative prediction errors are for intervals \([2,4]\) and \([2,6]\), respectively.
| Prediction Generation Method | Relative Prediction Error (%) | Percentage of Valid Expression Outputs (%) |
| --- | --- | --- |
| Data decoder output | 4.59 | 99.96 |
| Symbol decoder output + BDF method | 14.69 | |

Table 3: **Performance of data decoder output and symbol decoder output plus the backward differentiation formula (BDF method).**
As shown in Figure 3, the PROSE model with multimodal input/output consistently outperforms the data-modality-only model, demonstrating performance gains through equation embedding. Notably, we do not observe any degradation in the full PROSE model's performance when reducing the number of input sensors, whereas the data-modality-only model's performance declines as sensors are removed from the input function. This showcases the value of the symbol modality in supplying additional information for enhancing data prediction.
In Figure 4, we plot 4 of the 64 (\(8\) layers \(\times\) \(8\) heads) attention maps corresponding to the Feature Fusion layers on one four-wing attractor example (see Appendix B). This uses the full PROSE model with multimodal input/output and with a data input grid size of 32. The non-zero values (which appear as the yellow/green pixels) indicate the connections between the features. More importantly, the non-zero values in the bottom-left and upper-right blocks indicate a non-trivial cross-modality interaction. Together with the improved relative error shown in Figure 3, we see the overall improvements using our multimodal framework.
**Output Example.** In Figure 5, we display a typical PROSE output from the "Unknown (3D)" experiment in Table 2. Each curve is one trajectory of one state variable \(u_{i}(t)\) for \(i=1,2,3\).
Figure 4: **Sampled attention maps of feature fusion layers.** For each map, non-zero values in the upper left and bottom right corner represent in-modality interactions and non-zero values in the upper right and bottom left blocks represent cross-modality interactions. Other maps are presented in Appendix C.
Figure 3: **Comparing the PROSE model with multimodal input/output and the PROSE model with only the data modality.** The models are trained with different data input lengths for 60 epochs. The relative prediction errors are computed on the same output grid.
The target solution curves (with noise) are the dashed lines (only up to \(t=2\) is seen during testing) and the predicted solution curves are the solid lines. We display the target equation and the generated equation, which is exact with respect to the terms generated and accurate up to two digits (noting that the mantissa has length three).
## 5 Discussion
The PROSE network is developed for model and multi-operator discovery. The network architecture utilizes hierarchical transformers to incorporate the data and embedded symbols in a symbiotic way. We show that the learned symbolic expression helps reduce the prediction error and provides further insights into the dataset. Experiments show that the generated symbolic expressions are mathematical equations with validity of \(>\!99.9\%\) on in-distribution tests and \(>\!97.89\%\) on out-of-distribution tests, and with numerical error of about \(2\%\) (in terms of relative \(L^{2}\) norm). This shows that the network is able to generate ODE models that correctly represent the dataset and does so by incorporating information from other similar ODEs.
The symbolic expression and data fusion yield a scientifically relevant multimodal formulation. In particular, the expressions provide alternative representation for the dataset and its predicted values, enabling the extraction of more refined information such as conserved quantities, stationary points, bifurcation regimes, hidden symmetries, and more. Additionally, since the symbolic expressions are valid functions, they can be used for evaluation and thus lead to alternative predictive algorithms (i.e. simulating the ODE). One future direction is the construction of a PROSE approach for nonlinear partial differential equations with spatio-temporal queries.
#### Acknowledgments
Y. Liu was supported in part by an NSF CAREER Award DMS-2331100. Z. Zhang was supported in part by NSF DMS-2331033. H. Schaeffer was supported in part by AFOSR MURI FA9550-21-1-0084, NSF DMS-2331033, and an NSF CAREER Award DMS-2331100.
Figure 5: **An example of PROSE’s outputs.** Target solution curves are dashed lines and predicted solution curves are solid lines. The input is the data up to \(t=2\). The numbers in the legend refer to the coordinate of the state variable \(u_{i}(t)\) for \(i=1,2,3\). The target and PROSE generated equations are displayed. | 線形近似しない微分方程式をニューラルネットワークを用いて近似することは、リアルタイム予測、逆問題、最適制御、および代理モデルなどの様々な科学計算タスクのための強力で効率的なツールです。過去の研究では、 dynamical システムをネットワークに埋め込むための2つのアプローチに焦点を当ててきました。1つは、入力パラメータ化された関数から解を予測する単一のソリューションオペレーターを学習すること、もう1つは、状態変数に対する支配方程式を学習することです。これらのアプローチは、同じ基礎データや関数を表現する異なる方法を生成します。さらに、微分方程式のファミリーが共通の特性を共有していることを観察したため、さまざまな微分方程式に対して、同じネットワークの表現を追求します。私たちの方法は、予測オペレータとシンボル表現(PROSE)と呼ばれ、多様性の入力から多様性の出力に学習します。これは、数値予測 |
2309.09936 | A Concise Overview of Safety Aspects in Human-Robot Interaction | As of today, robots exhibit impressive agility but also pose potential
hazards to humans using/collaborating with them. Consequently, safety is
considered the most paramount factor in human-robot interaction (HRI). This
paper presents a multi-layered safety architecture, integrating both physical
and cognitive aspects for effective HRI. We outline critical requirements for
physical safety layers as service modules that can be arbitrarily queried.
Further, we showcase an HRI scheme that addresses human factors and perceived
safety as high-level constraints on a validated impact safety paradigm. The aim
is to enable safety certification of human-friendly robots across various HRI
scenarios. | Mazin Hamad, Simone Nertinger, Robin J. Kirschner, Luis Figueredo, Abdeldjallil Naceri, Sami Haddadin | 2023-09-18T16:52:48 | http://arxiv.org/abs/2309.09936v1 | # A Concise Overview of Safety Aspects in Human-Robot Interaction
###### Abstract
As of today, robots exhibit impressive agility but also pose potential hazards to humans using/collaborating with them. Consequently, safety is considered the most paramount factor in human-robot interaction (HRI). This paper presents a multi-layered safety architecture, integrating both physical and cognitive aspects for effective HRI. We outline critical requirements for physical safety layers as service modules that can be arbitrarily queried. Further, we showcase an HRI scheme that addresses human factors and perceived safety as high-level constraints on a validated impact safety paradigm. The aim is to enable safety certification of human-friendly robots across various HRI scenarios.
Keywords: Human-robot interaction, gracefulness, safety
## 1 Introduction
Human-friendly robots are distinguished by their ability to delicately react and physically interact with the world through compliant hardware and adaptive controllers [1]. However, despite significant advances in their tactile design, robots in the real world are still hardly deployed for close collaborative tasks together with humans. Among the many challenges facing real-world human-robot interaction (HRI), physical safety is often considered the most pressing one. Moreover, in order to be accepted and deployed in close and effective interaction with human users, an intelligent robotic assistant must surpass the mere criteria of being contact-free and stress-free, i.e., physically safe. A human-friendly robot is required to be gracefully safe (GS), which we define as both possessing and exhibiting a (i) feasible, (ii) time-efficient, (iii) comfortable, and (iv) intuitive behaviour (i. e., perceived to be natural by the human user/coworker), while simultaneously being always
human-safe. The concept of _graceful robot behaviour_ was originally introduced in [2] as being safe, fast, comfortable, and intuitive. However, such gracefulness should be further emphasized by ensuring safe robot behaviour in shared and collaborative spaces with humans, ultimately allowing for safety certification. This means the movements of the involved assistive robots should be physically as well as psychologically safe while additionally considering the efficacy of the human-robot team. Robots with such features are hereby termed _gracefully safe robots_.
Graceful robot navigation and reactive motion control strategies have been gaining momentum recently, where they have been shown to directly influence the quality and efficiency of HRI [3, 4, 5, 6]. Nonetheless, to enable physical human-robot interaction (pHRI) in real-world scenarios [7], safety standards are decisive [8]. They govern the mechanical design, motion planning, and low-level control aspects of human-friendly robots in both industrial and domestic/service spaces. To adhere to these standards, a semi-automated, temporal logic-based risk analysis methodology for collaborative robotic applications that relies on formal verification techniques was introduced in [9]. Furthermore, fundamental research about collisions and their consequences has received considerable attention from the robotics community. Concerning the safety of physical contacts, unintended robot-to-human impact scenarios are classified into five main contact scenarios [10]. Besides clamping in the robot structure, these include free, constrained, partially constrained, and secondary impacts. For scenarios involving desired contacts, such as hand-over tasks, smooth minimal-jerk movements on the robot side are known to improve the overall performance of the collaborative task with the human partner [11]. Moreover, jerky/oscillatory motions are typically uncomfortable or even hazardous for people with specific conditions such as spinal cord injuries [2]. In addition to physical integrity, the robot's behaviour plays a critical role in psychological safety. For instance, unexpected robot motion behaviours have been shown to trigger involuntary motions of users as a reaction of startle and surprise [12]. In a similar fashion, any changes to the underlying functional modes of the collaborative robot, and consequently its applied motion commands, should be smooth to ensure that the interaction is executed efficiently and pleasantly [13].
Even though many building blocks and features for safe HRI exist [3, 4, 5, 6], all these solutions still need to be integrated with recent concepts of graceful robot motion. However, little attention is being paid to safety architectures that enable adequate simultaneous treatment of gracefulness and human-friendliness requirements of HRI scenarios. This work aims to fill this fundamental gap hampering real-world HRI deployment. Herein, we propose a framework for gracefully safe robots, which in addition to physical and psychological safety connected to graceful features, addresses additional implementation hurdles [14, 15, 16, 17, 18, 19, 20, 21, 22, 23] and allows for further integration of other critical challenges (such as, e.g., scalable integration, efficient coordination, dynamic mobile manipulation, optimal environment perception/sensing, purposeful communication, risk assessment, and decision making), as well as societal and ethical concerns (including data privacy and personal security [24, 25, 26]).
## 2 Problem Statement and Contribution
As of today, a couple of solutions exist for different physical and cognitive safety aspects of HRI [27, 28, 29, 30, 12, 31, 32]. However, fulfilling the strict safety requirements of collaborative robotic systems while maintaining adequate graceful and
human-friendly behaviour is still a significant challenge that has yet to be fully overcome. To tackle this, we define the _gracefully safe (GS)_ behaviour for human-friendly robots by adopting and reinterpreting the original definition of being graceful in [2] as follows. Firstly, we clarified the safety requirement of the graceful robot behaviour as being related to motion constraints. In other words, by _safe_ in [2] it was rather meant that the robot motion fulfills the governing constraints (i.e., feasible). Secondly, we modified the gracefulness characteristic of being _fast_ to _time-efficient_ since the involvement of human safety aspects may pose different objectives on the human-robot collaborative task execution. Thirdly, as two characteristics of a graceful robot behaviour (namely, being comfortable and intuitive to human users) are inherent to perceived safety and acceptance, an independent comprehensive safety framework can be employed to tackle those requirements. As a quid pro quo, the task execution pipeline of the robot, which includes the motion controllers, motion planners, and task planners, must be reactive and adaptable, (i.e., capable of addressing time constraints and additional costs imposed by human safety requirements).
Frameworks to achieve a GS behaviour, and further enable safety certification of HRI applications, should be suitably designed to simultaneously integrate the most prominent results concerning various physical and cognitive safety aspects from one side with robot motion planning and low-level control on the other. In addition, this synergy must be achieved as prescribed at the task planning and interaction dynamics level, where safe performance trade-offs between being very _conservative_ towards safety or _just-as-needed_ to improve the productivity of the interactive task can also be incorporated. In this paper, we systematically tackle the central missing link to overcome the aforementioned gaps by proposing a _multi-layered architecture for addressing safety aspects of human-friendly robots during HRI scenarios in both industrial and domestic settings_. Overall, the main contributions of this article can be summarized as follows.
* Based on an extensive literature analysis of multiple HRI dimensions, we distinguish between various physical and cognitive safety aspects that must be simultaneously fulfilled by human-friendly robots during GS-HRI;
* We identify instantaneous inputs/outputs and resource requirements of each physical safety layer;
* Further, we detail the impact safety layer, showing how it can be implemented at the robot task planning and motion/control level. For this, we propose the so-called _Safety-as-a-Service_ concept as an integrated multi-layered architecture for comprehensive safety consideration in HRI;
* Finally, with the help of some initial integration results, we discuss how cognitive safety layers can be implemented on top of the physical ones in the design of GS-HRI.
## 3 Proposed Multi-layered HRI Safety Architecture
For HRI applications, safety and security are among the most critical dimensions to consider. The term 'safety' typically refers to potential physical harm, whereas the term 'security' broadly refers to many aspects related to health, well-being, and aging [25]. Consequently, investigating safety aspects for graceful HRI requires a multidisciplinary perspective. Typically, HRI safety aspects can be divided into physical and perceived safety, with the latter being an under-addressed topic in the robotics literature [33].
We carried out a focused literature review to identify the following critical physical and cognitive safety aspects, which must be simultaneously considered by human-friendly robots for a GS-HRI:

* Impact safety
* Ergonomics
* Musculoskeletal safety
* Perceived safety
* Acceptance
* Personalization
Based on that, we propose a multi-layered architecture for addressing safety aspects in HRI scenarios in both industrial and service/domestic settings, see Fig. 1.
### 3.1 Identified safety layers for GS-HRI
Following an in-depth, focused literature review process, we identified the following key physical and cognitive safety layers that altogether cover the main aspects to be simultaneously considered for a gracefully safe and human-friendly robotic behaviour during HRI.
#### 3.1.1 Impact safety
Since contact is unavoidable and even desired in many applications, several studies, mostly employing cadavers and other human surrogates in addition to volunteers, have focused on understanding the pain thresholds and injury mechanisms of several human body parts to delimit the injurious conditions [34, 35, 36, 37, 38, 39, 40]. It is important to note that most of the impact experiments reported in the literature were typically conducted on human cadavers from older adult subjects. For instance, Kent et al. [41] pointed out that overly large confidence intervals are produced on injury risk assessments in impact studies done with cadavers from older adults (as compared to those of young adults). Consequently, several researchers tried to overcome this problem by investigating the effect of age on the injury tolerance of humans and hence, developing some scaling laws [42]. Moreover, previous research has indicated that, on average, males experience less bone loss and slower cortical thinning rate than females as they age [43, 44].
Figure 1: HRI safety layers in industrial and service settings. On top of ethical, legal, and security aspects, we distinguish between physical and cognitive safety layers; both are subject to anthropomorphic personalization and various user-related customizations.
Several biomechanical limits were proposed for the safety of robotic impact against humans, and the insights from biomechanical injury analysis were already imported into robotics [45]. Furthermore, the theoretical concepts behind the proposed pain/injury biomechanics-based paradigm have influenced many safety requirements stated in standardization documents such as EN ISO 13482 for personal care robots [46], EN ISO 10218-1 and -2, as well as ISO/TS 15066 for industrial collaborative robots [47, 48]. In addition to mitigating the involved human injury risks at the post-contact phase of the collision event pipeline [49], pre-collision strategies are also required for a safe operation around humans in shared workspaces [50]. A comprehensive dummy crash test-based assessment of human injury risks when colliding with personal mobility devices and service robots was recently conducted in [51]. Comparing the risks faced by different pedestrian categories, it was shown that multiple serious injuries due to collisions could occur when the speeds exceed a certain threshold. Additionally, severe head injuries from falling to the ground after the initial impact were predicted from the secondary impact analysis. To reduce the impact injury risks in both cases, the authors suggested using absorbent materials or lowering the differential speed at impact as mitigation strategies.
A well-established injury analysis-based approach for addressing the safety requirements for stationary manipulator arms at the pre-collision phase was previously proposed in [37]. For this injury biomechanics-based and impact data-driven approach, the so-called Safe Motion Unit (SMU) is the core tool for controlling the robot and some of the resulting dynamic collision parameters in a human-safe way. This systematic scheme was recently extended and generalized as a unified safety scheme for all floating-base robotic structures with branched manipulation extremities [27]. An abstraction of a generalized impact safety module is depicted in Fig. 2.
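To give a flavor of the kind of query such an impact safety module answers, the sketch below limits a commanded Cartesian velocity using a safe-speed curve that depends on the instantaneous reflected mass and the body part at risk. The curve values, body-part names, and interpolation are entirely made-up placeholders; an actual SMU derives its limits from biomechanical pain/injury data and the robot's reflected dynamics.

```python
import numpy as np

# hypothetical safe-speed curves: maximum Cartesian speed [m/s] versus reflected mass [kg]
REFLECTED_MASS_GRID = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
SAFE_SPEED_BY_PART = {                      # placeholder numbers, not biomechanical data
    "head": np.array([0.6, 0.45, 0.3, 0.2, 0.15]),
    "chest": np.array([1.2, 0.9, 0.7, 0.5, 0.4]),
    "arm": np.array([1.8, 1.4, 1.1, 0.8, 0.6]),
}

def safe_speed(reflected_mass, body_part):
    """Interpolate the maximum biomechanically safe speed for the current contact scenario."""
    return float(np.interp(reflected_mass, REFLECTED_MASS_GRID, SAFE_SPEED_BY_PART[body_part]))

def limit_velocity(v_cmd, reflected_mass, body_part):
    """Scale the commanded Cartesian velocity so that its norm respects the safe-speed limit."""
    v_max = safe_speed(reflected_mass, body_part)
    speed = np.linalg.norm(v_cmd)
    return v_cmd if speed <= v_max else v_cmd * (v_max / speed)

v_safe = limit_velocity(np.array([0.0, 0.8, 0.3]), reflected_mass=3.0, body_part="head")
```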
#### 3.1.2 Ergonomics
Typically, neurorehabilitation robots are programmed to interact autonomously with patients under clinician (i.e., occupational and physical therapist) oversight such that safe and proper treatment is ensured [52]. A major advantage of robot-assisted therapeutic treatment is the opportunity for accelerated patient recovery, with frequency and duration of treatment being key factors [53]. By precisely performing repetitive and mechanically power-consuming tasks, the robot drives the patient through ergonomically favorable positions during the whole training session. In contrast, any limitation of available degrees of freedom (DOF) during the robotic therapy can lead to changes in muscle activation patterns, negatively influencing its outcome [54].
Domestic or workplace ergonomics are addressed by performing risk assessments and analyzing human comfort during task execution. For this, ergonomists consider the worst posture achieved by taking measurements of the human's posture, either onsite or from video recordings. A comprehensive overview of the current state-of-the-art ergonomic human-robot collaboration in industrial settings was recently provided [55]. In their review, the authors not only investigated ergonomic assessment methodologies and available monitoring technologies for adapting robot control strategies online according to workers' distress and needs, but they also highlighted the most promising research themes and discussed state-of-the-art limitations and standing challenges. The main challenges lie in the cost-effectiveness of ergonomics monitoring, their comprehensive risk assessment methodologies, and the needed level of expertise to implement and maintain them. To handle the above issues, an ergonomically intelligent pHRI framework that includes smart and autonomous posture estimation, postural ergonomics assessment, and postural optimization
was proposed in [56]. Furthermore, to overcome practical problems and risk assessment inaccuracies associated with commonly used discrete ergonomics models in performing postural optimization, differentiable and continuous variants of the famous and scientifically validated RULA and REBA1 ergonomics assessment models were learned via neural network regression [57]. As a result of a comparative study on the employed models and state-of-the-art developments for postural optimization in pHRI and teleoperation (cf. Tab. 1 in [57]), DULA and DEBA2 models were proposed as alternative differential models for improving both gradient-based and gradient-free posture optimizations.
Footnote 1: R(UL/EB)A: Rapid (Upper Limb/Entire Body) Assessment
Footnote 2: D(UL/EB)A: Differentiable (Upper Limb/Entire Body) Assessment
By addressing static postural factors' influence, actions' repeatability, and experts' experience, ergonomic concepts are well-posed for high-level rapid task planning [58]. A human-robot collaboration framework for improving ergonomic aspects of the human co-worker during power tool operations was proposed in [28]. Nonetheless, ergonomic methods fail to address the impact and magnitude of larger forces and dynamic constraints in physical human-robot collaboration, which are better captured through muscular-informed metrics [30]. Building from existing literature, we propose a general abstraction for our ergonomics service module, as depicted in Fig. 3.
Figure 2: Impact safety layer. Relying on human pain and injury information, this layer ensures that all physical robot-human contacts are biomechanically safe.
#### 3.1.3 Musculoskeletal safety
In recent years, rehabilitation robotics has become indispensable for providing patients suffering from nervous system injuries with neurorehabilitation and movement therapy [59, 60]. These injuries include, for example, spinal cord injury, traumatic brain injury, or stroke. For a recent, comprehensive systematic review on the effectiveness of robot-assisted rehabilitation techniques for patients recovering from musculoskeletal injuries or neurologic impairments, the reader is referred to [61].
The application of robotic technologies in rehabilitation has progressed over the last few years. However, while the demand for medical rehabilitation services has been rapidly increasing [62], the number of rehabilitation care providers continues to decrease annually [63]. Robotic medical devices are helpful for musculoskeletal therapy, where musculoskeletal symptoms such as myalgia, arthritis, postural instability, and fatigue are common disorders [64]. These rehabilitation robots support regaining and improving the functional status, coordination, and independence of older adults [65]. For instance, robot-aided locomotive treatment for stroke survivors and individuals coping with other neurologic impairments such as multiple sclerosis, Parkinson's disease, and spinal cord injury may involve either stationary, motion-based robots or exoskeletons [66].
Figure 3: Ergonomics and musculoskeletal safety layers (combined). By optimizing the robot motion plans, the ergonomics layer ensures avoiding less ergonomic human postures during pHRI. On the other hand, by optimizing the robot motions and grasping poses, the musculoskeletal safety layer ensures avoiding the user’s muscular discomfort/overloading during pHRI.
Moreover, it was observed that impairments resulting from those diseases are becoming increasingly worrying for people under the age of 65 [67]. Besides walking aid, typical daily-life activities where older adults or people with locomotive disorders need physical support include the _Sit-to-Stand_ and the _Stand-to-Sit_ transitional movements [68].
Regarding industrial settings, a novel control approach was proposed in [69] to alert and reduce a human partner's static joint torque overloading and consequent injury risks while executing shared tasks with a robot. An online optimization technique was employed for adjusting the robot trajectories to achieve more ergonomic human body poses, considering their stability, different workspaces (of robot and human), and task constraints. Furthermore, the problem of planning a robot configuration and shared object grasp during forceful human-robot collaboration is addressed in [29]. The proposed comfort planning framework aims to identify optimal robot configurations for jointly manipulating objects. This involves positioning the object in a way that minimizes the muscular effort exerted by the human and tailoring their collaborative actions accordingly. Additionally, the framework ensures the stability of the robot coworker during physical interaction. It enables the robot to shape human kinematics and musculoskeletal response while being agnostic to muscular activity estimation paradigms. Building from existing literature, we propose a general abstraction for our musculoskeletal safety service module, as depicted in Fig. 3.
#### 3.1.4 Perceived safety
Although extensive research work has been carried out on physical safety in HRI scenarios, considerations of humans' expectations and affective state are often overlooked. In dynamic co-manipulation tasks, the robot may need to achieve higher velocities even when humans are present. To address the psychological safety of humans working in proximity to or directly with robots, an experimental setup was devised to examine the influence of robot velocity and robot-human distance on involuntary motion occurrence (IMO) caused by startle or surprise [12]. The relative frequency of IMO served as an indicator of potentially unsafe psychological situations for humans. The findings from these experiments were utilized to develop the Expectable Motion Unit (EMU) framework. The EMU ensures that IMO remains within a customizable probability range in typical HRI settings, thereby preserving psychological safety. This EMU is integrated into a comprehensive safety framework that combines psychological safety insights with the physical safety algorithm of the Safe Motion Unit (SMU). In a subsequent study, the efficiency of this psychologically-based safety approach in HRI was further enhanced by simultaneously optimizing both the Cartesian path and speed using Model Predictive Control (MPC) such that the time taken to reach the target pose is minimized [70].
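As an illustration of keeping the probability of involuntary motion occurrence (IMO) within a customizable bound, the sketch below selects the largest velocity whose estimated IMO probability at the current human-robot distance stays within a budget. The logistic probability model and all of its parameters are hypothetical stand-ins; the actual EMU relies on the empirically identified IMO data from [12].

```python
import numpy as np

def imo_probability(velocity, distance, a=2.0, b=1.5, c=-1.0):
    """Hypothetical logistic model: IMO is more likely at higher speed and shorter distance."""
    return 1.0 / (1.0 + np.exp(-(a * velocity - b * distance + c)))

def expectable_velocity(distance, p_max=0.1, v_candidates=np.linspace(0.05, 2.0, 100)):
    """Largest candidate velocity whose estimated IMO probability stays within the budget p_max."""
    admissible = v_candidates[imo_probability(v_candidates, distance) <= p_max]
    return float(admissible.max()) if admissible.size else float(v_candidates.min())

v_cmd = expectable_velocity(distance=1.2, p_max=0.1)   # psychologically "expectable" speed limit
```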
To investigate the impact of robot motion and individual characteristics on users' perceived safety in HRI, a study was conducted involving human participants [71]. The objective was to determine whether significant effects of human factors could be observed on IMO. The results of the study revealed that direct human factors such as gender, age, profession, intention, technology anxiety, or curiosity to use did not significantly influence the occurrence of IMO. However, a noteworthy habituation effect was observed, indicating that participants became accustomed to the robot's motions quickly. In the rather young subject sample which participated in the study of [72], only habituation showed a significant impact. Overall, those studies shed light on the interplay between robot motion, personal traits, and users' perceived safety in HRI, highlighting the importance of habituation and experimental design considerations. In [73], perceived safety in HRI for fixed-path
and real-time motion planning algorithms was investigated based on physiological vital signs such as heart rate. The results emphasized that perceived safety is positively affected by habituation during the experiment and unaffected by previous experience. A comprehensive discussion on increasing perceived safety in HRI has been recently given in [33], where the following guidelines are listed:
* Instead of seeking to characterize the space of perceived safety, more focus should be put on objective metrics analysing a lack of perceived safety, since meaningful indications for robot control schemes are mainly measurable under unsafe conditions;
* Regarding objective and subjective measures, robot-related and human-related factors should be treated together since the HRI process is bidirectional;
* The key influencing factors of perceived safety that should be considered in designing safe HRI are identified as comfort, experience/familiarity, predictability, sense of control, transparency, and trust;
* Robot-related factors, see for example [25], should not result in discomfort, lack of control, or user distrust; instead, the robot behaviours should be familiar, predictable, and transparent;
* Besides the interrelationship between the factors, individual human characteristics as well as emotional and physiological reactions should be considered for a better understanding of the source of safety perception.
#### 3.1.5 Acceptance
To improve industrial production tasks such as assembly, manufacturing, and intralogistics, human-robot collaboration (HRC) is instrumental. Even though there are apparent benefits of using robots in industrial workplaces, several barriers limit employing collaborative robots in the industry. These barriers are not only related to strict safety regulations for physical human-robot collaboration (the key showstopper for investment from the employers' point of view); the workers' acceptance is also crucial. In [74], the main factors influencing the workers' acceptance of HRC are examined. In [75], the authors hypothesized that giving human workers partial decision-making authority over a task allocation process for work scheduling maximizes both team efficiency and the desire of human team members to work with semi-autonomous robotic counterparts. Their experimental results indicated that workers prefer to be part of an efficient team rather than have a role in the scheduling process if maintaining such a role decreases their efficiency.
Acceptance is also a crucial factor for utilizing the potential of service robotics in facilitating domestic tasks, including required safety-critical measures. Moreover, meeting user expectations is essential for fostering trust between the human and the robot [76; 77]. For instance, accepting an assistive robot to operate on-site in close physical interaction for medical examinations requires patient trust towards the robot. On the other hand, for human-in-the-loop (HIL) telemedicine, the presence of a human expert that remotely operates the robot can help the person trust the robot more and accept even its risky motions to perform the task. In [31], a service robot was used to understand which outpatient-care tasks may be accepted by the subjects depending on their socio-demographics, beliefs, and level of robot autonomy.
#### 3.1.6 Personalization
Assistive robotics aims at providing users with continuous support and personalized assistance through appropriate interactions. Besides observing and understanding the changes in the environment to react promptly and behave appropriately, an intelligent assistive robot should be easy to handle, intuitive to use, ergonomic, and adaptive to human habits, individual usage profiles, and preferences. A personalized adaptive stiffness controller for pHRI tasks
calibrated for the user's force profile was proposed for industrial applications in [78]. Its performance was validated in an extensive user study with multiple participants on two different tasks against a standard fixed controller. The results showed that the personalized approach was better regarding both performance gain and user preference, clearly pointing out the importance of considering both task-specific and human-specific parameters while designing control modes for pHRI. Furthermore, analyzing users' interaction force profiles, it was further confirmed that human and task parameters could be combined and quantified by considering the manipulability of a simplified human arm model. In [79], a collaborative robotic system that is capable of assisting a human worker despite limited manipulation capabilities, incomplete task model, and partial environment observability was proposed. To achieve that, information from a high-level, hierarchical model is shared between the human and the robot, enabling transparent synchronization between the peers and mutual understanding of each other's plans.
A socially assistive robotic system that can provide affordable personalized physical and cognitive assistance, motivation, and companionship with adaptable behaviour to its human user profile was first proposed in [32]. In subsequent work [80], a fuzzy-based methodology was employed to investigate how matching the human and the robot personalities can influence their interaction. Furthermore, robot head-arm metaphoric gestures were generated automatically under different emotional states based on the prosodic cues of the interacting human. In [81], a novel cognitive approach that integrates ontology-based knowledge reasoning, automated planning, and execution technologies was recently proposed to endow assistive robots with intelligent features for performing personalized assistive tasks. These features include reasoning at different levels of abstraction, understanding specific health-related needs, and the ability to autonomously decide on how to act.
### 3.2 Additional middleware safety considerations
To adequately address the human diversity related to both safety and security, some customization and individualization are necessary. In terms of physical safety, for instance, investigations on scaling issues (age and gender effects on material properties) and statistical methods have been conducted, see, e.g., [41, 44, 43], for estimating the human injury risk curves, using various anthropomorphic test devices (ATDs) and mathematical models of the human body [42]. International anthropometric data for the workplace and machinery design can be found in, e. g. [82], and the corresponding technical report [83]. On the other hand, employing physiological measurements to perform online assessment of operators' mental states is crucial in HRI. To progress towards interactive robotic systems that would dynamically adapt to operators' affective states, in [84] operator's recorded physiological data streams were analyzed to assess the engagement during HRI and the impact of the robot's operative mode (autonomous versus manual). Furthermore, a software framework that is compatible with both laboratory and consumer-grade sensors, while it includes essential tools and processing algorithms for affective state estimation, was recently proposed in [85] to support real-time integration of physiological adaptation in HRI.
## 4 Safety-as-a-Service: Implementation Prospects
The schematics shown in Fig. 4 demonstrate our proposed concept of providing different safety services upon request at different stages of the graceful task execution pipeline. The latter is obtained by redesigning motion controllers, motion
planners, and task planners of the user-defined collaborative robotic task execution pipeline. The aim is to satisfy the reactivity and adaptivity requirements imposed by the safety layers for a GS behaviour. Furthermore, the functionality of each safety layer is encoded as an on-demand service. In contrast, critical safety aspects are ensured via persistent (i. e., always on) services such as emergency braking or safe fault recovery operation modes.
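One way to make the on-demand versus persistent distinction concrete is a thin service interface that the task execution pipeline can query at any stage. The class and method names below are our own illustrative choices, and the returned numbers are placeholders; each `query` would wrap one of the safety layers discussed in Section 3.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SafetyConstraint:
    """A constraint returned by a safety service, e.g. a speed or posture limit with a priority."""
    name: str
    value: float
    priority: int

class SafetyService(ABC):
    persistent: bool = False          # persistent services (e.g. emergency braking) are always on

    @abstractmethod
    def query(self, context: dict) -> SafetyConstraint:
        """Evaluate the layer for the current context (robot state, human state, task phase)."""

class ImpactSafetyService(SafetyService):
    def query(self, context: dict) -> SafetyConstraint:
        # placeholder: would call an SMU-like module using reflected mass and contact scenario
        return SafetyConstraint("max_cartesian_speed", 0.5, priority=0)

class PerceivedSafetyService(SafetyService):
    def query(self, context: dict) -> SafetyConstraint:
        # placeholder: would call an EMU-like module using distance and habituation information
        return SafetyConstraint("max_cartesian_speed", 0.8, priority=1)

def resolve(constraints):
    """Trivial arbitration: keep the most restrictive speed limit (real arbitration is richer)."""
    return min(constraints, key=lambda c: c.value)

services = [ImpactSafetyService(), PerceivedSafetyService()]
active = resolve([s.query({"phase": "approach"}) for s in services])
```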
To implement various safety services, the so-called generalized Safe Motion Unit (gSMU) framework can be adopted as the underlying safety-certifiable scheme for providing biomechanically-safe robot motions [27], with the possibility to include additional robot payload [86] and predictable braking strategies [87]. Simultaneous consideration of human factors, especially experience with robots/habituation, which potentially influences the humans' perceived safety for varying robot factors [71], can be achieved by including them in the EMU-SMU framework [37, 12], see Fig. 5.
## 5 Conclusion and Future Work
This work presented an integrated multi-layered architecture to simultaneously tackle safety issues as well as gracefulness requirements of human-friendly robots in HRI scenarios. Based on a focused literature review, we identified various physical and cognitive HRI safety layers and emphasized notable studies discussing each and their corresponding findings. Furthermore, we suggested the _safety-as-a-service_ concept for formalizing how to address the requirements of each HRI safety aspect concurrently while adapting the collaborative task execution pipeline for graceful robot task execution. Then, we discussed an example that shows some promising integration work along the suggested direction.
Figure 4: Safety-as-a-service concept for GS-HRI.
For future research and developmental work, we will detail some crucial architectural aspects such as prioritization of safety features that generate service requests, smooth switching and management of multiple ones concurrently, hierarchical rules needed to handle conflicts that may arise at the output level from different layers, as well as elaboration of the middleware considerations. We also plan to study the possibility of extending the safety assessment methodology proposed in [9] to cover the cognitive aspects, such that it can be employed for formal verification of the proposed multi-layered HRI safety architecture. Moreover, a comprehensive user study in industrial and service robot settings with a heterogeneous subject sample (including a broad range of persons with different experiences, ages, and genders) is required. Further, as users are able to adjust their expectations of the robot's behaviour quickly (habituation), efficiency enhancements of human-robot teams become feasible. Also, the effect of unfulfilled expectations on subsequent interactions needs to be analyzed by means of subjective and objective measures.
Robots today exhibit impressive motor capabilities, but collaborating and coexisting with humans also introduces potential hazards. Safety in human-robot interaction (HRI) has therefore become a factor of utmost importance. This paper proposes a multi-layered safety architecture that integrates physical and cognitive aspects. We formulate the physical safety layers as service modules that can be queried on demand. Furthermore, building on a validated impact-safety paradigm, we present an HRI schematic that treats human factors and safety as high-level constraints. The aim is to enable safety certification of human-friendly robots across a variety of HRI scenarios. |
2310.08593 | Data-driven methods for diffusivity prediction in nuclear fuels | The growth rate of structural defects in nuclear fuels under irradiation is
intrinsically related to the diffusion rates of the defects in the fuel
lattice. The generation and growth of atomistic structural defects can
significantly alter the performance characteristics of the fuel. This
alteration of functionality must be accurately captured to qualify a nuclear
fuel for use in reactors. Predicting the diffusion coefficients of defects and
how they impact macroscale properties such as swelling, gas release, and creep
is therefore of significant importance in both the design of new nuclear fuels
and the assessment of current fuel types. In this article, we apply data-driven
methods focusing on machine learning (ML) to determine various diffusion
properties of two nuclear fuels, uranium oxide and uranium nitride. We show
that using ML can increase, often significantly, the accuracy of predicting
diffusivity in nuclear fuels in comparison to current analytical models. We
also illustrate how ML can be used to quickly develop fuel models with
parameter dependencies that are more complex and robust than what is currently
available in the literature. These results suggest there is potential for ML to
accelerate the design, qualification, and implementation of nuclear fuels. | Galen T. Craven, Renai Chen, Michael W. D. Cooper, Christopher Matthews, Jason Rizk, Walter Malone, Landon Johnson, Tammie Gibson, David A. Andersson | 2023-09-07T16:28:50 | http://arxiv.org/abs/2310.08593v1 | # Data-driven methods for diffusivity prediction in nuclear fuels
###### Abstract
The growth rate of structural defects in nuclear fuels under irradiation is intrinsically related to the diffusion rates of the defects in the fuel lattice. The generation and growth of atomistic structural defects can significantly alter the performance characteristics of the fuel. This alteration of functionality must be accurately captured to qualify a nuclear fuel for use in reactors. Predicting the diffusion coefficients of defects and how they impact macroscale properties such as swelling, gas release, and creep is therefore of significant importance in both the design of new nuclear fuels and the assessment of current fuel types. In this article, we apply data-driven methods focusing on machine learning (ML) to determine various diffusion properties of two nuclear fuels--uranium oxide and uranium nitride. We show that using ML can increase, often significantly, the accuracy of predicting diffusivity in nuclear fuels in comparison to current analytical models. We also illustrate how ML can be used to quickly develop fuel models with parameter dependencies that are more complex and robust than what is currently available in the literature. These results suggest there is potential for ML to accelerate the design, qualification, and implementation of nuclear fuels.
## I Introduction
Atomistic structural defects influence and alter the macroscopic properties of nuclear fuels and materials [1]. Macroscopic changes, such as volumetric swelling, gas release, and creep, can in turn give rise to alterations of the functionality of the fuel in a reactor. Therefore, developing theoretical methods to predict the growth rates of atomistic point defects and defect clusters is of significant importance in the design and qualification of fuels and materials that are used in reactors. The growth rates of defect clusters are governed by the diffusivity of their constituent point defects. This is because the rate at which defects move through the fuel lattice strongly influences how quickly those defects combine to form larger defect clusters. Predicting the diffusion properties of defects in nuclear fuels under reactor conditions is, therefore, an important research focus in reactor design and safety and surety analysis.
There are two primary theoretical approaches that are applied to determine diffusion properties in nuclear fuels: (1) deriving empirically-motivated analytical functional forms and fitting those forms to existing experimental data and (2) extracting diffusion coefficients from atomic scale calculations and simulations combined with rate theory approaches. One simulation method that is commonly applied to understand and predict defect growth in irradiated materials is cluster dynamics [2; 3; 4; 5; 6; 7; 8]. Cluster dynamics is a mean-field method that tracks the time evolution of concentrations of point defects and defect clusters [9]. Diffusivity predictions are generated from cluster dynamics simulation data by combining the predicted defect concentrations with mobility data. Empirical analytical models provide ease of use, transferability, and interpretability, and are computationally simple to evaluate. However, they typically have a limited range of applicability with respect to variation of reactor conditions and often do not capture the salient features of diffusion processes at a quantitative level. Atomistic calculations combined with rate models for defect evolution, often under nonequilibrium conditions [10; 11], can be used to provide high-level predictions of diffusion coefficients [9; 12; 13] in irradiated materials. However, the process to construct, parameterize, calibrate, and test atomistic and cluster dynamics models can be time-consuming.
In this article, we develop a data-driven workflow to predict diffusion properties of nuclear fuels. This workflow focuses on the application of machine learning (ML) methods to predict diffusion coefficients of various chemical species, defect types, and defect clusters. Machine learning is a broad term that typically defines a set of numerical methods that are applied to construct unknown model functions using existing data to train the ML process [14; 15]. ML methods have had a history of success in the fields of chemistry, physics, and material science [16; 17; 18; 19; 20; 21]. In these fields, ML is often performed by training numerical models through the application of a specific learning algorithm to data from high-level electronic structure calculations or experimental data. Because ML uses existing data, it represents a powerful complement to existing methods and not a replacement. ML methods can greatly reduce the amount of experimental data or simulation data needed to address a problem, allowing the prediction of properties of a material without running a computationally-expensive molecular simulation or performing an experiment. ML also allows for the development of models that capture more complex parameter dependencies than traditional models.
The nuclear engineering/physics community has started to adopt ML methodologies in the study of nuclear reactors and the nuclear fuel cycle [23; 24; 25; 26; 27; 28]. Historically, these are research areas in which both experimental and simulation data is scarce and laborious to generate. Some noteworthy examples of applications of ML in nuclear engineering include: the prediction of properties of light water reactors [24], nuclear criticality safety analysis [26], and thermal conductivity prediction [27]. Given the previous success of ML in chemistry, physics, material science--and now nuclear engineering--it is promising to expand the utility of ML techniques within the nuclear fuels discipline.
The two specific nuclear fuels examined in this work are uranium oxide \(\mathrm{U}\mathrm{O}_{2}\) and uranium nitride UN--the former is an established nuclear fuel and the latter has not been as widely used, although it is a promising candidate for a variety of advanced reactors due to its high thermal conductivity and U density [29; 30; 31]. The diffusion datasets used to train ML processes are obtained from three sources: experimental results taken from the literature, atomistically-informed cluster dynamics simulations, and data augmentation methods that are applied to expand small experimental datasets. There are a number of recent studies on the diffusion properties of \(\mathrm{U}\mathrm{O}_{2}\)[32; 33; 9; 12; 34]. The data in these studies, and other experimental work [35; 36; 37; 38], can be used to validate and calibrate the developed models. There is less available information and data for UN in comparison to \(\mathrm{U}\mathrm{O}_{2}\). There are, however, several small datasets available for self-diffusion and fission gas diffusion in UN [39; 40; 41; 42; 43; 44]. We examine these fuels under irradiated and non-irradiated conditions.
The primary goal in this paper is to demonstrate the application of ML to predict diffusivity behavior in nuclear fuels. The workflow for this project is shown in Fig. 1. Experimental diffusion data is used to train ML processes directly and is also fed into data augmentation algorithms to expand small datasets. The experimental data and augmented data are coupled with data generated using molecular simulations, specifically cluster dynamics simulations. The baseline parameters in the cluster dynamics model, informed by atomic scale calculations and simulations, are generated based on DFT and empirical potentials [45]. Those baseline parameters are then calibrated self-consistently by creating a feedback loop between the ML program, the molecular simulations and experimental data used for calibration. Note the range within which the parameters are allowed to change is based on the inherent uncertainty of the atomic scale simulations. The target of this process is to generate diffusivity data from cluster dynamics simulations that agree with experimental results. The overall output of this workflow is a set of calibrated diffusivity models built by merging data from simulations and experiments.
The remainder of this article is organized as follows: Section II contains an introduction to the methods used to examine and predict diffusion coefficients in nuclear fuels. An overview of the available experimental data is also given. In Section III, the results of the ML diffusion models are presented. Section III.1 focuses on fission gas diffusion and self-diffusion in uranium oxide and Section III.2 focuses on various diffusion properties of uranium nitride. Conclusions and future directions are discussed in Section IV.
## II Methodologies and data
### Machine Learning
ML methods are typically applied to construct unknown model functions using existing data to train the ML process [15]. The general objective in _supervised_ ML, which is the method used here, is to take a collection of input data (sometimes called features) and corresponding output data (sometimes called labels) and develop a function \(f\) that accurately maps the input data to the output data. The term _supervised_ ML means that for every set of features \(\vec{x}\) there is a corresponding label \(y\). Described in mathematical terms, given a set of \(k\) features and a labelled dataset of size \(n\), the general goal of ML is to take the input data \(\mathbf{X}=\{\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{n}\}\) where \(\vec{x}_{1}=\{x_{1}^{(1)},x_{1}^{(2)},\ldots,x_{1}^{(k)}\}\) and corresponding output data \(\mathbf{y}=\{y_{1},y_{2},\ldots,y_{n}\}\) and develop a function \(f\) that accurately maps the input data to the output data:
\[f:\mathbb{R}^{k}\rightarrow\mathbb{R}. \tag{1}\]
In the context of the diffusion models developed here, the feature vector \(\vec{x}\) will consist of properties such as temperature, partial pressure, and fission rate, and the output will be a diffusion coefficient for a specific defect or chemical species. The goal of training a ML process is to develop a mathematical model using available data so that when new data is used as input the model generates an accurate output. In most ML applications, the developed function \(f\) takes a highly complex functional form that does not correspond to a simple and sparse analytical expression.
Figure 1: Schematic representation of the data-driven workflow developed in this work.
ML is commonly performed by separating available data into two subsets--training data and testing data. The training data is used to build the model, i.e., to train the ML model, and the testing data is used to quantify the predictive accuracy of the model. The testing data is not used in the model construction and is only used to test and quantify the model. A typical split between training and testing data is 80:20, meaning that eighty percent of the data is used for training and twenty percent for testing. An 80:20 training/testing ratio is used in all the ML models presented in this work.
There are multiple ML methods that can be used to construct the function \(f\). Two well-known approaches are kernel methods and neural networks [14]. Neural networks are data-hungry methods that perform best when the size of the dataset used for training is large [46]. In the context of diffusion in nuclear fuels, the available datasets are typically small and therefore kernel methods are expected to perform better for these cases. The specific ML package we use to perform the ML procedures is Scikit-learn[47]. In this work, we illustrate data-driven ML methods to determine diffusion of various defects and species in uranium oxide and uranium nitride. The data used to train the ML processes comes from two primary sources: experimental datasets extracted from the literature and molecular simulations.
### Experimental Data
The experimental diffusion datasets used as training data for the ML processes are taken from literature sources. Table 1 is a list of the datasets. In most cases, the experimental data was given in tabular form in the original papers. If the numerical values of the data were not listed in a table in the original papers, and were only shown in graphical figures, the data was digitally extracted from the figures using the WebPlotDigitizer program. It is important to note that the diffusion data in Table 1 was collected over several decades and that the experimental techniques used to collect the data are varied, therefore the quality and accuracy of the datasets also vary.
### Cluster Dynamics Simulations
Molecular simulation methods can be applied to generate diffusivity data for nuclear fuels [34, 9, 12]. Here, cluster dynamics simulations are used to generate diffusion coefficients for various defect types in the respective fuel. Implementing the cluster dynamics method consists of parameterizing and then solving a typically large system of nonlinear coupled ordinary differential equations, where each equation in the system describes the time evolution of a specific defect type.
The cluster dynamics code CENTIPEDE [12, 9] was applied to incorporate physical parameters for the nuclear fuels and to solve the defect evolution equations. Details about the CENTIPEDE code and the physics behind it can be found in Ref. [9]. In a CENTIPEDE simulation, the concentration \(c_{d}\) of every defect type \(d\) is tracked in time through a differential equation of the form
\[\begin{split}\frac{dc_{d}}{dt}&=\dot{\beta}_{d}+ \sum_{d^{\prime}}\dot{R}_{d,d^{\prime}}(c_{d},c_{d^{\prime}},D_{d},D_{d^{ \prime}},T,G)\\ &\quad-\sum_{s}\dot{S}_{d,s}(c_{d},c_{s},D_{d},T,G),\end{split} \tag{2}\]
where \(\dot{\beta}_{d}\) is the generation rate of defect \(d\) due to irradiation, \(\dot{R}_{d,d^{\prime}}\) is the reaction rate between defect types \(d\) and \(d^{\prime}\), and \(\dot{S}_{d,s}\) is the sink rate between defect type \(d\) and sink type \(s\). The sums in Eq. (2) are taken over all defect types (self-inclusive) and all sink types. The reaction and sink rates depend on the free energy of the system \(G\) and temperature of the system \(T\). The reaction rate between defect types \(d\) and \(d^{\prime}\) also depends on the concentrations of each defect and the diffusion coefficients \(D_{d}\) and \(D_{d^{\prime}}\) of those defects. We are primarily interested in solving for the steady-state concentrations with constant source and sink strengths, which are found when the rate of change of the concentration vanishes (\(\frac{dc_{d}}{dt}=0\)) up to some numerical precision for all defect types (see Refs. [12, 45, 9] for more details on CENTIPEDE implementations of UO\({}_{2}\)). Once the concentrations of point defects and clusters have been determined, self-diffusion and Xe diffusion can be obtained as the sum over the product of the relative concentration of each defect contributing to diffusion of a species and its mobility.
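As an illustration of this steady-state construction, the toy two-defect sketch below integrates a vacancy/interstitial pair with constant generation, mutual recombination, and sink loss until the concentrations stop changing. The rate constants are arbitrary placeholders and the model is far simpler than the CENTIPEDE system of equations.

```python
# Toy two-defect rate-theory sketch (placeholder rate constants; not CENTIPEDE).
#   dc_v/dt = beta - k_vi * c_v * c_i - k_s * c_v
#   dc_i/dt = beta - k_vi * c_v * c_i - k_s * c_i
from scipy.integrate import solve_ivp

beta, k_vi, k_s = 1e-6, 1e2, 1e-3   # generation, recombination, and sink rates


def rhs(t, c):
    c_v, c_i = c
    recomb = k_vi * c_v * c_i
    return [beta - recomb - k_s * c_v,
            beta - recomb - k_s * c_i]


sol = solve_ivp(rhs, (0.0, 1e6), [0.0, 0.0], method="LSODA", rtol=1e-8, atol=1e-12)
c_v_ss, c_i_ss = sol.y[:, -1]        # approximate steady-state concentrations
print(c_v_ss, c_i_ss)
```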
## III Results
### Uranium Dioxide
The generation and subsequent diffusion of fission gas in UO\({}_{2}\) impacts fission gas release and swelling, which, in turn, impact fuel performance [12]. Turnbull _et al._ have reported measurements for Xe diffusion coefficients in UO\({}_{2}\) under irradiation over the approximate temperature range \(500\mathrm{K}-1700\mathrm{K}\)[35]. Fig. 2 shows the Xe diffusion coefficient in UO\({}_{2}\) as a function of temperature--the red circular markers represent the Turnbull data. Analytical models have been previously developed to predict the Xe diffusion coefficient at various temperatures and under various irradiation conditions. Two analytical models are the Matthews model [12] and the Forsberg
model [49], which is based on the original development by Turnbull _et al._[35]. The details of these models are found in the Appendix. In Fig. 2, the Matthews model is shown as a dashed black curve and the Forsberg model is shown as a dashed red curve. Both analytical models are in agreement with the Turnbull data over most temperatures, but the Matthews model overestimates and the Forsberg model underestimates Xe diffusion at high temperatures. The parameters in the Matthews model are obtained by fitting to the results of cluster dynamics simulations performed using the CENTIPEDE code, not by directly fitting to the Turnbull data. Therefore, some discrepancy between the data and the model is expected.
As proof of concept that ML can be used to improve the predictive accuracy of existing analytical models, we used the Turnbull data to train a ML process to predict the Xe diffusion coefficient. The specific ML method used was kernel ridge regression (KRR), a method which combines ridge regression with the so-called kernel trick [50]. We implemented KRR using the Scikit-learn [47] software package. Ridge regression is a method for approximating the coefficients of several multiple-regression models which works well if the independent variables are highly correlated [51; 52]. Scikit-learn specifically utilizes ridge regression with linear least squares with \(L_{2}\)-norm regularization. In some ML methods and applications, raw data must be transformed via a feature map into a feature vector representation. Kernel methods, through the use of kernel functions, can circumvent the need to directly calculate feature vectors, which can be computationally expensive. These methods achieve this by calculating the inner products between the images of all pairs of data points within the feature space [53].
The predictor function (the unknown function to be approximated) in KRR can be expressed as:
\[f(\vec{x})=\sum_{i}\alpha_{i}\mathcal{K}(\vec{x},\vec{x}_{i}), \tag{3}\]
where \(\mathcal{K}(\vec{x},\vec{x}_{i})\) is the so-called kernel which can be chosen to take various functional forms depending on the properties of the data being analyzed; a polynomial kernel and radial basis function kernel [54] are used in this work. The elements \(\alpha_{i}\) in Eq. 3 are taken from the matrix,
\[\mathbf{\alpha}=(\mathbf{K}+\lambda\mathbf{I})^{-1}\mathbf{y}, \tag{4}\]
where \(\mathbf{K}\) is the kernel matrix with elements given by \(K_{i,j}=\mathcal{K}(\vec{x}_{i},\vec{x}_{j})\), \(\mathbf{I}\) is the identity matrix, \(\lambda\) denotes a regularization parameter in the ridge regression method that puts a penalty on the weights in order to reduce the variance of predictions, and \(\mathbf{y}\) represents the target matrix.
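A minimal Scikit-learn sketch of this fitting step is shown below, using the 80:20 train/test split described above and an RBF kernel, with the regularization parameter \(\lambda\) exposed as the `alpha` hyperparameter. The temperature and log-diffusivity arrays are placeholders standing in for the Turnbull measurements, which are not reproduced here.

```python
# Minimal KRR sketch with Scikit-learn (placeholder data standing in for the
# experimental temperature / log-diffusivity pairs).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, train_test_split

T = np.linspace(500.0, 1700.0, 34).reshape(-1, 1)            # temperature (K)
logD = -0.005 * T.ravel() - 15.0 + 0.2 * np.random.default_rng(0).standard_normal(34)

X_train, X_test, y_train, y_test = train_test_split(T, logD, test_size=0.2, random_state=0)

krr = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": np.logspace(-3, 1, 5),               # regularization (lambda)
                "gamma": np.logspace(-6, -2, 5)},             # RBF kernel width
    cv=5,
)
krr.fit(X_train, y_train)
print("test R^2:", krr.score(X_test, y_test))
```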
The ML result for Xe diffusivity generated by training a KRR process is shown as a blue curve in Fig. 2. The ML result is in strong agreement with the experimental results over the entire temperature range of the Turnbull data. The dip in the ML model at \(\approx 800\)K is a result of variance in the data used to train the ML model, not a change in the diffusion mechanism. This proof of concept example illustrates the power of ML to quickly generate models that accurately capture important trends in diffusion data. When a ML model is trained using the Turnbull data, there is large variance in the model depending on which points are randomly assigned as testing data and training data. To mitigate this variance, we developed a data augmentation approach to artificially expand the Turnbull dataset, and, therefore, reduce variance in the ML model. The results presented in Fig. 2 use this data augmentation method, which is described in detail below.
| Dataset | Fuel | Species | \(T\) (K) | \(p\) (atm) | # of data points | Ref. |
|---|---|---|---|---|---|---|
| Turnbull _et al._ | UO\({}_{2}\) | Xe | \(\approx 500-1700\) | \(p_{\text{O}_{2}}\) = NA | 34 | 35 |
| Miekeley and Felix | UO\({}_{2}\) | Xe | \(\approx 1200-2000\) | \(p_{\text{O}_{2}}\) = NA | 32 | 36 |
| Davies and Long | UO\({}_{2}\) | Xe | fit only | \(p_{\text{O}_{2}}\) = NA | fit only | 37 |
| Matzke | UN | N | \(\approx 1500-2300\) | \(p_{\text{N}_{2}}\) = NA | 36 | 39, 40 |
| Holt and Almassy | UN | N | \(\approx 2050-2300\) | \(p_{\text{N}_{2}}\approx 0.01-0.8\) | 12 | 41 |
| Sturiale and DeCrescente [48] | UN | N | \(\approx 1800-2400\) | \(p_{\text{N}_{2}}\approx 0.13\) | 27 | 42 |
| Reimann _et al._ | UN | U | \(\approx 1875-2150\) | \(p_{\text{N}_{2}}\approx 10^{-4}-0.6\) | 30 | 43 |

Table 1: List of experimental diffusivity datasets used in this work
Figure 2: Xe diffusion coefficient in UO\({}_{2}\) as a function of temperature. The red points are experimental results from Turnbull _et al._[35]. The solid blue curve is the ML result. The dashed red and dashed black curves are, respectively, results of the analytical models of Matthews _et al._[12] and Forsberg and Massih [49].
Large datasets are typically used to train ML models (e.g., neural network models). This highlights the traditional connection between big data and machine learning. However, in the context of the diffusion properties of nuclear fuels, there is often limited experimental data available. In situations in which the available data is limited, performing ML can result in models with large variances and uncertainties. To mitigate these problems, a data augmentation technique has been developed to artificially expand the small available datasets. This approach shares similarities with density estimation methods [55] but is more specifically tailored to solve the problem of fission gas diffusivity in nuclear fuels. The core of this approach is to use the geometric size of the experimental dataset used as training data to estimate the variance of the data and the density of the experimental data to estimate the mean of the data, and then to generate augmented data by randomly sampling new datapoints from a distribution that uses the estimated variance and mean.
The first step in the augmentation process is to construct the concave hull (a boundary over the dataset) of the data. The concave hull of the Turnbull data is shown in Fig. 3. The Turnbull data consists of diffusivity data as a function of temperature--the concave hull of the data \(\partial\mathcal{H}\) bounds the data. At a specific temperature \(T\), augmented datapoints are generated by sampling from a normal distribution \(\mathcal{N}(\mu(T),\mathcal{W}(T))\) where \(\mu(T)\) is the distribution mean which is extracted from the hull density and \(\mathcal{W}(T)\) is the width of the hull. The hull width is defined by drawing a vertical line at temperature \(T\), and noting that the line intersects the hull boundary twice, once at a higher diffusion value \(\partial\mathcal{H}_{\mathrm{H}}(T)\) and once at a lower diffusion value \(\partial\mathcal{H}_{\mathrm{L}}(T)\). The width is \(\mathcal{W}(T)=\partial\mathcal{H}_{\mathrm{H}}(T)-\partial\mathcal{H}_{\mathrm{L}}(T)\). The mean \(\mu(T)\) is constructed from a spline interpolation across the data.
To generate an augmented datapoint, a random temperature is sampled from a uniform distribution over the desired temperature range, here \(300\mathrm{K}-2100\mathrm{K}\). At the sampled temperature, a sample is drawn from the normal distribution \(\mathcal{N}(\mu(T),\mathcal{W}(T))\). That sample is an augmented diffusion value at the specified temperature. We then iterate over this process until the desired number of augmented datapoints is generated. After performing data augmentation and training the ML model on the augmented data, the variance in the model development is reduced. The black dots in Fig. 3 are augmented data. When the variance in the Turnbull data is small, the variance in the augmented data is also small. Similarly, when the variance in the data is large, the variance in the augmented data is also large. This illustrates how the developed data augmentation method captures trends in the variance of the training data.
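A simplified sketch of this augmentation loop is given below. The exact concave-hull construction is not reproduced; the upper and lower envelopes are approximated by binning in temperature, and the hull width \(\mathcal{W}(T)\) is assumed to act as the standard deviation of the sampling distribution, which is one possible reading of \(\mathcal{N}(\mu(T),\mathcal{W}(T))\). The example data points are placeholders.

```python
# Simplified data-augmentation sketch (envelopes approximated by temperature
# binning; W(T) is assumed to act as the standard deviation of the normal draw).
import numpy as np

rng = np.random.default_rng(0)


def augment(T_data, logD_data, n_new, T_lo=300.0, T_hi=2100.0, n_bins=8):
    """Generate augmented (temperature, log-diffusivity) points from a small dataset."""
    bins = np.linspace(T_data.min(), T_data.max(), n_bins + 1)
    centers, lo, hi, mu = [], [], [], []
    for a, b in zip(bins[:-1], bins[1:]):
        mask = (T_data >= a) & (T_data <= b)
        if mask.any():                                   # skip empty temperature bins
            centers.append(0.5 * (a + b))
            lo.append(logD_data[mask].min())
            hi.append(logD_data[mask].max())
            mu.append(logD_data[mask].mean())
    centers, lo, hi, mu = map(np.asarray, (centers, lo, hi, mu))
    T_new = rng.uniform(T_lo, T_hi, n_new)               # uniform temperature samples
    mu_T = np.interp(T_new, centers, mu)                 # local mean
    width_T = np.interp(T_new, centers, hi - lo)         # local envelope width
    logD_new = rng.normal(loc=mu_T, scale=np.maximum(width_T, 1e-6))
    return T_new, logD_new


# Placeholder experimental points standing in for the Turnbull dataset.
T_exp = np.array([500.0, 700.0, 900.0, 1100.0, 1300.0, 1500.0, 1700.0])
logD_exp = np.array([-20.1, -19.8, -19.9, -19.0, -18.4, -17.9, -17.6])
T_aug, logD_aug = augment(T_exp, logD_exp, n_new=500, n_bins=4)
```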
The kernel-based ML approach can be applied to generate accurate models for diffusion data, but it does not explicitly encode any fundamental physics in the fitting process. Explicitly encoding physics into ML models can be advantageous for situations in which the ML model is used to make predictions outside of the boundaries of the training data, i.e., when the ML is used for extrapolation. One way that physics principles can be encoded into an ML model is to use Physics-Informed Machine Learning (PIML) methods [56]. Fig. 4 shows the result of a PIML model that is developed by reparameterizing the Matthews analytical model using augmented data generated from the Turnbull dataset. The PIML model agrees well with the experimental data across all temperatures. One advantage of the PIML method over the kernel-based ML model is that it captures both the high- and low-temperature trends outside of the boundaries of the experimental data, similar to the Matthews analytical model [12].
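One way such a reparameterization can be carried out in practice is sketched below: the Matthews functional form (given in the Appendix) is refit to augmented diffusivity data with non-linear least squares, starting from the published constants. The data array here is a placeholder, and only a subset of the constants is refit for simplicity.

```python
# PIML-style reparameterization sketch: refit selected constants of the Matthews
# form (see Appendix, Matthews Model). The "augmented data" below is a placeholder.
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617333e-5                     # Boltzmann constant (eV/K)
FDOT = 1.0e19                        # fission rate (fissions / m^3 s)


def log10_matthews(T, c1, c2, c6):
    thermal = c1 * np.exp(c2 / (KB * T)) / (1.0 + 29.03 * np.exp(-1.84e-4 / (KB * T)))
    athermal = c6 * np.exp(-2.0 / (KB * T)) * np.sqrt(FDOT) + 8.5e-40 * FDOT
    return np.log10(thermal + athermal)


p_baseline = [2.216e-7, -3.26, 2.821e-22]        # published Matthews constants
rng = np.random.default_rng(0)
T_aug = np.linspace(500.0, 1700.0, 300)
logD_aug = log10_matthews(T_aug, *p_baseline) + 0.05 * rng.standard_normal(T_aug.size)

p_refit, _ = curve_fit(log10_matthews, T_aug, logD_aug, p0=p_baseline, maxfev=20000)
print("refitted (c1, c2, c6):", p_refit)
```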
Figure 3: The concave hull of the Turnbull _et al._ data is shown in solid black. The black dots are augmented data and the red points are the Turnbull experimental data.
Figure 4: Xe diffusion coefficient in \(\mathrm{U}\mathrm{O}_{2}\). The red points are experimental results. The dashed black curve is the analytical model of Matthews _et al._[12] and the solid red curve is the Matthews model parameterized using augmented data. Note that the Matthews model is not fit directly to the Turnbull data, so, here, the primary conclusion is the utility of PIML in the context of fitting to new data sources.
The accuracy of ML models can be compared with the analytical models to quantify any improvement in predictive accuracy that can be gained using data-driven methods. The root mean square deviation (RMSD) for the four methods/models described previously (taken with respect to the Turnbull data) is shown in Fig. 5. The ML result gives the lowest error and the PIML gives the second lowest error. Both data-driven approaches increase the predictive accuracy for Xe diffusion in comparison to analytical models. Compared to the Matthews model and the Forsberg model, the kernel-based ML approach reduces the error by approximate factors of 1.7 and 1.5, respectively. The PIML method reduces the error by a factor of 1.3 in comparison to the Matthews model and a factor of 1.2 in comparison to the Forsberg model. Note that all the models, including the analytical models, perform well for Xe diffusion in \(\mathrm{U}\mathrm{O}_{2}\). This can be observed qualitatively in Figs. 2 and 4. Therefore, while the data-driven models do provide improvements in predictive accuracy for well-studied fuels such as \(\mathrm{U}\mathrm{O}_{2}\), we anticipate the primary use of these methods will be when modeling lesser-studied fuels.
#### iii.1.1 Multi-Dimensional Diffusion Models
ML can also be applied to construct multidimensional diffusion models of the form \(D(T,\dot{F})\) which capture the dependence of Xe diffusion on the fission rate \(\dot{F}\) in the material in addition to capturing trends in the temperature dependence. Fig. 6 shows the results of a KRR ML process in comparison to the irradiated Turnbull data and the nonirradiated (\(\dot{F}=0\)) data of Miekeley and Felix [36]. The ML model is trained on augmented data generated from these datasets. The ML prediction is in strong agreement with the experimental datasets over all examined temperatures and accurately captures the irradiation behavior. This illustrates the ability of ML to accurately capture trends in multidimensional diffusion data. We have confirmed that the ML model smoothly interpolates between the fission rate of the Turnbull data (\(\dot{F}=10^{19}\,\mathrm{fissions/m^{3}\,s}\)) and the nonirradiated (\(\dot{F}=0\)) limit. However, because experimental data has not to our knowledge been generated at intermediate values between these limits, the ML model may not scale accurately in the intermediate regime due to lack of training data. Calibration of the ML model at intermediate fission rates could be accomplished using data generated from cluster dynamics calculations, for example, by using CENTIPEDE.
Other experimental datasets for Xe diffusion in \(\mathrm{U}\mathrm{O}_{2}\) besides the Miekeley and Felix data can be found in the literature. For example, the analytical fit of Davies and Long is generally considered to more accurately capture Xe diffusivity at thermal equilibrium. Note that the primary goal in this work is to illustrate the utility of ML in nuclear fuel model development, not to assess the accuracy and validity of the datasets that are available in the literature. So, in the context of this work, the datasets are primarily tools to benchmark the developed ML methods. Shown in Fig. 7(a) is a comparison between the results of the Matthews model (which was developed to agree with CENTIPEDE predictions that are close but not identical to the Davies and Long and Turnbull data sets), the Turnbull data, and the data generated using the analytical fit of Davies and Long. The Matthews model is in agreement with both datasets, but overestimates the diffusivity of the nonirradiated data. For comparison, the ML result shown in Fig. 7(b) is in excellent agreement with both data sets over all irradiation and temperature conditions. Note that the different experimental data sets used in this subsection have different partial oxygen pressures as well as different irradiation conditions.
Figure 5: Root mean square deviation of various models for the Xe diffusion coefficient in \(\mathrm{U}\mathrm{O}_{2}\). The experimental data used to calculate the RMSD is from Turnbull _et al._[35].
Figure 6: Xe diffusion coefficient in \(\mathrm{U}\mathrm{O}_{2}\) as a function of temperature \(T\) and fission rate \(\dot{F}\) for the ML model. The solid curve is the result of the ML model for the irradiated case (\(\dot{F}=10^{19}\,\mathrm{fissions/m^{3}\,s}\)) and the dashed curve is the result for the nonirradiated case (\(\dot{F}=0\)). The red points are experimental results from Turnbull _et al._[35] and the blue points are experimental results from Miekeley and Felix [36].
Using the results of CENTIPEDE cluster dynamics simulations, multi-dimensional diffusion models \(D(T,\dot{F},p_{\mathrm{O_{2}}})\) can be developed using ML that include the dependence on the partial pressure of oxygen \(p_{\mathrm{O_{2}}}\) as well as the fission rate and temperature. The training data for the ML process was generated by performing CENTIPEDE simulations at 1000 datapoints on a grid in the \(\{T,\dot{F},p_{\mathrm{O_{2}}}\}\) parameter space. An 80:20 split between training and testing data was used. All features and labels were log-scaled before training and testing except the temperature. To predict the Xe diffusion and U diffusion, we again utilized KRR [50] with the addition of the nearest neighbors (NN) approach [57, 58]. The procedure is performed by taking an input state point \(p=\{T,\dot{F},p_{\mathrm{O_{2}}}\}\) (the point at which the diffusion coefficient is to be predicted) and determining the \(N_{\mathrm{neigh}}=8\) nearest neighbor points in the training data to the input point. The metric used to determine the nearest neighbors to the input point \(p\) was the weighted Euclidean distance with weights obtained using a grid search hyperparameter optimization. After the nearest neighbors are determined, the KRR is performed using only the NN points. The output of the KRR procedure at the target datapoint \(p\) in state space is the predicted diffusion value.
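A sketch of this local prediction step is shown below. The training grid is a random placeholder standing in for the 1000 CENTIPEDE state points, and the tuned weighted-distance metric is simplified to a plain Euclidean distance on standardized features.

```python
# Nearest-neighbour + KRR prediction sketch (placeholder training data; plain
# Euclidean distance on standardized features instead of the tuned weighted metric).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler


def nn_krr_predict(X_train, y_train, x_query, n_neigh=8):
    scaler = StandardScaler().fit(X_train)
    Xs = scaler.transform(X_train)
    xq = scaler.transform(x_query.reshape(1, -1))
    _, idx = NearestNeighbors(n_neighbors=n_neigh).fit(Xs).kneighbors(xq)
    local = idx.ravel()                                  # indices of the nearest points
    model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.5).fit(Xs[local], y_train[local])
    return model.predict(xq)[0]


# Placeholder grid: features (T, log10 Fdot, log10 pO2), label log10 D.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1200.0, 2250.0, 1000),
                     rng.uniform(17.0, 19.0, 1000),
                     rng.uniform(-10.0, 0.0, 1000)])
y = -25.0 + 0.003 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2]
print(nn_krr_predict(X, y, np.array([1800.0, 19.0, -5.0])))
```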
Fig. 8(a) and (b) are plots of predicted values vs. true values for Xe diffusion and U diffusion, respectively. The average percent error of the testing data was approximately 10% for both U and Xe diffusion, illustrating excellent agreement between the ML model and the testing data. The different color points in each plot signify different temperatures. The data spans the temperature range 1200K to 2250K, and excellent agreement is observed between the ML model and the testing data over the entire temperature range. The data spans the fission rate range \(10^{17}\,\mathrm{fissions/m^{3}\,s}\) to \(10^{19}\,\mathrm{fissions/m^{3}\,s}\).
We also used a similar ML procedure to develop a model for the partial pressure of oxygen \(p_{\mathrm{O_{2}}}(T,\dot{F},D)\) that takes diffusivity as an input in addition to the fission rate and temperature. The result of this procedure is shown in Fig. 8(c). Excellent agreement is observed between the predicted values and the true values for the partial pressure. Experimental values for diffusivity are commonly reported at a specific temperature and fission rate. Because the developed partial pressure model takes typically reported quantities as inputs, it allows the determination of the thermodynamic state of a fuel or experiment (qualitatively described by the partial pressure, which controls the \(x\) in \(\mathrm{UO_{2\pm x}}\) and thus links to the defect concentration).
### Uranium Nitride
The diffusion properties of UN are less studied and less understood than the diffusion properties of \(\mathrm{UO_{2}}\). This provides an opportunity to illustrate the utility and predictive power of data-driven methods in the context of developing diffusion models for emerging nuclear fuel candidates. Training data for diffusivity prediction in UN can be taken from existing experimental datasets, analytical models, new data generated using cluster dynamics simulations, or a combination of these data sources. Given that the experimental and analytical data are limited and uncertain, the use of cluster dynamics informed by atomic scale simulations is key to providing input for the ML models. A list of the UN experimental diffusivity datasets we use is shown in Table 1. The analytical model we compare to our ML model is taken from a collection of analytical models for diffusivity in UN developed by Hayes [59]. The details of this model are found in the Appendix.
Figure 7: Xe diffusion coefficient in \(\mathrm{UO_{2}}\) as a function of temperature \(T\) and fission rate \(\dot{F}\) for (a) the Matthews model and (b) a ML model. The solid curve in each panel is the result of the respective model for the irradiated case (\(\dot{F}=10^{19}\,\mathrm{fissions/m^{3}\,s}\)) and the dashed curve is the result for the nonirradiated case (\(\dot{F}=0\)). The red points are experimental results from Turnbull _et al._[35] and the green points are experimental results from Davies and Long [37] generated through random sampling inside the error bounds of the fit.
Fig. 9 is a plot of true vs. predicted values for U diffusion in UN calculated using three different models: the Hayes analytical model, a PIML model, and a kernel-based ML model. The true values are taken from the experimental measurements of Reimann _et al._[43] which give U diffusion coefficients as a function of temperature \(T\) and the partial pressure of nitrogen \(p_{\mathrm{N_{2}}}\). All of the models we apply are multidimensional diffusion models of the general form \(D(T,p_{\rm N_{2}})\) that take temperature and pressure as inputs and return a predicted U diffusion coefficient as an output. The Hayes analytical model systematically underestimates the U diffusion values and also generates several points with significant error. A PIML method, developed by reparameterizing the Hayes model, improves on the Hayes fit and performs well for most datapoints, but there are some points (particularly at the lower end of the diffusion-coefficient range) in which a significant difference exists between the true and predicted results because the reparameterization does not change the oversimplified construction of the empirical Hayes model. The ML model gives the best results in terms of accuracy and variance.
Shown in Fig. 10 is a comparison between the error values for each method taken with respect to the experimental data from Reimann [43]. The error is quantified using the RMSD. The kernel-based ML method reduces the error by a factor of approximately 6 in comparison to both the PIML method and Hayes analytical model [59]. This significant error reduction highlights the utility of ML methods for quickly developing accurate diffusion models.
#### iii.2.1 Sensitivity Analysis of UN Diffusion
Figure 8: Machine learning results for \(\rm UO_{2}\) showing predicted vs. true values for (a) Xe diffusion, (b) U diffusion, and (c) the partial pressure of \(\rm O_{2}\). The data used to train the models is obtained from _CENTIPEDE_ cluster dynamics simulations. The diagonal line in each panel illustrates where the predicted value equals the true value. Different color markers correspond to different temperatures shown in the colorbar to the right.
Figure 9: Predicted vs. true values for uranium diffusion in UN. Results are shown for a ML model (blue), a PIML model (red), and the Hayes analytical model (green). The data used as true values are from Reimann _et al._[43].
Figure 10: Root mean square deviation of various models for the uranium diffusion coefficient in UN. The experimental data used to calculate the RMSD is from Reimann _et al._[43].
There is a limited amount of experimental diffusion data available for UN. Therefore, mechanistic cluster dynamics models can (and in general must) be used to augment the experimental data in order to make accurate predictions about diffusivity. In cluster dynamics models, the question of which defect types contribute the most to the overall diffusion mechanism of a species can be quantified and understood using Sensitivity Analysis (SA)--a collection of mathematical and statistical methodologies that are used to understand what inputs contribute the most to a model output. Sensitivity Analysis is a powerful mathematical tool for building and testing the complex computational models that play a significant role in almost all social and physical scientific disciplines. Some example uses of SA methods in the context of model construction are: (a) identifying the most influential inputs to a model output, (b) improving the understanding of the relations between the inputs and output of a model, (c) calibrating input errors, and (d) assessing the quality and confidence of a model [60].
In this article, we use Global Sensitivity Analysis (GSA) methods to investigate the impact of different parameters in the cluster dynamics models for diffusivity. Some of the advantages of GSA methods are that they are capable of handling and analyzing high-dimensional inputs in a computationally efficient manner and that they can be built to take into account nonlinear effects in the model [61]. Specifically, we use the well-known Morris method [62, 63] to quantify what defects and parameters are most important for diffusivity in UN. The result of applying the Morris method to a model is a collection of sensitivity indices, one index for each input. The value of each index quantifies the relative importance of the corresponding input. A larger value for a sensitivity index implies a higher importance in the model output. The Morris method is computationally more efficient than other SA methods because it scales linearly with the number of input parameters. Therefore, it is well-suited for analyzing cluster dynamics simulations of UN which involve a high (\(>50\)) number of input variables. All the numerical algorithms used to perform SA are taken from the SALib package [64] and implemented using PYTHON scripts.
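A minimal SALib sketch of such a Morris screening is given below, using a toy three-parameter Arrhenius-style model in place of the roughly 50-input CENTIPEDE diffusivity calculation; the parameter names and bounds are hypothetical.

```python
# Morris screening sketch with SALib (toy 3-parameter model standing in for the
# ~50-parameter CENTIPEDE diffusivity model; names and bounds are hypothetical).
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["E_mig_vU", "E_mig_Ni", "attempt_freq"],
    "bounds": [[2.0, 4.0], [1.0, 3.0], [1e12, 1e14]],
}


def toy_model(x, T=1800.0, kB=8.617e-5):
    E1, E2, nu = x
    return np.log10(nu * np.exp(-E1 / (kB * T)) + 0.1 * nu * np.exp(-E2 / (kB * T)))


X = morris_sample(problem, N=100, num_levels=4)
Y = np.array([toy_model(row) for row in X])
Si = morris_analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name}: {mu_star:.3f}")
```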
In our SA results, a sensitivity index is assigned to every defect type that is tracked in the cluster dynamics simulation. The value of each index quantifies the importance of the corresponding defect to the examined diffusion process. SA was performed for N, U, and Xe diffusivity in UN. The notation vU\(x\_\)vN\(y\) denotes a vacancy cluster of \(x\) uranium vacancies and \(y\) nitrogen vacancies. A similar notation is used for interstitials. UN represents perfect UN without defects and N\({}_{2}\) represents nitrogen gas. Crystal Xe_vU\(x\_\)vN\(y\) denotes that a Xe atom resides in the vU\(x\_\)vN\(y\) cluster. The cluster dynamics model applied here differs from the model used in Ref. [45] because we do not include antisites.
Shown in Fig. 11 are the SA results for UN. The uranium vacancy vU01_vN00 dominates the sensitivity for temperatures below 2000K as shown in Fig. 11(a). Interestingly, U diffusivity is not strongly dependent on the uranium interstitial Ui01_Ni00; however, we have found that in specific temperature, pressure, and irradiation regimes it is the dominant defect. The sensitivity indices for \(D_{\rm N}\) and \(D_{\rm Xe}\) are shown respectively in Figs. 11(b) and 11(c). For N diffusivity, the nitrogen interstitial Ui00_Ni01 defect is almost 10 times more important than all the other defects across the temperature range sampled. This is consistent with interstitials dominating nitrogen diffusion under nitrogen-rich conditions. Compare this to the results for Xe diffusion shown in Fig. 11(c), which show that a number of defects contribute over 10% to the model output. All of those defects are linked to Xe diffusing by a vacancy mechanism, which is also the mechanism observed to dominate for most temperatures. UN corresponds to the perfect lattice, which impacts all defect energies and consequently appears in all sensitivities. It is possible that it represents a constant shift in all defect energies. The sensitivity indices for U and N diffusivity do not vary strongly as the temperature is varied. Across all the examined models, only a few defects make major contributions to the overall diffusivity of the three species (U, N, Xe) we have examined.
Figure 11: Sensitivity indices for various defect types in UN. Results are shown for (a) U diffusion, (b) N diffusion, and (c) Xe diffusion.
#### iii.2.2 Optimization of CENTIPEDE Parameters
The parameters in the UN cluster dynamics model implemented using CENTIPEDE can be calibrated such that the quantities of interest agree with experimental results. A detailed discussion of the specific physical parameters that are used in our model can be found in Ref. [45]. However, the process of calibrating the approximately 50-dimensional parameter space to experimental data by hand can be very time-consuming if not intractable. To accelerate the calibration process, we used a genetic optimization procedure to find the optimal set of parameters that produce CENTIPEDE results that best match the experimental data for N diffusion. As a proof of principle, we perform the optimization on N diffusion only, and do not include U and Xe diffusion values in the optimization. A multi-objective optimization that includes diffusion for all of the species is a target of future work.
The genetic optimization procedure was implemented using a baseline set of CENTIPEDE parameter values obtained from a combination of electronic structure calculations and molecular dynamics simulations. An error bound for each parameter was also assigned. Each set of parameter values is a solution to the optimization problem. An initial set of solutions was generated by randomly sampling a value for each parameter using the assigned error bounds. This initial set of solutions is called a generation. Genetic optimization of the baseline parameters was performed by generating new sets of solutions (i.e., by generating new generations) using the best solutions from the previous generation. Each generation had 60 solutions and the optimization was performed for 5 generations. The procedure was performed across the temperature range \(1300\text{K}-2500\text{K}\).
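A simplified sketch of this generational loop is shown below (60 candidates per generation, 5 generations). The objective function is a placeholder standing in for running CENTIPEDE and comparing the predicted N diffusivity with experiment, and the baseline values, error bounds, and selection/crossover rules are illustrative only.

```python
# Simplified genetic-optimization sketch (placeholder objective and parameters).
import numpy as np

rng = np.random.default_rng(0)

baseline = np.array([3.0, 1.5, 13.0])      # hypothetical parameter values
err_bound = np.array([0.5, 0.3, 1.0])      # allowed +/- range per parameter


def objective(params):
    """Placeholder for: run CENTIPEDE, compare predicted N diffusivity with data."""
    target = np.array([3.2, 1.3, 13.5])
    return float(np.sum((params - target) ** 2))


pop = baseline + rng.uniform(-err_bound, err_bound, size=(60, baseline.size))
for generation in range(5):
    scores = np.array([objective(p) for p in pop])
    elite = pop[np.argsort(scores)[:10]]                               # keep the 10 best
    parents = elite[rng.integers(0, len(elite), size=(60, 2))]
    children = parents.mean(axis=1)                                    # simple crossover
    children += rng.normal(0.0, 0.1 * err_bound, size=children.shape)  # mutation
    pop = np.clip(children, baseline - err_bound, baseline + err_bound)

best = pop[np.argmin([objective(p) for p in pop])]
print("calibrated parameters:", best)
```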
The results of the genetic optimization procedure are shown in Fig. 12. The diffusion values generated using the CENTIPEDE parameters chosen as the baseline do not agree with the experimental results. However, after performing optimization in the high-dimensional parameter space, the CENTIPEDE result is in strong agreement with the experimental data. The power of this result is that while the analytical result of Matzke (shown as the dashed black curve) accurately captures the temperature dependence of N diffusion in the equilibrium regime, it cannot be used to predict the diffusion behavior under different irradiation conditions and at different partial pressures. Compare this to the calibrated CENTIPEDE result, which can be used to predict diffusion coefficients at different system conditions. The calibrated cluster dynamics model is therefore more robust than simple analytical models for modeling diffusion under reactor conditions. Overall, the defect parameters that change the most during the optimization procedure are kinetic parameters such as attempt frequencies and migration activation energies, while the enthalpic and entropic properties related to defect formation, in general, exhibit the least difference between the calibrated and baseline sets. Moreover, the CENTIPEDE model predicts diffusion of not only N, which is relatively easy to measure experimentally, but also U and Xe, which are much harder to measure experimentally, especially under irradiation conditions. By validating and optimizing the CENTIPEDE model for N diffusion data, the trust in the CENTIPEDE predictions of U and Xe diffusion is also increased. Future work will investigate how these relations can be quantified using uncertainty measures.
## IV Conclusions
A data-driven workflow has been developed to predict diffusion coefficients in nuclear fuels. The developed workflow has been shown to predict transport and thermodynamic properties of the nuclear fuels uranium oxide UO\({}_{2}\) and uranium nitride UN with increased accuracy in comparison to previous models and methods. We have specifically shown that using ML can reduce the predictive error in comparison to previously developed analytical models and reduce the time it takes to develop diffusion models. We have also shown how data-driven methods can be used to calibrate complex mechanistic models for diffusion properties of nuclear fuels. Small experimental datasets used to train machine learning models were expanded by developing and applying a data augmentation method. This augmentation technique may be particularly useful in nuclear fuel development and qualification when large amounts of experimental results are not available or are difficult to obtain. Sensitivity analysis methods have been applied to determine the most important structural defects within the mechanistic cluster dynamics models under different reactor conditions. This analysis can be used to improve model development and fuel analysis by giving information about what defect types should be targeted for further study using experiments and/or electronic structure calculations.
Figure 12: Nitrogen diffusion coefficient in UN as a function of temperature. The red points are experimental results (see Table 1). The dashed black curve is the analytical fit by Matzke. The green curve is the baseline result from CENTIPEDE and the blue curve is the CENTIPEDE result after performing the genetic optimization procedure.
In future work, data-driven methods will be applied to enhance the predictive capabilities of mechanistic models for use in nuclear fuel qualification and reactor modeling. Data-driven methods will also be used to generate complex multi-dimensional analytical functions with enhanced transferability and interpretability in comparison to black-box ML models developed here. Another important future focus will be to quantify the uncertainty in the developed ML models.
## V Acknowledgments
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy. This research was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20220053DR. The computing resources used to perform this research were provided by the LANL Institutional Computing Program.
## Appendix A Analytical Diffusion Models
**Hayes Model:**
The Hayes analytical model for U diffusion in UN in units of m\({}^{2}\)/s is:
\[D_{\text{U}}(T,p)=c_{1}p^{c_{2}}e^{c_{3}/T} \tag{1}\]
with \(c_{1}=2.215\times 10^{-15}\), \(c_{2}=0.6414\), and \(c_{3}=-7989.3\), where \(p\) is the partial pressure of nitrogen (atm) and \(T\) is the temperature (K).
**Matthews Model:**
The Matthews analytical model for Xe diffusion in UO\({}_{2}\) in units of m\({}^{2}\)/s is:
\[D_{\text{Xe}}(T,\dot{F})=\frac{c_{1}e^{c_{2}/k_{\text{B}}T}}{c_{3}+c_{4}e^{c_ {5}/k_{\text{B}}T}}+c_{6}e^{c_{7}/k_{\text{B}}T}\sqrt{\dot{F}}+c_{8}\dot{F} \tag{2}\]
with \(c_{1}=2.216\times 10^{-7}\), \(c_{2}=-3.26\), \(c_{3}=1.0\), \(c_{4}=29.03\), \(c_{5}=-1.84\times 10^{-4}\), \(c_{6}=2.821\times 10^{-22}\), \(c_{7}=-2.0\), and \(c_{8}=8.5\times 10^{-40}\) where \(\dot{F}=1.0\times 10^{19}\) is the fission rate (fissions/m\({}^{3}\) s), \(T\) is the temperature (K), and \(k_{\text{B}}\) is the Boltzmann constant (eV/K).
**Forsberg Model:**
The Forsberg analytical model for Xe diffusion in UO\({}_{2}\) in units of m\({}^{2}\)/s is:
\[D_{\text{Xe}}(T,\dot{F})=\frac{v_{g}(T,\dot{F})D_{\text{eff}}(T,\dot{F})}{v_{g }(T,\dot{F})+g(T,\dot{F})} \tag{3}\]
with
\[D_{\text{eff}}(T,\dot{F}) =c_{1}e^{c_{2}/T}+4c_{3}e^{c_{4}/T}\sqrt{\dot{F}}+4c_{5}\dot{F}, \tag{4}\] \[v_{g}(T,\dot{F}) =c_{6}\pi l\dot{F}(c_{7}e^{c_{8}T}+\delta)^{2},\] (5) \[g(T,\dot{F}) =4\pi c_{7}e^{c_{8}T}(c_{9}/T-c_{10})D_{\text{eff}}(T,\dot{F}), \tag{6}\]
where \(\dot{F}=1.72\times 10^{19}\) is the fission rate (fissions/m\({}^{3}\) s) and \(T\) is the temperature (K). The parameters are \(c_{1}=7.6\times 10^{-10}\), \(c_{2}=-35247\), \(c_{3}=1.41\times 10^{-25}\), \(c_{4}=-13800\), \(c_{5}=2.0\times 10^{-40}\), \(c_{6}=3.03\), \(l=6.0\times 10^{-6}\), \(c_{7}=1.453\times 10^{-10}\), \(c_{8}=1.023\times 10^{-3}\), \(\delta=1.0\times 10^{-9}\), \(c_{9}=1.52\times 10^{27}\), \(c_{10}=3.3\times 10^{23}\).
| The growth rate of structural defects in nuclear fuels under irradiation is intrinsically related to the diffusion rates of the defects. The generation and growth of atomistic structural defects can significantly alter the performance characteristics of the fuel. This change in functionality must be captured accurately in order to qualify a nuclear fuel for use in reactors. Predicting the diffusion coefficients of defects and their impact on macroscale properties such as swelling, gas release, and creep is therefore of significant importance in both the design of new nuclear fuels and the assessment of existing fuel types. In this article, we applied data-driven methods based on machine learning (ML) to determine various diffusion properties of two nuclear fuels, uranium oxide and uranium nitride. We showed that using ML can substantially improve the accuracy of diffusivity predictions in nuclear fuels compared with current analytical models. |
2310.20579 | Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks | We analytically investigate how over-parameterization of models in randomized
machine learning algorithms impacts the information leakage about their
training data. Specifically, we prove a privacy bound for the KL divergence
between model distributions on worst-case neighboring datasets, and explore its
dependence on the initialization, width, and depth of fully connected neural
networks. We find that this KL privacy bound is largely determined by the
expected squared gradient norm relative to model parameters during training.
Notably, for the special setting of linearized network, our analysis indicates
that the squared gradient norm (and therefore the escalation of privacy loss)
is tied directly to the per-layer variance of the initialization distribution.
By using this analysis, we demonstrate that privacy bound improves with
increasing depth under certain initializations (LeCun and Xavier), while
degrades with increasing depth under other initializations (He and NTK). Our
work reveals a complex interplay between privacy and depth that depends on the
chosen initialization distribution. We further prove excess empirical risk
bounds under a fixed KL privacy budget, and show that the interplay between
privacy utility trade-off and depth is similarly affected by the
initialization. | Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher | 2023-10-31T16:13:22 | http://arxiv.org/abs/2310.20579v1 | # Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
###### Abstract
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets, and explore its dependence on the initialization, width, and depth of fully connected neural networks. We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training. Notably, for the special setting of linearized network, our analysis indicates that the squared gradient norm (and therefore the escalation of privacy loss) is tied directly to the per-layer variance of the initialization distribution. By using this analysis, we demonstrate that privacy bound improves with increasing depth under certain initializations (LeCun and Xavier), while degrades with increasing depth under other initializations (He and NTK). Our work reveals a complex interplay between privacy and depth that depends on the chosen initialization distribution. We further prove excess empirical risk bounds under a fixed KL privacy budget, and show that the interplay between privacy utility trade-off and depth is similarly affected by the initialization.
## 1 Introduction
Deep neural networks (DNNs) in the over-parameterized regime (i.e., more parameters than data) perform well in practice, but the model predictions can easily leak private information about the training data under inference attacks such as membership inference attacks [44] and reconstruction attacks [17; 7; 29]. This leakage can be mathematically measured by the extent to which the algorithm's output distribution changes if DNNs are trained on a neighboring dataset (differing only in one record), following the differential privacy (DP) framework [23].
A typical way to train a differentially private model is to randomly perturb each gradient update in the training process, e.g., in stochastic gradient descent (SGD), which leads to the most widely applied DP training algorithm in the literature: DP-SGD [2]. To be specific, in each step, DP-SGD employs gradient clipping, adds calibrated Gaussian noise, and yields a differential privacy guarantee that scales with the noise multiplier (i.e., the per-dimensional Gaussian noise standard deviation divided by the clipping threshold) and the number of training epochs. However, this privacy bound [2] is overly general, as its gradient clipping artificially neglects the network properties (e.g., width and depth) and training schemes (e.g., initializations). Accordingly, a natural question arises in the community:
_How does the over-parameterization of neural networks (under different initializations) affect the privacy bound of the training algorithm over_ worst-case _datasets?_
To answer this question, we circumvent the difficulties of analyzing gradient clipping, and instead _algorithmically_ focus on analyzing privacy for the Langevin diffusion algorithm _without_ gradient clipping or a Lipschitz assumption on the loss function. 2 It avoids an artificial setting in DP-SGD [2] where a constant sensitivity constraint is enforced for each gradient update and thus makes the privacy bound insensitive to the network over-parameterization. _Theoretically_, we prove that the KL privacy loss for Langevin diffusion scales with the expected gradient difference between the training on any two worst-case neighboring datasets (Theorem 3.1). 3 By proving precise upper bounds on the expected \(\ell_{2}\)-norm of this gradient difference, we thus obtain KL privacy bounds for the fully connected neural network (Lemma 3.2) and its linearized variant (Corollary 4.2) that change with the network width, depth and per-layer variance of the initialization distribution. We summarize the details of our KL privacy bounds in Table 1, and highlight our key observations below.
Footnote 2: A key difference between this paper and existing privacy utility analysis of Langevin diffusion [26] is that we analyze in the absence of gradient clipping or Lipschitz assumption on loss function. Our results also readily extend to discretized noisy GD with constant step-size (as discussed in Appendix E).
Footnote 3: We focus on KL privacy loss because it is a more relaxed distinguishability notion than standard \((\varepsilon,\delta)\)-DP, and therefore could be upper bounded even without gradient clipping. Moreover, KL divergence enables upper bound for the advantage (relative success) of various inference attacks, as studied in recent works [39; 28].
* Width always worsens privacy, under all the considered initialization schemes. Meanwhile, the interplay between network depth and privacy is much more complex and crucially depends on which initialization scheme is used and how long the training time is.
* Regarding the specific initialization schemes, under small per-layer variance in initialization (e.g. in LeCun and Xavier), if the depth is large enough, our KL privacy bound for training fully connected network (with a small amount of time) as well as linearized network (with finite time) decays exponentially with increasing depth. To the best of our knowledge, this is the first time that an improvement of privacy bound under over-parameterization is observed.
We further perform numerical experiments (Section 5) on deep neural network trained via noisy gradient descent to validate our privacy analyses. Finally, we analyze the privacy utility trade-off for training linearized network, and prove that the excess empirical risk bound (given any fixed KL privacy budget) scales with a lazy training distance bound \(R\) (i.e., how close is the initialization to a minimizer of the empirical risk) and a gradient norm constant \(B\) throughout training (Corollary 6.4). By analyzing these two terms precisely, we prove that under certain initialization distributions (such as LeCun and Xavier), the privacy utility trade-off strictly improves with increasing depth for linearized network (Table 1). To our best knowledge, this is the first time that such a gain in privacy-utility trade-off due to over-parameterization (increasing depth) is shown. Meanwhile, prior results only prove (nearly) dimension-independent privacy utility trade-off for such linear models in the literature [45; 32; 37]. Our improvement demonstrates the unique benefits of our algorithmic framework and privacy-utility analysis in understanding the effect of over-parameterization.
\begin{table}
\end{table}
Table 1: Summary of our results under common Gaussian initialization schemes. For each scheme, the columns list the per-layer variance \(\beta_{l}\) (LeCun [34]: \(1/m_{l-1}\); Xavier: \(2/(m_{l-1}+m_{l})\); He: \(2/m_{l-1}\); NTK: \(2/m_{l}\) for \(l<L\) and \(1/o\) for \(l=L\)), the gradient norm constant \(B\) (7), the approximate lazy training distance \(R\) (9), and the excess empirical risk under \(\varepsilon\)-KL privacy (Corollary 6.4).
### Related Works
Over-parameterization in DNNs and NTK. Theoretical demonstrations of the benefit of over-parameterization in DNNs occur in global convergence [3; 21] and generalization [4; 16]. Under proper initialization, the training dynamics of over-parameterized DNNs can be described by a kernel function, termed the neural tangent kernel (NTK) [31], which has stimulated a series of analyses of DNNs. Accordingly, over-parameterization has been demonstrated to be beneficial/harmful to several topics in deep learning, e.g., robustness [15; 54] and covariate shift [50]. However, the relationship between over-parameterization and privacy (based on the differential privacy framework) remains largely an unsolved problem, as the training dynamics typically change [14] after adding new components to the privacy-preserving learning algorithm (such as DP-SGD [2]) to enforce privacy constraints.
Membership inference privacy risk under over-parameterization. A recent line of works [47; 48] investigates how over-parameterization affects the theoretical and empirical privacy in terms of membership inference advantage, and proves novel trade-offs between privacy and generalization error. This line of work is closest to our objective of investigating the interplay between privacy and over-parameterization. However, Tan et al. [47; 48] focus on proving upper bounds for an average-case privacy risk defined by the advantage (relative success) of membership inference attacks on models trained on a training dataset randomly sampled from a population distribution. By contrast, our KL privacy bound is heavily based on the strongest adversary model in the differential privacy definition, and holds under an arbitrary _worst-case_ pair of neighboring datasets differing only in one record. Our model setting (e.g., fully connected neural networks) is also quite different from that of Tan et al. [47; 48]. The employed analysis tools are accordingly different.
Differentially private learning in high dimension. Standard results for private empirical risk minimization [9; 46] and private stochastic convex optimization [11; 12; 5] prove that there is an unavoidable factor \(d\) in the empirical risk and population risk that depends on the model dimension. However, for unconstrained optimization, it is possible to avoid this dimension dependency when proving risk bounds for certain classes of problems (such as generalized linear models [45]). Recently, there is a growing line of works that proves dimension-independent excess risk bounds for differentially private learning, by utilizing the low-rank structure of data features [45] or gradient matrices [32; 37] during training. Several follow-up works [33; 13] further explore techniques to enforce the low-rank property (via random projection) and boost the privacy utility trade-off. However, all these works investigate a general high-dimensional problem for private learning, rather than studying the effect of specific network choices such as width, depth and initialization. Instead, our study focuses on the fully connected neural network and its linearized variant, which enables us to prove more precise privacy utility trade-off bounds for these particular networks under over-parameterization.
## 2 Problem and Methodology
We consider the following standard multi-class supervised learning setting. Let \(\mathcal{D}=(\mathbf{z}_{1},\cdots,\mathbf{z}_{n})\) be an input dataset of size \(n\), where each data record \(\mathbf{z}_{i}=(\mathbf{x}_{i},\mathbf{y}_{i})\) contains a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and a label vector \(\mathbf{y}_{i}\in\mathcal{Y}=\{-1,1\}^{o}\) on \(o\) classes. We aim to learn a neural network output function \(\mathbf{f}_{\mathbf{W}}(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) parameterized by \(\mathbf{W}\) via empirical risk minimization (ERM)
\[\min_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathcal{D}):=\frac{1}{n}\sum_{i=1}^{n}\ell( \mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\,, \tag{1}\]
where \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\) is a loss function that reflects the approximation quality of model prediction \(f_{\mathbf{W}}(\mathbf{x}_{i})\) compared to the ground truth label \(\mathbf{y}_{i}\). For simplicity, throughout our analysis, we employ the cross-entropy loss \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=-\langle\mathbf{y},\log\text{softmax}(\mathbf{f}_{\mathbf{W}}(\mathbf{x}))\rangle\) for the multi-class network with \(o\geq 2\), and \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=\log(1+\exp(-\mathbf{y}\mathbf{f}_{\mathbf{W}}(\mathbf{x})))\) for the single-output network with \(o=1\).
Fully Connected Neural Networks. We consider the \(L\)-layer, multi-output, fully connected, deep neural network (DNN) with ReLU activation. Denote the width of hidden layer \(l\) as \(m_{l}\) for \(l=1,\cdots,L-1\). For consistency, we also denote \(m_{0}=d\) and \(m_{L}=o\). The network output \(f_{\mathbf{W}}(\mathbf{x})\coloneqq\mathbf{h}_{L}(\mathbf{x})\) is defined recursively as follows.
\[\mathbf{h}_{0}(\mathbf{x})=\mathbf{x};\quad\mathbf{h}_{l}(\mathbf{x})=\phi(\mathbf{W}_{l}\mathbf{x})\text{ for }l=1,\cdots,L-1;\quad\mathbf{h}_{L}(\mathbf{x})=\mathbf{W}_{L}\mathbf{h}_{L-1}(\mathbf{x})\,, \tag{2}\]
where \(h_{l}(\mathbf{x})\) denotes the post-activation output at the \(l\)-th layer, and \(\{\mathbf{W}_{l}\in\mathbb{R}^{m_{l}\times m_{l-1}}:l=1,\ldots,L\}\) denotes the set of per-layer weight matrices of the DNN. For brevity, we denote the vector \(\mathbf{W}\coloneqq(\text{Vec}(\mathbf{W}_{1}),\ldots,\text{Vec}(\mathbf{W}_{L}))\in \mathbb{R}^{m_{1}\cdot d+m_{2}\cdot m_{1}+\cdots+o\cdot m_{L-1}}\), i.e., the concatenation of the vectorizations of the weight matrices of all layers, as the model parameter.
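For concreteness, the recursion in Eq. (2) can be written as the short sketch below (plain matrix multiplications without biases, ReLU on hidden layers); the widths in the example are arbitrary placeholders, and the code is only an illustration of the definition, not the implementation used in our experiments.

```python
import torch

def forward_relu_net(weights, x):
    """Forward pass of the L-layer fully connected ReLU network in Eq. (2).

    weights: list of per-layer matrices [W_1, ..., W_L], W_l of shape (m_l, m_{l-1}).
    x: input feature vector of shape (d,) = (m_0,).
    """
    h = x
    for W in weights[:-1]:        # hidden layers: h_l = phi(W_l h_{l-1}), phi = ReLU
        h = torch.relu(W @ h)
    return weights[-1] @ h        # output layer: h_L = W_L h_{L-1} (no activation)

# Example with arbitrary widths d=8, m_1=m_2=16, o=3 (illustrative only).
widths = [8, 16, 16, 3]
weights = [torch.randn(widths[l + 1], widths[l]) for l in range(len(widths) - 1)]
out = forward_relu_net(weights, torch.randn(8))
```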
Linearized Network. We also analyze the following _linearized network_, which is used in prior works [35, 3, 41] as an important tool to (approximately and qualitatively) analyze the training dynamics of DNNs. Formally, the linearized network \(\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\) is a first-order Taylor expansion of the fully connected ReLU network at initialization parameter \(\mathbf{W}_{0}^{lin}\), as follows.
\[\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\equiv\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})+\frac{ \partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{\partial\mathbf{W}}\Big{|}_{\mathbf{W}=\mathbf{W}_{0}^{ lin}}\left(\mathbf{W}-\mathbf{W}_{0}^{lin}\right), \tag{3}\]
where \(\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})\) is the output function of the fully connected ReLU network (2) at initialization \(\mathbf{W}_{0}^{lin}\). We denote \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n}\ell\left( \mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{i})+\frac{\partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{ \partial\mathbf{W}}|_{\mathbf{W}=\mathbf{W}_{0}^{lin}}(\mathbf{W}-\mathbf{W}_{0}^{lin});\mathbf{y}_{i}\right)\) as the empirical loss function for training linearized network, by plugging (3) into (1).
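A minimal way to evaluate the linearized output in Eq. (3) numerically is through a Jacobian–vector product; the sketch below assumes a generic callable \(f(\mathbf{W},\mathbf{x})\) acting on a flat parameter vector and uses `torch.autograd.functional.jvp`, purely as an illustration of the first-order expansion rather than our implementation.

```python
import torch
from torch.autograd.functional import jvp

def linearized_output(f, w0, w, x):
    """f_W^{lin,0}(x) = f_{W0}(x) + [d f_W(x)/dW |_{W=W0}] (W - W0), cf. Eq. (3).

    f:  callable mapping (flat parameter vector, input x) to the network output.
    w0: flat initialization parameters W_0^{lin}; w: current flat parameters.
    """
    out0, jvp_term = jvp(lambda p: f(p, x), (w0,), (w - w0,))
    return out0 + jvp_term

# Toy example: a "network" f(p, x) = sin(p) . x (illustrative placeholder only).
f = lambda p, x: torch.sin(p) @ x
w0, w, x = torch.randn(4), torch.randn(4), torch.randn(4)
print(linearized_output(f, w0, w, x))
```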
Langevin Diffusion. Regarding the optimization algorithm, we focus on the _Langevin diffusion_ algorithm [36] with per-dimensional noise variance \(\sigma^{2}\). Note that we aim to _avoid gradient clipping_ while still proving KL privacy bounds. After initializing the model parameters \(\mathbf{W}_{0}\) at time zero, the model parameters \(\mathbf{W}_{t}\) at subsequent time \(t\) evolve as the following stochastic differential equation.
\[\mathrm{d}\mathbf{W}_{t}=-\,\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})\mathrm{d}t+ \sqrt{2\sigma^{2}}\mathrm{d}\mathbf{B}_{t}\,. \tag{4}\]
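For illustration, a simple Euler–Maruyama discretization of Eq. (4) (i.e., noisy gradient descent with constant step size, in the spirit of the extension discussed in Appendix E) might look as follows; the loss function, step size and number of steps are placeholders.

```python
import torch

def noisy_gd(loss_fn, w0, sigma2, eta=1e-3, steps=1000):
    """Euler-Maruyama discretization of the Langevin diffusion in Eq. (4):
    W_{k+1} = W_k - eta * grad L(W_k; D) + sqrt(2 * sigma^2 * eta) * N(0, I)."""
    w = w0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(w)
        grad, = torch.autograd.grad(loss, w)
        with torch.no_grad():
            w += -eta * grad + (2 * sigma2 * eta) ** 0.5 * torch.randn_like(w)
    return w.detach()

# Toy usage with a quadratic loss (illustrative only).
w_final = noisy_gd(lambda w: (w ** 2).sum(), torch.zeros(5), sigma2=1e-2)
```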
Initialization Distribution. The initialization of parameters \(\mathbf{W}_{0}\) crucially affects the convergence of Langevin diffusion, as observed in the prior literature [52, 25, 24]. In this work, we investigate the following general class of Gaussian initialization distributions with different (possibly depth-dependent) variances for the parameters in each layer. For any layer \(l=1,\cdots,L\), we have
\[[\mathbf{W}^{l}]_{ij}\sim\mathcal{N}(0,\beta_{l})\text{, for }(i,j)\in[m_{l}]\times[m_{l-1}]\,, \tag{5}\]
where \(\beta_{1},\cdots,\beta_{L}>0\) are the per-layer variance for Gaussian initialization. By choosing different variances, we recover many common initialization schemes in the literature, as summarized in Table 1.
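The sampling in Eq. (5) for the schemes in Table 1 can be sketched as follows; the widths are placeholders, and the per-layer variances follow the formulas recalled in Section 4 (LeCun: \(1/m_{l-1}\), He: \(2/m_{l-1}\), Xavier: \(2/(m_{l-1}+m_{l})\), NTK: \(2/m_{l}\) for hidden layers and \(1/o\) for the output layer).

```python
import torch

def init_weights(widths, scheme="lecun"):
    """Sample W_l[i, j] ~ N(0, beta_l) as in Eq. (5) for a few common schemes.

    widths = [m_0, m_1, ..., m_L] (m_0 = d input features, m_L = o outputs).
    """
    L = len(widths) - 1
    weights = []
    for l in range(1, L + 1):
        fan_in, fan_out = widths[l - 1], widths[l]
        if scheme == "lecun":
            beta = 1.0 / fan_in
        elif scheme == "he":
            beta = 2.0 / fan_in
        elif scheme == "xavier":
            beta = 2.0 / (fan_in + fan_out)
        elif scheme == "ntk":
            beta = 2.0 / fan_out if l < L else 1.0 / fan_out   # 1/o for the last layer
        weights.append(torch.randn(fan_out, fan_in) * beta ** 0.5)
    return weights

lecun_weights = init_weights([784, 512, 512, 1], scheme="lecun")
```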
### Our objective and methodology
We aim to understand the relation between privacy, utility and over-parameterization (depth and width) for the Langevin diffusion algorithm (under different initialization distributions). For privacy analysis, we prove a KL privacy bound for running Langevin diffusion on any two _worst-case_ neighboring datasets. Below we first give the definition for neighboring datasets.
**Definition 2.1**.: We denote \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) as neighboring datasets if they are of same size and only differ in one record. For brevity, we also denote the differing records as \((\mathbf{x},\mathbf{y})\in\mathcal{D}\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{D}^{\prime}\).
**Assumption 2.2** (Bounded Data).: For simplicity, we assume bounded data, i.e., \(\|\mathbf{x}\|_{2}\leq\sqrt{d}\).
We now give the definition for KL privacy, which is a more relaxed, yet closely connected privacy notion to the standard \((\varepsilon,\delta)\) differential privacy [22], see Appendix A.2 for more discussions. KL privacy and its relaxed variants are commonly used in previous literature [8, 10, 53].
**Definition 2.3** (KL privacy).: A randomized algorithm \(\mathcal{A}\) satisfies \(\varepsilon\)-KL privacy if for any neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), we have that the KL divergence \(\mathrm{KL}(\mathcal{A}(\mathcal{D})\|\mathcal{A}(\mathcal{D}^{\prime}))\leq\varepsilon\), where \(\mathcal{A}(\mathcal{D})\) denotes the algorithm's output distribution on dataset \(\mathcal{D}\).
In this paper, we prove KL privacy upper bound for \(\max_{\mathcal{D},\mathcal{D}^{\prime}}\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]} ^{\prime})\) when running Langevin diffusion on any _worst-case_ neighboring datasets. For brevity, here (and in the remaining paper), we abuse the notations and denote \(\mathbf{W}_{[0:T]}\) and \(\mathbf{W}_{[0:T]}^{\prime}\) as the distributions of model parameters trajectory during Langevin diffusion processes Eq. (4) with time \(T\) on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) respectively.
For utility analysis, we prove the upper bound for the excess empirical risk given any fixed KL divergence privacy budget for a single-output neural network under the following additional assumption (it is only required for utility analysis and not needed for our privacy bound).
**Assumption 2.4** ([40; 20; 21]).: The training data \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\) are i.i.d. samples from a distribution \(P_{x}\) that satisfies \(\mathbb{E}[\mathbf{x}]=0,\|\mathbf{x}\|_{2}=\sqrt{d}\) for \(\mathbf{x}\sim P_{x}\), and with probability one for any \(i\neq j\), \(\mathbf{x}_{i}\nparallel\mathbf{x}_{j}\).
Our ultimate goal is to precisely understand how the excess empirical risk bounds (given a fixed KL privacy budget) are affected by increasing width and depth under different initialization distributions.
## 3 KL Privacy for Training Fully Connected ReLU Neural Networks
In this section, we perform the composition-based KL privacy analysis for Langevin Diffusion given random Gaussian initialization distribution under Eq. (5) for fully connected ReLU network. More specifically, we prove upper bound for the KL divergence between distribution of output model parameters when running Langevin diffusion on an arbitrary pair of neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
Our first insight is that by a Bayes rule decomposition for density function, KL privacy under a relaxed gradient sensitivity condition can be proved (that could hold _without_ gradient clipping).
**Theorem 3.1** (KL composition under possibly unbounded gradient difference).: _The KL divergence between running Langevin diffusion (4) for DNN (2) on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) satisfies_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]}^{\prime})=\frac{1}{2\sigma^{2}} \int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})- \nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\,. \tag{6}\]
Proof sketch.: We compute the partial derivative of KL divergence with regard to time \(t\), and then integrate it over \(t\in[0,T]\) to compute the KL divergence during training with time \(T\). For computing the limit of differentiation, we use Girsanov's theorem to compute the KL divergence between the trajectory of Langevin diffusion processes on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). The complete proof is in Appendix B.1.
Theorem 3.1 is an extension of the standard additivity [51] of KL divergence (also known as chain rule [1]) for a finite sequence of distributions to continuous time processes with (possibly) unbounded drift difference. The key extension is that Theorem 3.1 does not require bounded sensitivity between the drifts of Langevin Diffusion on neighboring datasets. Instead, it only requires finite second-order moment of drift difference (in the \(\ell_{2}\)-norm sense) between neighboring datasets \(\mathcal{D},\mathcal{D}^{\prime}\), which can be proved by the following Lemma. We prove that this expectation of squared gradient difference incurs closed-form upper bound under deep neural network (under mild assumptions), for running Langevin diffusion (without gradient clipping) on any neighboring dataset \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
**Lemma 3.2** (Drift Difference in Noisy Training).: _Let \(M_{T}\) be the subspace spanned by the gradients \(\{\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}_{i});\mathbf{y}_{i}):(\mathbf{x}_{i},\mathbf{y}_{i})\in \mathcal{D},\,t\in[0,T]\}_{i=1}^{n}\) throughout Langevin diffusion \((\mathbf{W}_{t})_{t\in[0,T]}\). Denote \(\|\cdot\|_{M_{T}}\) as the \(\ell_{2}\) norm of the projection of the input vector onto \(M_{T}\). Suppose that there exist constants \(c,\beta>0\) such that for any \(\mathbf{W}\), \(\mathbf{W}^{\prime}\) and \((\mathbf{x},\mathbf{y})\), we have \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}\). Then running Langevin diffusion Eq. (4) with Gaussian initialization distribution (5) satisfies \(\varepsilon\)-KL privacy with \(\varepsilon=\frac{\max_{\mathcal{D},\mathcal{D}^{\prime}}\int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t}{2\sigma^{2}}\) where_
\[\int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\leq\underbrace{\frac{2T}{n^{2}}\mathbb{E}\left[\|\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\|_{2}^{2}\right]}_{\text{gradient difference at initialization}}\]
\[+\underbrace{\frac{2\beta^{2}}{n^{2}(2+\beta^{2})}\left(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T\right)\cdot\left(\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{0};\mathcal{D})\|_{2}^{2}\right]+2\sigma^{2}\text{rank}(M_{T})+c^{2}\right)}_{\text{gradient difference fluctuation during training}}+\underbrace{\frac{2c^{2}T}{n^{2}}}_{\text{non-smoothness}}.\]
Proof sketch.: The key is to reduce the problem of upper bounding the gradient difference at any training time \(T\) to analyzing its subcomponents: \(\|\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\|_{2}^{2}\leq\underbrace{2\left\|\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\right\|_{2}^{2}}_{\text{gradient difference at initialization}}+2\beta^{2}\underbrace{\left\|\mathbf{W}_{t}-\mathbf{W}_{0}\right\|_{M_{T}}^{2}}_{\text{parameters' change after time $T$}}+2c^{2}\), where \((\mathbf{x},\mathbf{y})\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) are the differing data between neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). This inequality follows from the Cauchy-Schwarz inequality. In this way, the second term in Lemma 3.2 uses the change of parameters
to bound the gradient difference between datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at time \(T\), via the relaxed smoothness assumption of loss function (that is explained in Remark 3.5 in details). The complete proof is in Appendix B.2.
_Remark 3.3_ (Gradient difference at initialization).: The first term in our upper bound scales linearly with the difference between gradients on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at initialization. Under different initialization schemes, this gradient difference exhibits different dependencies on the network depth and width, as we prove theoretically in Theorem 4.1.
_Remark 3.4_ (Gradient difference fluctuation during training).: The second term in Lemma 3.2 bounds the change of the gradient difference during training, and is proportional to the rank of the subspace \(M_{T}\) spanned by the gradients of all training data. Intuitively, this fluctuation arises because Langevin diffusion adds per-dimensional noise with variance \(\sigma^{2}\), thus perturbing the training parameters away from the initialization at a scale of \(O(\sigma\sqrt{\text{rank}(M_{T})})\) in the expected \(\ell_{2}\) distance.
_Remark 3.5_ (Relaxed smoothness of loss function).: The third term in Lemma 3.2 is due to the assumption \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}.\) This assumption is similar to smoothness of the loss function, but is more relaxed as it allows non-smoothness at places where the gradient difference is bounded by \(c\). Therefore, this assumption is general enough to cover commonly-used smooth and non-smooth activation functions, e.g., sigmoid and ReLU.
_Growth of KL privacy bound with increasing training time \(T\)._ The first and third terms in our upper bound Lemma 3.2 grow linearly with the training time \(T\), while the second term grows exponentially with regard to \(T\). Consequently, for learning tasks that requires a long training time to converge, the second term will become the dominating term and the KL privacy bound suffers from exponential growth with regard to the training time. Nevertheless, observe that for small \(T\to 0\), the second component in Lemma 3.2 contains a small factor \(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T=o(T)\) by Taylor expansion. Therefore, for small training time, the second component is smaller than the first and the third components in Lemma 3.2 that linearly scale with \(T\), and thus does not dominate the privacy bound. Intuitively, this phenomenon is related to lazy training [19]. In Section 5 and Figure 2, we also numerically validate that the second component does not have a high effect on the KL privacy loss in the case of small training time.
_Dependence of KL privacy bound on network over-parameterization_. Under a fixed training time \(T\) and noise scale \(\sigma^{2}\), Lemma 3.2 predicts that the KL divergence upper bound in Theorem 3.1 depends on the gradient difference and gradient norm at initialization, and on the rank of the gradient subspace \(\text{rank}(M_{T})\) throughout training. We now discuss how these two terms change under increasing width and depth, and whether there is room to improve them under over-parameterization.
1. The gradient norm at initialization crucially depends on how the per-layer variance in the Gaussian initialization distribution scales with the network width and depth. Therefore, it is possible to reduce the gradient difference at initialization (and thus improve the KL privacy bound) by using specific initialization schemes, as we later show in Section 4 and Section 5.
2. Regarding the rank of gradient subspace \(\text{rank}(M_{T})\): when the gradients along the training trajectory span the whole optimization space, \(\text{rank}(M_{T})\) would equal the dimension of the learning problem. Consequently, the gradient fluctuation upper bound (and thus the KL privacy bound) worsens with increasing number of model parameters (over-parameterization) in the worst-case. However, if the gradients are low-dimensional [45; 32; 43] or sparse [37], \(\text{rank}(M_{T})\) could be dimension-independent and thus enables better bound for gradient fluctuation (and KL privacy bound). We leave this as an interesting open problem.
## 4 KL privacy bound for Linearized Network under over-parameterization
In this section, we focus on the training of linearized networks (3), which fosters a refined analysis on the interplay between KL privacy and over-parameterization (increasing width and depth). Analysis of DNNs via linearization is a commonly used technique in both theory [19] and practice [43; 41]. We hope our analysis for linearized network serves as an initial attempt that would open a door to theoretically understanding the relationship between over-parameterization and privacy.
To derive a composition-based KL privacy bound for training a linearized network, we apply Theorem 3.1 which requires an upper bound for the norm of gradient difference between the training
processes on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at any time \(t\). Note that the empirical risk function for training linearized models enjoys convexity, and thus a relatively short amount of training time is enough for convergence. In this case, intuitively, the gradient difference between neighboring datasets does not change a lot during training, which allows for a tighter upper bound for the gradient difference norm for linearized networks (than Lemma 3.2).
In the following theorem, we prove that for a linearized network, the gradient difference throughout training has a uniform upper bound that only depends on the network width, depth and initialization.
**Theorem 4.1** (Gradient Difference throughout training linearized network).: _Under Assumption 2.2, taking over the randomness of the random initialization and the Brownian motion, for any \(t\in[0,T]\), running Langevin diffusion on a linearized network in Eq. (3) satisfies that_
\[\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D}^{\prime})\|_{2}^{2}\right]\leq\frac{4B}{n^{2}}\,,\text{ where }B\coloneqq d\cdot o\cdot\left(\prod_{i=1}^{L-1}\frac{\beta_{i}m_{i}}{2}\right)\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\,, \tag{7}\]
_where \(n\) is the training dataset size, and \(B\) is a constant that only depends on the data dimension \(d\), the number of classes \(o\), the network depth \(L\), the per-layer network width \(\{m_{i}\}_{i=1}^{L}\), and the per-layer variances \(\{\beta_{i}\}_{i=1}^{L}\) of the Gaussian initialization distribution._
Theorem 4.1 provides a precise analytical upper bound for the gradient difference during training linearized network, by tracking the gradient distribution for fully connected feed-forward ReLU network with Gaussian weight matrices. Our proof borrows some techniques from [3, 54] for computing the gradient distribution, refer to Appendix C.1 and C.2 for the full proofs. By plugging Eq. (7) into Theorem 3.1, we obtain the following KL privacy bound for training a linearized network.
**Corollary 4.2** (KL privacy bound for training linearized network).: _Under Assumption 2.2 and neural networks (3) initialized by Gaussian distribution with per-layer variance \(\{\beta_{i}\}_{i=1}^{L}\), running Langevin diffusion for linearized network with time \(T\) on any neighboring datasets satisfies that_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}^{lin}\|\mathbf{W}_{[0:T]}^{\prime\,lin})\leq\frac{2BT}{n^{2}\sigma^{2}}\,, \tag{8}\]
_where \(B\) is the constant that specifies the gradient norm upper bound, given by Eq. (7)._
Over-parameterization affects privacy differently under different initialization. Corollary 4.2 and Theorem 4.1 establish the role of over-parameterization in our KL privacy bound, which crucially depends on how the per-layer Gaussian initialization variance \(\beta_{i}\) scales with the per-layer network width \(m_{i}\) and depth \(L\). We summarize our KL privacy bound for the linearized network under different width, depth and initialization schemes in Table 1, and elaborate on the comparison below.
**(1) LeCun initialization** uses small, width-independent variance for initializing the first layer \(\beta_{1}=\frac{1}{d}\) (where \(d\) is the number of input features), and width-dependent variance \(\beta_{2}=\cdots=\beta_{L}=\frac{1}{m}\) for initializing all the subsequent layers. Therefore, the second term \(\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\) in the constant \(B\) of Eq. (7) increases linearly with the width \(m\) and depth \(L\). However, due to \(\frac{m_{l}\cdot\beta_{l}}{2}<1\) for all \(l=2,\cdots,L\), the first product term \(\prod_{l=1}^{L-1}\frac{\beta_{l}m_{l}}{2}\) in constant \(B\) decays with increasing depth. Therefore, by combining the two terms, we prove that the KL privacy bound worsens with increasing width, but improves with increasing depth (as long as the depth is large enough). Similarly, under **Xavier initialization** \(\beta_{l}=\frac{2}{m_{l-1}+m_{l}}\), we prove that the KL privacy bound (especially the constant \(B\) (7)) improves with increasing depth as long as the depth is large enough.
**(2) NTK and He initializations** use large per-layer variance \(\beta_{l}=\begin{cases}\frac{2}{m_{l}}&l=1,\cdots,L-1\\ \frac{1}{o}&l=L\end{cases}\) (for NTK) and \(\beta_{l}=\frac{2}{m_{l-1}}\) (for He). Consequently, the gradient difference under NTK or He initialization is significantly larger than that under LeCun initialization. Specifically, the gradient norm constant \(B\) in Eq. (7) grows linearly with the width \(m\) and the depth \(L\) under He and NTK initializations, thus indicating a worsening of KL privacy bound under increasing width and depth.
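To make these trends concrete, the constant \(B\) of Eq. (7) can be evaluated numerically for different depths and initialization schemes; the sketch below uses illustrative widths and is not part of our formal analysis.

```python
def gradient_norm_constant_B(d, o, hidden_width, depth, betas):
    """Evaluate B = d * o * (prod_{i=1}^{L-1} beta_i m_i / 2) * sum_l beta_L / beta_l, Eq. (7).

    betas: list [beta_1, ..., beta_L]; widths m_1 = ... = m_{L-1} = hidden_width, m_L = o.
    """
    widths = [d] + [hidden_width] * (depth - 1) + [o]
    prod = 1.0
    for i in range(1, depth):                 # i = 1, ..., L-1
        prod *= betas[i - 1] * widths[i] / 2.0
    total = sum(betas[-1] / b for b in betas)  # sum_{l=1}^{L} beta_L / beta_l
    return d * o * prod * total

def lecun_betas(d, m, L):
    return [1.0 / d] + [1.0 / m] * (L - 1)     # beta_l = 1 / m_{l-1}

def he_betas(d, m, L):
    return [2.0 / d] + [2.0 / m] * (L - 1)     # beta_l = 2 / m_{l-1}

# B shrinks with depth under LeCun but grows under He (d=784, m=512, o=1 are placeholders).
for L in (3, 6, 12, 24):
    print(L, gradient_norm_constant_B(784, 1, 512, L, lecun_betas(784, 512, L)),
             gradient_norm_constant_B(784, 1, 512, L, he_betas(784, 512, L)))
```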
## 5 Numerical validation of our KL privacy bounds
To understand the relation between privacy and over-parameterization in _practical_ DNN training (and to validate our KL privacy bounds in Lemma 3.2 and Corollary 4.2), we perform experiments on DNN training via noisy GD to numerically estimate the KL privacy loss. We will show that if the total training time is small, it is indeed possible to obtain numerical KL privacy bound estimates that do not grow with the total number of parameters (under carefully chosen initialization distributions).
_Numerical estimation procedure_. Theorem 3.1 proves that the exact KL privacy loss scales with the expected squared norm of the gradient difference during training. This can be estimated by its empirical average across training runs. For the training dataset \(\mathcal{D}\), we consider all 'car' and 'plane' images of CIFAR-10. For neighboring datasets, we consider all possible \(\mathcal{D}^{\prime}\) that remove a record from \(\mathcal{D}\) or add a test record to \(\mathcal{D}\), i.e., the standard "add-or-remove-one" neighboring notion [2]. We run noisy gradient descent with constant step-size \(0.01\) for \(50\) epochs on both datasets.
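A sketch of this estimation procedure is given below: by Theorem 3.1, the KL privacy loss is the time integral of the expected squared gradient difference divided by \(2\sigma^{2}\), which can be approximated by a sum over discrete noisy-GD steps and an average over training runs. The function `grad_fn`, the initialization sampler, and the discretization details are placeholders rather than our exact experimental code.

```python
import torch

def kl_privacy_estimate(grad_fn, runs, steps, eta, sigma2, w0_sampler, D, D_prime):
    """Monte-Carlo estimate of Eq. (6):
    (1 / 2 sigma^2) * int_0^T E||grad L(W_t; D) - grad L(W_t; D')||^2 dt,
    approximated over noisy-GD trajectories with step size eta (so dt ~= eta)."""
    total = 0.0
    for _ in range(runs):
        w = w0_sampler()
        for _ in range(steps):
            g, g_prime = grad_fn(w, D), grad_fn(w, D_prime)
            total += eta * (g - g_prime).pow(2).sum().item()
            # noisy-GD update on the original dataset D
            w = w - eta * g + (2 * sigma2 * eta) ** 0.5 * torch.randn_like(w)
    return total / (runs * 2 * sigma2)
```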
_Numerically validate the growth of KL privacy loss with regard to training time_. Figure 1 shows the numerical KL privacy loss under different initializations, for fully connected networks with width \(1024\) and depth \(10\). We observe that the KL privacy loss grows linearly at the beginning of training (\(<10\) epochs), which validates the first and third terms in the KL privacy bound of Lemma 3.2. Moreover, the KL privacy loss under LeCun and Xavier initializations is close to zero at the beginning of training (\(<10\) epochs). This shows that LeCun and Xavier initializations induce small gradient norms at small training times, which is consistent with Theorem 4.1. However, when the number of epochs is large, the numerical KL privacy loss grows faster than linear accumulation under all initializations, thus validating the second term in Lemma 3.2.
_Numerically validate the dependency of KL privacy loss on network width, depth and initializations_. Figure 2 shows the numerical KL privacy loss under different network depth, width and initializations, for a fixed training time. In Figure 1(c), we observe that increasing width and training time always increases KL privacy loss. This is consistent with Theorem 4.1, which shows that increasing width worsens the gradient norm at initialization (given fixed depth), thus harming KL privacy bound Lemma 3.2 at the beginning of training. We also observe that the relationship between KL privacy
Figure 1: Numerically estimated KL privacy loss for noisy GD with constant step-size \(0.001\) on deep neural network with width \(1024\) and depth \(10\). We report the mean and standard deviation across \(6\) training runs, taking worst-case over all neighboring datasets. The numerical KL privacy loss grows with the number of training epochs under all initializations. The growth rate is close to linear at beginning of training (epochs \(<10\)) and is faster than linear at epochs \(\geq 10\).
Figure 2: Numerically estimated KL privacy loss for noisy GD with constant step-size on fully connected ReLU network with different width, depth and initializations. We report the mean and standard deviation across \(6\) training runs, taking worst-case over all neighboring datasets. Under increasing width, the KL privacy loss always grows under all evaluated initializations. Under increasing depth, at the beginning of training (20 epochs), the KL privacy loss worsens with depth under He initialization, but first worsens with depth (\(\leq 8\)) and then improves with depth (\(\geq 8\)) under Xavier and LeCun initializations. At later phases of the training (50 epochs), KL privacy worsens (increases) with depth under all evaluated initializations.
and network depth depends on the initialization distributions and the training time. Specifically, in Figure 2(a), when the training time is small (20 epochs), for LeCun and Xavier initializations, the numerical KL privacy loss improves with increasing depth when depth \(>8\). Meanwhile, when the training time is large (50 epochs) in Figure 2(b), KL privacy loss worsens with increasing depth under all initializations. This shows that given small training time, the choice of initialization distribution affects the dependency of KL privacy loss on increasing depth, thus validating Lemma 3.2 and Theorem 4.1.
## 6 Utility guarantees for Training Linearized Network
Our privacy analysis suggests that training linearized network under certain initialization schemes (such as LeCun initialization) allows for significantly better privacy bounds under over-parameterization by increasing depth. In this section, we further prove utility bounds for Langevin diffusion under initialization schemes and investigate the effect of over-parameterization on the privacy utility trade-off. In other words, we aim to understand whether there is any utility degradation for training linearized networks when using the more privacy-preserving initialization schemes.
Convergence of training linearized network. We now prove convergence of the excess empirical risk in training the linearized network via Langevin diffusion. This is a well-studied problem in the literature for noisy gradient descent. We extend the convergence theorem to continuous-time Langevin diffusion below and investigate factors that affect the convergence under over-parameterization. The proof is deferred to Appendix D.1.
**Lemma 6.1** (Extension of [42, Theorem 2] and [45, Theorem 3.1]).: _Let \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\) be the empirical risk function of a linearized network in Eq. (3) expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Let \(\mathbf{W}_{0}^{*}\) be an \(\alpha\)-near-optimal solution for the ERM problem such that \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{*};\mathcal{D})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\alpha\). Let \(\mathcal{D}=\{\mathbf{x}_{i}\}_{i=1}^{n}\) be an arbitrary training dataset of size \(n\), and denote \(M_{0}=\left(\nabla f_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{1}),\cdots,\nabla f_{\mathbf{W}_{0 }^{lin}}(\mathbf{x}_{n})\right)^{\top}\) as the NTK feature matrix at initialization. Then running Langevin diffusion (4) on \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) with time \(T\) and initialization vector \(\mathbf{W}_{0}^{lin}\) satisfies_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})]-\min_{\mathbf{W}}\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq\alpha+\frac{R}{2T}+\frac{1}{2}\sigma^{2}\text{rank}(M_{0})\,,\]
_where the expectation is over the Brownian motion \(B_{T}\) in the Langevin diffusion of Eq. (4), \(\bar{\mathbf{W}}_{T}^{lin}=\frac{1}{T}\int_{0}^{T}\mathbf{W}_{t}^{lin}\,\mathrm{d}t\) is the average of all iterates, and \(R=\|\mathbf{W}_{0}^{lin}-\mathbf{W}_{0}^{*}\|_{M_{0}}^{2}\) is the gap between initialization parameters \(\mathbf{W}_{0}^{lin}\) and solution \(\mathbf{W}_{0}^{*}\)._
Remark 6.2.: The excess empirical risk bound in Lemma 6.1 is smaller if the data is low-rank (e.g., image data), since then \(\text{rank}(M_{0})\) is small. This is consistent with the prior dimension-independent private learning literature [32, 33, 37] and shows the benefit of low-dimensional gradients for private learning.
Lemma 6.1 highlights that the excess empirical risk scales with the gap \(R\) between initialization and solution (denoted as the lazy training distance), the rank of the gradient subspace, and the constant \(B\) that specifies an upper bound on the expected gradient norm during training. Specifically, the smaller the lazy training distance \(R\) is, the better the excess risk bound is, given a fixed training time \(T\) and noise variance \(\sigma^{2}\). We have discussed how over-parameterization affects the gradient norm constant \(B\) and the gradient subspace rank \(\text{rank}(M_{0})\) in Section 3. Therefore, it only remains to investigate how the lazy training distance \(R\) changes with the network width, depth, and initialization, as follows.
Lazy training distance \(R\) decreases with model over-parameterization. It is widely observed in the literature [19, 55, 38] that under appropriate choices of initializations, gradient descent on fully connected neural networks falls into a lazy training regime. That is, with high probability, there exists a (nearly) optimal solution for the ERM problem that is close to the initialization parameters in terms of \(\ell_{2}\) distance. Moreover, this lazy training distance \(R\) is closely related to the smallest eigenvalue of the NTK matrix, and generally decreases as the model becomes increasingly overparameterized. In the following proposition, we compute a near-optimal solution via the pseudo inverse of the NTK matrix, and prove that it has small distance to the initialization parameters via existing lower bounds for the smallest eigenvalue of the NTK matrix [40].
**Lemma 6.3** (Bounding lazy training distance via smallest eigenvalue of the NTK matrix).: _Under Assumption 2.4 and single-output linearized network Eq. (3) with \(o=1\), assume that the per-layer network widths \(m_{0},\cdots,m_{L}=\tilde{\Omega}(n)\) are large. Let \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) be the empirical risk Eq. (1) for
linearized network expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Then for any \(\mathbf{W}_{0}^{lin}\), there exists a corresponding solution \(\mathbf{W}_{0}^{\frac{1}{n^{2}}}\), s.t. \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{\frac{1}{n^{2}}})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\frac{1}{n^{2}}\), \(\text{rank}(M_{0})=n\) and_
\[R\leq\tilde{\mathcal{O}}\left(\max\left\{\frac{1}{d\beta_{L}\left(\prod_{i=1} ^{L-1}\beta_{i}m_{i}\right)},1\right\}\frac{n}{\sum_{l=1}^{L}\beta_{l}^{-1}} \right)\,, \tag{9}\]
_with high probability over training data sampling and random initialization Eq. (5), where \(\tilde{\mathcal{O}}\) ignores logarithmic factors with regard to \(n\), \(m\), \(L\), and tail probability \(\delta\)._
The full proof is deferred to Appendix D.2. By using Lemma 6.3, we provide a summary of bounds for \(R\) under different initializations in Table 1. We observe that the lazy training distance \(R\) decreases with increasing width and depth under LeCun, He and NTK initializations, while under Xavier initialization \(R\) only decreases with increasing depth.
_Privacy & Excess empirical risk tradeoffs for Langevin diffusion under linearized network_. We now use the lazy training distance \(R\) to prove empirical risk bound and combine it with our KL privacy bound Section 4 to show the privacy utility trade-off under over-parameterization.
**Corollary 6.4** (Privacy utility trade-off for linearized network).: _Assume that all conditions in Lemma 6.3 holds. Let \(B\) be the gradient norm constant in Eq. (7), and let \(R\) be the lazy training distance bound in Lemma 6.3. Then for \(\sigma^{2}=\frac{2BT}{\varepsilon n^{2}}\) and \(T=\sqrt{\frac{\varepsilon nR}{2B}}\), releasing all iterates of Langevin diffusion with time \(T\) satisfies \(\varepsilon\)-KL privacy, and has empirical excess risk upper bounded by_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})] -\min_{\mathbf{W}}\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq \tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{BR}{\varepsilon n}}\right) \tag{10}\] \[=\tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{\max\{1,d \beta_{L}\prod_{l=1}^{L-1}\beta_{l}m_{l}\}}{2^{L-1}\varepsilon}}\right) \tag{11}\]
_with high probability over random initialization Eq. (5), where the expectation is over Brownian motion \(B_{T}\) in Langevin diffusion, and \(\tilde{O}\) ignores logarithmic factors with regard to width \(m\), depth \(L\), number of training data \(n\) and tail probability \(\delta\)._
See Appendix D.3 for the full proof. Corollary 6.4 proves that the excess empirical risk worsens in the presence of a stronger privacy constraint, i.e., a small privacy budget \(\varepsilon\), thus contributing to a trade-off between privacy and utility. However, the excess empirical risk also scales with the lazy training distance \(R\) and the gradient norm constant \(B\). These constants depend on network width, depth and initialization distributions, and we prove privacy utility trade-offs for training linearized network under commonly used initialization distributions, as summarized in Table 1.
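For a given KL budget \(\varepsilon\), the calibration in Corollary 6.4 can be evaluated directly; the sketch below simply computes \(T=\sqrt{\varepsilon nR/(2B)}\), \(\sigma^{2}=2BT/(\varepsilon n^{2})\) and the resulting \(\tilde{\mathcal{O}}(1/n^{2}+\sqrt{BR/(\varepsilon n)})\) risk scale, with \(B\) and \(R\) supplied from Eq. (7) and Eq. (9); the numerical values in the example are arbitrary placeholders.

```python
import math

def calibrate_langevin(eps, n, B, R):
    """Return (T, sigma^2, risk_scale) following Corollary 6.4 (up to log factors)."""
    T = math.sqrt(eps * n * R / (2 * B))
    sigma2 = 2 * B * T / (eps * n ** 2)
    risk_scale = 1 / n ** 2 + math.sqrt(B * R / (eps * n))
    return T, sigma2, risk_scale

# Illustrative numbers only (B and R would come from Eq. (7) and Eq. (9)).
print(calibrate_langevin(eps=1.0, n=10_000, B=0.5, R=2.0))
```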
We would like to highlight that our privacy utility trade-off bound under LeCun and Xavier initialization strictly improves with increasing depth as long as the data satisfy Assumption 2.4 and the hidden-layer width is large enough. To our best knowledge, this is the first time that a strictly improving privacy utility trade-off under over-parameterization is shown in literature. This shows the benefits of precisely bounding the gradient norm (Appendix C.1) in our privacy and utility analysis.
## 7 Conclusion
We prove a new KL privacy bound for training fully connected ReLU networks (and their linearized variants) using the Langevin diffusion algorithm, and investigate how privacy is affected by the network width, depth and initialization. Our results suggest that there is a complex interplay between privacy and over-parameterization (width and depth) that crucially relies on which initialization distribution is used and how much the gradient fluctuates during training. Moreover, for a linearized variant of the fully connected network, we prove KL privacy bounds that improve with increasing depth under certain initialization distributions (such as LeCun and Xavier). We further prove excess empirical risk bounds for linearized networks under KL privacy, which similarly improve as depth increases under LeCun and Xavier initialization. This shows the gain of our new privacy analysis for capturing the effect of over-parameterization. We leave it as an important open problem whether our privacy utility trade-off results for linearized networks can be generalized to deep neural networks.
## Acknowledgments and Disclosure of Funding
The authors would like to thank Yaxi Hu and anonymous reviewers for helpful discussions on drafts of this paper. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043), and the Swiss National Science Foundation (SNSF) under grant number 200021_205011, Google PDPO faculty research award, Intel within the www.private-ai.org center, Meta faculty research award, the NUS Early Career Research Award (NUS ECRA award number NUS ECRA FY19 P16), and the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
| We analytically investigate how over-parameterization of models in randomized machine learning algorithms affects the leakage of information about their training data. Specifically, we prove a bound on the KL divergence between model distributions on worst-case neighboring datasets and explore its dependence on the initialization, width, and depth of fully connected neural networks. We find that this KL privacy bound is largely determined by the expected squared gradient norm with respect to the model parameters during training. In particular, in the linearized network setting, this squared gradient norm is closely tied to the per-layer variance of the initialization distribution. Using this analysis, we show that the privacy bound improves with increasing depth under certain initializations (LeCun and Xavier), while under other initializations (He and NTK) increasing depth degrades the privacy |
2309.14967 | A novel approach for holographic 3D content generation without depth map | In preparation for observing holographic 3D content, acquiring a set of RGB
color and depth map images per scene is necessary to generate
computer-generated holograms (CGHs) when using the fast Fourier transform (FFT)
algorithm. However, in real-world situations, these paired formats of RGB color
and depth map images are not always fully available. We propose a deep
learning-based method to synthesize the volumetric digital holograms using only
the given RGB image, so that we can overcome environments where RGB color and
depth map images are partially provided. The proposed method uses only the
input of RGB image to estimate its depth map and then generate its CGH
sequentially. Through experiments, we demonstrate that the volumetric hologram
generated through our proposed model is more accurate than that of competitive
models, under the situation that only RGB color data can be provided. | Hakdong Kim, Minkyu Jee, Yurim Lee, Kyudam Choi, MinSung Yoon, Cheongwon Kim | 2023-09-26T14:37:31 | http://arxiv.org/abs/2309.14967v1 | # A Novel Approach for Holographic 3D Content Generation Without Depth Map
###### Abstract
In preparation for observing holographic 3D content, acquiring a set of RGB color and depth map images per scene is necessary to generate computer-generated holograms (CGHs) when using the fast Fourier transform (FFT) algorithm. However, in real-world situations, these paired formats of RGB color and depth map images are not always fully available. We propose a deep learning-based method to synthesize the volumetric digital holograms using only the given RGB image, so that we can overcome environments where RGB color and depth map images are partially provided. The proposed method uses only the input of RGB image to estimate its depth map and then generate its CGH sequentially. Through experiments, we demonstrate that the volumetric hologram generated through our proposed model is more accurate than that of competitive models, under the situation that only RGB color data can be provided.
Hakdong Kim\({}^{1}\), Minkyu Jee\({}^{2}\), Yurim Lee\({}^{3}\), Kyudam Choi\({}^{4}\), MinSung Yoon\({}^{5,*}\), Cheongwon Kim\({}^{3,*}\)\({}^{1}\)Department of Digital Contents, Sejong University, Seoul, 05006, Korea
\({}^{2}\)Department of Algorithm Development, Selvers, Seoul, 04594, Korea
\({}^{3}\)Department of Software, Sejong University, Seoul, 05006, Korea
\({}^{4}\)Department of Software Convergence, Sejong University, Seoul, 05006, Korea
\({}^{5}\)Communication & Media Research Laboratory,
Electronics and Telecommunications Research Institute, Daejeon, 34129, Korea
Computer-generated hologram, Depth map estimation, Deep learning
## 1 Introduction
Computer-Generated Holograms (CGH) are generated from RGB images coupled with their corresponding depth maps. While such input data can be acquired by specific camera products, there exists pairwise inconsistency in their resolution. This leads to CGH generation methods requiring a pre-processing step that involves aligning the resolution of a given RGB image and depth map. Moreover, providing realistic 3D holographic content demands high resolution (2K (\(1920\times 1080\)) and 4K (\(3840\times 2160\))) RGB image-depth map pair. Such processes incur large computational costs. To relieve such a burden, we propose an approach that circumvents this preprocessing step by generating holographic 3D content only from RGB images.
Figure 1 shows the differences between the conventional CGH generation methods (a) and our newly proposed method (b). Our proposed method consists of an Embedded Depth map Estimation module and a CGH Generation module. The Embedded Depth map Estimation module first estimates a depth map using an RGB image, while the CGH Generation module subsequently generates the volumetric CGH. Experimental results show that the quality of volumetric CGHs generated by our approach does not fall behind that of other state-of-the-art models, which require both RGB images and depth maps as input.
Figure 1: Overview of the proposed method. Conventional CGH generation methods (a): Both RGB image and depth map are required to generate holographic 3D content. Proposed method (b): Only RGB image is required to generate holographic 3D content once depth map learning is complete.
## 2 Related Work
### Assistance module in computer vision
While our Embedded Depth map Estimation module estimates a depth map used for generating a volumetric 3D hologram, previous works have employed similar approaches by using assistance modules. Zhang et al. used a guidance module to predict segmentation images [1]. Nazeri et al. used assistance modules in image inpainting to generate a complete image [2] with a Generative Adversarial Network [3]-based model. Huang et al. and Jiao et al. proposed methods that intermediately predict depth maps to achieve their purpose [4, 5]. Zhang et al. used a U-Net-based [6] model that jointly outputs segmentation and depth map estimates [7]. Wang et al. augmented the depth map prediction process with a semantic segmentation module [8]. Kumar et al. proposed a framework for classifying dynamic objects in driving situations that first acquires a segmentation model and utilizes its outputs as guidance features [9].
### Digital Hologram Generation
Studies related to digital hologram generation can be categorized into generating holograms for 2D and 3D scenes. Long et al. utilized FCNs (Fully Convolutional Networks) and GANs to generate 2D holograms [10]. Khan et al. proposed a GAN-based model that generates CGHs quickly [11]. Lee et al. presented a method to generate holograms using distance information [12]. Shi et al. introduced an approach for generating volumetric 3D holograms using an RGB image and a depth image [13]. The novelty of our approach lies in utilizing only RGB images to generate volumetric 3D digital holograms, which distinguishes it from previous works.
## 3 Proposed Method
### Model Architecture
The proposed model architecture consists of two modules: the Embedded Depth map Estimation module and the CGH Generation module. The Embedded Depth map Estimation module generates a depth map using an RGB image as input. The intermediate latent feature maps are propagated to the CGH Generation module. The CGH Generation module first fuses the layer-wise latent feature maps in the Feature Fusion block and uses them to generate amplitude and phase through the Cascaded Convolutional blocks.
Figure 2 illustrates the Embedded Depth Map Estimation module. For the Encoding block which performs feature extraction and down-sampling, we imported DenseNet161 [14]. Given a \(384\times 384\) RGB image with 3 channels as input, the Encoding block repeatedly propagates it through 6 convolution filters to create a \(192\times 192\) feature map. The number of channels for each feature map is 96, 192, 384, 768, 1536, and 2208 respectively while all convolution filters' size and stride are identically set to \(3\times 3\) and 1 respectively.
The Decoding block performs up-sampling and estimates a depth map using the last extracted feature map as input. The Embedded Depth map Estimation module uses skip connections, which reduce the spatial information loss incurred in each successive layer. We connected the feature maps of 4 encoding layers to those of 4 decoding layers. The latent feature maps of the Encoding and Decoding blocks (\(Ef_{n}\), \(Df_{n}\), \(1\leq n\leq 4\) in Figure 2) are used as inputs for the Feature Fusion blocks. The Embedded Depth Map Estimation module is defined as
\[Ef_{n+1}=\sum_{x=0}^{h-1}\sum_{y=0}^{w-1}Ef_{n}(x,y)*k \tag{1}\]
\[Df_{n+1}=Bilinear(Up(Skip(Ef_{n},Df_{n}))) \tag{2}\]
where \(Ef\) is the Encoding block's feature map, \(h\) and \(w\) are the height and width, \(k\) is the filter, and \(Df\) is the Decoding block's feature map. We used the bilinear interpolation method in consideration of both computational cost and performance.
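As a concrete illustration of Equations (1)-(2), the following is a minimal PyTorch sketch of the Embedded Depth map Estimation module. It is not the authors' implementation: the real encoder is a DenseNet161 backbone, whereas here plain stride-2 convolutions stand in for it, and the channel widths, the concatenation-based skip connection, and the final up-sampling factor are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthEstimationModule(nn.Module):
    """Sketch of the module: an encoder produces multi-scale features Ef_n,
    and a decoder combines them with skip connections and bilinear
    up-sampling (Eq. 2) to predict a single-channel depth map."""

    def __init__(self, widths=(96, 192, 384, 768)):
        super().__init__()
        chans = (3,) + tuple(widths)
        # Encoder: stride-2 convolutions standing in for the DenseNet161 backbone.
        self.enc = nn.ModuleList([
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                          nn.BatchNorm2d(chans[i + 1]), nn.ReLU(inplace=True))
            for i in range(len(widths))])
        # Decoder: each step concatenates the skip feature and reduces channels.
        self.dec = nn.ModuleList([
            nn.Conv2d(2 * widths[i], widths[max(i - 1, 0)], 3, padding=1)
            for i in reversed(range(len(widths)))])
        self.head = nn.Conv2d(widths[0], 1, 3, padding=1)

    def forward(self, rgb):
        enc_feats, x = [], rgb
        for block in self.enc:
            x = block(x)
            enc_feats.append(x)                       # Ef_1 ... Ef_4
        dec_feats = []
        for conv, skip in zip(self.dec, reversed(enc_feats)):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = F.relu(conv(torch.cat([x, skip], dim=1)))   # skip connection (Eq. 2)
            dec_feats.append(x)                       # Df_n, later reused for fusion
        depth = self.head(F.interpolate(x, scale_factor=2.0, mode="bilinear",
                                        align_corners=False))
        return depth, enc_feats, dec_feats
```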
The CGH Generation module consists of the Feature Fusion block and the Cascaded Convolutional blocks. Figure 3 illustrates the Feature Fusion block. We imported the Geometry-Aware Propagation (GAP) module [5] as the Feature Fusion block. The GAP module fuses two different pieces of information by generating features from the values at the same pixel location in the two inputs. The Feature Fusion block is defined as
\[\begin{split} Ff_{n}=Skip(Ef_{n},C_{n,4}(C_{n,1}(Ef_{n})\times \\ (C_{n,2}(Df_{n})\times C_{n,3}(Df_{n}))))\\ \\ C_{n,m}(j)=BN(\sum_{x=0}^{h-1}\sum_{y=0}^{w-1}j(x,y)*(1,1))\end{split} \tag{3}\]
where \(C\) denotes a convolution followed by batch normalization (BN), \(n\) and \(m\) index the feature-map stage and the convolution within it (\(1\leq n,m\leq 4\)), \(j\) is the input feature map, and \((1,1)\) denotes a \(1\times 1\) convolution kernel. The multiplication symbol is an element-wise product and the asterisk is a vector product. The Feature Fusion block creates new features for generating a hologram. The fused feature maps (\(Ff_{n}\) in Figure 3) are used as input of the Cascaded Convolutional blocks.
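The following is a minimal sketch of one Feature Fusion block following Equation (3). Only the \(1\times 1\) convolutions with batch normalization and the element-wise products are taken from the text; reading \(Skip\) as channel concatenation and the choice of output width are assumptions.

```python
import torch
import torch.nn as nn

class FeatureFusionBlock(nn.Module):
    """One Feature Fusion block (Eq. 3): Ef_n and Df_n pass through
    1x1 conv + BN branches C_{n,m}, are combined by element-wise products,
    and are merged with Ef_n through a skip connection."""

    def __init__(self, enc_ch, dec_ch, out_ch):
        super().__init__()
        def c(cin):
            return nn.Sequential(nn.Conv2d(cin, out_ch, kernel_size=1),
                                 nn.BatchNorm2d(out_ch))
        self.c1, self.c2, self.c3, self.c4 = c(enc_ch), c(dec_ch), c(dec_ch), c(out_ch)

    def forward(self, ef, df):
        # ef (Ef_n) and df (Df_n) are assumed to share the same spatial size.
        fused = self.c4(self.c1(ef) * (self.c2(df) * self.c3(df)))
        return torch.cat([ef, fused], dim=1)          # Ff_n = Skip(Ef_n, ...)
```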
Figure 4 illustrates the Cascaded Convolutional blocks. The fused feature maps and RGB images are used as input. 3 Cascaded Convolutional blocks are used to generate the amplitude while the 3 others are used to generate the phase. Each Cascaded Convolutional block consists of a \(3\times 3\) convolution layer, batch normalization, and an up-sampling layer, followed by a \(1\times 1\) convolution layer and a non-linear activation function LeakyReLU [15].
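A hedged sketch of the Cascaded Convolutional blocks described above; the up-sampling factor, the LeakyReLU slope, and the channel widths are illustrative assumptions, not values stated in the paper.

```python
import torch.nn as nn

def cascaded_block(cin, cout):
    """One Cascaded Convolutional block: 3x3 conv + BN + up-sampling,
    followed by a 1x1 conv and LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(cout, cout, kernel_size=1),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Three blocks per branch; a final 1x1 conv maps to the single-channel output.
amplitude_branch = nn.Sequential(cascaded_block(192, 96), cascaded_block(96, 48),
                                 cascaded_block(48, 24), nn.Conv2d(24, 1, kernel_size=1))
phase_branch = nn.Sequential(cascaded_block(192, 96), cascaded_block(96, 48),
                             cascaded_block(48, 24), nn.Conv2d(24, 1, kernel_size=1))
```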
### Model Optimization
The optimization strategy for the proposed model consists of two phases: depth map estimation and volumetric CGH generation. In phase 1, the Embedded Depth Map Estimation module is trained under a loss criterion comprising MSE (Mean Squared Error) and SSIM (Structural Similarity) that minimizes the discrepancy between estimated and ground truth depth maps. In phase 2, the CGH Generation module is trained under a loss criterion comprising MSE and the L1 norm that minimizes the discrepancy between generated and ground truth amplitude-phase pairs. To determine the loss criteria, we experimented with the coefficient \(a_{1}\) for SSIM and the coefficient \(a_{2}\) for the L1 norm; we found that the optimal value of both \(a_{1}\) and \(a_{2}\) was 0.01.
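The two loss criteria can be sketched as follows. The exact way SSIM and the L1 norm enter the criteria (here as additive terms weighted by \(a_{1}\) and \(a_{2}\)) is an assumption, and `ssim_fn` stands for any external SSIM implementation supplied by the user.

```python
import torch.nn.functional as F

def phase1_loss(pred_depth, gt_depth, ssim_fn, a1=0.01):
    """Phase 1 criterion: MSE + a1 * (1 - SSIM) between estimated and ground
    truth depth maps. `ssim_fn` should return a similarity in [0, 1]."""
    return F.mse_loss(pred_depth, gt_depth) + a1 * (1.0 - ssim_fn(pred_depth, gt_depth))

def phase2_loss(pred_amp, pred_phase, gt_amp, gt_phase, a2=0.01):
    """Phase 2 criterion: MSE + a2 * L1 on the amplitude-phase pair."""
    mse = F.mse_loss(pred_amp, gt_amp) + F.mse_loss(pred_phase, gt_phase)
    l1 = F.l1_loss(pred_amp, gt_amp) + F.l1_loss(pred_phase, gt_phase)
    return mse + a2 * l1
```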
## 4 Experiments
We performed two experiments to evaluate our proposed approach. The first experiment is constrained to generating volumetric holograms using only RGB images. Since the other competitive models used in the experiment [13, 12, 11] require an RGB-depth pair, we fed them RGB images coupled with depth maps whose pixels are filled with zero depth values, whereas we fed the proposed model only RGB images. The second experiment removes this constraint, allowing the other models [13, 12, 11] to fully use both inputs.
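A minimal sketch of the zero-filled depth inputs used for the baseline models in the first experiment; the tensor layout (batch, channel, height, width) is an assumption.

```python
import torch

def baseline_inputs(rgb):
    """Pair each RGB image with an all-zero depth map of the same spatial size,
    mimicking the RGB-only setting described above."""
    zero_depth = torch.zeros(rgb.shape[0], 1, rgb.shape[2], rgb.shape[3],
                             dtype=rgb.dtype, device=rgb.device)
    return rgb, zero_depth
```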
### Dataset and Hyperparameter Setting
We used the dataset containing 4,000 image sets provided by Shi et al. [13] for our experiments, where each image set consists of an RGB image, a depth map, an amplitude, and a phase image. We partitioned the 4,000 image sets into 3,800 for training, 100 for validation, and 100 for testing.
Figure 4: Cascaded Convolutional blocks. Using fused feature maps and an RGB image as input, 6 Cascaded Convolutional blocks, \(1\times 1\) convolution, and activation function are repeatedly used to generate amplitude and phase.
Figure 3: Feature Fusion block. Using the encoder-decoder’s latent feature maps (\(Ef_{n}\), \(Df_{n}\), \(1\leq n\leq 4\)) as inputs, a fused feature (\(Ff_{n}\)) is generated through \(1\times 1\) convolution, batch normalization (BN), element-wise product, and skip connection.
Figure 2: Embedded Depth map Estimation module. The Encoding block performs feature extraction and down-sampling, and the Decoding block performs up-sampling, and generates a depth map.
Each model was trained for a total of 20 epochs with a fixed batch size of 4. The learning rate in the optimizer was set to 0.0001.
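A sketch of the training configuration stated above (20 epochs, batch size 4, learning rate 0.0001); the optimizer type (Adam) and the loop structure are assumptions.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, loss_fn, epochs=20, batch_size=4, lr=1e-4):
    """Hyper-parameters taken from the text; `loss_fn(model, batch)` is assumed
    to compute one of the phase-wise losses sketched earlier."""
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            optimizer.step()
```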
### Results
Figure 5 shows the PSNR (Peak Signal-to-Noise Ratio) and SSIM of the four models. We found that the proposed model's results are more accurate than those of the other models when every model uses only an RGB image, whereas the proposed model's results fall only slightly behind those of the other models when the latter fully use both the RGB image and the depth map. Table 1 shows the time required to train the models when the proposed method uses only an RGB image whereas the other models fully use both an RGB image and a depth map. The proposed method needs significantly less time than the other models. This is due to the reduction in the number of parameters caused by eliminating the depth map channel as input and by the use of the Cascaded Convolutional blocks. Summarizing the numerical results, the performance of the proposed method is slightly (about 1.57%) lower than that of Shi et al.'s method [13], which is the best case in the experiment, but significantly (about 286%) better in terms of time efficiency. Figure 6 shows qualitative visual differences in the amplitude, phase, and reconstructed holographic 3D image between the proposed model and the other models when using only an RGB image. In the enlarged part including edge region information, it is visually confirmed that the edge part of the phase image from the proposed model is more similar to the ground truth than that of the other models, and this behavior is maintained in the reconstructed holographic 3D image. | In preparing holographic 3D content for viewing, collecting RGB color and depth map images for every scene is necessary to generate computer-generated holograms (CGH) with FFT-based algorithms. In real-world situations, however, the paired RGB color and depth map images are not always fully available. We propose a deep-learning-based method that synthesizes volumetric digital holograms from RGB images alone, so that it can also handle environments where RGB color and depth map images are only partially provided. The proposed method estimates a depth map from the input RGB image and then generates the CGH sequentially. Experimental results show that the volumetric holograms generated by the proposed model are more accurate than those of competing models in situations where only RGB color data is provided.
2301.07687 | Maybe, Maybe Not: A Survey on Uncertainty in Visualization | Understanding and evaluating uncertainty play a key role in decision-making.
When a viewer studies a visualization that demands inference, it is necessary
that uncertainty is portrayed in it. This paper showcases the importance of
representing uncertainty in visualizations. It provides an overview of
uncertainty visualization and the challenges authors and viewers face when
working with such charts. I divide the visualization pipeline into four parts,
namely data collection, preprocessing, visualization, and inference, to
evaluate how uncertainty impacts them. Next, I investigate the authors'
methodologies to process and design uncertainty. Finally, I contribute by
exploring future paths for uncertainty visualization. | Krisha Mehta | 2022-12-14T00:07:06 | http://arxiv.org/abs/2301.07687v1 | # Maybe, Maybe Not: A Survey on Uncertainty in Visualization
###### Abstract
Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.
## 1 Introduction
With a rise in the complexity and dimensionality of data, analyzing and modeling data becomes more challenging. When most of our decisions are data-driven, it becomes imperative that we know the nature of the data and the patterns it contains. As a result, analyzing the inherent uncertainty in the data is gaining more significance. In various fields, uncertainty can signify different things. For instance, data bias, random or systematic error, and statistical variance are all factors that contribute to data uncertainty. Without understanding the underlying uncertainty in our data, we cannot make accurate predictions. Similarly, to observe the true structure of our data and as well as identify patterns in it, we need to visualize it. Today, we can no longer undermine the significance of uncertainty nor ignore the importance of visualizations for data analysis.
As mentioned before, uncertainty is bound to exist whenever there is data. Therefore representation of uncertainty in data visualizations is crucial. Consider the example of hurricane path maps, as shown in Figure 1. The increase in the width of the predicted path with time is not due to an increase in the size of the hurricane. Instead, it is representing the inherent uncertainty in the data. In other words, the visualization indicates that compared to Friday, Sunday's hurricane path is more difficult to predict with any degree of accuracy.
Information tends to be withheld from the viewer when one does not portray uncertainty in the visualization. Therefore the viewer might occasionally be ignorant of this exclusion. This breach of trust can have significant consequences for both the author and the viewer. Given this significance, it is reasonable to assume that visualizations frequently include uncertainty. But how often do we encounter charts that represent uncertainty? How frequently do we check for bias in graphs that represent public surveys? As it turns out, not frequently.
In a recent study [9], 121 journalism articles, social science surveys, and economic estimates were examined. Out of 449 visualizations created for inference, the study demonstrates that only 14 accurately depict uncertainty. "What's Going on in This Graph?" is a New York Times (NYT) initiative to increase graphical literacy, especially among students. Different categories of charts, such as maps, parts-to-whole, and associations, are published for students to explore and analyze. When I looked into the distribution of these charts, I found that only 6 out of the 136 charts show uncertainty.
The question I ask is, do we actually examine uncertainty representations when we come across them in order to make decisions, or do we simply ignore them? Does uncertainty offer value or just clutter these visualizations? I try to investigate these questions in this paper. Visualizations are an integral part of newspapers, government bills, and business earnings reports to name a few. The public uses them to gain insights, spot trends, and make decisions.
Hence, when we visualize data, it becomes critical to support those visualizations with information about uncertainty. People frequently use visualizations to examine data and make observations. A lack of uncertainty representation could result in incorrect and erroneous interpretations. However, it can be challenging to visualize uncertainty. There are limited standard guidelines or protocols that authors can follow when they create such charts. Given these drawbacks, uncertainty visualization is considered one of the top research problems in data visualization [13]. With the help of a few uncertainty visualization examples, this survey studies how uncertainty contributes to every phase in visualization. Most research in this area focuses on creating charts with uncertainty and how viewers may perceive them. However, uncertainty is also influential in the other parts of the data visualization process, such as during data collection and preprocessing.
**The objectives of this paper are as follows:**
* Provide an entry point for anyone who wants to learn about uncertainty visualization
* Delineate the significance of uncertainty visualizations
* Explore how uncertainty influences every phase of the data visualization process
* Understand the challenges authors and viewers face when interacting with it
* Discuss the open problems and future research directions in the field

Figure 1: An example chart for Matthew showing its five-day forecast track [5]
This work is divided into the following sections. Section 2 defines uncertainty and describes the relationship between uncertainty and visualization. In Section 3, I classify the data visualization pipeline into four phases, analyzing the involvement of uncertainty in each phase. The classification helps look at each phase individually, focusing on the challenges and bottlenecks authors and viewers face when working with uncertainty visualization. Finally, I study some state-of-the-art methods to visualize uncertainty and discuss future directions for research. I conclude the paper in Section 4.
## 2 Uncertainty and Visualization
Visualizations are incredibly important for examining, analyzing, and interpreting data in the era of big data. Visualizations are evidence that a picture really does say a thousand words. They aid viewers in seeing trends, background noise, and outliers. Asking the correct questions can be quite challenging when there is an abundance of data. Through visualizations, viewers can determine what questions the data can help answer. With improvements in hardware, software, and graphics theory, data visualizations are adopted more frequently and widely [26]. Viewers use visualizations to make decisions. However, making decisions and drawing observations by looking at visualizations can be complex due to the statistical variance and uncertainty present in these visualizations.
As mentioned previously, uncertainty can have different definitions based on different scenarios [3]. Broadly speaking, uncertainty is classified into two types, aleatory and epistemic. Aleatory uncertainty rises from random fluctuation and unknown outcomes when an experiment is run multiple times in a consistent environment. For example, in a drug trial, a participant's blood pressure can vary due to stress and anxiety. There might also be measurement errors in the sphygmomanometer. Aleatory uncertainty can be minimized by controlling individual factors and increasing the number of readings. Epistemic uncertainty, on the other hand, rises from a lack of knowledge, like predicting the outcome of the same experiment in a completely different, unknown environment. For example, predicting the effect of a drug on a new disease. Uncertainty can be measured, like risks but can also be unquantified, like bias. While aleatory uncertainty is more widely represented in the visualizations [25], both types can be represented with distribution graphs.
Uncertainty and visualizations are interweaved, and working with one often requires working with the other. In 1644, Michael Florent van Langren was one of the first researchers to use visualization for statistical analysis [25]. He used a 1D line graph to present the 12 known estimated longitudinal distances between Toledo and Rome, as shown in Figure 2. Instead of using a table to show this data, Langren used this graph to showcase the wide range of variation. Even though all the distances were over-estimated (actual distance, in longitude, is shown using the arrow), the graph remains classic in demonstrating the power of visualization.
The popular Anscombe's quartet [1] is a perfect example of how data with similar statistics might have very different distributions, which becomes apparent only when the data are visualized. The quartet consists of four datasets with 11 points having nearly the same mean, sample variance, correlation, linear regression, and coefficient of determination. The four datasets may appear very similar to viewers looking at the data and the descriptive statistics. However, when one visualizes them, the difference in their distributions is very evident, as shown in Figure 3. Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [12, 19, 22, 4, 11] to analyze data uncertainty.
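A minimal sketch of this observation, using the copy of Anscombe's quartet bundled with seaborn: the per-dataset summary statistics are nearly identical, while plotting reveals the different structures.

```python
import seaborn as sns

# Anscombe's quartet ships with seaborn; each of the four datasets has 11 (x, y) points.
df = sns.load_dataset("anscombe")
stats = df.groupby("dataset").agg(
    x_mean=("x", "mean"), x_var=("x", "var"),
    y_mean=("y", "mean"), y_var=("y", "var"),
    corr=("x", lambda s: s.corr(df.loc[s.index, "y"])),
)
print(stats)  # near-identical summary statistics for all four datasets

# Plotting the same data exposes the very different underlying distributions.
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None, height=3)
```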
Using visualizations to represent and study uncertainty in data is widely adopted. However, uncertainty in visualizations is often not communicated [9]. One of the earliest instances of uncertainty being presented can be traced back to the 18th century. Joseph Priestley, a British scientist, created "A Chart of Biography" to present the lifespans of famous people as shown in Figure 4. He used horizontal lines to portray the lifetime of about 2000 people and used dots before or after the lines to communicate uncertainty.
Visualizations of uncertainty, however, are not common. Numerous factors influence why authors decide against visualizing uncertainty. Since they do not know all the information about the dataset, viewers may draw inaccurate conclusions in the absence of uncertainty representation. Nevertheless, introducing more uncertainty could also make the audience feel too overwhelmed to pay attention to it. The study of why visualizing uncertainty is rare is still in its early stages.
Figure 4: Priestley’s Chart of Biography [21]
Figure 3: Anscombe’s quartet represents four datasets with similar statistics but very different distributions.
Figure 2: Langren’s line graph is one of the first visualizations to present uncertainty
In the section that follows, I go through each of these issues in more detail and look at how uncertainty affects every stage of data visualization.
## 3 Uncertainty in Visualization
Previous works in the field have attempted to classify the data visualization process differently. [14] considers sampling, modeling, visualization, and decision-making as the primary sources of uncertainty. This paper follows a similar classification. I divide the visualization pipeline into **data collection, preprocessing, visualization and inference** as shown in Figure 5. Pang et al. [18] classify the process into data collection, derivation, and visualization and discuss how uncertainty is introduced in each stage.
Under the data collection phase, the paper mainly discusses the uncertainty added due to measurement errors. However, there are other sources, such as bias and sampling error, that the paper fails to describe. I investigate these uncertainties in Section 3.3.1. The authors then discuss the change data undergoes when it is preprocessed. These changes include converting one unit to another, rescaling, and resampling. However, they do not mention other vital issues such as missing data, approximation, and interpolation that I examine in Section 3.3.2. Next, the authors highlight how uncertainty also influences the data visualization stage itself. They mainly focus on radiosity and volume rendering, while this paper delves more into 2D visualizations. Finally, I explore how viewers infer these visualizations and the challenges they face while making a decision from these charts.
Uncertainty is presented at every phase of this classification. However, understanding and evaluating uncertainty in each of these phases is unique. Therefore, authors are required to approach these uncertainties based on their type and complexity, understand their abstraction, and then present them in visualizations in a way that is easy to grasp.
### Data Collection

Given the interdisciplinary nature of visualizations, the format, quantity, and type of data used to create them vary immensely. Different data implies different data collection processes and uncertainties. Uncertainty is intertwined with data acquisition and can arise from random variables and modeling errors [14]. Pang et al. [18] explain how almost all acquired data has statistical variation. Collected data can have errors, bias, and variance. [23] study how bias can be introduced during the process of collecting data. Datasets are prone to various biases that include but are not limited to selection bias, volunteer bias, admission bias, survivor bias, and misclassification bias.
It is imperative that datasets resemble the true population as closely as possible. Data can also contain different types of errors, such as coverage error, sampling error, nonresponse error, and measurement error [7]. Missing data points is another common challenge researchers face during data collection.
Correcting these errors is not always possible, but they can be mentioned in the visualization to inform the viewer. However, uncertainty is often ignored when authors create visualizations; at other times, this uncertainty in the data is not communicated to them [9]. For example, analyzing a piece called "Free Speech" (shown in Figure 6), published in the What's Going On in This Graph section of the NYT [16], we can see that information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/- 3.1%, but the graph makes no mention of it.
Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition.
### _Data Preprocessing_
Raw data is imperfect and can consist of noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase adds uncertainty to the data that may not be immediately evident. For example, fundamental transformations like rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant. For example, based on whether we take the value of pi as 22/7 (\(\approx 3.14285\)) or 3.14159, the area of the Sun can vary by a difference of \(239\times 10^{6}\) sq. miles.
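A rough check of the order of magnitude of this figure, assuming it refers to the Sun's cross-sectional disc area \(\pi R^{2}\) with a solar radius of roughly 432,700 miles (both assumptions, not taken from the text):

```python
# Difference in the disc area of the Sun induced by two approximations of pi.
R = 432_700                       # solar radius in miles (assumed)
low, high = 3.14159, 22 / 7
delta_area = (high - low) * R**2
print(f"{delta_area:.3e} sq. miles")   # ~2.4e8, i.e. on the order of 10^8 sq. miles
```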
A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is by deleting the complete entry that has the missing value. This leads to a loss of data and insights. Another option is to make an educated guess about the missing value. However, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3].
Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset. However, how authors choose to visualize this encoding becomes very influential in how viewers perceive these graphs. Whether authors highlight, downplay, annotate or remove the missing values determines how much confidence and credibility the viewer shows in the visualization [24].
Figure 5: The data visualization process divided into four stages to show how uncertainty affects each stage
Figure 6: Free Speech, a graph by the New York Times based on a national poll including 1,507 U.S residents [16]
### Visualization Creation
Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control it. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on the data type, which may lead them to choose the scaling, sorting, ordering, and aesthetics [27]. Compelling visualizations are accurate and suggest an understanding and interpretation of the data. Hence, it is the author's responsibility to analyze the data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts. However, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard; when we add uncertainty representation, the task becomes much more complex [17]. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to their charts. Authors are aware of how significant uncertainty visualization is. Yet, they choose to exclude uncertainty when they design their charts for various reasons discussed below.
#### 3.2.1 Uncertainty is hard to represent
Though data is replete with uncertainty, the difficulty lies in determining if it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, then it may not be included in the visualization. But this is not a conclusion that authors can quickly draw. The rise in techniques of visualizing uncertainty can make it harder for authors to decide which one to choose from. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [20]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data.
#### 3.2.2 Uncertainty is hard to calculate and verify
Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset. Verifying if the presented uncertainty is correct is challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust.
#### 3.2.3 Viewers may be overwhelmed
[9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand. As a result, viewers may choose to either look at an alternative graph that does not contain any uncertainty representation or overlook the uncertainty in their graph altogether.
#### 3.2.4 Uncertainty can add clutter to the visualization
Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth. Uncertainty tends to challenge that notion by obfuscating the signal. Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented or not. An increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer. In such a case, representing uncertainty adds visual overload [20].
### Visualization Inference
Uncertainty is hard to understand and analyze. When faced with an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy strategy viewers tend to use is to ignore the uncertainty in the graph altogether. Another is to substitute tricky calculations with easy ones or use heuristics to make decisions. However, this may not always yield a correct observation. The most common approach to show uncertainty is by using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provides a better understanding.
Currently, research is being done to create visualizations that help understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer. However, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space for visuals.
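A minimal sketch contrasting the two encodings with made-up numbers: a conventional error bar versus a set of hypothetical outcome draws (in an actual HOP the draws would be animated one frame at a time).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
mean, sem = 4.2, 0.6              # toy point estimate and its standard error

fig, axes = plt.subplots(1, 2, figsize=(7, 3), sharey=True)

# Conventional encoding: a point estimate with a 95% error bar.
axes[0].errorbar([0], [mean], yerr=1.96 * sem, fmt="o", capsize=4)
axes[0].set_title("Mean with 95% error bar")

# HOPs idea: show individual hypothetical draws instead of a summary interval.
draws = rng.normal(mean, sem, size=20)
axes[1].plot(np.zeros_like(draws), draws, "o", alpha=0.4)
axes[1].set_title("20 hypothetical outcomes")
plt.show()
```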
While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant setbacks in uncertainty visualizations for authors is calculating uncertainty, while for viewers, it is graphical literacy. Efforts can be taken to increase this literacy through different programs gradually. Furthermore, work should be done to understand what visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, it is necessary for graphs published in newspapers and reports to be easily understandable by the public. Hence, studies focusing on visualizing uncertainty with no prior knowledge or information can be very insightful.
## 4 Conclusion
Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process. The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when inferring from such a graph. Lastly, I discussed a few state-of-the-art methods to design uncertainty visualization and offered a glance into the interesting future research this field has to offer.
| ## Understanding and evaluating uncertainty play a key role in decision-making.
When a visualization demands inference, it is necessary that uncertainty is represented in it. This paper explains the importance of uncertainty in visualizations. It outlines the basics of uncertainty visualization and the challenges that authors and viewers face when working with uncertainty. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, and evaluate how uncertainty affects each of them. Next, I investigate the methodologies that authors use to process and design uncertainty. Finally, I contribute by exploring future directions for uncertainty visualization.
2306.17433 | Genus one $H$-surfaces with $k$-ends in $\mathbb{H}^2\times\mathbb{R}$ | We construct two different families of properly Alexandrov-immersed surfaces
in $\mathbb{H}^2\times \mathbb{R}$ with constant mean curvature $0<H\leq \frac
1 2$, genus one and $k\geq2$ ends ($k=2$ only for one of these families). These
ends are asymptotic to vertical $H$-cylinders for $0<H<\frac 1 2$. This shows
that there is not a Schoen-type theorem for immersed surfaces with positive
constant mean curvature in $\mathbb{H}^2\times\mathbb{R}$. These surfaces are
obtained by means of a conjugate construction. | Jesús Castro-Infantes, José S. Santiago | 2023-06-30T07:11:44 | http://arxiv.org/abs/2306.17433v1 | # Genus one \(H\)-surfaces with \(k\)-ends in \(\mathbb{H}^{2}\times\mathbb{R}\)
###### Abstract.
We construct two different families of properly Alexandrov-immersed surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\) with constant mean curvature \(0<H\leq\frac{1}{2}\), genus one and \(k\geq 2\) ends (\(k=2\) only for one of these families). These ends are asymptotic to vertical \(H\)-cylinders for \(0<H<\frac{1}{2}\). This shows that there is not a Schoen-type theorem for immersed surfaces with positive constant mean curvature in \(\mathbb{H}^{2}\times\mathbb{R}\). These surfaces are obtained by means of a conjugate construction.
## 1. Introduction
In 1983, R. Schoen [33] proved that the unique complete immersed minimal surfaces in \(\mathbb{R}^{3}\) with finite total curvature and two embedded ends are the catenoids. Concerning surfaces with constant mean curvature \(H>0\) (\(H\)-surfaces in the sequel), Korevaar, Kusner and Solomon [13] proved that any complete properly embedded \(H\)-surface in \(\mathbb{R}^{3}\) with two ends is a rotationally invariant Delaunay surface. If we drop the hypothesis of being properly embedded, we have that Kapouleas [12] constructed immersed \(H\)-surfaces in \(\mathbb{R}^{3}\) with two ends and genus \(g\geq 2\).
Korevaar, Kusner, Meeks and Solomon [14] proved analogous results in the hyperbolic space \(\mathbb{H}^{3}\) showing that the only properly embedded \(H\)-surfaces in \(\mathbb{H}^{3}\) with two ends and \(H>1\) are the hyperbolic Delaunay surfaces. In \(\mathbb{H}^{3}\), \(H\)-surfaces with \(H=1\) are known as Bryant surfaces and the value \(H=1\) is known as _critical_ in the literature since surfaces with subcritical, critical, supercritical mean curvature usually have different geometric features. Levitt and Rosenberg [15] proved that a complete Bryant surface in \(\mathbb{H}^{3}\) with asymptotic boundary consisting of at most two points must be a surface of revolution. Again, if we remove the hypothesis of being properly embedded, Rossman and Sato [32] have constructed properly immersed Bryant surfaces with genus one and two ends, each of them asymptotic to a point in the ideal boundary of \(\mathbb{H}^{3}\).
In the product space \(\mathbb{H}^{2}\times\mathbb{R}\), Hauswirth, Nelli, Sa Earp and Toubiana proved in [10] a Schoen-type theorem, showing that the horizontal catenoids are the unique properly immersed minimal surfaces with finite total curvature and two embedded ends, each of them asymptotic to a vertical plane. Later, Hauswirth, Menezes and Rodriguez [9] removed the hypothesis of having finite total curvature. Manzano and Torralbo [17] showed that there are no properly immersed surfaces with \(0\leq H\leq\frac{1}{2}\) at a bounded distance from a horizontal geodesic. For supercritical \(H\)-surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\), that is, \(H>\frac{1}{2}\), Mazet [21] proved that a properly embedded \(H\)-surface with \(H>\frac{1}{2}\) and finite topology that is cylindrically bounded (with respect to a vertical geodesic) must be a rotational Delaunay surface. In this article, we prove the following result concerning the subcritical and critical case (\(0<H\leq\frac{1}{2}\)), showing that there is no Schoen-type theorem for immersed \(H\)-surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\).
**Theorem 1.1**.: _There exists a family of properly immersed genus-one \(H\)-surfaces with \(0<H\leq\frac{1}{2}\) and two ends. If \(H<\frac{1}{2}\), each of these ends is asymptotic to a vertical \(H\)-cylinder from the convex side._
It seems reasonable that, if we replace the property of being properly immersed by properly embedded, the unique complete \(H\)-surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\) with two ends asymptotic to a vertical \(H\)-cylinder (\(H<\frac{1}{2}\)) should be the \(H\)-catenoids and the embedded \(H\)-catenodoids constructed in [27, 3].
Our genus-one \(H\)-surfaces with two ends belong to a larger family of examples. In fact, we construct two different families of highly symmetric properly Alexandrov-embedded surfaces with genus one. The first family is called \((H,k)\)-noids with genus one and they have \(k\geq 3\) ends, each of them asymptotic to a vertical \(H\)-cylinder from the concave side (only for \(H<\frac{1}{2}\)), see Theorem 3.7. Moreover, we prove that the \((H,k)\)-noids are embedded for \(H>\frac{1}{2}\cos(\frac{\pi}{k})\), see Proposition 3.8. The second family is called \((H,k)\)-nodoids with genus one and they have \(k\geq 2\) ends, each of them asymptotic to a vertical \(H\)-cylinder from the convex side (only for \(H<\frac{1}{2}\)).
## 2. Preliminaries

We
consider the product compactification for \(\mathbb{H}^{2}\times\mathbb{R}\). Then, the asymptotic boundary of \(\widetilde{\mathrm{SL}}_{2}(\mathbb{R})\), denoted by \(\partial_{\infty}\widetilde{\mathrm{SL}}_{2}(\mathbb{R})\), is homeomorphic to the vertical asymptotic boundary \(\partial_{\infty}\mathbb{H}^{2}(\kappa)\times\mathbb{R}\) joint with the horizontal asymptotic boundaries \(\mathbb{H}^{2}\times\{\pm\infty\}\). In this setting, we will say that a point \(p\in\partial_{\infty}\widetilde{\mathrm{SL}}_{2}(\mathbb{R})\) belongs to the asymptotic boundary of a surface \(\Sigma\subset\widetilde{\mathrm{SL}}_{2}(\mathbb{R})\) if there exists a divergent sequence \(\{p_{n}\}\) in \(\Sigma\) that converges to \(p\) in the product compactification. Eventually, in the case of \(\mathrm{Nil}_{3}\) (\(\kappa=0\)), we will refer to the ideal horizontal boundaries as \(\mathbb{R}^{2}\times\{\pm\infty\}\); in this case, the ideal vertical boundary is not well defined.
### Minimal graphs in \(\mathbb{E}(\kappa,\tau)\)
A vertical graph is a section of the submersion \(\pi:\mathbb{E}(\kappa,\tau)\to\mathbb{M}^{2}(\kappa)\) defined over a domain \(U\) in \(\mathbb{M}^{2}(\kappa)\). If we consider the model (2.1) and the zero section \(F_{0}(x,y)=(x,y,0)\), we can parameterize this section in terms of a function \(u:U\to\mathbb{R}\) as
\[F_{u}(x,y)=(x,y,u(x,y)),\ \ (x,y)\in U. \tag{2.3}\]
If \(u\in C^{2}(U)\), the mean curvature of this vertical graph is computed as
\[2H=\mathrm{div}\left(\frac{Gu}{\sqrt{1+|Gu|^{2}}}\right), \tag{2.4}\]
where \(\mathrm{div}(\cdot)\) and \(|\cdot|\) are the divergence and the norm in \(\mathbb{M}^{2}(\kappa)\), respectively, and \(Gu\) is the generalized gradient (see also [8]) given by
\[Gu=(u_{x}\lambda^{-2}+\tau y\lambda^{-1})\partial_{x}+(u_{y}\lambda^{-2}-\tau x \lambda^{-1})\partial_{y}.\]
Let \(\Sigma\) be the minimal graph of \(u:U\subset\mathbb{M}^{2}(\kappa)\to\mathbb{R}\). We define the _flux along an arc_\(c\subset U\) as
\[\mathcal{F}(\Sigma,c)=\int_{c}\left\langle\frac{Gu}{\sqrt{1+|Gu|^{2}}},-J_{ \mathbb{M}^{2}}\frac{c^{\prime}}{|c^{\prime}|}\right\rangle_{\mathbb{M}^{2}}, \tag{2.5}\]
where \(J_{\mathbb{M}^{2}}\) represents the counter-clockwise rotation of angle \(\frac{\pi}{2}\) in \(T\mathbb{M}^{2}(\kappa)\) and therefore \(-J_{\mathbb{M}^{2}}\frac{c^{\prime}}{|c^{\prime}|}\) is a unitary normal vector to \(c\). Observe that, by the Divergence Theorem, if \(c\) is a simple and closed curve such that \(F_{u}(c)\) encloses a minimal disk in \(\Sigma\), then \(\mathcal{F}(\Sigma,c)=0\), see also [22, 34, 23]. If \(\Sigma\) has boundary, \(c\) is a convex arc of this boundary and \(u\) is continuous on \(c\), we can also define the flux in \(c\subset\partial U\), see [34, Lemma 6.2]; in that case \(-J_{\mathbb{M}^{2}}\frac{c^{\prime}}{|c^{\prime}|}\) is a unitary conormal. Moreover, if \(c\) is a geodesic arc of \(\mathbb{M}^{2}\) and \(u\) takes the asymptotic values \(\pm\infty\) over \(c\), then \(\mathcal{F}(\Sigma,c)=\pm|c|\) depending on whether \(-J_{\mathbb{M}^{2}}\frac{c^{\prime}}{|c^{\prime}|}\) is an inward or an outward conormal vector, see [34, Lemma 6.3].
**Proposition 2.1**.: _Let \(\Sigma\) be a minimal graph with boundary in \(\mathbb{E}(\kappa,\tau)\) and let \(\gamma\subset\partial\Sigma\) be a curve parameterized by arc-length, then_
\[\mathcal{F}(\Sigma,\pi(\gamma))=\int_{\gamma}\langle-J\gamma^{\prime},\xi\rangle, \tag{2.6}\]
_where \(J\) is the rotation of angle \(\frac{\pi}{2}\) in \(T\Sigma\), such that \(\{\gamma^{\prime},J\gamma^{\prime},N\}\) is a positively oriented orthonormal frame._
Proof.: Let \(u:U\to\mathbb{R}\) be the function defining the vertical graph \(\Sigma\). Assume that \(\gamma:[a,b]\to\Sigma\), \(\gamma(s)=(x(s),y(s),u(x(s),y(s)))\) is parameterized by arc-length. The upward normal along \(\gamma\) is
\[N=\frac{1}{\alpha}\left(-(\tau\lambda y+u_{x})E_{1}+(\tau\lambda x-u_{y})E_{2} +\lambda\xi\right),\]
where \(\alpha^{2}=\lambda^{2}+(\tau\lambda y+u_{x})^{2}+(\tau\lambda x-u_{y})^{2}\). By a straightforward computation, we get that
\[\langle-J\gamma^{\prime},\xi\rangle=\langle N\wedge\gamma^{\prime},\xi\rangle= \frac{\lambda}{\alpha}\left(x^{\prime}(\tau\lambda x-u_{y})+y^{\prime}(\tau \lambda y+u_{x})\right).\]
On the other hand, denoting \(c=\pi(\gamma)=(x,y)\) and \(X_{u}=\frac{G_{u}}{\sqrt{1+|Gu|^{2}}}\), we easily obtain that
\[\left\langle X_{u},-J_{\mathbb{M}^{2}}\frac{c^{\prime}}{|c^{\prime}|}\right\rangle_{\mathbb{M}^{2}}=\frac{1}{|c^{\prime}|}\langle X_{u},-y^{\prime}\partial_{x}+x^{\prime}\partial_{y}\rangle_{\mathbb{M}^{2}}=\frac{\lambda}{|c^{\prime}|\alpha}\left(x^{\prime}(\tau\lambda x-u_{y})+y^{\prime}(\tau\lambda y+u_{x})\right).\]
Therefore we get the desired equation (2.6).
Again we have that Proposition 2.1 holds true when \(\gamma\subset\partial\Sigma\) and \(u\) is continuous in \(\gamma\) or when \(\gamma\) is a horizontal geodesic in \(\partial_{\infty}\Sigma\) (we are identifying \(-J\gamma^{\prime}\) with \(\pm\xi\) in the limit and the sign depends on the orientation and the asymptotic value that we take). We will also call \(\mathcal{F}(\Sigma,\gamma)=\int_{\gamma}\langle-J\gamma^{\prime},\xi\rangle\) for a curve \(\gamma\subset\partial\Sigma\). It allows us to define the flux along a curve for minimal surfaces in general and not only for minimal graphs since Equation (2.5) only depends on the normal of the curve. Moreover, observe that, if the angle function \(\nu\) vanishes along \(\gamma\), then \(\mathcal{F}(\Sigma,\gamma)=\pm|\pi(\gamma)|\).
Assume that \(\kappa<0\) and let \(\gamma_{1}\) and \(\gamma_{2}\) be two convex embedded arcs in \(\mathbb{H}^{2}(\kappa)\) with vertex on the same ideal point \(q_{0}\in\partial_{\infty}\mathbb{H}^{2}(\kappa)\). We say these arcs are _asymptotic_ at \(q_{0}\) if \(\operatorname{dist}(q,\gamma_{i})\to 0\) as \(q\to q_{0}\) with \(q\in\gamma_{j}\) and \(j\neq i\). We will show in the next proposition that the Generalized Maximum Principle [5, Theorem 2] easily extends to \(\widetilde{\operatorname{SL}}_{2}(\mathbb{R})\).
**Proposition 2.2** (Generalized Maximum Principle).: _Let \(\Omega\subset\mathbb{H}^{2}(\kappa)\) be an unbounded piecewise regular domain such that \(\partial\Omega\cap\partial_{\infty}\mathbb{H}^{2}(\kappa)\) is finite and every \(q\in\partial\Omega\cap\partial_{\infty}\mathbb{H}^{2}(\kappa)\) is the endpoint of exactly two asymptotic arcs in \(\partial\Omega\). Let \(U\subset\Omega\) be a domain and \(u,v\in C^{0}(\overline{U})\) functions that define minimal graphs over \(U\), \(\Sigma_{u}\) and \(\Sigma_{v}\), respectively. If \(\Sigma_{u}\) is below \(\Sigma_{v}\) on \(\partial U\), i.e., \(u\leq v\) on \(\partial U\), then \(\Sigma_{u}\) is below \(\Sigma_{v}\) on \(U\)._
Proof.: We will argue by contradiction. Let us suppose the set \(A=\{p\in U:\;u(p)>v(p)\}\) is not empty. By the maximum principle for compact domains proved in [34], we have that \(\overline{A}\) cannot be compact. Without loss of generality, we consider that \(A\) is a connected component since we can reason in the same way in each connected component. So we know that \(\partial A\) is composed of arcs going into ideal points of \(\Omega\), i.e., \(\partial A\) has \(n\geq 1\) ideal points. By hypothesis, each ideal point \(q_{i}\), \(i=1,\ldots,n\), is the endpoint of two asymptotic arcs. Each small horocycle \(\mathcal{H}_{i}\) containing \(q_{i}\) intersects once each asymptotic arc with vertex \(q_{i}\). We define a cycle \(\alpha:[a,b]\to\overline{A}\) and consider a partition \(a=t_{0}<t_{1}<...<t_{2n}=b\) where \(\alpha(a)=\alpha(b)\), see Figure 1. We define \(\alpha\) as a piecewise continuous closed curve where \(h_{i}=\alpha|_{[t_{2i},t_{2i+1}]}\) are curves in \(\mathcal{H}_{i}\) joining the two points contained in \(\mathcal{H}_{i}\cap\partial A\) and \(c_{i}=\alpha|_{[t_{2i+1},t_{2i+2}]}\) are arcs in \(\partial A\) for \(0\leq i\leq n-1\), see Figure 1. By the Divergence Theorem, the flux of the function \(u-v\) over the cycle \(\alpha\) is \(0\). By Proposition 2.1, the flux is bounded by the length of \(h_{i}\) on each arc \(h_{i}\). As every point in \(\partial A\cap\partial_{\infty}\mathbb{H}^{2}(\kappa)\) has exactly two asymptotic arcs, for all \(\epsilon>0\), we can choose small horocycles such that \(\sum_{i}|h_{i}|<\epsilon\).
On the arcs belonging to \(\partial A\), we know that \(u=v\) and we have \(G(u-v)=\lambda\eta\) as far as \(u-v>0\) on \(A\), where \(\eta\) is a vector field perpendicular to the arcs on \(\partial A\). If \(\lambda=0\) at any point, then \(\nabla u=\nabla v\) and \(u=v\) at this point and, by the maximum principle at the boundary, both surfaces should be the same. Consequently, \(\lambda\) has sign and we can assume that \(\eta=-J\alpha^{\prime}\).
By [23, Lemma 5.1], we have that the inequality
\[\left\langle G(u-v),\frac{Gu}{\sqrt{1+|Gu|^{2}}}-\frac{Gv}{\sqrt{1+|Gv|^{2}}} \right\rangle\geq 0, \tag{2.7}\]
is satisfied, with equality, if and only if \(\nabla u=\nabla v\). We get that
\[\left\langle G(u-v),\frac{Gu}{\sqrt{1+|Gu|^{2}}}-\frac{Gv}{\sqrt{1+|Gv|^{2}}} \right\rangle=\lambda\left\langle\eta,\frac{Gu}{\sqrt{1+|Gu|^{2}}}-\frac{Gv}{ \sqrt{1+|Gv|^{2}}}\right\rangle\]
and we deduce that \(\mathcal{F}(\Sigma,\alpha_{k})\) has sign for every \(k=0,\ldots,n-1\). Therefore,
\[0=\mathcal{F}(\Sigma,\alpha)=\sum_{i=0}^{n-1}\mathcal{F}(\Sigma,c_{i})+\sum_{i =0}^{n-1}\mathcal{F}(\Sigma,h_{i})<\sum_{i=0}^{n-1}\mathcal{F}(\Sigma,c_{i})+\epsilon.\]
Choosing \(\epsilon>0\) small enough, we obtain that every flux \(\mathcal{F}(\Sigma,\alpha_{i})\) must vanish and we have that \(\left\langle\eta,\frac{Gu}{\sqrt{1+|Gu|^{2}}}-\frac{Gv}{\sqrt{1+|Gv|^{2}}} \right\rangle=0\) along \(h_{i}\). We have that the equality in (2.7) holds, and we get a contradiction with the maximum principle in the boundary since \(u=v\) and \(\nabla u=\nabla v\) along the curves \(h_{i}\).
### The umbrella, the surface \(\mathcal{I}\) and the helicoids \(\mathcal{H}_{\infty,a_{2}}\) and \(\mathcal{H}_{a_{1},\infty}\)
* The umbrella \(\mathcal{U}_{p}\) is the minimal surface composed of all horizontal geodesics starting at \(p\). The umbrella's angle function \(\nu\) only takes the value \(1\) at \(p\). For \(\kappa\leq 0\), the umbrella centered at the origin \((0,0,0)\) is the graph of the function \(z=0\) in the cylinder. For \(\tau>0\), the graph of the
umbrella centered in \((0,y_{0},0)\) with \(y_{0}>0\), is positive in \(\{x<0\}\) and negative in \(\{x>0\}\), see Figure 2.
* The surface \(\mathcal{I}\) is the minimal surface composed of all horizontal geodesics perpendicular to a horizontal geodesic, called the axis of \(\mathcal{I}\). The angle function \(\nu\) of \(\mathcal{I}\) is only equal to \(1\) along the axis. For \(\kappa\leq 0\), the surface \(\mathcal{I}\) with axis in \(\{y=0,z=0\}\) in the cylinder model is the graph of the function \[z(x,y)=\left\{\begin{array}{ll}\tau xy&\mbox{if }\kappa=0,\\ \frac{2\tau}{\kappa}\arctan\frac{2xy}{\frac{4}{\kappa}+x^{2}-y^{2}}&\mbox{if } \kappa<0,\end{array}\right.\] see Figure 2. We can express \(\mathcal{I}\) in polar coordinates \((r,\theta)\), that is, \(x=r\cos(\theta)\) and \(y=r\sin(\theta)\), where \(r\) is the distance to the origin in \(\mathbb{M}^{2}(\kappa)\) and \(\theta\) is the angle formed with the axis \(\{y=0\}\). \[z(r,\theta)=\left\{\begin{array}{ll}\frac{\tau}{2}r^{2}\sin(2\theta)&\mbox{ if }\kappa=0,\\ \frac{2\tau}{\kappa}\arctan\frac{\tanh^{2}\left(\frac{r}{2}\right)\sin(2 \theta)}{-\kappa(1-\tanh^{2}(r)\cos(2\theta))}&\mbox{if }\kappa<0,\end{array}\right. \tag{2.8}\]
Figure 1. The domain \(U\) (blue) and the domain \(A\) (black) in the proof of Proposition 2.2.
Figure 2. The umbrella centered at \((0,y_{0},0)\) and its projection (left) and the surface \(\mathcal{I}\) and its projection (right) for \(\kappa<0\) and \(\tau>0\).
Notice that, when \(\kappa<0\), the limit \(\lim_{r\to\infty}z(r,\theta)=\frac{2\tau}{\kappa}(\frac{\pi}{2}-\theta)\) exists for all \(\theta\in(0,\pi)\). When \(\kappa=0\), this limit is \(\pm\infty\) for all \(\theta\neq\frac{k\pi}{2}\), \(k\in\mathbb{Z}\). A short verification, via Equation (2.4), that the graph \(z=\tau xy\) is indeed minimal when \(\kappa=0\) is included after this list.
* The horizontal helicoids \(\mathcal{H}_{\infty,a_{2}}\) and \(\mathcal{H}_{a_{1},\infty}\) with \(a_{1},a_{2}>0\) are two families of complete properly embedded minimal surfaces in \(\mathrm{Nil}_{3}\) foliated by straight lines orthogonal to a horizontal geodesic \(\Gamma\), see [3, 7]. Let \(S_{2}\) (resp. \(S_{1}\)) be the strip of width \(a_{2}\) (resp. \(a_{1}\)) with edges \(l_{i}\), \(i=1,2,3\), where \(l_{2}\) is the edge of length \(a_{2}\) (resp. \(l_{1}\) is the edge of length \(a_{1}\)) and \(l_{1}\) (resp. \(l_{2}\)) and \(l_{3}\) are parallel edges orthogonal to \(l_{2}\) (resp. \(l_{1}\)), such that the boundary \(\partial S_{1}=l_{1}\cup l_{2}\cup l_{3}\) (resp. \(\partial S_{2}=l_{3}\cup l_{1}\cup l_{2}\)) is traveled in a negative sense, where the union \(\cup\) is written down in the same order that we travel the boundary. The fundamental piece of \(\mathcal{H}_{\infty,a_{2}}\) (resp. \(\mathcal{H}_{a_{1},\infty}\)) can be seen as a solution of a Jenkins-Serrin problem over \(S_{2}\) (resp. \(S_{1}\)) with boundary values \(0\) over \(l_{1}\) and \(l_{2}\) and \(+\infty\) over \(l_{3}\). These helicoids correspond to the family \(\mathcal{H}_{\mu}\) for \(|\mu|>\frac{1}{2}\) of [3, Section 3.2].
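As a consistency check of the formulas above (and assuming that \(\lambda\equiv 1\) on \(\mathbb{M}^{2}(0)\), so that \(\partial_{x},\partial_{y}\) are orthonormal), one can verify with Equation (2.4) that the graph \(z=\tau xy\) describing \(\mathcal{I}\) in \(\mathrm{Nil}_{3}\) is indeed minimal: for \(u(x,y)=\tau xy\) we get

\[Gu=(u_{x}+\tau y)\partial_{x}+(u_{y}-\tau x)\partial_{y}=2\tau y\,\partial_{x},\qquad|Gu|^{2}=4\tau^{2}y^{2},\]

so that

\[2H=\mathrm{div}\left(\frac{Gu}{\sqrt{1+|Gu|^{2}}}\right)=\partial_{x}\left(\frac{2\tau y}{\sqrt{1+4\tau^{2}y^{2}}}\right)+\partial_{y}(0)=0.\]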
### Preliminaries about conjugation
We give here a brief introduction to the conjugate technique that we will use to construct the \(H\)-surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\) of Section 3. We refer to [4] and the references therein for a more detailed description of the technique.
Daniel [6], and Hauswirth, Sa Earp and Toubiana [11] discovered a Lawson-type isometric correspondence between a simply connected minimal immersion \(\widetilde{\phi}:\Sigma\to\mathbb{E}(4H^{2}-1,H)\) and an \(H\)-immersion \(\phi:\Sigma\to\mathbb{H}^{2}\times\mathbb{R}\). The fundamental data \((A,T,\nu)\) of the \(H\)-immersion \(\phi:\Sigma\to\mathbb{H}^{2}\times\mathbb{R}\), where \(A\) is the shape operator, \(\nu=\langle N,\xi\rangle\) the angle function and \(T\) the tangent part of the Killing vector field \(\xi\), are related to the fundamental data \((\widetilde{A},\widetilde{T},\widetilde{\nu})\) of the minimal immersion \(\widetilde{\phi}:\Sigma\to\mathbb{E}(4H^{2}-1,H)\) by
\[(A,T,\nu)=(J\widetilde{A}+H\cdot\mathrm{id},J\widetilde{T},\widetilde{\nu}). \tag{2.9}\]
Throughout the text we will write \(\Sigma\) and \(\widetilde{\Sigma}\) to refer to the conjugate (possibly non-embedded) surfaces. Our initial minimal piece \(\widetilde{\Sigma}\subset\mathbb{E}(4H^{2}-1,H)\) is going to be a solution to a Jenkins-Serrin problem, that is, a Dirichlet problem with possibly asymptotic values \(\pm\infty\) over geodesics in \(\mathbb{M}^{2}(4H^{2}-1)\). Moreover, the boundary of \(\widetilde{\Sigma}\) will be composed of horizontal and vertical geodesics of \(\mathbb{E}(4H^{2}-1,H)\) and the asymptotic boundary will be composed of vertical ideal geodesics (only in the case of \(0\leq H<\frac{1}{2}\)) and horizontal ideal geodesics in \(\mathbb{M}^{2}(4H^{2}-1)\times\{\pm\infty\}\). In the following lemmas, we describe the conjugate curves of horizontal and vertical geodesics.
**Lemma 2.3**.: _[_4_, Lemma 3.6]_ _If \(\widetilde{\gamma}\subset\partial\widetilde{\Sigma}\) is a horizontal geodesic, then the conjugate curve \(\gamma\subset\partial\Sigma\) is a symmetry curve of \(\Sigma\) contained in a vertical plane of \(\mathbb{H}^{2}\times\mathbb{R}\). Moreover, if \(\gamma=(\beta,z)\subset\mathbb{H}^{2}\times\mathbb{R}\), then \(|\beta^{\prime}|=|\nu|\) and \(|z^{\prime}|=\sqrt{1-\nu^{2}}\)._
Assume now that \(\widetilde{\gamma}:I\to\partial\widetilde{\Sigma}\) is a vertical geodesic parameterized such that \(\widetilde{\gamma}^{\prime}=\xi\) and write the normal \(\widetilde{N}_{\widetilde{\gamma}}=\cos(\theta)E_{1}+\sin(\theta)E_{2}\) for some function \(\theta\in C^{\infty}(I)\) called _the angle of rotation of \(\widetilde{N}\) along \(\widetilde{\gamma}\)_.
**Lemma 2.4**.: _[_4_, Lemma 3.7]_ _If \(\widetilde{\gamma}\subset\partial\widetilde{\Sigma}\) is a vertical geodesic, then the conjugate curve \(\gamma\subset\partial\Sigma\) is a symmetry curve of \(\Sigma\) contained in a horizontal plane \(\mathbb{H}^{2}\times\{z_{0}\}\)._
1. _The curve_ \(\gamma\) _has geodesic curvature_ \(k_{g}=2H-\theta^{\prime}\) _with respect to_ \(N\)_._
2. _Assume that_ \(\nu>0\) _in the interior of_ \(\Sigma\) _and let_ \(\widetilde{\Omega}\) _and_ \(\Omega\) _be the (possibly non-embedded) domains over which_ \(\widetilde{\Sigma}\) _and_ \(\Sigma\) _project as multigraphs, then:_ * _If_ \(\theta^{\prime}>0\)_, then_ \(J\widetilde{\gamma}^{\prime}\) _(resp._ \(J\gamma\)_) is a unit outer conormal to_ \(\widetilde{\Sigma}\) _along_ \(\widetilde{\gamma}\) _(resp._ \(\gamma\)_),_ \(N\) _points to the interior of_ \(\Omega\) _along_ \(\gamma\) _and_ \(\Sigma\) _lies in_ \(\mathbb{H}^{2}\times(-\infty,z_{0}]\) _locally around_ \(\gamma\)_._ * _If_ \(\theta^{\prime}<0\)_, then_ \(J\widetilde{\gamma}^{\prime}\) _(resp._ \(J\gamma^{\prime}\)_) is a unit inner conormal to_ \(\widetilde{\Sigma}\) _along_ \(\widetilde{\gamma}\) _(resp._ \(\gamma\)_),_ \(N\) _points to the exterior of_ \(\Omega\) _along_ \(\gamma\) _and_ \(\Sigma\) _lies in_ \(\mathbb{H}^{2}\times[z_{0},+\infty)\) _locally around_ \(\gamma\)_._
Let us consider the half-space model for \(\mathbb{H}^{2}\times\mathbb{R}\), whose metric is given by \(ds^{2}=y^{-2}(dx^{2}+dy^{2})+dz^{2}\), and also consider the positively oriented orthonormal frame \(\{E_{1}=y\partial_{x},E_{2}=y\partial_{y},E_{3}=\partial_{z}\}\). Let \(\gamma:I\to\mathbb{H}^{2}\times\{z_{0}\}\) be the conjugate curve of a vertical geodesic \(\widetilde{\gamma}\) in \(\mathbb{E}(4H^{2}-1,H)\) parameterized as \(\widetilde{\gamma}^{\prime}=\xi\). Since \(\gamma\) is contained in a horizontal plane, there exists a smooth function \(\psi\in C^{\infty}(I)\) such that \(\gamma^{\prime}(t)=\cos(\psi(t))E_{1}+\sin(\psi(t))E_{2}\). The function \(\psi\) is called _the angle of rotation of \(\gamma\) with respect to a foliation by horocycles_; it is related to the function \(\theta\) by the following equation (see [4, Remark 3.8]):
\[\psi^{\prime}+\cos(\psi)=\theta^{\prime}-2H. \tag{2.10}\]
_Remark 1_.: In Formula (2.10) we are assuming that the curve \(\gamma\) is parameterized in the direction such that \(\widetilde{\gamma}^{\prime}=\xi\) and the angle of rotation with respect to the horocycle foliation is measured with regard to the orientation given by \(\gamma^{\prime}\) (the unit tangent of the conjugate curve). However, if we measure the angle of rotation with respect to the horocycle foliation using the contrary orientation for \(\gamma\), formula (2.10) changes to \(-\psi^{\prime}-\cos(\psi)=\theta^{\prime}-2H\).
The following result describes the conjugate curves of the asymptotic boundary of a graph solution to a Jenkins-Serrin problem.
**Lemma 2.5**.: _[_4_, Proposition 4.6]_ _Assume that \(4H^{2}-1<0\) and let \(\widetilde{\Sigma}\subset\mathbb{E}(4H^{2}-1,H)\) be a solution of a Jenkins-Serrin problem with asymptotic boundary consisting of vertical and horizontal ideal geodesics. Let \(\Sigma\subset\mathbb{H}^{2}\times\mathbb{R}\) be the conjugate \(H\)-multigraph._
* _Ideal vertical geodesics in_ \(\partial_{\infty}\widetilde{\Sigma}\) _become ideal horizontal curves in_ \(\partial_{\infty}\Sigma\) _with constant curvature_ \(\pm 2H\) _in_ \(\mathbb{H}^{2}\times\{\pm\infty\}\)_._
* _Ideal horizontal geodesics in_ \(\partial_{\infty}\widetilde{\Sigma}\) _become ideal vertical geodesics of_ \(\partial_{\infty}\Sigma\)_._
## 3. Conjugate construction of \((H,k)\)-noids and \((H,k)\)-nodoids with genus one.
### The initial minimal graph of the conjugate construction
We will describe a family of minimal graphs in \(\mathbb{E}(4H^{2}-1,H)\) for \(0<H\leq\frac{1}{2}\) and use this family in our conjugate construction to obtain the desired \((H,k)\)-noids and \((H,k)\)-nodoids with genus one inspired by the results of [2].
We consider the geodesic triangle \(\widetilde{\Delta}(a_{1},a_{2},\varphi)\subset\mathbb{M}^{2}(4H^{2}-1)\) with vertexes \(p_{0}\), \(p_{1}\) and \(p_{2}\). We assume that \(p_{0}=(0,0)\) and the sides \(l_{1}=\overline{p_{0}p_{1}}\), \(l_{2}=\overline{p_{0}p_{2}}\) have lengths \(a_{1}\in(0,\infty]\) and \(a_{2}\in(0,\infty]\) respectively (not both equal to \(\infty\)). The angle \(0<\varphi<\frac{\pi}{2}\) is the counter-clockwise oriented angle in \(p_{0}\) going from the side \(l_{1}\) to \(l_{2}\), see Figure 3 (down). We call \(l_{3}=\overline{p_{1}p_{2}}\).
In the case \(0<H<\frac{1}{2}\), if \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)), we have that \(\widetilde{\Delta}(a_{1},a_{2},\varphi)\subset\mathbb{H}^{2}(4H^{2}-1)\) is a semi-ideal triangle with an ideal vertex \(p_{1}\) (resp. \(p_{2}\)). In the case \(H=\frac{1}{2}\), the domain \(\widetilde{\Delta}(a_{1},a_{2},\varphi)\) is contained in \(\mathbb{R}^{2}\) and we do not have an asymptotic boundary. Consequently, if \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)) the point \(p_{1}\) (resp. \(p_{2}\)) disappears and \(\widetilde{\Delta}(a_{1},a_{2},\varphi)\) is a strip defined by the geodesic \(l_{2}\) (resp. \(l_{1}\)) and the parallel rays \(l_{1}\) (resp. \(l_{2}\)) and \(l_{3}\). The latter rays are defined as the limits of the sides \(l_{1}\) (resp. \(l_{2}\)) and \(l_{3}\) where \(a_{1}\to\infty\) (resp. \(a_{2}\to\infty\)), see Figures 4 and 5 for the cases \(a_{1}=\infty\) and \(a_{2}=\infty\) respectively.
**Lemma 3.1**.: _There exists a unique minimal graph \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\subset\mathbb{E}(4H^{2}-1,H)\) solution to the Jenkins-Serrin problem in \(\mathbb{E}(4H^{2}-1,H)\) over \(\widetilde{\Delta}(a_{1},a_{2},\varphi)\) with boundary values \(0\) over \(l_{1}\), \(b\in\mathbb{R}\) over \(l_{2}\) and \(+\infty\) over \(l_{3}\)._
Proof.: If \(a_{1}\) and \(a_{2}\) are both finite then the results follow from [34] for \(H<\frac{1}{2}\) and from [28] for \(H=\frac{1}{2}\) (see also [8] for more general results in a Killing submersion).
We deal with the case \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)). We consider a sequence \(\{q_{n}\}_{n\in\mathbb{N}}\subset l_{1}\) (resp. \(\{q_{n}\}_{n\in\mathbb{N}}\subset l_{2}\)) with \(n\) being the distance of \(q_{n}\) to \(p_{0}\). By [28, 34, 8], for any \(q_{n}\in l_{1}\) (resp. \(q_{n}\in l_{2}\)), there exists a minimal graph \(\widetilde{\Sigma}_{n}:=\widetilde{\Sigma}_{\varphi}(n,a_{2},b)\) (resp. \(\widetilde{\Sigma}_{n}:=\widetilde{\Sigma}_{\varphi}(a_{1},n,b)\)) over the triangle with vertexes \(q_{n}\), \(p_{0}\) and \(p_{2}\) (resp. \(p_{1}\)) with boundary data \(0\) over \(\overline{p_{0}q_{n}}\) (resp. \(\overline{p_{0}p_{1}}\)), \(b\) over \(\overline{p_{0}p_{2}}\) (resp. \(\overline{p_{0}q_{n}}\)) and \(+\infty\) over \(\overline{p_{2}q_{n}}\) (resp. \(\overline{q_{n}p_{1}}\)). Comparing their boundary values, Proposition 2.2 ensures, that the graphs defining \(\widetilde{\Sigma}_{n}\) form a decreasing sequence of functions.
To prove the existence of these surfaces, we find upper and lower bounds (in each case) that ensure that the limit surface takes the desired limit values. We consider \(\widetilde{\Sigma}^{\prime}\) the minimal graph with boundary values \(\min\{0,b\}\) over \(l_{1}\cup l_{2}\) and \(+\infty\) over \(l_{3}\), this surface exists by [3, Lemma 3.2] for \(H<\frac{1}{2}\) and by [3, Lemma 3.6] for \(H=\frac{1}{2}\) adapting the argument for an angle \(\varphi\) instead of \(\frac{\pi}{k}\). The surface \(\widetilde{\Sigma}^{\prime}\) takes the value \(+\infty\) over \(l_{3}\) and it is below every \(\widetilde{\Sigma}_{n}\) by Proposition 2.2. Therefore \(\widetilde{\Sigma}_{n}\) converges to a minimal graph \(\widetilde{\Sigma}_{\infty}\) that takes the desired boundary values since we have a lower bound \(\widetilde{\Sigma}^{\prime}\) taking the asymptotic value \(+\infty\) over \(l_{3}\) and an upper bound given by any graph \(\widetilde{\Sigma}_{n}\).
Finally, we notice that if \(0<H<\frac{1}{2}\), the uniqueness is a consequence of Proposition 2.2. Uniqueness for \(H=\frac{1}{2}\) is understood as the uniqueness of the limit of the sequence \(\widetilde{\Sigma}_{n}\).
_Remark 2_.: Observe that the surfaces \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) and \(\widetilde{\Sigma}_{\varphi}(a_{2},a_{1},b)\) are only congruent for \(H=0\). In general we have that \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) is congruent to the minimal graph over \(\widetilde{\Delta}(a_{2},a_{1},\varphi)\) that takes the asymptotic boundary values \(0\) over \(l_{1}\), \(b\) over \(l_{2}\) and \(-\infty\) over \(l_{3}\). We will denote this surface as \(\widetilde{\Sigma}_{\varphi}^{-}(a_{1},a_{2},b)\).
The boundary of the minimal graph \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) consists of:
* The horizontal geodesics \(\widetilde{h}_{1}\) and \(\widetilde{h}_{2}\) projecting onto the sides \(l_{1}\) and \(l_{2}\) and the asymptotic horizontal geodesic \(\widetilde{h}_{3}\subset\mathbb{M}^{2}(4H^{2}-1)\times\{+\infty\}\) projecting onto \(l_{3}\), see Figure 3.
* The vertical geodesics \(\widetilde{v}_{i}\) contained in \(\pi^{-1}(p_{i})\) for \(i=0,1,2\), see Figure 3. Observe that if \(H<\frac{1}{2}\) and \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)) then \(\widetilde{v}_{1}\) (resp. \(\widetilde{v}_{2}\)) is a semi-ideal vertical geodesic in the asymptotic boundary of \(\mathbb{E}(4H^{2}-1,H)\). If \(H=\frac{1}{2}\) and \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)), we do not have the boundary component \(\widetilde{v}_{1}\) (resp. \(\widetilde{v}_{2}\)), see Figures 4 and 5.
The following Lemma gives a description of the angle function \(\nu\) of \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\).
We define \(b^{i}_{\mathcal{I}}(a,\varphi)\) for \(i=1,2\) as the height of the surface \(\mathcal{I}\) with axis in \(\widetilde{h}_{i}\) in the cylinder model, i.e., \(b^{i}_{\mathcal{I}}(a,\varphi)=z(a,\varphi)\) in polar coordinates, see Equation (2.8). The following lemma follows from standard intersection comparison arguments (see [4, Section 3.1.2]).
**Lemma 3.2**.: _Let \(\nu\geq 0\) be the angle function of \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\). Then:_
1. _The function_ \(\nu\) _only takes the value_ \(0\) _along the vertical geodesics_ \(\widetilde{v}_{0}\)_,_ \(\widetilde{v}_{1}\) _(if_ \(a_{1}\) _finite) and_ \(\widetilde{v}_{2}\) _(if_ \(a_{2}\) _finite)._
2. _Let_ \(\theta_{i}\) _be the angles of rotation of the normal along_ \(\widetilde{v}_{i}\) _for_ \(i=0,1,2\)_. Then,_ \(\frac{b}{|b|}\theta^{\prime}_{0}>0\)_,_ \(\theta^{\prime}_{1}<0\) _(if_ \(a_{1}\) _finite) and_ \(\theta^{\prime}_{2}>0\) _(if_ \(a_{2}\) _finite)._
3. * _If \(b>0\), there is exactly one interior point \(q^{*}\) in \(\widetilde{h}_{2}\) such that \(\nu(q^{*})=1\)._
    * _If \(b\leq 0\), there are no interior points in \(\widetilde{h}_{2}\) with \(\nu=1\)._
4. * _If \(b>b^{1}_{\mathcal{I}}(a_{2},\varphi)\), there are no points with \(\nu=1\) in \(\widetilde{h}_{1}\)._
    * _If \(0<b\leq b^{1}_{\mathcal{I}}(a_{2},\varphi)\), there are, at most, two interior points \(q^{*}_{1}\) and \(q^{*}_{2}\) in \(\widetilde{h}_{1}\) such that \(\nu(q^{*}_{i})=1\), \(i=1,2\)._
    * _If \(b\leq 0\), there is exactly one interior point \(q^{*}\) in \(\widetilde{h}_{1}\) such that \(\nu(q^{*})=1\)._
Proof.:
1. We have that \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) is a multigraph up to the boundary, and \(\nu\) cannot be equal to zero on the horizontal geodesics \(\widetilde{h}_{1}\) and \(\widetilde{h}_{2}\) by the boundary maximum principle with respect to vertical planes. Then \(\nu\) only takes the value zero along the vertical geodesics of the boundary.
2. It follows easily by looking at the normal in the intersection between the horizontal and vertical geodesics of the boundary (see Figure 3) and taking into account that the normal along the vertical segments rotates monotonically.
3. Assume that \(b>0\). As \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) is a vertical graph and we are assuming that the angle function \(\nu\) is positive in the interior, we deduce that the horizontal normal in \(\widetilde{h}_{2}\cap\widetilde{v}_{0}\) points to the interior of \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\), i.e., if we divide the space by the vertical plane that contains \(\widetilde{h}_{2}\), the horizontal normal points to the region that contains the surface, see Figure 3. In the same way, we deduce that the horizontal normal at \(\widetilde{h}_{2}\cap\widetilde{v}_{2}\) points to the exterior of \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\), i.e., if we divide the space by the vertical plane that contains \(\widetilde{h}_{2}\), the horizontal normal points to the region that does not contain the surface. Then, by continuity, there exists a point \(q^{*}\) in \(\widetilde{h}_{2}\) where \(\nu(q^{*})=1\). We will show that there is at most one point of \(\widetilde{h}_{2}\) where \(\nu=1\). Assume by contradiction that there exist \(q_{1}\), \(q_{2}\in\widetilde{h}_{2}\) where \(\nu(q_{1})=\nu(q_{2})=1\). We consider the surface \(\mathcal{I}\) with axis containing \(\widetilde{h}_{2}\). The surface \(\mathcal{I}\) is tangent to \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) at each \(q_{i}\). Then, for each \(q_{i}\) there is a curve \(c_{i}\) through \(q_{i}\) in \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\cap\mathcal{I}\) different from \(\widetilde{h}_{2}\). Since \(c_{i}\) can only end in \(\widetilde{h}_{2}\), and at only one point in \((\widetilde{h}_{1}\cup\widetilde{v}_{1})\cap\mathcal{I}\) (the surface \(\mathcal{I}\) is radially decreasing in this region), then the curves \(c_{i}\) joint with \(\widetilde{h}_{1}\) necessarily enclose a compact loop or a domain in the hypothesis of Proposition 2.2. Applying Proposition 2.2 to \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) and \(\mathcal{I}\) in that region, we achieve a contradiction. Suppose that \(b\leq 0\) and there is a point \(q^{*}\in\widetilde{h}_{2}\) such that \(\nu(q^{*})=1\). In that case, we have that the surface \(\mathcal{I}\) with axis in \(\widetilde{h}_{2}\) is below \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) and tangent at \(q^{*}\), which contradicts the maximum principle at the boundary.
4. Assume that \(b>b^{1}_{\mathcal{I}}(a_{2},\varphi)\) (if \(a_{2}=\infty\), the argument is only valid for \(H<\frac{1}{2}\)) and there is one point \(q\in\widetilde{h}_{1}\) where \(\nu(q)=1\). The surface \(\mathcal{I}\) with axis containing \(\widetilde{h}_{1}\) is tangent to \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) at \(q\) and \(\mathcal{I}\cap\partial\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)=\widetilde{ h}_{1}\) since \(b>b^{1}_{\mathcal{I}}(a_{2},\varphi)\), by the maximum principle, we know there is a curve \(c\) on \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\cap\mathcal{I}\) different from \(\widetilde{h}_{1}\). This curve must enclose a compact loop or a domain in the hypothesis of Proposition 2.2. Assume now that \(0<b<b^{1}_{\mathcal{I}}(a_{2},\varphi)\), the surface \(\mathcal{I}\) with axis in \(\widetilde{h}_{1}\) intersects exactly twice \(\partial\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) for \(0<b<b^{1}_{\mathcal{I}}(a_{2},\varphi)\) and once for \(b=b^{1}_{\mathcal{I}}(a_{2},\varphi)\) because \(\mathcal{I}\) is radially increasing along \(l_{2}\). Assume by contradiction that there are three points \(q_{1}\), \(q_{2}\) and \(q_{3}\) in \(\widetilde{h}_{1}\) such that \(\nu(q_{i})=1\). Then there are three curves in \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\cap\mathcal{I}\) that start at each point \(q_{i}\). If there exists a curve that encloses a loop or a domain in the hypothesis of Proposition 2.2, we have a contradiction. If such a curve does not exist, then two of these three curves end up at the same point of the boundary of \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) so we have a contradiction again by the maximum principle. If \(b\leq 0\), the surface \(\mathcal{I}\) with axis in \(\widetilde{h}_{1}\) intersects once \(\partial\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\). Reasoning as before, we find a contradiction if we assume that there are two points \(q_{1}\) and \(q_{2}\) in \(\widetilde{h}_{1}\) such that \(\nu(q_{i})=1\) for \(i=1,2\). Either there exists a curve that encloses a loop or a domain in the hypothesis of Proposition 2.2, so we have a contradiction. On the other hand, the normal points to opposite directions in the extremes of \(\widetilde{h}_{1}\), see Figure 3. By continuity, there exists exactly one point \(q^{*}\) in \(\widetilde{h}_{1}\) such that \(\nu(q^{*})=1\). All the results hold true for \(H=\frac{1}{2}\) using the maximum principle for bounded domains for \(a_{1},a_{2}<\infty\) and using a limit argument for \(a_{1}=\infty\) or \(a_{2}=\infty\).
Let \(\mathcal{F}_{\widetilde{h}_{i}}(b):=\mathcal{F}(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b),\widetilde{h}_{i})\) be the flux with \(\widetilde{h}_{i}\) parameterized so that \(-J\widetilde{h}_{i}^{\prime}\) is the inward conormal vector for \(i=1,2\). In the sequel, we are going to assume that \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)) and \((a_{2},\varphi)\in\Omega_{2}\)
(resp. \((a_{1},\varphi)\in\Omega_{1}\)) where
\[\Omega_{i}=\{(a_{i},\varphi)\in\mathbb{R}^{2}:0<\varphi<\tfrac{\pi}{2},0<a_{i}<a_ {\max}(\varphi)\}, \tag{3.1}\]
and \(a_{\max}(\varphi)=2\operatorname{arctanh}(\cos(\varphi))\) for \(0<H<\tfrac{1}{2}\) and \(a_{\max}(\varphi)=+\infty\) for \(H=\tfrac{1}{2}\). This condition means that, for \(0<H<\tfrac{1}{2}\), the angle at \(p_{2}\) (resp. \(p_{1}\)) is greater than \(\varphi\) while \(0<a_{2}<a_{\max}(\varphi)\) (resp. \(0<a_{1}<a_{\max}(\varphi)\)). This is a necessary condition in the proof of Lemma 3.4. For \(0<H<\tfrac{1}{2}\), we will also define \(a_{\operatorname{emb}}(\varphi)=\operatorname{arcsinh}(\cot(\varphi))\), that is, the value of the parameter \(a_{2}\) (resp. \(a_{1}\)) for which the angle at \(p_{2}\) (resp. \(p_{1}\)) is exactly \(\tfrac{\pi}{2}\). This value is related to the embeddedness of the conjugate surface, see Remark 5.
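As a quick orientation on these thresholds, the following minimal sketch (sample angles chosen arbitrarily) evaluates \(a_{\max}\) and \(a_{\operatorname{emb}}\) and checks numerically that \(a_{\operatorname{emb}}(\varphi)<a_{\max}(\varphi)\), so the embeddedness threshold of Remark 5 lies inside the admissible range of the parameter.

```python
# Numerical comparison of a_max(phi) = 2*artanh(cos(phi)) and
# a_emb(phi) = arcsinh(cot(phi)) for sample angles in (0, pi/2).
# The sample angles below are illustrative only.
import math

def a_max(phi):
    return 2.0 * math.atanh(math.cos(phi))

def a_emb(phi):
    return math.asinh(1.0 / math.tan(phi))

for phi in [0.2, 0.5, 1.0, 1.3, 1.5]:    # hypothetical sample values of phi
    am, ae = a_max(phi), a_emb(phi)
    print(f"phi={phi:.2f}  a_emb={ae:.4f}  a_max={am:.4f}")
    assert ae < am                        # a_emb(phi) < a_max(phi)
```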
**Lemma 3.3**.: _The following statements hold true:_
* _If_ \(a_{1}=\infty\) _and_ \(a_{2}\in\Omega_{2}\) _the function_ \(\mathcal{F}_{\widetilde{h}_{2}}\) _is strictly decreasing. Moreover, for all_ \(0<H\leq\tfrac{1}{2}\)_, we have_ \(\mathcal{F}_{\widetilde{h}_{2}}(0)>0\) _and_ \(\mathcal{F}_{\widetilde{h}_{2}}(b)<0\) _for_ \(b>0\) _large enough._
* _If_ \(a_{2}=\infty\) _and_ \(a_{1}\in\Omega_{1}\) _the function_ \(\mathcal{F}_{\widetilde{h}_{1}}\) _is strictly increasing. Moreover, for_ \(0<H<\tfrac{1}{2}\) _we have_ \(\mathcal{F}_{\widetilde{h}_{1}}(b_{\mathcal{I}}^{1}(\infty,\varphi))>0\) _and_ \(\mathcal{F}_{\widetilde{h}_{1}}(-b)<0\) _for_ \(b>0\) _large enough and for_ \(H=\tfrac{1}{2}\) _we have_ \(\mathcal{F}_{\widetilde{h}_{1}}(b)>0\) _and_ \(\mathcal{F}_{\widetilde{h}_{1}}(-b)<0\) _for_ \(b>0\) _large enough._
Proof.: Let us prove that the function \(\mathcal{F}_{\widetilde{h}_{2}}\) (resp. \(\mathcal{F}_{\widetilde{h}_{1}}\)) is strictly decreasing (resp. increasing). Set \(b_{1}<b_{2}\) and translate \(\widetilde{\Sigma}_{k}:=\widetilde{\Sigma}_{\varphi}(\infty,a_{2},b_{k})\) (resp. \(\widetilde{\Sigma}_{k}:=\widetilde{\Sigma}_{\varphi}(a_{1},\infty,b_{k})\)), \(k=1,2\), until the graphs take the value \(-b_{k}\) over \(l_{1}\) (resp. \(l_{2}\)) and \(0\) over \(l_{2}\) (resp. \(l_{1}\)). Then we have, by Proposition 2.2, that \(\widetilde{\Sigma}_{1}\) is above (resp. below) \(\widetilde{\Sigma}_{2}\) after this translation. We parameterize \(\widetilde{h}_{2}^{k}:[0,a_{2}]\to\mathbb{E}(4H^{2}-1,H)\) (resp. \(\widetilde{h}_{1}^{k}:[0,a_{1}]\to\mathbb{E}(4H^{2}-1,H)\)) by unit speed with \(\widetilde{h}_{2}^{k}(0)\in\widetilde{v}_{0}^{k}\) (resp. \(\widetilde{h}_{1}^{k}(0)\in\widetilde{v}_{1}^{k}\)) and \(\widetilde{h}_{2}^{k}(a_{2})\in\widetilde{v}_{2}^{k}\) (resp. \(\widetilde{h}_{1}^{k}(a_{1})\in\widetilde{v}_{0}^{k}\)). The maximum principle at the common boundary allows us to compare the vertical part of the inward conormal vectors \(-J(\widetilde{h}_{2}^{k})^{\prime}\) (resp. \(-J(\widetilde{h}_{1}^{k})^{\prime}\)), obtaining \(\langle-J(\widetilde{h}_{2}^{1})^{\prime},\xi\rangle>\langle-J(\widetilde{h}_{2}^{2})^{\prime},\xi\rangle\) (resp. \(\langle-J(\widetilde{h}_{1}^{1})^{\prime},\xi\rangle<\langle-J(\widetilde{h}_{1}^{2})^{\prime},\xi\rangle\)). Consequently, \(\mathcal{F}_{\widetilde{h}_{2}}\) (resp. \(\mathcal{F}_{\widetilde{h}_{1}}\)) is strictly decreasing (resp. increasing).
Now, we will see that \(\mathcal{F}_{\widetilde{h}_{2}}(0)>0\) for \(0<H\leq\tfrac{1}{2}\), and \(\mathcal{F}_{\widetilde{h}_{1}}(b_{\mathcal{I}}^{1}(\infty,\varphi))>0\) for \(0<H<\tfrac{1}{2}\) and \(\mathcal{F}_{\widetilde{h}_{1}}(b)>0\) for \(b>0\) large enough and \(H=\tfrac{1}{2}\). The surface \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2},b)\) converges to \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2},0)\) as \(b\to 0\) by continuity. The continuity is a consequence of the unicity in Lemma 3.1. We have that \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2},0)\) is above the surface \(\mathcal{I}\) with axis containing \(\widetilde{h}_{2}\). Again, by the maximum principle at the boundary, we have that \(\langle-J\widetilde{h}_{2},\xi\rangle>0\) when \(b=0\). On the other hand, we have that, for \(H<\tfrac{1}{2}\), the surface \(\widetilde{\Sigma}_{\varphi}(a_{1},\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\) is above the surface \(\mathcal{I}\) with axis containing \(\widetilde{h}_{1}\), then a similar argument shows that \(\mathcal{F}_{\widetilde{h}_{1}}(b_{\mathcal{I}}^{1}(\infty,\varphi))>0\). For \(H=\tfrac{1}{2}\), the surface \(\Sigma_{\varphi}(a_{1},\infty,b)\) is above the surface \(\mathcal{I}\) in a neighborhood of \(\widetilde{h}_{1}\) (and consequently \(\mathcal{F}_{\widetilde{h}_{1}}^{-}(b)>0\)) for \(b>0\) large enough.
We will prove now that \(\mathcal{F}_{\widetilde{h}_{2}}(b)\) (resp. \(F_{\widetilde{h}_{1}}(-b)\)) is negative for \(b>0\) large enough. Assume first that \(0<H<\tfrac{1}{2}\) and \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)). We start by considering the isosceles triangle \(\widetilde{\Delta}_{0}^{2}\) (resp. \(\widetilde{\Delta}_{0}^{1}\)) with base \(l_{2}\) (resp. \(l_{1}\)) and an ideal vertex \(p_{3}^{2}\) (resp. \(p_{3}^{1}\)). As \(a_{2}<a_{\max}(\varphi)\) (resp. \(a_{1}<a_{\max}(\varphi)\)), the side \(\overline{p_{1}p_{3}^{2}}\) (resp. \(\overline{p_{2}p_{3}^{1}}\)) intersects the side \(l_{1}\) (resp. \(l_{2}\)). Let \(\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1}(0)\)) be the unique minimal graph over \(\widetilde{\Delta}_{0}\) solution to the Jenkins-Serrin problem with values \(+\infty\) along \(\overline{p_{2}p_{3}}\) (resp. \(+\infty\) along \(\overline{p_{1}p_{3}}\)), \(-\infty\) along the line \(\overline{p_{0}p_{3}}\) and \(b\) along the segment \(l_{2}\) (resp. \(0\) along the segment \(l_{1}\)), see for instance [3, Lemma 3.2]. We know by [2, Lemma 4], that \(\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1}(0)\)) has finite radial limit at \(p_{0}\) along \(l_{1}=\pi(\widetilde{h}_{1})\) (resp. \(l_{2}=\pi(\widetilde{h}_{2})\)) so, if \(b\) is large enough, \(\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1}(0)\)) is above \(\Sigma_{\varphi}(\infty,a_{2},b)\) (resp. \(\Sigma_{\varphi}(a_{1},\infty,-b)\)) in the boundary of the common domain where both are graphs. This also happens in the interior by Proposition 2.2. We compare the conormals along the curve \(\widetilde{h}_{2}\) (resp. \(\widetilde{h}_{1}\)) which is common to both surfaces obtaining by the boundary maximum principle that \(\langle-J\widetilde{h}_{2},\xi\rangle<\langle-J\widetilde{h}_{2}^{0},\xi\rangle\) (resp. \(\langle-J\widetilde{h}_{1},\xi\rangle<\langle-J\widetilde{h}_{1}^{0},\xi\rangle\)). Then, as \(\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1
(resp. \(\mathcal{F}_{\widetilde{\Sigma}_{0}^{1}}=0\)). Using similar arguments to the case \(0<H<\frac{1}{2}\), we can compare the surfaces \(\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1}(0)\)) with \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2},b)\) (resp. \(\widetilde{\Sigma}_{\varphi}(a_{1},\infty,-b)\)) along the curve \(\widetilde{h}_{2}\) (resp. \(\widetilde{h}_{1}\)) for \(b>0\) large enough, obtaining the statement.
### The conjugate sister surface and the period problems
We will now describe the conjugate sister surface of the fundamental piece \(\widetilde{\Sigma}_{\varphi}(a_{1},a_{2},b)\) and the two period problems that arise in the construction of the \((H,k)\)-noids and \((H,k)\)-nodoids with genus one.
The conjugate sister surface \(\Sigma_{\varphi}(a_{1},a_{2},b)\subset\mathbb{H}^{2}\times\mathbb{R}\) is a multi-graph over a (possibly non-embedded) domain \(\Delta\subset\mathbb{H}^{2}\). The boundary of this surface is composed of:
* The symmetry curves \(h_{1}\) and \(h_{2}\) contained in vertical planes of symmetry and the ideal vertical half-line \(h_{3}\) contained in \(\partial_{\infty}\mathbb{H}^{2}\times\mathbb{R}\). For \(0<H<\frac{1}{2}\), if \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)), adapting [3, Lemma 4.1] to this case we have that \(\int_{h_{1}}\nu<\infty\) (resp. \(\int_{h_{2}}\nu<\infty\)), which means that the curve \(\pi(h_{1})\) (resp. \(\pi(h_{2})\)) is compact by Lemma 2.3. For \(H=\frac{1}{2}\), if \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)), adapting [3, Lemma 4.1] to this case we get that \(\int_{h_{1}}\nu=\infty\) (resp. \(\int_{h_{2}}\nu=\infty\)). Then, the curve \(\pi(h_{1})\) (resp. \(\pi(h_{2})\)) diverges in \(\mathbb{H}^{2}\), by Lemma 2.3.
* The symmetry curves \(v_{0}\), \(v_{1}\) and \(v_{2}\) are contained in horizontal planes of symmetry. Observe that if \(0<H<\frac{1}{2}\) and \(a_{1}=\infty\) (resp \(a_{2}=\infty\)) then \(v_{1}\) (resp. \(v_{2}\)) is an ideal curve of constant curvature \(2H\) in \(\mathbb{H}^{2}\times\{+\infty\}\) (resp. \(\mathbb{H}^{2}\times\{-\infty\}\)), whose normal points to the exterior (resp. interior) of \(\Delta\), see Figures 4 and 5 and Lemmas 2.4 and 2.5.
In the sequel we will assume that \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)). Our aim is to obtain a complete \(H\)-surface in \(\mathbb{H}^{2}\times\mathbb{R}\) with genus \(1\) after successive reflections over the vertical and the horizontal planes of symmetry. This is equivalent to the condition that the curves \(v_{0}\) and \(v_{2}\) (resp. \(v_{1}\)) lie in the same horizontal plane of \(\mathbb{H}^{2}\times\mathbb{R}\) (first period problem) and the vertical planes of symmetry containing the curves \(h_{1}\) and \(h_{2}\) intersect each other with an angle \(\frac{\pi}{k}\) (second period problem).
* _The first period function._ Assume that \(a_{1}=\infty\) and \(b>0\) (resp. \(a_{2}=\infty\) and \(b<b_{\mathcal{I}}^{1}(\infty,\varphi)\)). As in [2, 26], we define the first period function \(\mathcal{P}_{1}^{2}:\Omega_{2}\times\mathbb{R}^{+}\to\mathbb{R}\) (resp. \(\mathcal{P}_{1}^{1}:\Omega_{1}\times(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\to\mathbb{R}\)) as the difference of heights between the horizontal planes containing \(v_{0}\) and \(v_{2}\) (resp. \(v_{1}\)), or equivalently the difference of heights of the endpoints of \(h_{2}\) (resp. \(h_{1}\)). Parameterizing \(h_{2}:[0,a_{2}]\to\mathbb{H}^{2}\times\mathbb{R}\)
(resp. \(h_{1}:[0,a_{1}]\to\mathbb{H}^{2}\times\mathbb{R}\)) by unit speed with \(h_{2}(0)\in v_{0}\) and \(h_{2}(a_{2})\in v_{2}\) (resp. \(h_{1}(0)\in v_{1}\) and \(h_{1}(a_{1})\in v_{0}\)), we can express the period function as: \[\mathcal{P}_{1}^{i}(a_{i},\varphi,b)=\int_{h_{i}}\langle h_{i}^{\prime},\xi\rangle=\int_{\widetilde{h}_{i}}\langle-J\widetilde{h}_{i}^{\prime},\xi\rangle=\mathcal{F}_{\widetilde{h}_{i}}(b),\ \ i=1,2.\] (3.2)
* _The second period function._
* Assume \(a_{1}=\infty\) and \(b>0\). We consider the half-space model for \(\mathbb{H}^{2}\times\mathbb{R}\) and translate and rotate \(\Sigma_{\varphi}(\infty,a_{2},b)\) so that \(h_{1}\) lies in the vertical plane \(\{x=0\}\) and \(v_{0}\) lies in the horizontal plane \(\{z=0\}\). We will call \(\gamma\times\mathbb{R}\) the vertical plane containing the symmetry curve \(h_{2}\), where \(\gamma\) is the complete extension of the geodesic \(\pi(h_{2})\). We identify \(\mathbb{H}^{2}\times\{0\}\) with \(\mathbb{H}^{2}\) and parameterize \(v_{0}:[0,b]\to\mathbb{H}^{2}\) by arc-length as \(v_{0}(s)=(x(s),y(s))\) with \(v_{0}(0)=(0,1)\) and \(v_{0}^{\prime}(0)=-E_{1}\). Then we get that \(x(s)<0\) for \(s\) near \(0\) and we call \((x_{0},y_{0})=(x(b),y(b))\). Let \(\psi\) be the angle of rotation with respect to the horocycle foliation with initial angle \(\psi(0)=\pi\) and define \(\psi_{0}=\psi(b)\), see Figure 6. The second period function \(\mathcal{P}_{2}^{2}:\Omega_{2}\times\mathbb{R}^{+}\to\mathbb{R}\) is defined as \[\mathcal{P}_{2}^{2}(a_{2},\varphi,b)=\frac{x_{0}\sin(\psi_{0})}{y_{0}}-\cos( \psi_{0}).\] (3.3)
* Assume that \(a_{2}=\infty\) and \(b<b_{\mathcal{I}}^{1}(\infty,\varphi)\). Aiming at defining the second period function analogously to the case \(a_{1}=\infty\) and keeping the same orientation, we apply a reflection over the horizontal geodesic \(\widetilde{h}_{1}\) to the surface \(\widetilde{\Sigma}_{\varphi}(a_{1},\infty,b)\) or equivalently we reflect \(\Sigma_{\varphi}(a_{1},\infty,b)\) over the vertical plane containing \(h_{1}\). We call these surfaces \(\widetilde{\Sigma}_{\varphi}^{-}(a_{1},\infty,b)\) and \(\Sigma_{\varphi}^{-}(a_{1},\infty,b)\) (the conjugate \(H\)-surface). Again in the half-space model of \(\mathbb{H}^{2}\times\mathbb{R}\), we translate and rotate the surface \(\Sigma_{\varphi}^{-}(a_{1},\infty,b)\) so that \(h_{1}^{-}\) lies in the vertical plane \(\{x=0\}\) and \(v_{0}^{-}\) lies in the horizontal plane \(\{z=0\}\). We will call \(\gamma\times\mathbb{R}\) the vertical plane containing the symmetry curve \(h_{2}^{-}\), where \(\gamma\) is the complete extension of the geodesic \(\pi(h_{2}^{-})\). We identify \(\mathbb{H}^{2}\times\{0\}\) with \(\mathbb{H}^{2}\) and we parameterize \(v_{0}^{-}:[0,b]\to\mathbb{H}^{2}\) by arc-length as \(v_{0}^{-}(s)=(x(s),y(s))\) with \(v_{0}^{-}(0)=(0,1)\) and \((v_{0}^{-})^{\prime}(0)=-E_{1}\). Then we get that \(x(s)<0\) for \(s\) near \(0\) and we call \((x_{0},y_{0})=(x(b),y(b))\). This orientation coincides with the orientation that came from the choice \((\widetilde{v_{0}^{-}})^{\prime}=\xi\) for \(0<b<b_{\mathcal{I}}^{1}(\infty,\varphi)\) since \(\theta_{0}^{\prime}>0\) and with the contrary orientation when \(b<0\) since \(\theta_{0}^{\prime}<0\) (see Lemma 3.2). We choose this orientation for \(v_{0}^{-}\) in order to work with both cases at once and by analogy with the construction with \(a_{1}=\infty\). Let \(\psi\) be the angle of
rotation of \(v_{0}^{-}\) with respect to the horocycle foliation with initial angle \(\psi(0)=\pi\) and define \(\psi_{0}=\psi(|b|)\), see Figure 6. The second period function \(\mathcal{P}_{2}^{1}:\Omega_{1}\times(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\to\mathbb{R}\) is defined as in Equation (3.3):
\[\mathcal{P}_{2}^{1}(a_{1},\varphi,b)=\frac{x_{0}\sin(\psi_{0})}{y_{0}}-\cos( \psi_{0}). \tag{3.4}\]
We will see in Lemmas 3.5 and 3.10 that, under the assumption \(0<\mathcal{P}_{2}<1\), the vertical planes containing \(h_{1}\) and \(h_{2}\) intersect each other with an angle \(\arccos(\mathcal{P}_{2})\). We aim at proving that there exist parameters \((a_{2},\varphi,b)\) (resp. \((a_{1},\varphi,b)\)) such that \(\mathcal{P}_{1}^{2}(a_{2},\varphi,b)=0\) (resp. \(\mathcal{P}_{1}^{1}(a_{1},\varphi,b)=0\)) and \(\mathcal{P}_{2}^{2}(a_{2},\varphi,b)=\cos(\frac{\pi}{k})\) (resp. \(\mathcal{P}_{2}^{1}(a_{1},\varphi,b)=\cos(\frac{\pi}{k})\)), solving the two period problems.
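The two period conditions can be illustrated with a small computational sketch. The code below is not derived from any concrete surface of the construction: the endpoint data \((x_{0},y_{0},\psi_{0})\) are hypothetical sample values, and the sketch only evaluates the second period function of Equations (3.3)-(3.4) and the target value \(\cos(\tfrac{\pi}{k})\); solving the period problems means making the first period (the flux \(\mathcal{F}_{\widetilde{h}_{i}}\)) vanish and the second period equal to \(\cos(\tfrac{\pi}{k})\).

```python
# Hypothetical evaluation of the second period function
# P2 = x0*sin(psi0)/y0 - cos(psi0) from endpoint data (x0, y0, psi0) of v0,
# together with the angle between the two vertical planes of symmetry
# when 0 < P2 < 1 (see Lemmas 3.5 and 3.10).  All sample numbers are made up.
import math

def second_period(x0, y0, psi0):
    return x0 * math.sin(psi0) / y0 - math.cos(psi0)

def plane_angle(p2):
    # Only meaningful when 0 < P2 < 1: the planes meet at angle arccos(P2).
    assert 0.0 < p2 < 1.0
    return math.acos(p2)

# made-up endpoint data with pi < psi0 < 2*pi (so sin(psi0) < 0), cf. Lemma 3.5(1)
x0, y0, psi0 = -0.3, 1.2, 4.0
p2 = second_period(x0, y0, psi0)
print(p2, plane_angle(p2))       # angle between the vertical planes of symmetry
# solving the period problems for (H,k)-noids means P1 = 0 and P2 = cos(pi/k)
k = 3
print(math.cos(math.pi / k))     # target value cos(pi/k) for the second period
```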
**Lemma 3.4**.: _If \(0<H\leq\frac{1}{2}\) and \(a_{1}=\infty\) (resp. \(a_{2}=\infty\)), there exists a unique function \(f_{2}:\Omega_{2}\to\mathbb{R}^{+}\) (resp. \(f_{1}:\Omega_{1}\to(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\)) such that \(\mathcal{P}_{1}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))=0\) (resp. \(\mathcal{P}_{1}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))=0\)), for every pair \((a_{2},\varphi)\in\Omega_{2}\) (resp. \((a_{1},\varphi)\in\Omega_{1}\)). Moreover:_
* _If_ \(0<H<\frac{1}{2}\)_, then_ \(f_{2}\) _(resp._ \(f_{1}\)_) is a continuous function and_ \[\lim_{a_{2}\to 0}f_{2}(a_{2},\varphi)=0\qquad\quad\text{(resp. }\lim_{a_{1}\to 0}f_{1}(a_{1},\varphi)=0^{-}),\] \[\lim_{a_{2}\to a_{\max}}f_{2}(a_{2},\varphi)=+\infty\quad\text{(resp. }\lim_{a_{1}\to a_{\max}}f_{1}(a_{1},\varphi)=-\infty),\] _and given_ \(\varphi\in(0,\frac{\pi}{2})\)_,_ \(f_{2}(\cdot,\varphi):(0,a_{\max}(\varphi))\to(0,+\infty)\) _(resp._ \(f_{1}(\cdot,\varphi):(0,a_{\max}(\varphi))\to(-\infty,0)\)_) is a strictly increasing (resp. decreasing) function for_ \(a_{2}\geq a_{\rm emb}(\varphi)\) _(resp._ \(a_{1}\geq a_{\rm emb}(\varphi)\)_)._
* _If_ \(H=\frac{1}{2}\)_, then the function_ \(f_{2}\) _(resp._ \(f_{1}\)_) is a continuous function and satisfies_ \[\lim_{a_{2}\to 0}f_{2}(a_{2},\varphi)=0\qquad\quad\text{(resp. }\lim_{a_{1}\to 0}f_{1}(a_{1},\varphi)=0^{-}),\] \[\lim_{a_{2}\to\infty}f_{2}(a_{2},\varphi)=+\infty\quad\text{(resp. }\lim_{a_{1}\to\infty}f_{1}(a_{1},\varphi)=-\infty),\] \[\lim_{\varphi\to\frac{\pi}{2}}f_{2}(a_{2},\varphi)=+\infty\qquad \text{(resp. }\lim_{\varphi\to\frac{\pi}{2}}f_{1}(a_{1},\varphi)=-\infty).\]
Proof.: First of all, observe that, as in [2, Lemma 3], the function \(\mathcal{P}_{1}^{2}\) (resp. \(\mathcal{P}_{1}^{1}\)) is continuous. This function is strictly decreasing (resp. increasing) in the third parameter by Lemma 3.3. Observe that, when \(b=0\), the vertical part of the inward conormal is not continuous since the vertical segment disappears. However, the function \(\mathcal{P}_{1}\) is continuous.
Fix \((a_{2},\varphi)\in\Omega_{2}\) (resp. \((a_{1},\varphi)\in\Omega_{1}\)). By Lemma 3.3 and the continuity and monotonicity of \(\mathcal{P}_{1}^{2}\) (resp. \(\mathcal{P}_{1}^{1}\)) in the third parameter, there exists a unique \(b_{0}\in(0,+\infty)\) (resp. \(b_{0}\in(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\)) such that \(\mathcal{P}_{1}^{2}(a_{2},\varphi,b_{0})=0\) (resp. \(\mathcal{P}_{1}^{1}(a_{1},\varphi,b_{0})=0\)). Therefore, we can unambiguously define \(f_{2}(a_{2},\varphi):=b_{0}\) (resp. \(f_{1}(a_{1},\varphi):=b_{0}\)). Moreover, the continuity of \(f_{2}\) (resp. \(f_{1}\)) is guaranteed by its uniqueness, see also [2, Lemma 5].
The computations of the limits of \(f_{i}(\cdot,\varphi)\) are based on [2, Lemma 5].
We consider the case \(0<H<\frac{1}{2}\). Assume by contradiction that there exists a subsequence \(a_{2}^{n}\to 0\) (resp. \(a_{1}^{n}\to 0\)) such that \(f_{2}(a_{2}^{n},\varphi)\) (resp. \(f_{1}(a_{1}^{n},\varphi)\)) converges to some \(b_{\infty}\in(0,+\infty)\) (resp. \(b_{\infty}\in(-\infty,0)\)).
Figure 6. The projection of the surfaces \(\Sigma_{\varphi}(\infty,a_{2},b)\) and \(\Sigma_{\varphi}^{-}(a_{1},\infty,b)\) with \(0<H<\frac{1}{2}\) under the assumptions of the second period problem.
Translate vertically \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2}^{n},f_{2}(a_{2}^{n},\varphi))\) (resp. \(\widetilde{\Sigma}_{\varphi}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\)) until they take the value \(-f_{2}(a_{2}^{n},\varphi)\) (resp. \(f_{1}(a_{1}^{n},\varphi)\)) over \(l_{1}\) (resp. \(l_{2}\)) and the value \(0\) over \(l_{2}\) (resp. \(l_{1}\)). Since \(a_{i}^{n}\to 0\) for \(i=1,2\), we can blow up the surface and the metric of \(\mathbb{E}(4H^{2}-1,H)\) such that we get \(a_{2}^{n}=1\) (resp \(a_{1}^{n}\)). The new sequence of rescaled surfaces converges in the \(\mathcal{C}^{k}\)-topology to a minimal surface \(\widetilde{\Sigma}_{\infty}\) in \(\mathbb{R}^{3}\). This minimal surface is a vertical graph over a strip \(\widetilde{\Delta}(\infty,1,\varphi)\subset\mathbb{R}^{2}\) (resp. \(\widetilde{\Delta}(1,\infty,\varphi)\subset\mathbb{R}^{2}\)) bounded by the two parallel lines \(l_{1}^{\prime}\) (resp. \(l_{2}^{\prime}\)) and \(l_{3}^{\prime}\) (resp. \(l_{3}^{\prime}\)) and a segment \(l_{2}^{\prime}\) (resp. \(l_{1}^{\prime}\)) of length \(1\) forming an angle \(\varphi\) with \(l_{1}^{\prime}\) (resp. \(l_{2}^{\prime}\)). Moreover, \(\widetilde{\Sigma}_{\infty}\) takes the value \(0\) over \(l_{2}^{\prime}\) (resp. \(l_{1}^{\prime}\)), \(-\infty\) over \(l_{1}^{\prime}\) (resp. \(l_{2}^{\prime}\)) and \(+\infty\) over \(l_{3}^{\prime}\) (resp. \(l_{3}^{\prime}\)) since \(b_{\infty}\) is not zero. However, \(\widetilde{\Sigma}_{\infty}\) cannot have first period function equal to zero since \(\widetilde{\Sigma}_{\infty}\) lies below (resp. above) a helicoid \(\widetilde{\Sigma}_{0}\) with axis in \(l_{2}^{\prime}\) (resp. \(l_{1}^{\prime}\)), which is a graph over a half-strip and the helicoid \(\widetilde{\Sigma}_{0}\) has period \(0\) because it is axially symmetric, see also [2, Figure 4].
Let us see now that the case \(a_{2}=\infty\) and \(b_{\infty}\in(0,b_{\mathcal{I}}^{1}(\infty,\varphi))\) leads to a contradiction. Again we can blow up the surfaces and the metric of \(\mathbb{E}(4H^{2}-1,H)\) such that we get \(a_{1}^{n}=1\). The new sequence of rescaled surfaces converges in the \(\mathcal{C}^{k}\)-topology to a minimal surface \(\widetilde{\Sigma}_{\infty}\) in \(\mathbb{R}^{3}\). This minimal surface \(\widetilde{\Sigma}_{\infty}\) is a vertical graph over the strip \(\widetilde{\Delta}(1,\infty,\varphi)\subset\mathbb{R}^{2}\) and takes the value \(-\infty\) over \(l_{1}^{\prime}\) (since \(b_{\infty}\) is not zero), \(0\) over \(l_{2}^{\prime}\) and \(+\infty\) over \(l_{3}^{\prime}\). Again, it is easy to check that \(\widetilde{\Sigma}_{\infty}\) cannot have the first period function equal to zero.
We see now that \(f_{1}(a_{1}^{n},\varphi)\) converges to \(0\) from below when \(a_{1}^{n}\to 0\). Assume by contradiction there exists a subsequence \(a_{1}^{\sigma(n)}\) such that \(b_{n}:=f_{1}(a_{1}^{\sigma(n)},\varphi)>0\). Blowing up the surfaces and the metric as in the previous arguments we have that the new sequence of rescaled surfaces converges in the \(\mathcal{C}^{k}\)-topology to a minimal surface \(\widetilde{\Sigma}_{\infty}\) in \(\mathbb{R}^{3}\). This minimal surface is a vertical graph over the strip \(\widetilde{\Delta}(1,\infty,\varphi)\subset\mathbb{R}^{2}\) and takes the value \(b_{\infty}=\lim_{n\to\infty}\frac{b_{n}}{a_{n}}>0\) over \(l_{1}^{\prime}\), \(0\) over \(l_{2}^{\prime}\) and \(+\infty\) over \(l_{3}^{\prime}\). Then, as \(b_{\infty}>0\) the limit surface is above the plane \(\{z=0\}\) and consequently the first period cannot be \(0\).
Assume by contradiction that there exists a subsequence \(a_{2}^{n}\to a_{\max}\) (resp. \(a_{1}^{n}\to a_{\max}\)) such that \(f_{2}(a_{2}^{n},\varphi)\to b_{\infty}^{2}\in[0,+\infty)\) (resp. \(f_{1}(a_{1}^{n},\varphi)\to b_{\infty}^{1}\in(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi)]\)). We translate vertically the axially symmetric surface \(\widetilde{\Sigma}_{0}^{2}=\widetilde{\Sigma}_{0}^{2}(b)\) (resp. \(\widetilde{\Sigma}_{0}^{1}=\widetilde{\Sigma}_{0}^{1}(0)\)) (mentioned in Lemma 3.3) until it takes the value \(b_{\infty}^{2}\) (resp. \(b_{\infty}^{1}\)) over the edge \(l_{2}\) (resp. \(l_{1}\)). We get that the surface \(\widetilde{\Sigma}_{0}^{2}\) (resp. \(\widetilde{\Sigma}_{0}^{1}\)) is below (resp. above) the limit surface \(\widetilde{\Sigma}_{\varphi}(\infty,a_{\max},b_{\infty}^{2})\) (resp. \(\widetilde{\Sigma}_{\varphi}(a_{\max},\infty,b_{\infty}^{1})\)) and therefore the period function \(\mathcal{P}_{1}^{2}(a_{\max},\varphi,b_{\infty}^{2})\) (resp. \(\mathcal{P}_{1}^{1}(a_{\max},\varphi,b_{\infty}^{1})\)) is not zero, which is a contradiction.
Let \(0<\varphi<\frac{\pi}{2}\) and assume by contradiction that the function \(f_{2}(\cdot,\varphi)\) (resp. \(f_{1}(\cdot,\varphi)\)) is not strictly increasing (resp. decreasing) for \(a>a_{\rm emb}\). Then, in both cases, there exist two numbers \(\rho_{1},\rho_{2}\in\mathbb{R}\) such that \(a_{\rm emb}\leq\rho_{1}<\rho_{2}\) and \(f_{2}(\rho_{1},\varphi)=f_{2}(\rho_{2},\varphi)=b_{0}\in(0,+\infty)\) (resp. \(f_{1}(\rho_{1},\varphi)=f_{1}(\rho_{2},\varphi)=b_{0}\in(-\infty,b_{\mathcal{I}}^{1}(\infty,\varphi))\)). Let \(\widetilde{\Sigma}_{i}=\widetilde{\Sigma}_{\varphi}(\infty,\rho_{i},b_{0})\) (resp. \(\widetilde{\Sigma}_{i}=\widetilde{\Sigma}_{\varphi}(\rho_{i},\infty,b_{0})\)). In this setting, the horizontal geodesic of finite length in \(\widetilde{\Sigma}_{i}\) is denoted with a superscript, \(\widetilde{h}_{2}^{i}\) (resp. \(\widetilde{h}_{1}^{i}\)), to indicate that \(|\widetilde{h}_{2}^{i}|=\rho_{i}\) (resp. \(|\widetilde{h}_{1}^{i}|=\rho_{i}\)) for \(i=1,2\). Then, we have that
\[\mathcal{F}(\widetilde{\Sigma}_{i},\widetilde{h}_{2}^{i})=\mathcal{P}_{1}^{2}(\rho_{i},\varphi,f_{2}(\rho_{i},\varphi))=0\ \ \mbox{for}\ \ i=1,2,\] \[(\mbox{resp. }\mathcal{F}(\widetilde{\Sigma}_{i},\widetilde{h}_{1}^{i})=\mathcal{P}_{1}^{1}(\rho_{i},\varphi,f_{1}(\rho_{i},\varphi))=0\ \ \mbox{for}\ \ i=1,2.)\]
Observe that \(\widetilde{h}_{1}^{1}=\widetilde{h}_{1}^{2}\) (resp. \(\widetilde{h}_{2}^{1}=\widetilde{h}_{2}^{2}\)) and \(\widetilde{\Sigma}_{1}\) is above \(\widetilde{\Sigma}_{2}\) by Proposition 2.2. We choose a small horocycle \(\mathcal{H}\) in the vertex \(p_{1}\) (resp. \(p_{2}\)) and consider \(D_{\mathcal{H}}\) its inner domain with boundary \(\mathcal{H}\). By the maximum principle in the boundary, we can compare the vertical part of the inward conormals obtaining that
\[\mathcal{F}(\widetilde{\Sigma}_{1},\widetilde{h}_{1}^{1}\backslash D_{\mathcal{H}})>\mathcal{F}(\widetilde{\Sigma}_{2},\widetilde{h}_{1}^{2}\backslash D_{\mathcal{H}})\] (3.5) \[(\mbox{resp. }\mathcal{F}(\widetilde{\Sigma}_{1},\widetilde{h}_{2}^{1}\backslash D_{\mathcal{H}})>\mathcal{F}(\widetilde{\Sigma}_{2},\widetilde{h}_{2}^{2}\backslash D_{\mathcal{H}}))\]
(in both cases)
\[|\pi(\widetilde{h}_{3}^{2}\backslash D_{\mathcal{H}})|-|\pi(\widetilde{h}_{3}^{1}\backslash D_{\mathcal{H}})|<\mathcal{F}(\widetilde{\Sigma}_{1},\mathcal{H})-\mathcal{F}(\widetilde{\Sigma}_{2},\mathcal{H})<\epsilon.\]
Furthermore, we have that \(|\pi(\widetilde{h}_{3}^{2}\backslash D_{\mathcal{H}})|-|\pi(\widetilde{h}_{3}^{1}\backslash D_{\mathcal{H}})|=c>0\) for any choice of \(\mathcal{H}\) since \(a_{\rm emb}\leq\rho_{1}<\rho_{2}\). Then choosing \(\epsilon<c\), we achieve a contradiction.
We consider now the case \(H=\frac{1}{2}\). The limits of \(f_{i}(a_{i},\varphi)\) when \(a_{i}\to 0\) and \(a_{i}\to\infty\) can be computed by similar arguments to those in the case \(0<H<\frac{1}{2}\). Finally, we compute the limit when \(\varphi\to\frac{\pi}{2}\). Assume by contradiction that there exists a subsequence \(\varphi^{n}\to\frac{\pi}{2}\) such that \(f_{2}(a_{2},\varphi^{n})\to b_{\infty}^{2}\in[0,+\infty)\) (resp. \(f_{1}(a_{1},\varphi^{n})\to b_{\infty}^{1}\in\mathbb{R}\)). The limit surface \(\widetilde{\Sigma}_{\frac{\pi}{2}}(\infty,a_{2},b_{\infty})\) projects onto the strip \(\widetilde{\Delta}(\infty,a_{2},\frac{\pi}{2})\subset\mathbb{R}^{2}\) (resp. \(\widetilde{\Delta}(a_{1},\infty,\frac{\pi}{2})\subset\mathbb{R}^{2}\)) and it is a solution to the Jenkins-Serrin problem with boundary values \(0\) over \(l_{1}\), \(b_{\infty}^{2}\) (resp. \(b_{\infty}^{1}\)) over \(l_{2}\) and \(+\infty\) over \(l_{3}\). We may compare \(\widetilde{\Sigma}_{\frac{\pi}{2}}(\infty,a_{2},b_{\infty}^{2})\) (resp. \(\widetilde{\Sigma}_{\frac{\pi}{2}}(a_{1},\infty,b_{\infty}^{1})\)) with twice the fundamental piece of the helicoid \(\mathcal{H}_{\infty,a_{2}}\) (resp. \(\mathcal{H}_{a_{1},\infty}\)), which is a vertical graph (after a suitable ambient isometry) over \(\widetilde{\Delta}(\infty,a_{2},\frac{\pi}{2})\) (resp. \(\widetilde{\Delta}(a_{1},\infty,\frac{\pi}{2})\)) with boundary values \(b_{\infty}^{1}\) (resp. \(-\infty\)) over \(l_{2}\), \(-\infty\) (resp. \(0\)) over \(l_{1}\) and \(+\infty\) over \(l_{3}\). Twice the helicoid \(\mathcal{H}_{\infty,a_{2}}\) (resp. \(\mathcal{H}_{a_{1},\infty}\)) has first-period function equals to \(0\) along \(l_{2}\) (resp. \(l_{1}\)) and it is below \(\widetilde{\Sigma}_{\frac{\pi}{2}}(\infty,a_{2},b_{\infty}^{2})\) (resp. \(\widetilde{\Sigma}_{\frac{\pi}{2}}(a_{1},\infty,b_{\infty}^{1})\)), we get into a contradiction by the maximum principle because \(\widetilde{\Sigma}_{\frac{\pi}{2}}(\infty,a_{2},b_{\infty}^{2})\) (resp. \(\widetilde{\Sigma}_{\frac{\pi}{2}}(a_{1},\infty,b_{\infty}^{1})\)) must have first period function equal to \(0\).
### Solving the second period problem for the \((H,k)\)-noids \(\Sigma_{\varphi}(\infty,a_{2},b)\)
**Lemma 3.5**.: _Set \(a_{1}=\infty\) and \((a_{2},\varphi)\in\Omega_{2}\). In the notation of the second period problem (see Page 12), the following statements hold true:_
1. \(x(s)<0\) _and_ \(\pi<\psi(s)<2\pi\) _for all_ \(s\in(0,b)\)_._
2. _The curve_ \(v_{0}\) _intersects only once the geodesic_ \(\gamma\)_._
3. _If_ \(\gamma\) _intersects the_ \(y\)_-axis with angle_ \(\delta\)_, then_ \(\varphi>\delta+2Hb\) _and_ \(\mathcal{P}_{2}^{2}(a_{2},\varphi,b)=\cos(\delta)\)_._
4. _If_ \(\mathcal{P}_{2}^{2}(a_{2},\varphi,b)=\cos(\delta)\)_, then_ \(\gamma\) _intersects the_ \(y\)_-axis with an angle_ \(\delta\) _and_ \(\frac{y_{0}}{\sin(\psi_{0})}>-\frac{1}{\sin(\delta)}\)_._
_Moreover:_
* _If_ \(0<H<\frac{1}{2}\)_,_ \(\lim_{a_{2}\to 0}\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))=\cos(\varphi)\) _and_ \(\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))>1\) _for_ \(a_{2}\) _close enough to_ \(a_{\rm max}(\varphi)\)_._
* _If_ \(H=\frac{1}{2}\)_,_ \(\lim_{a_{2}\to 0}\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))=\cos(\varphi)\) _and_ \(\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))>1\) _for_ \(\varphi\) _close enough to_ \(\frac{\pi}{2}\)_._
Proof.: We will identify \(v_{0}\) with its projection in \(\mathbb{H}^{2}\) in what follows.
1. As \(v_{0}\) and \(h_{1}\) are orthogonal, \(x(s)<0\) in an interval \((0,\epsilon)\). Assume by contradiction that \(x(s)<0\) is not true for all \(s\in(0,b)\), then let \(s_{0}\) be the first instant where \(x(s_{0})=0\). Let \(U\) be the domain enclosed by the arc \(v_{0}(0,s_{0})\) and a segment in the \(y\)-axis joining \(v_{0}(0)\) and \(v_{0}(s_{0})\). Let \(\alpha\) be the non-oriented angle between \(v_{0}\) and the \(y\)-axis at \(v_{0}(s_{0})\). Applying Gauss-Bonnet formula to the domain \(U\) and taking into account that \(\theta_{0}^{\prime}>0\), we get the following contradiction \[0>-\text{area}(U) =2\pi+\int_{0}^{s_{0}}k_{g}(s)ds-(\pi-\tfrac{\pi}{2}+\pi-\alpha)\] \[=\frac{\pi}{2}+\alpha-\int_{0}^{s_{0}}\theta_{0}^{\prime}(s)ds+2Hs _{0}>\tfrac{\pi}{2}-\varphi>0.\] (3.7)
As \(\theta_{0}^{\prime}>0\), we know by Lemma 2.4 that the normal along \(v_{0}\) points to the interior of \(\Delta\) and \(k_{g}<2H\) with respect to the interior of \(\Delta\). We have that \(v_{0}\) stays locally in the concave side of the tangent curve of constant curvature \(2H\) at \(v_{0}(0)\). If \(\psi(s)>\pi\) were not true for all \(s\in(0,b)\), consider the first instant \(s_{0}\) in which \(\psi(s_{0})=\pi\). At this point we have that \(v_{0}\) has points locally around \(v_{0}(s_{0})\) in the mean convex side of the tangent curve of constant curvature \(2H\) at \(v_{0}(s_{0})\), which contradicts the fact that \(k_{g}<2H\).
Assume again by contradiction that there is a first instant \(s_{0}>0\) where \(\psi(s_{0})=2\pi\) and consider the domain \(U\) enclosed by an arc of \(v_{0}\) and a segment parallel to the \(y\)-axis at \(v_{0}(s_{0})\). Applying the Gauss-Bonnet formula in \(U\), we get the same contradiction as in Equation (3.7).
2. Assume once again by contradiction that \(v_{0}\) intersects \(\gamma\) twice, once at \(v_{0}(b)\) and again at \(v_{0}(s_{0})\) for some \(0<s_{0}<b\). Then, the arc of \(v_{0}\) together with an arc of the curve \(\gamma\) encloses a compact domain \(U\). Applying the Gauss-Bonnet formula to the domain \(U\) we get the same contradiction as in Equation (3.7).
3. As \(\pi<\psi_{0}<2\pi\), we can parameterize the geodesic \(\gamma\) as \[\gamma:(0,\pi)\to\mathbb{H}^{2},\quad\gamma(t)=\left(x_{0}-y_{0}\frac{\cos(t)+ \cos(\psi_{0})}{\sin(\psi_{0})},-y_{0}\frac{\sin(t)}{\sin(\psi_{0})}\right).\] (3.8) As \(\gamma\) intersects the \(y\)-axis, the first coordinate of \(\gamma(0)\) is positive. Let \(s_{*}\) be the instant where \(\gamma\) intersects the \(y\)-axis, then we can compute the oriented angle as \[\cos(\delta)=\frac{\langle\gamma^{\prime}(s_{*}),y\partial_{y}\rangle}{| \gamma^{\prime}(s_{*})|}=\frac{x_{0}\sin(\psi_{0})}{y_{0}}-\cos(\psi_{0})= \mathcal{P}_{2}^{2}(a_{2},\varphi,b).\] (3.9) Consider now the domain \(U\) enclosed by \(v_{0}\), an arc of \(\gamma\) and a segment of the \(y\)-axis. Applying Gauss-Bonnet formula, we have that \[0>-\mathrm{area}(U) =2\pi+\int_{0}^{b}k_{g}(s)ds-(\pi-\frac{\pi}{2}+\pi-\frac{\pi}{2}+ \pi-\delta)\] \[=-\int_{0}^{b}\theta_{0}^{\prime}(s)ds+2Hb+\delta=-\varphi+2Hb+\delta.\] (3.10)
4. From (3.8) and the definition (3.3) of \(\mathcal{P}_{2}^{2}\), we get: \[\gamma(\pi)=\left(y_{0}\frac{1+\mathcal{P}_{2}^{2}(a_{2},\varphi,b)}{\sin(\psi_{0})},0\right)\quad\text{and}\quad\gamma(0)=\left(y_{0}\frac{\mathcal{P}_{2}^{2}(a_{2},\varphi,b)-1}{\sin(\psi_{0})},0\right)\] (3.11) whose first coordinates are negative and positive, respectively. That means that \(\gamma\) intersects the \(y\)-axis, whence the intersection angle is \(\delta\) by (3.9). On the other hand, as \(v_{0}\) only intersects \(\gamma\) once, we deduce that the second coordinate of \(\gamma(s_{*})\) (\(s_{*}\) being the instant where \(\gamma\) intersects the \(y\)-axis) is less than \(1\). Then we get that \(-y_{0}\frac{\sin(\delta)}{\sin(\psi_{0})}<1\), and the inequality of the statement follows.
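The identity (3.9) and the endpoints (3.11) can be double-checked numerically from the parameterization (3.8). The following sketch is illustrative only: the values of \(x_{0}\), \(y_{0}\), \(\psi_{0}\) are arbitrary samples with \(\pi<\psi_{0}<2\pi\), chosen so that \(\mathcal{P}_{2}^{2}\in(0,1)\).

```python
# Numerical check of (3.9) and (3.11) from the parameterization (3.8).
# The endpoint data (x0, y0, psi0) are hypothetical, chosen so that
# pi < psi0 < 2*pi and the resulting P2 lies in (0, 1).
import math

x0, y0, psi0 = -0.3, 1.2, 4.0
P2 = x0 * math.sin(psi0) / y0 - math.cos(psi0)          # definition (3.3)

def gamma(t):
    # parameterization (3.8) of the geodesic containing pi(h_2)
    return (x0 - y0 * (math.cos(t) + math.cos(psi0)) / math.sin(psi0),
            -y0 * math.sin(t) / math.sin(psi0))

def gamma_dot(t):
    return (y0 * math.sin(t) / math.sin(psi0),
            -y0 * math.cos(t) / math.sin(psi0))

# endpoints (3.11): first coordinates negative and positive when -1 < P2 < 1
print(gamma(math.pi)[0], gamma(0.0)[0])

# gamma crosses the y-axis at the instant t* with cos(t*) = P2
t_star = math.acos(P2)
x_star, y_star = gamma(t_star)
dx, dy = gamma_dot(t_star)
# cosine of the angle with y*d_y in the metric y^{-2}(dx^2 + dy^2)
cos_delta = dy / math.hypot(dx, dy)
assert abs(x_star) < 1e-12          # the crossing point lies on the y-axis
assert abs(cos_delta - P2) < 1e-12  # formula (3.9): cos(delta) = P2
print(cos_delta, P2)
```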
Let us compute the limits. Integrating \(\psi^{\prime}=\theta^{\prime}-\cos(\psi)-2H\) along \(v_{0}\), see formula (2.10), and taking into account that \(\psi(b)-\psi(0)=\psi_{0}-\pi\) and \(\theta_{0}(b)-\theta_{0}(0)=\varphi\), we obtain
\[\psi_{0}=\varphi+\pi-\int_{0}^{b}\cos(\psi(s))ds-2Hb. \tag{3.12}\]
In particular, when \(b\to 0\), we have that \(\psi_{0}\) converges to \(\varphi+\pi\) and consequently \(\mathcal{P}_{2}^{2}(a_{2},\varphi,b)\) converges to \(\cos(\varphi)\). Thus, if \(a_{2}^{n}\) is a sequence converging to \(0\), then \(f_{2}(a_{2}^{n},\varphi)\) also converges to \(0\), so that we obtain the desired limit.
Assume now that \(0<H<\frac{1}{2}\). Let us consider a sequence \(a_{2}^{n}\to a_{\max}(\varphi)\). If we translate the surface \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2}^{n},f_{2}(a_{2}^{n},\varphi))\) by a suitable vertical translation, we have that \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2}^{n},f_{2}(a_{2}^{n},\varphi))\) converges to twice the fundamental piece of the conjugate surface of the embedded \(H\)-catenoids constructed in [3, 27]. Here we are using that \(f_{2}(a_{2}^{n},\varphi)\to+\infty\) by Lemma 3.4. However, as in the setting of the second period problem, we are translating and rotating \(\Sigma_{\varphi}(\infty,a_{2}^{n},f_{2}(a_{2}^{n},\varphi))\) so that \(v_{0}^{n}(0)=(0,1,0)\) and \((v_{0}^{n})^{\prime}(0)=-E_{1}\). We obtain that the limit surface is not twice the fundamental piece of the \(H\)-catenoid but a subset of the vertical \(H\)-cylinder that projects onto a curve of constant curvature \(2H\) orthogonal to the \(y\)-axis at \((0,1)\). The \(H\)-cylinder can be parameterized as \(\alpha\times\mathbb{R}\) with \(\alpha:(-\arccos(2H),\arccos(2H))\to\mathbb{H}^{2}\) given by
\[\alpha(s)=\frac{1}{1-2H}(\sin(s),-2H+\cos(s)).\]
We have that \(x_{0}^{n}\to\frac{-1-2H}{\sqrt{1-4H^{2}}}<0\) and \(y_{0}^{n}\to 0\). Moreover, for large \(n\), the curve \(\gamma^{n}\) does not intersect the \(y\)-axis since we have shown that the limit is a subset of the \(H\)-cylinder \(\alpha\times\mathbb{R}\). That means that the first coordinate of \(\gamma^{n}(0)\) is negative and hence \(\mathcal{P}_{2}^{2}(a_{2}^{n},\varphi,f_{2}(a_{2}^{n},\varphi))>1\) since we have proved that \(\sin(\psi_{0}^{n})<0\), see Equation (3.11).
Assume now that \(H=\frac{1}{2}\) and consider the sequence \(\widetilde{\Sigma}_{\varphi_{n}}(\infty,a_{2},f_{2}(a_{2},\varphi^{n}))\) with \(\varphi^{n}\to\frac{\pi}{2}\). By Lemma 3.4, after a suitable translation, the limit surface is twice the fundamental piece of the helicoid \(\mathcal{H}_{\infty,a_{2}}\). The conjugate limit surface \(\Sigma_{\frac{\pi}{2}}(\infty,a_{2},+\infty)\) is an embedded \(\frac{1}{2}\)-catenoid constructed in [3, 7, 26]. However,
in the setting of the second period problem, that is, \(v_{0}^{n}(0)=(0,1,0)\) and \((v_{0}^{n})^{\prime}(0)=-E_{1}\), the conjugate limit surface is not a \(\frac{1}{2}\)-catenoid but a subset of the horocylinder \(\{y=1\}\).
Since the family \(\widetilde{\Sigma}_{\varphi}(\infty,a_{2},f_{2}(a_{2},\varphi))\) is continuous in the parameters \(a_{2}\) and \(\varphi\), so is the conjugate family. We have that \(v_{0}^{n}\) converges to the line \(\{y=1,z=0\}\). Then we have that \(x_{0}^{n}\to-\infty\) and \(y_{0}^{n}\to 1\). Therefore, as \(\sin(\psi_{0}^{n})<0\), we deduce that \(\mathcal{P}_{2}^{2}(a_{2},\varphi^{n},f_{2}(a_{2},\varphi^{n}))>1\) for \(n\) large enough.
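The curve \(\alpha\) appearing in the proof above can be checked directly: it parameterizes a Euclidean circle through \((0,1)\) centered on the \(y\)-axis below the ideal boundary, and the standard half-plane formula \(|c_{y}|/R\) for the geodesic curvature of such a circle gives the value \(2H\). The sketch below verifies this numerically for an arbitrary sample value of \(H\).

```python
# Sanity check of alpha(s) = (sin s, cos s - 2H)/(1 - 2H) from the proof above:
# it traces the Euclidean circle of center (0, -2H/(1-2H)) and radius 1/(1-2H),
# hence (by the standard half-plane formula |center_y|/radius for circles of
# constant geodesic curvature) it has constant curvature 2H; it passes through
# (0,1) and its ideal endpoints are x = -/+ (1+2H)/sqrt(1-4H^2).
# The value of H below is an arbitrary sample in (0, 1/2).
import math

H = 0.3
cy = -2.0 * H / (1.0 - 2.0 * H)     # Euclidean center (0, cy)
R = 1.0 / (1.0 - 2.0 * H)           # Euclidean radius

def alpha(s):
    return (math.sin(s) / (1.0 - 2.0 * H), (math.cos(s) - 2.0 * H) / (1.0 - 2.0 * H))

for j in range(-9, 10):             # sample parameters inside the domain
    x, y = alpha(0.1 * j)
    assert abs(math.hypot(x, y - cy) - R) < 1e-12   # alpha lies on the circle

x_a, y_a = alpha(0.0)
assert abs(x_a) < 1e-12 and abs(y_a - 1.0) < 1e-12  # alpha(0) = (0, 1)
print(abs(cy) / R, 2.0 * H)                          # geodesic curvature = 2H
s_end = math.acos(2.0 * H)                           # boundary of the domain
print(alpha(s_end)[0], (1.0 + 2.0 * H) / math.sqrt(1.0 - 4.0 * H * H))
```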
**Lemma 3.6**.: _The surface \(\Sigma_{\varphi}(\infty,a_{2},b)\) is a vertical graph. In particular, it is embedded._
Proof.: We continue working in the half-space model and the setting of the second period problem. First observe that \(v_{0}\) and \(v_{2}\) are embedded curves since \(\theta_{i}^{\prime}>0\) and \(\int_{\widetilde{v}_{i}}\theta_{i}^{\prime}\leq\pi\) for \(i=0,2\) (see [4, Proposition 3.10]). In particular each curve of the boundary of \(\Sigma_{\varphi}(\infty,a_{2},b)\) is embedded. We will show that \(\partial\Sigma_{\varphi}(\infty,a_{2},b)\) projects one-to-one to a curve of \(\mathbb{H}^{2}\), so that \(\Sigma_{\varphi}(\infty,a_{2},b)\) is a graph by a standard application of the maximum principle.
Assume that \(0<H<\frac{1}{2}\). Observe that the curves \(\pi(v_{1})\) and \(\pi(h_{1})\) do not intersect each other, since they are consecutive and \(\pi(v_{1})\) is a curve of constant curvature \(2H\) and \(\pi(h_{1})\) is contained in a geodesic orthogonal to \(\pi(v_{1})\). Moreover, by item (1) in Lemma 3.5, the curve \(\pi(v_{0})\) does not intersect \(\pi(h_{1})\) or \(\pi(v_{1})\). If \(\pi(h_{2})\subset\gamma\) does not intersect \(\pi(v_{1})\), we are done (as \(\Sigma_{\varphi}(\infty,a_{2},b)\) is a multigraph, \(\pi(v_{2})\) cannot intersect any component of \(\pi(\partial\Sigma_{\varphi}(\infty,a_{2},b))\) and then we conclude that the multigraph is a graph). Otherwise, if \(\pi(h_{2})\subset\gamma\) intersects \(\pi(v_{1})\), then as \(\pi(v_{2})\) must join these two curves enclosing a multigraph, we would obtain that \(\pi(v_{2})\) intersects itself. However, this is not possible and we conclude that \(\partial\Sigma_{\varphi}(\infty,a_{2},b)\) projects one-to-one.
The proof for \(H=\frac{1}{2}\) is similar, yet \(v_{1}\) does not exist and the curve \(\pi(h_{1})\) is not compact.
_Remark 3_.: For \(H=0\), the embeddedness of the fundamental piece \(\Sigma_{\varphi}(\infty,a_{2},b)\) is guaranteed by the Krust property, see [11]. However, for \(H>0\) there is no Krust property (see [3]) and the embeddedness has to be proven in order to show the global embeddedness of the \((H,k)\)-noids with genus one for some values of \(H\), see Proposition 3.8.
**Theorem 3.7**.: _For each \(k\geq 3\) and \(\frac{\pi}{k}<\phi\leq\frac{\pi}{2}\), there exists a properly Alexandrov embedded \(H\)-surface with \(0\leq H\leq\frac{1}{2}\) in \(\mathbb{H}^{2}\times\mathbb{R}\) with genus \(1\) and \(k\) ends. These \(H\)-surfaces have dihedral symmetry with respect to \(k\) vertical planes and they are symmetric with respect to a horizontal plane. Moreover, if \(0<H<\frac{1}{2}\) each of their ends is embedded and asymptotic to (and contained in the concave side of) a vertical \(H\)-cylinder._
Proof.: The case \(H=0\) is treated in [2]. Assume first that \(0<H<\frac{1}{2}\) and take \(k\geq 3\) and \(\phi\in(\frac{\pi}{k},\frac{\pi}{2})\). We choose \(\varphi=\phi\) and, by Lemma 3.5, we have that \(\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))\) tends to \(\cos(\varphi)\) when \(a_{2}\to 0\) and becomes greater than \(1\) when \(a_{2}\to a_{\text{max}}(\varphi)\). By the continuity of \(\mathcal{P}_{2}^{2}\), there exists \(a_{\varphi}\) such that \(\mathcal{P}_{2}^{2}(a_{\varphi},\varphi,f_{2}(a_{\varphi},\varphi))=\cos(\frac {\pi}{k})\). Therefore, the surface \(\Sigma_{\varphi}:=\Sigma_{\varphi}(\infty,a_{\varphi},f_{2}(a_{\varphi},\varphi))\) solves the two period problems, so after successive reflections over the vertical planes and the horizontal plane of symmetry, we obtain the desired complete \(H\)-surface with genus \(1\) and \(k\) ends asymptotic to vertical \(H\)-cylinders from the concave side. We shall see now that the ends are embedded. First, observe that, by the maximum principle with respect to horizontal planes arriving from above, \(\Sigma_{\varphi}\) is contained in the slab \(\mathbb{H}^{2}\times(-\infty,0]\) (we are assuming after a vertical translation that \(v_{0}\) and \(v_{2}\) lies in \(\mathbb{H}^{2}\times\{0\}\)). Moreover, if we reflect \(\widetilde{\Sigma}_{\varphi}\) about the horizontal geodesic \(\widetilde{h}_{1}\), the total variation of the angle of rotation \(\theta_{0}\) along the complete vertical line \(\widetilde{v}_{0}^{*}\) of the reflected surface \(\widetilde{\Sigma}_{\varphi}^{*}\) is \(2\varphi<\pi\), whence the curve \(v_{0}^{*}\) (the extension of the curve \(v_{0}\) after reflection) is embedded by [4, Proposition 3.10]. Therefore, the conjugate surface of the reflected surface \(\widetilde{\Sigma}_{\varphi}^{*}\) is a vertical graph contained in the half-space \(\mathbb{H}^{2}\times(-\infty,0)\). Then, after reflecting over the horizontal plane \(\mathbb{H}^{2}\times\{0\}\), we obtain an embedded surface that contains an end and this proves that the ends are embedded.
Assume now that \(H=\frac{1}{2}\) and take \(k\geq 3\) and the parameter \(\phi\in(\frac{\pi}{k},\frac{\pi}{2})\). Again we will use a continuity argument to prove that there exist parameters \((a(\phi),\varphi(\phi))\in\Omega_{2}\) that solve both period problems. We define the foliation of \(\Omega_{2}\) by the family of curves \(\{\alpha_{\phi}:[0,1]\to\Omega_{2}:\phi\in(0,\frac{\pi}{2})\}\) where
\[\alpha_{\phi}(t)=(1-t)(0,\phi)+t(\tan(\tfrac{\pi}{2}-\phi),\tfrac{\pi}{2}). \tag{3.13}\]
By Lemma 3.5, we get
\[\lim_{t\to 0}\mathcal{P}_{2}^{2}(\alpha_{\phi}(t),f_{2}(\alpha_{\phi}(t)))= \mathcal{P}_{2}^{2}(0,\phi,f_{2}(0,\phi))=\cos(\phi).\]
For \(t_{\epsilon}=1-\epsilon\) with \(\epsilon>0\) small enough we have that the second coordinate of \(\alpha_{\phi}(t_{\epsilon})\) is \(\frac{\pi}{2}-\epsilon(\frac{\pi}{2}-\phi)\). Hence, by Lemma 3.5, we get \(\mathcal{P}_{2}^{2}(\alpha_{\phi}(t_{\epsilon}),f_{2}(\alpha_{\phi}(t_{\epsilon})))>1\). Since \(\cos(\phi)<\cos(\frac{\pi}{k})\), by continuity, there exists an instant \(t_{*}\in(0,1)\) such that \(\mathcal{P}_{2}^{2}(\alpha_{\phi}(t_{*}),f_{2}(\alpha_{\phi}(t_{*})))=\cos(\frac{\pi}{k})\). We have proved that, for each \(\phi\), there exists at least one pair \((a(\phi),\varphi(\phi))=\alpha_{\phi}(t_{*})\) such that \(\Sigma_{\phi}=\Sigma_{\varphi(\phi)}(\infty,a(\phi),f_{2}(a(\phi),\varphi(\phi)))\) solves both period problems. Then, after successive reflections over the vertical planes and the horizontal plane of symmetry, we obtain a complete \(\frac{1}{2}\)-surface with genus \(1\) and \(k\) ends.
_Remark 4_.: We also obtain \(H\)-surfaces with genus \(1\) and \(k\geq 5\) ends when the first period function vanishes and the second period function is equal to \(\cos(\frac{m\pi}{k})\) with \(m<\frac{k}{2}\) and \(\gcd(m,k)=1\). If \(m>1\), the \(H\)-surfaces constructed close after \(m\) laps around the origin and they are never embedded, see Figure 7 (right).
**Proposition 3.8**.: _The \((H,k)\)-noids with genus one given by Theorem 3.7 are embedded for \(\frac{1}{2}\cos(\frac{\pi}{k})<H\leq\frac{1}{2}\). In particular, for \(\frac{1}{4}<H\leq\frac{1}{2}\), all \((H,3)\)-noids with genus one are embedded._
Proof.: Observe that the embeddedness of each \((H,k)\)-noid with genus one can be guaranteed if the extended surface of \(\Sigma_{\varphi}\) by the reflection about \(h_{2}\) is embedded, or equivalently if the extension of the curve \(v_{2}\) is embedded after the reflection. As \(\Sigma_{\varphi}\) is a vertical graph, this is equivalent to the fact that \(v_{2}\) intersects the geodesic \(\gamma\) only once.
Assume first that \(\frac{1}{2}\cos(\frac{\pi}{k})<H<\frac{1}{2}\). Consider \(p=\pi(h_{3})=(p_{1},0)\), which coincides with the ideal endpoint of \(v_{1}\) and \(v_{2}\); in particular, the first coordinate of \(p\) verifies \(p_{1}<\frac{-1-2H}{\sqrt{1-4H^{2}}}\). Moreover, we have that \(v_{2}\) intersects \(\gamma\) just once if and only if \(p_{1}\) is smaller than the first coordinate of \(\gamma(\pi)\), which will be denoted by \(\gamma(\pi)_{x}\). Using the inequality in Lemma 3.5 item (4) and Equation (3.11), if \(\mathcal{P}_{2}^{2}=\cos(\frac{\pi}{k})\) we have \(\gamma(\pi)_{x}>-\frac{1+\cos(\frac{\pi}{k})}{\sin(\frac{\pi}{k})}\). Then, for \(H>\frac{1}{2}\cos(\frac{\pi}{k})\) we have that
\[p_{1}<\frac{-1-2H}{\sqrt{1-4H^{2}}}<-\frac{1+\cos(\frac{\pi}{k})}{\sin(\frac{ \pi}{k})}<\gamma(\pi)_{x},\]
which proves the case \(H<\frac{1}{2}\).
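For completeness, the middle inequality in the chain above follows from an elementary computation: writing
\[\frac{1+2H}{\sqrt{1-4H^{2}}}=\sqrt{\frac{1+2H}{1-2H}},\]
the right-hand side is strictly increasing in \(H\in(0,\frac{1}{2})\) and, at \(H=\frac{1}{2}\cos(\frac{\pi}{k})\), equals
\[\sqrt{\frac{1+\cos(\frac{\pi}{k})}{1-\cos(\frac{\pi}{k})}}=\frac{1+\cos(\frac{\pi}{k})}{\sin(\frac{\pi}{k})}.\]
Hence \(H>\frac{1}{2}\cos(\frac{\pi}{k})\) gives \(\frac{-1-2H}{\sqrt{1-4H^{2}}}<-\frac{1+\cos(\frac{\pi}{k})}{\sin(\frac{\pi}{k})}\).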
Assume now that \(H=\frac{1}{2}\). We will prove that the first coordinate of the curve \(v_{2}\) goes to \(-\infty\). This means that \(v_{2}\) can intersect \(\gamma\) only once, so that the surface will be embedded. Consider \(\widetilde{\Sigma}_{\phi}^{n}:=\widetilde{\Sigma}_{\varphi(\phi)}^{n}(n,a(\phi),f_{2}(a(\phi),\phi))\), the sequence of minimal graphs over \(\widetilde{\Delta}_{n}\) converging to \(\widetilde{\Sigma}_{\phi}\), and their respective conjugate surfaces \(\Sigma_{\phi}^{n}\) converging to \(\Sigma_{\phi}\). On the one hand, let \(\widetilde{v}_{1}^{n}\subset\partial\widetilde{\Sigma}_{\phi}^{n}\) and \(\widetilde{v}_{2}^{n}\subset\partial\widetilde{\Sigma}_{\phi}^{n}\) be the vertical geodesics projecting onto \(p_{1}^{n}\) and \(p_{2}^{n}\) respectively, and let \(v_{1}^{n}\) and \(v_{2}^{n}\) be their conjugate curves contained in horizontal planes. Let \(k_{g}^{n}=1-(\theta_{1}^{n})^{\prime}\) be the curvature of \(v_{1}^{n}\) with respect to the normal that points to the interior of the domain \(\Delta^{n}\) where \(\Sigma_{\phi}^{n}\) is projecting. We know that \(k_{g}^{n}\) approaches \(1\) as \(n\to\infty\). On the other hand, the second coordinate of \(h_{1}^{n}\) diverges since we have shown that \(\pi(h_{1})\) is not compact. Then we have that the two coordinates of \(v_{1}^{n}\) diverge. In particular, the first coordinate of \(v_{2}^{n}\) also diverges to \(-\infty\) and the embeddedness follows.
_Remark 5_.: We can also guarantee the embeddedness of the \((H,k)\)-noids for \(0<H<\frac{1}{2}\) if the value \(a_{\varphi}\) verifies \(a_{\varphi}>a_{\mathrm{emb}}\), see also [2]. This condition means that the angle at the point \(p_{2}\) is less than or equal to \(\frac{\pi}{2}\). Let \(\theta_{2}^{*}\) be the angle of rotation of \(\widetilde{v}_{2}^{*}\), the extension of \(\widetilde{v}_{2}\). Since \(\int_{\widetilde{v}_{2}^{*}}\theta_{2}{}^{*}<\pi\), the curve \(v_{2}{}^{*}\), the extension of \(v_{2}\) by reflection, is embedded by [4, Proposition 3.10], and therefore the \((H,k)\)-noid is also embedded.
### \(H\)-surfaces with infinitely many ends
We now analyze the case where the first period problem is solved but \(\mathcal{P}_{2}^{2}\geq 1\).
* When \(\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))=1\), \(h_{1}\) and \(h_{2}\) lie in vertical asymptotic planes, so after successive reflections over the vertical planes and the horizontal plane, we obtain a periodic surface invariant under a discrete group of parabolic translations that fix the common vertical ideal line of the vertical planes of symmetry. This gives us a \(1\)-parameter family of parabolic \((H,\infty)\)-noids with one limit end; by arguments similar to those of Proposition 3.8, one can prove that they are embedded for \(H=\frac{1}{2}\).
* When \(\mathcal{P}_{2}^{2}(a_{2},\varphi,f_{2}(a_{2},\varphi))>1\), \(h_{1}\) and \(h_{2}\) lie in two disjoint vertical planes, so after successive reflections over the vertical planes and the horizontal plane, we obtain a periodic surface invariant under a discrete group of hyperbolic translations generated by successive reflections over the vertical planes. This gives us a \(2\)-parameter family of hyperbolic \((H,\infty)\)-noids with two limit ends. Arguments similar to those in the proof of Theorem 3.7 show that the ends of these surfaces are embedded. Moreover, in this case, for \(0<H<\frac{1}{2}\) we have more freedom and we can choose \(a_{2}>a_{\mathrm{emb}}(\varphi)\), whence the reflected surface of \(\Sigma_{\varphi}(\infty,a_{2},f_{2}(a_{2},\varphi))\) about the vertical plane containing \(h_{2}\) is embedded, and consequently the complete surface is embedded. For \(H=\frac{1}{2}\) they are always embedded by arguments similar to those of Proposition 3.8.
We state the following result:
**Theorem 3.9**.: _There exist properly embedded \(H\)-surfaces in \(\mathbb{H}^{2}\times\mathbb{R}\) with genus zero, infinitely many ends and two limit ends for \(0\leq H\leq\frac{1}{2}\)._
_Remark 6_.: Properly embedded surfaces with genus zero and a finite number of ends were constructed in [3]. Observe that in the case of \(H=\frac{1}{2}\), the parabolic \((H,\infty)\)-noids are properly embedded surfaces with infinitely many ends and one limit end.
### Solving the second period problem for the \((H,k)\)-nodoids \(\Sigma_{\varphi}(a_{1},\infty,b)\)
**Lemma 3.10**.: _Set \(a_{2}=\infty\) and \((a_{1},\varphi)\in\Omega_{1}\). Under the assumptions of the second period problem, the following statements hold true:_
1. _If_ \(|\mathcal{P}_{2}^{1}(a_{1},\varphi,b)|<1\)_, then_ \(\gamma\) _intersects the_ \(y\)_-axis with an angle_ \(\delta\) _with_ \[\cos(\delta)=\begin{cases}\mathcal{P}_{2}^{1}(a_{1},\varphi,b),\text{ if }\sin(\psi_{0})<0,\\ -\mathcal{P}_{2}^{1}(a_{1},\varphi,b),\text{if }\sin(\psi_{0})>0.\end{cases}\]
2. _Assume_ \(0<b<b_{T}^{1}(\infty,\varphi)\)_. We have that_ * \(x(s)<0\) _and_ \(\psi(s)\in(\pi,2\pi)\)_;_ * _if_ \(\gamma\) _intersects the_ \(y\)_-axis with an angle_ \(\delta\)_, then_ \(\varphi>\delta+2Hb\) _and_ \(\mathcal{P}_{2}^{1}(a_{1},\varphi,b)>0\)_._
3. _If_ \(b<0\) _then_ \(\psi(s)>\pi\)_._
_Moreover:_
* _If_ \(0<H<\frac{1}{2}\)_,_ \(\lim_{a_{1}\to 0}\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))=\cos(\varphi)\) _and_ \(|\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))|>1\) _for_ \(a_{1}\) _close to_ \(a_{\max}(\varphi)\)_. In fact, there exist_ \(0<\varphi_{-}<\varphi_{+}<\frac{\pi}{2}\) _such that_ \(\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))>1\) _for all_ \(\varphi<\varphi_{-}\) _and_ \(a_{1}\) _close to_ \(a_{\max}(\varphi)\)_, and_ \(\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))<-1\) _for all_ \(\varphi>\varphi_{+}\) _and_ \(a_{1}\) _close to_ \(a_{\max}(\varphi)\)_._
* _If_ \(H=\frac{1}{2}\)_,_ \(\lim_{a_{1}\to 0}\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))=\cos(\varphi)\) _and_ \(\mathcal{P}_{2}^{1}(a_{1},\varphi,f_{1}(a_{1},\varphi))<-1\) _for_ \(\varphi\) _close enough to_ \(\frac{\pi}{2}\)_._
Proof.: In what follows, we consider the surface \(\widetilde{\Sigma}_{\varphi}^{-}(a_{1},\infty,b)\) and its conjugate \(H\)-surface \(\Sigma_{\varphi}^{-}(a_{1},\infty,b)\), as in the setting of the second period problem.
1. Observe that, if \(\sin(\psi_{0})=0\), then \(|\mathcal{P}_{2}^{1}(a_{1},\varphi,b)|=1\). Assume first that \(\sin(\psi_{0})<0\) and proceed as in Lemma 3.5. We parameterize the curve \(\gamma\) as in Equation (3.8). The same computation of item (3) in Lemma 3.5 tells us that \(\gamma\) intersects the \(y\)-axis with an angle \(\delta\) given by Equation (3.9).
Otherwise, if \(\sin(\psi_{0})>0\), we parameterize the curve \(\gamma\) as \[\gamma:(0,\pi)\to\mathbb{H}^{2},\quad\gamma(t)=\left(x_{0}-y_{0}\frac{\cos(\pi-t )+\cos(\psi_{0})}{\sin(\psi_{0})},y_{0}\frac{\sin(\pi-t)}{\sin(\psi_{0})}\right).\] (3.14) We get \[\gamma(0)=\left(y_{0}\frac{1+\mathcal{P}^{1}_{2}(a_{1},\varphi,b)}{\sin(\psi_ {0})},0\right)\ \ \text{and}\ \ \gamma(\pi)=\left(y_{0}\frac{-1+\mathcal{P}^{1}_{2}(a_{1},\varphi,b)}{\sin( \psi_{0})},0\right),\] (3.15) whose first coordinates are positive and negative, respectively. That means that \(\gamma\) intersects the \(y\)-axis. The angle of intersection \(\delta\) at the instant \(s^{*}\) where \(\gamma\) intersects the \(y\)-axis satisfies \[\cos(\delta)=\frac{\langle\gamma^{\prime}(s_{*}),y\partial_{y}\rangle}{| \gamma^{\prime}(s_{*})|}=-\frac{x_{0}\sin(\psi_{0})}{y_{0}}+\cos(\psi_{0})=- \mathcal{P}^{1}_{2}(a_{1},\varphi,b).\] (3.16)
2. As the angle of rotation along \(\widetilde{v_{0}}^{-}\) turns in a positive sense (\(\theta_{0}^{\prime}>0\)), we can apply item (1), (2) and (3) of Lemma 3.5 obtaining similar results. Assume now by contradiction that \(\mathcal{P}^{1}_{2}(a_{1},\varphi,b)<0\). By Equation (3.3) we get that \[x_{0}=y_{0}\frac{\mathcal{P}^{1}_{2}(a_{1},\varphi,b)+\cos(\psi_{0})}{\sin( \psi_{0})}.\] If \(\mathcal{P}^{1}_{2}\leq-1\), we obtain that \(x_{0}\geq 0\) which is a contradiction. If \(-1<\mathcal{P}^{1}_{2}\leq 0\), by item (1) we have that \(\delta=\arccos(\mathcal{P}^{1}_{2})\geq\frac{\pi}{2}>\varphi\) which contradicts \(\varphi>\delta+2H|b|\).
3. As \(\theta_{0}^{\prime}<0\), we know by Lemma 2.4 that the normal along \(v_{0}^{-}\) points to the exterior of \(\Delta\) and \(k_{g}>2H\) with respect to this normal. We have that \(v_{0}^{-}\) stays locally in the mean convex side of the tangent curve of constant curvature \(2H\) at \(v_{0}^{-}(0)\). If \(\psi(s)>\pi\) were not true for all \(s\in(0,|b|)\), let us consider the first instant \(s_{0}>0\) at which \(\psi(s_{0})=\pi\). At this instant, we have that \(v_{0}^{-}\) contains points locally around \(v_{0}^{-}(s_{0})\) in the non-mean convex side of the tangent curve of constant curvature \(2H\) at \(v_{0}^{-}(s_{0})\), which contradicts \(k_{g}>2H\).
Let us now analyze the limits. Assume that \(b<0\). Integrating along \(v_{0}^{-}\) the identity \(\psi^{\prime}=-\theta^{\prime}-\cos(\psi)+2H\) (see Formula 2.10 and Remark 1) and taking into account that here \(\psi(|b|)-\psi(0)=\psi_{0}-\pi\) and \(\theta_{0}(|b|)-\theta_{0}(0)=-\varphi\) since \(\theta_{0}^{\prime}<0\), we obtain
\[\psi_{0}=\varphi+\pi-\int_{0}^{|b|}\cos(\psi(s))ds+2H|b|. \tag{3.17}\]
In particular, when \(b\to 0^{-}\), we have that \(\psi_{0}\) converges to \(\varphi+\pi\) and consequently \(\mathcal{P}^{1}_{2}(a_{1},\varphi,b)\) converges to \(\cos(\varphi)\) as \(b\to 0\). If we take a sequence \(a_{1}^{n}\) converging to \(0\), then Lemma 3.4 implies that \(f_{1}(a_{1}^{n},\varphi)\to 0^{-}\) and we get that \(\lim_{a_{1}\to 0}\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))=\cos(\varphi)\).
Assume first that \(H<\frac{1}{2}\). Let us consider a sequence \(a_{1}^{n}\to a_{\max}(\varphi)\). By Lemma 3.4 we get that \(f_{1}(a_{1}^{n},\varphi)\to-\infty\), so \(\widetilde{\Sigma}_{\varphi}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\) converges to twice the fundamental piece of the conjugate surface of the \(H\)-catenodoids constructed in [3], and therefore \(\Sigma_{\varphi}^{-}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\) converges to twice the fundamental piece of an \(H\)-catenodoid. Nevertheless, as in the setting of the second period problem, we are translating and rotating \(\Sigma_{\varphi}^{-}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\) in order to have \(v_{0}^{n-}(0)=(0,1,0)\) and \((v_{0}^{n-})^{\prime}(0)=-E_{1}\). We obtain that the limit surface is not twice the fundamental piece of the \(H\)-catenodoid but a subset of the \(H\)-cylinder that projects onto a curve of constant curvature \(2H\) orthogonal to the \(y\)-axis at \((0,1)\). The \(H\)-cylinder can be parameterized as \(\alpha\times\mathbb{R}\) with \(\alpha:(-\arccos(-2H),\arccos(-2H))\to\mathbb{H}^{2}\) given by
\[\alpha(s)=\frac{1}{1+2H}(\sin(s),2H+\cos(s)).\]
We deduce that \(x_{0}^{n}\to\frac{-1+2H}{\sqrt{1-4H^{2}}}<0\) and \(y_{0}^{n}\to 0\).
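Indeed, from this parameterization one computes the ideal endpoints directly:
\[\alpha(\pm\arccos(-2H))=\Big(\pm\tfrac{\sqrt{1-4H^{2}}}{1+2H},0\Big)=\Big(\pm\tfrac{1-2H}{\sqrt{1-4H^{2}}},0\Big),\qquad\alpha(0)=(0,1),\]
with \(\alpha^{\prime}(0)\) horizontal, so \(\alpha\) meets the \(y\)-axis orthogonally at \((0,1)\) and its left ideal endpoint is \(\big(\tfrac{-1+2H}{\sqrt{1-4H^{2}}},0\big)\), in agreement with the limit of \(x_{0}^{n}\) stated above.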
To understand the limit we distinguish if the limit (after translation) \(H\)-catenodoid is or is not embedded. Assume first that the limit \(H\)-catenodoid is not embedded. We translate and rotate the surface \(\Sigma_{\varphi}^{-}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\) such that (in the half-space model) \((h_{2}^{-})^{n}\) is contained in the vertical plane \(\{x=0\}\) and \(v_{0}^{n-}\) and \(v_{2}^{n-}\) are contained in the horizontal plane \(\mathbb{H}^{2}\times\{0\}\). In the limit we have that the projection of \((v_{1}^{-})^{\infty}\in\mathbb{H}^{2}\times\{+\infty\}\) intersects twice the geodesic \(\{x=0\}\subset\mathbb{H}^{2}\) and the same happens for the curves \((v_{0}^{-})^{\infty}\) and \((v_{2}^{-})^{\infty}\), see Figure 8 ((A) - up left). By the continuity of the conjugation (see [4, Proposition 3.3]), the same happens for the curves \(v_{i}^{n-}\) with \(n\) large enough, see Figure 8 ((A) - up right). Then we rotate the surface \(\Sigma_{\varphi}^{-}(a_{1}^{n},\infty,f_{1}(a_{1}^{n},\varphi))\) until \((h_{1}^{-})^{n}\) lies in the vertical plane \(\{x=0\}\) and \(v_{0}^{n-}(0)=(0,1,0)\)
and \((v_{0}^{n-})^{\prime}(0)=-E_{1}\) (the setting of the second period problem). We have that the projections of \(v_{0}^{n-}\), \(v_{1}^{n-}\) and \(v_{2}^{n-}\) intersect twice the vertical plane containing the curve \(h_{2}^{n-}\), see Figure 8 ((A) - bottom right). In particular, we have that \(\psi_{0}^{n}\in(2\pi,3\pi)\), that is, \(\sin(\psi_{0}^{n})>0\). Moreover, the curve \(\gamma^{n}\) intersects twice the curve \(v_{1}^{n-}\) and in particular \(\gamma^{n}\) does not intersect the \(y\)-axis, then by Equation (3.15) we deduce that \(\mathcal{P}_{2}^{1}(a_{1}^{n},\varphi,f_{1}(a_{1}^{n},\varphi))<-1\) for \(n\) large enough since \(\sin(\psi_{0}^{n})>0\).
If the limit \(H\)-catenodoid is embedded (not in the boundary case where \((v_{0}^{-})^{\infty}\) and its reflected copy intersect each other in a point of the asymptotic boundary), the argument is analogous but, in this case, the curves \(\pi(v_{0}^{n-})\), \(\pi(v_{1}^{n-})\) and \(\pi(v_{2}^{n-})\) intersect only once \(\gamma^{n}\) (the projection of the vertical plane containing the curve \(h_{2}^{n-}\)) for large \(n\) obtaining that \(\psi_{0}^{n}\in(\pi,2\pi)\). A similar analysis shows that in this case \(\mathcal{P}_{2}^{1}(a_{1}^{n},\varphi,f_{1}(a_{1}^{n},\varphi))>1\) for large \(n\).
On the other hand, if \(\varphi\to\frac{\pi}{2}\), we have that \(a_{\max}(\varphi)\to 0\), and then [3, Proposition 4.8] ensures that the limit \(H\)-catenodoid is not embedded. If \(\varphi\to 0\), we have that \(a_{\max}(\varphi)\to+\infty\), and then [3, Proposition 4.8] ensures that the limit \(H\)-catenodoid is embedded, which completes the case \(H<\frac{1}{2}\).
Assume now that \(H=\frac{1}{2}\). Let us consider a sequence \(\varphi^{n}\to\frac{\pi}{2}\). By Lemma 3.4 we have that \(f_{1}(a_{1},\varphi^{n})\to-\infty\), therefore \(\widetilde{\Sigma}_{\varphi^{n}}(a_{1},\infty,f_{1}(a_{1},\varphi^{n}))\) converges to twice the fundamental piece of the helicoid \(\mathcal{H}_{a_{1},\infty}\) of Section 2.2. The conjugate surface \(\Sigma_{\frac{\pi}{2}}^{-}(a_{1},\infty,-\infty)\) is twice the fundamental piece of a non-embedded \(\frac{1}{2}\)-catenodoid, see [3, Section 4.3]. Nevertheless, as in the setting of the second period problem, we are translating and rotating \(\Sigma_{\varphi^{n}}^{-}(a_{1},\infty,f(a_{1},\varphi^{n}))\) in order to have \(v_{0}^{n-}(0)=(0,1,0)\) and \((v_{0}^{n-})^{\prime}(0)=-E_{1}\). We obtain that the limit surface is not twice the fundamental piece of the \(H\)-catenoid but a subset of the horocylinder that projects onto a horocycle orthogonal to the \(y\)-axis at \((0,1)\). The horocylinder can be parameterized as \(\alpha\times\mathbb{R}\) with \(\alpha:(-\pi,\pi)\to\mathbb{H}^{2}\) given by
\[\alpha(s)=\tfrac{1}{2}(\sin(s),1+\cos(s)).\]
We deduce that \(x_{0}^{n}\to 0\) and \(y_{0}^{n}\to 0\).
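Again, the parameterization makes the deduction transparent:
\[\alpha(0)=(0,1),\qquad\lim_{s\to\pm\pi}\alpha(s)=(0,0),\]
so the horocycle passes through \((0,1)\) with horizontal tangent and both of its ideal endpoints coincide at \((0,0)\), consistently with \(x_{0}^{n}\to 0\) and \(y_{0}^{n}\to 0\).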
We translate and rotate the surface \(\Sigma_{\varphi^{n}}^{-}(a_{1},\infty,f_{1}(a_{1},\varphi^{n}))\) such that, in the half-space model, \((h_{2}^{-})^{n}\) is contained in the vertical plane \(\{x=0\}\) and \(v_{0}^{n-}\) and \(v_{2}^{n-}\) are contained in the horizontal plane \(\mathbb{H}^{2}\times\{0\}\). In the limit, we have that the projection of \((v_{0}^{-})^{\infty}\) and \((v_{2}^{-})^{\infty}\) intersect twice the geodesic \(\{x=0\}\subset\mathbb{H}^{2}\), see Figure 8 ((B) - up left). By the continuity of conjugation (see [4, Proposition 3.3]), for large \(n\), the curves \(\pi(v_{0}^{n-})\) and \(\pi(v_{2}^{n-})\) also intersect twice the \(y\)-axis, see Figure 8. Moreover, the curve \(\pi((h_{1}^{-})^{n})\)
is a noncompact curve contained in a geodesic that cannot intersect the \(y\)-axis. Then we rotate the surface \(\Sigma^{-}_{\varphi^{n}}(a_{1},\infty,f_{1}(a_{1},\varphi^{n}))\) until \((h_{1}^{-})^{n}\) is contained in the vertical plane \(\{x=0\}\) (the setting of the second period problem) and we have that the projections of \(v_{0}^{n-}\) and \(v_{2}^{n-}\) intersect twice the vertical plane containing the curve \((h_{2}^{-})^{n}\), see Figure 8 ((B) - up right). In particular, we have that \(\psi_{0}^{n}\in(2\pi,3\pi)\), that is, \(\sin(\psi_{0}^{n})>0\). Moreover, the curve \(\gamma^{n}\) does not intersect the \(y\)-axis. We deduce from Equation (3.15) that \(\mathcal{P}^{1}_{2}(a_{1},\varphi^{n},f_{1}(a_{1},\varphi^{n}))<-1\) for \(n\) large enough since \(\sin(\psi_{0}^{n})>0\).
**Theorem 3.11**.: _For each \(k\geq 2\), there exist properly Alexandrov-embedded \(H\)-surfaces with \(0<H\leq\frac{1}{2}\) in \(\mathbb{H}^{2}\times\mathbb{R}\) with genus \(1\) and \(k\) ends. These \(H\)-surfaces have dihedral symmetry with respect to \(k\) vertical planes and they are symmetric with respect to a horizontal plane. Moreover, for \(0<H<\frac{1}{2}\), each of their ends is asymptotic to (and contained in the convex part of) a vertical \(H\)-cylinder._
Proof.: Assume that \(0<H<\frac{1}{2}\) and fix \(0<\varphi<\frac{\pi}{2}\). By Lemma 3.10, \(\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))\) tends to \(\cos(\varphi)\) when \(a_{1}\to 0\). If \(\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))\) becomes greater than \(1\) as \(a_{1}\to a_{\max}(\varphi)\), then by the continuity of \(\mathcal{P}^{1}_{2}\) we have that there exists \(a_{\varphi}\) such that \(\mathcal{P}^{1}_{2}(a_{\varphi},\varphi,f_{1}(a_{\varphi},\varphi))=\cos(\frac{m\pi}{k})\) for all \(k\geq 3\) and \(m<k\) with \(\gcd(m,k)=1\) satisfying \(\cos(\varphi)<\cos(\frac{m\pi}{k})\). On the other hand, if \(\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))\) gets smaller than \(-1\) as \(a_{1}\to a_{\max}(\varphi)\), then there exists \(a_{\varphi}\) such that \(\mathcal{P}^{1}_{2}(a_{\varphi},\varphi,f_{1}(a_{\varphi},\varphi))=\cos(\frac{m\pi}{k})\) for all \(k\geq 2\) and \(m<k\) with \(\gcd(m,k)=1\) satisfying \(\cos(\frac{m\pi}{k})<\cos(\varphi)\). We know that if \(\varphi\) is close to \(0\) and \(a_{1}\) is close to \(a_{\max}(\varphi)\), then \(\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))>1\), and for values of \(\varphi\) close to \(\frac{\pi}{2}\) and \(a_{1}\) close to \(a_{\max}(\varphi)\) we have that \(\mathcal{P}^{1}_{2}(a_{1},\varphi,f_{1}(a_{1},\varphi))<-1\). Then, by varying \(\varphi\in(0,\frac{\pi}{2})\) we find values of \(\varphi\) and \(a_{\varphi}\) such that \(\mathcal{P}^{1}_{2}(a_{\varphi},\varphi,f_{1}(a_{\varphi},\varphi))=\cos(\frac{m\pi}{k})\) for all \(m<k\) and \(\gcd(m,k)=1\).
Therefore, \(\Sigma^{-}_{\varphi}:=\Sigma^{-}_{\varphi}(a_{\varphi},\infty,f_{1}(a_{ \varphi},\varphi))\) solves the two period problems, and then after successive reflections over the vertical planes and the horizontal plane of symmetry, we obtain a complete \(H\)-surface with genus \(1\) and \(k\) ends asymptotic to vertical cylinders from the convex side.
Now assume that \(H=\frac{1}{2}\) and consider the foliation \(\{\alpha_{\phi}:[0,1]\to\Omega\}_{\phi\in(0,\frac{\pi}{2})}\) defined in Equation (3.13). Set \(k\geq 2\) and \(m<k\) with \(\gcd(m,k)=1\) and choose \(\phi\) such that \(\cos(\frac{m\pi}{k})<\cos(\phi)\). By Lemma 3.10, we have that \(\mathcal{P}^{1}_{2}(\alpha_{\phi}(0),f_{1}(\alpha_{\phi}(0)))=\cos(\phi)\) and \(\mathcal{P}^{1}_{2}(\alpha_{\phi}(t),f_{1}(\alpha_{\phi}(t)))<-1\) for \(t\) close enough to \(1\). We deduce that there exist \(a(\phi)\) and \(\varphi(\phi)\) such that \(\mathcal{P}^{1}_{2}(a(\phi),\varphi(\phi),f_{1}(a(\phi),\varphi(\phi)))=\cos(\frac{m\pi}{k})\). Then the surface
\[\Sigma^{-}_{\phi}:=\Sigma^{-}_{\varphi(\phi)}(a(\phi),\infty,f_{1}(a(\phi),\varphi(\phi)))\]
solves both period problems, and we obtain a complete \(H\)-surface with genus one and \(k\) ends after successive reflections over the vertical planes and the horizontal plane of symmetry.
**Proposition 3.12**.: _For \(H=\frac{1}{2}\), the \((H,k)\)-nodoids with genus one and \(k\geq 2\) ends are never embedded._
Proof.: We will prove that the ideal extreme of \(\pi(v_{2}^{-})\) is \((0,0)\). Since the curve \(\gamma\) intersects the \(y\)-axis when \(|\mathcal{P}^{1}_{2}|<1\), this means that \(\pi(v_{0}^{-})\) or \(\pi(v_{2}^{-})\) must cross \(\gamma\), and then \(\Sigma^{-}_{\phi}\) is not embedded after the reflection over the curve \(h_{2}^{-}\).
We use similar ideas to those in the proof of the embeddedness of the \(\frac{1}{2}\)-noids with genus one, see Proposition 3.8. Let us consider \(\widetilde{\Sigma}^{n}_{\phi}:=\widetilde{\Sigma}_{\varphi(\phi)}(a(\phi),n,f_{1}(a(\phi),\phi))\), the sequence of minimal graphs over \(\widetilde{\Delta}(n,a_{1},\varphi(\phi))\) converging to \(\widetilde{\Sigma}_{\phi}\), and their respective conjugate surfaces (after the reflection over \(h_{1}\)) \((\Sigma^{-}_{\phi})^{n}\) converging to \(\Sigma^{-}_{\phi}\). On the one hand, let \(\widetilde{v}_{1}^{n}\subset\partial\widetilde{\Sigma}^{n}_{\phi}\) and \(\widetilde{v}_{2}^{n}\subset\partial\widetilde{\Sigma}^{n}_{\phi}\) be the vertical geodesics projecting onto \(p_{1}^{n}\) and \(p_{2}^{n}\) respectively, and let \(v_{1}^{n-}\) and \(v_{2}^{n-}\) be their conjugate curves contained in horizontal planes. Let \(k_{g}^{n}=1-(\theta_{1}^{n})^{\prime}\) be the curvature of \(v_{1}^{n-}\) with respect to the normal that points to the exterior of the domain \(\Delta^{n}\) where \((\Sigma^{-}_{\phi})^{n}\) is projecting. We know that \(k_{g}^{n}\) approaches \(1\) as \(n\to\infty\). On the other hand, the second coordinate of \((h_{1}^{-})^{n}\) diverges since we have shown that \(\pi(h_{1})\) is not compact. We have that \(\pi(v_{1}^{n-})\) approaches half of a horocycle of arbitrarily large Euclidean radius with ideal extreme at \((0,0)\) that contains the endpoint of \(\pi((h_{1}^{-})^{n})\) on the line \(\{x=0\}\). That proves that the ideal extreme of \(\pi(v_{2}^{n-})\) converges to \((0,0)\), and in particular \(\pi(v_{1}^{n-})\) approaches the asymptotic boundary \(\{y=0\}\cup\{+\infty\}\) as \(n\to\infty\).
_Remark 7_.: For \(H<\frac{1}{2}\) we can prove that there are examples of genus \(1\) when the second period function is negative. However, it seems complicated to decide whether the signs of \(\sin(\psi_{0})\) and \(x_{0}\) are positive or negative in each case. This produces different kinds of \(H\)-surfaces depending on these signs, as we sketch out in Figure 9.
_Remark 8_.: In the case \(k=2\), that is, when the second period function vanishes, we have two possibilities depending on the sign of \(\sin(\psi_{0})\), see Figure 10. We expect that these examples with \(2\) ends and genus \(1\) are never embedded. At least, they should not be embedded for \(H\) near \(0\), since there are no such examples for \(H=0\) by the uniqueness of the horizontal catenoid proved in [10]. In that case, for \(H\) close to \(0\), our examples with \(2\) ends should be close to a vertical plane.
**Acknowledgments.** The authors would like to express their gratitude to Jose Miguel Manzano for his valuable comments during the preparation of this paper. This research is supported by MCIN/AEI project PID-2019-111531GA-I00. The first author is also partially supported by the FEDER/ANDALUCIA P18-FR-4049 and by the MCIN/AEI project PID-2020-117868GB-I00. The second author is also supported by a PhD grant funded by University of Jaen and by a FEDER-UJA grant (Ref. 1380860).
| ```
We construct two different families of Alexandrov-immersed surfaces in H^2×ℝ with constant mean curvature 0<H≤1/2, genus 1, and k≥2 ends.
For 0<H<1/2, the ends are asymptotic to vertical H-cylinders. This shows that no Schoen-type theorem holds for Alexandrov-immersed surfaces in
H^2×ℝ with constant mean curvature 0<H≤1/2, genus 1, and k≥2 ends. These surfaces are produced by means of the conjugate construction.
``` |
2309.09088 | Enhancing GAN-Based Vocoders with Contrastive Learning Under
Data-limited Condition | Vocoder models have recently achieved substantial progress in generating
authentic audio comparable to human quality while significantly reducing memory
requirement and inference time. However, these data-hungry generative models
require large-scale audio data for learning good representations. In this
paper, we apply contrastive learning methods in training the vocoder to improve
the perceptual quality of the vocoder without modifying its architecture or
adding more data. We design an auxiliary task with mel-spectrogram contrastive
learning to enhance the utterance-level quality of the vocoder model under
data-limited conditions. We also extend the task to include waveforms to
improve the multi-modality comprehension of the model and address the
discriminator overfitting problem. We optimize the additional task
simultaneously with GAN training objectives. Our results show that the tasks
improve model performance substantially in data-limited settings. | Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland | 2023-09-16T20:04:16 | http://arxiv.org/abs/2309.09088v2 | # Enhancing Gan-Based Vocoders with Contrastive Learning Under Data-Limited Condition
###### Abstract
Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirement and inference time. However, these data-hungry generative models require large-scale audio data for learning good representations. In this paper, we apply contrastive learning methods in training the vocoder to improve the perceptual quality of the vocoder without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the multi-modality comprehension of the model and address the discriminator overfitting problem. We optimize the additional task simultaneously with GAN training objectives. Our results show that the tasks improve model performance substantially in data-limited settings. Our analysis based on the results indicates that the proposed design successfully alleviates discriminator overfitting and produces audio of higher fidelity.
Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland
University of California, Berkeley
Footnote †: This paper is based on Haoming’s thesis [1] at University of California, Berkeley.
**Index Terms**: GAN, self-supervised learning, vocoder
## 1 Introduction
Generative Adversarial Networks (GANs) [2] have been widely used in vocoders and have achieved the state of the art in the domain [3, 4, 5]. However, training GAN vocoders still faces two challenges: data insufficiency and discriminator overfitting.
In the realm of single-speaker speech synthesis, the limited size of available datasets poses a significant challenge. To enhance the performance of vocoders operating under such constraints, we propose the use of unsupervised learning techniques to extract additional self-supervised signals for training. Self-supervised learning (SSL) methods have demonstrated efficacy in a diverse array of speech domains, including representation learning [6, 7, 8, 9, 10], synthesis [11, 12, 13, 14], and multi-modality [15, 16]. Drawing on the exceptional transfer learning capabilities of SSL, we seek to harness this power in the realm of Vocoder modeling, focusing specifically on the application of contrastive learning. Although contrastive learning has been explored in the context of speech recognition [6], we are unaware of any previous efforts to apply this approach to Vocoder modeling. In this work, our aim is to leverage contrastive learning as an auxiliary task to enhance the vocoding performance of GAN generators under data-limited conditions.
The second challenge, discriminator overfitting, is also shown to be crucial, especially on small datasets [17, 18, 19], and the convergence of GANs also critically depends on the quality of discriminators [20]. Contrastive learning on the discriminator has been shown to alleviate this problem in image generation [21], and the method, in general, is also shown to increase model performance and robustness on vision and language tasks [22, 23, 24, 25]. However, in speech synthesis, a naive approach of mel-spectrogram contrastive learning will only involve the generator, which encodes mel-spectrograms, but not the discriminator, which encodes the waveform. Therefore, we propose to extend the training to the discriminator by using a multi-modal contrastive task between mel-spectrograms and waveforms.
Our contributions can be summarized as the following.
1. We propose a contrastive learning task with masked mel-spectrograms to improve the performance on limited data.
2. We design a novel contrastive learning task of matching mel-spectrogram to waveforms to regularize the discriminator and improve the perceptual quality of the generator.
3. We implement a framework for integrating contrastive learning into the GAN training pipeline.
4. We provide experimental results and in-depth analysis of the methods' effectiveness compared to the baseline.
## 2 Methods
In this section, we first introduce the auxiliary contrastive task that we have designed for the GAN vocoder model. Subsequently, we explicate the details of how we modified the task to train both the generator and the discriminator of the
vocoder model. Finally, we illustrate our proposed training framework, which synergizes the contrastive task with GAN objectives. It is worth noting that we have utilized the same model architecture as HiFi-GAN [4]. However, it is pertinent to mention that our method can be applied to other GAN frameworks for vocoders as well.
### Mel-spectrogram Contrastive Learning
In our GAN model, the generator takes a mel-spectrogram as input and outputs a raw waveform through a stack of convolutional layers. We use a learnable feed-forward layer to project the features of the convolutional layers onto a latent space \(R^{D}\), where elements of similar semantics are close to each other through contrastive learning. For each anchor in a batch of \(N\) samples, we apply masking on randomly selected intervals in time and frequency to create a positive sample, while all other \((N-1)\) input samples and \((N-1)\) masked samples are used as negative samples. Together, the method results in \(1\) positive pair and \(2(N-1)\) negative pairs in the batch. We then adapt the InfoNCE loss [26] used in CLIP [27] for our loss function as follows:
\[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{v}_{k})}{\sum_{j=1;i\neq j}^{2N}\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{v}_{j})}\right) \tag{1}\]
where \(\mathbf{v}_{k}\in R^{D}\) is the masked sample from \(\mathbf{v}_{i}\in R^{D}\) and \(\tau\) is a temperature parameter. This method is shown in Fig. 1.
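As an illustration, a minimal PyTorch-style sketch of Eq. (1) is given below. It assumes the projected embeddings have already been computed; the function and variable names are ours and are not taken from the paper's implementation, and the raw dot products follow the equation as written (whether to L2-normalize the embeddings first is a separate design choice).

```python
import torch
import torch.nn.functional as F

def masked_mel_contrastive_loss(z_orig, z_masked, tau=1.0):
    """Sketch of Eq. (1): each of the N original embeddings (rows of z_orig, shape (N, D))
    is an anchor, its masked copy (same row of z_masked) is the positive, and the
    remaining 2(N - 1) embeddings in the batch act as negatives."""
    n = z_orig.size(0)
    candidates = torch.cat([z_orig, z_masked], dim=0)   # (2N, D): originals then masked copies
    logits = tau * z_orig @ candidates.t()              # (N, 2N) scaled dot products
    # Drop each anchor's similarity with itself (column i), so the softmax denominator
    # runs over the other 2N - 1 embeddings, matching the sum over j != i in Eq. (1).
    idx = torch.arange(n, device=logits.device)
    self_mask = torch.zeros_like(logits, dtype=torch.bool)
    self_mask[idx, idx] = True
    logits = logits.masked_fill(self_mask, float("-inf"))
    # The positive for anchor i is its own masked copy, stored at column i + N.
    return F.cross_entropy(logits, idx + n)
```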
### Mel-spectrogram Waveform Contrastive Learning
In addition to training solely the generator, we propose a novel task that involves contrastive spectrogram-waveform matching. This task serves to train both the generator and the discriminators, promoting rich semantic representation and preventing overfitting of the discriminators to the real or fake classification. The method is illustrated in Fig. 2. For a batch of pairs of mel-spectrograms and waveforms, we assign the labels of the true pairs to be positive and those of the other pairs to be negative, resulting in \(N\) positive pairs and \(N(N-1)\) negative pairs in a batch of \(N\) samples. We use the backbone of the generator to encode the mel-spectrogram and the backbone of the discriminator to encode the waveform. Similar to the method in section 2.1, we use two separate feed-forward layers to project each encoded feature to the same latent dimension \(R^{D}\). Then, we perform the modified loss function
\[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{i})}{\sum_{j=1;i\neq j}^{N}\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{j})}\right) \tag{2}\]
where \(\mathbf{w}_{i}\in R^{D}\) is the latent embedding of the waveform corresponding to the \(i\)th mel-spectrogram, \(\mathbf{v}_{i}\in R^{D}\) is the latent embedding of the \(i\)th mel-spectrogram, and \(\tau\) is a temperature parameter. HiFi-GAN contains multiple discriminators, so we calculate a contrastive loss between the mel-spectrogram embedding and each of the waveform embeddings and sum them up. For simplicity, we refer to them as one discriminator in this paper unless otherwise mentioned.
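A corresponding sketch of the mel-spectrogram and waveform matching loss is shown below. It follows the usual CLIP-style convention in which the positive pair also appears in the softmax denominator; the names are again illustrative, and with several discriminators one would evaluate this loss once per waveform embedding and sum the results, as described above.

```python
import torch
import torch.nn.functional as F

def mel_wave_contrastive_loss(z_mel, z_wav, tau=1.0):
    """Sketch of the loss in Eq. (2): z_mel and z_wav are (N, D) projections of the
    mel-spectrograms (generator backbone) and of their paired waveforms (discriminator
    backbone). True pairs lie on the diagonal of the similarity matrix; the other
    N(N - 1) combinations act as negatives."""
    logits = tau * z_mel @ z_wav.t()                        # (N, N); entry (i, j) = tau * v_i . w_j
    targets = torch.arange(z_mel.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```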
### Multi-tasking Framework
To integrate contrastive learning with GAN tasks, we adopt a multi-tasking framework that makes auxiliary tasks a joint optimization objective with original learning goals [28]. As illustrated in Fig. 3, we create additional heads for the training
Figure 1: **Illustration of Mel-spectrogram Contrastive Learning.** The Mel Encoder is the backbone of the generator. This method only trains the generator in a GAN framework.
Figure 2: **Illustration of Mel-spectrogram & Waveform Contrastive Learning.** The Mel Encoder is the backbone of the generator, and the Wave Encoder is the backbone of the discriminator. Therefore, this method trains both the generator and discriminator.
generator and discriminator with auxiliary tasks. The total loss for training the vocoder model thus becomes:
\[\mathcal{L}_{G}=\mathcal{L}_{adv}+\lambda_{fm}\mathcal{L}_{fm}+\lambda_{mel}\mathcal{L}_{mel}+\lambda_{cl}\mathcal{L}_{cl} \tag{3}\]
\[\mathcal{L}_{D}=\mathcal{L}_{adv}+\mathcal{I}_{disc}\lambda_{cl}\mathcal{L}_{cl} \tag{4}\]
where \(\mathcal{L}_{G}\) is the total loss for the generator and \(\mathcal{L}_{D}\) is the total loss for the discriminator. \(\mathcal{L}_{adv}\) is the adversarial loss, \(\mathcal{L}_{fm}\) is the feature matching loss, and \(\mathcal{L}_{mel}\) is the mel-spectrogram reconstruction loss in the original HiFi-GAN training pipeline. \(\mathcal{L}_{cl}\) can be either of the contrastive losses described in section 2.1 or 2.2, and \(\mathcal{I}_{disc}\) is an indicator of whether the latter is used. Each loss is weighted with a \(\lambda\) coefficient which can be set as a hyperparameter. We use a \(\lambda_{fm}\) of 2, a \(\lambda_{mel}\) of 45 from the HiFi-GAN setting [4], and a \(\lambda_{cl}\) of 1.
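Assembling the weighted sums of Eqs. (3) and (4) is straightforward; the small helper functions below (illustrative names, with the \(\lambda\) values quoted above as defaults) show one way to do it.

```python
def generator_total_loss(l_adv, l_fm, l_mel, l_cl,
                         lambda_fm=2.0, lambda_mel=45.0, lambda_cl=1.0):
    # Eq. (3): adversarial + feature-matching + mel-reconstruction + contrastive terms.
    return l_adv + lambda_fm * l_fm + lambda_mel * l_mel + lambda_cl * l_cl

def discriminator_total_loss(l_adv, l_cl, use_mel_wave_cl=False, lambda_cl=1.0):
    # Eq. (4): the contrastive term enters only when the discriminator itself is trained
    # with the mel-spectrogram/waveform task (the indicator I_disc in the text).
    return l_adv + (lambda_cl * l_cl if use_mel_wave_cl else 0.0)
```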
## 3 Experiments
### Experimental Setting
In this section, we describe the details of our experimental settings including the dataset, model choice, hyperparameters and evaluation metrics.
#### 3.1.1 Dataset
In order to have a fair comparison with other vocoder models, we train the model on the LJSpeech dataset [29], which is also used in other vocoder works like HiFi-GAN [4]. LJSpeech is a public single-speaker dataset with 13100 short English audio clips whose durations span from 1 second to 10 seconds. We use the default data split with 12950 training samples and 150 validation samples. We use the same preprocessing configurations as HiFi-GAN, including 80 bands of mel-spectrograms as input and FFT size of 1024, window size of 1024, and hop size of 256 for conversion from waveform to mel-spectrograms [4].
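For reference, one way to realize these mel-spectrogram settings with torchaudio is sketched below; the sample rate is not stated in this excerpt, so the LJSpeech default of 22050 Hz is assumed, and this is not necessarily the exact extraction pipeline used in the experiments.

```python
import torchaudio

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,   # assumed (LJSpeech default); not stated in this section
    n_fft=1024,
    win_length=1024,
    hop_length=256,
    n_mels=80,
)
```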
#### 3.1.2 Implementation details
For experimental comparison on audio quality, we choose the most powerful HiFi-GAN V1 and the most lightweight HiFi-GAN V3 as the baseline methods, and we use the same model architecture as the backbone to apply the contrastive tasks described in section 2.1 and 2.2. Under the multi-tasking framework, we train HiFi-GAN along with the contrastive learning methods with a batch size of 16, an AdamW optimizer, and a learning rate of 0.0002. For the following experiments on the full dataset, all models are trained for 400k steps (about 96 hours) on one Nvidia TITAN RTX GPU. The experiments on 20% of the dataset train for 300k steps (about 72 hours) on the same device, and those on 4% of the dataset train for 200k steps. The model inference time on GPU is about 70ms for V1 models and 32ms for V3 models.
#### 3.1.3 Evaluation metrics
To objectively evaluate our models compared to the baseline, we measure the mean absolute error (MAE) and mel-cepstral distortion (MCD) [30] on mel-spectrograms. On both metrics, lower scores indicate closer alignment with the ground truth. We also include a 5-scale mean opinion score (MOS) on audio quality as a subjective evaluation performed on 50 samples excluded from the training set.
| Model | MAE | MCD | MOS (CI) |
| --- | --- | --- | --- |
| Ground Truth | - | - | 4.32 (±0.05) |
| HiFi-GAN V1 | **0.111** | **4.203** | **4.21** (±0.05) |
| + Mel CL | 0.114 | 4.289 | 4.18 (±0.06) |
| + Mel-Wave CL | 0.113 | 4.228 | 4.20 (±0.05) |
| HiFi-GAN V3 | **0.203** | 7.786 | 4.10 (±0.05) |
| + Mel CL | 0.204 | 7.766 | **4.13** (±0.07) |
| + Mel-Wave CL | **0.203** | **7.723** | 4.09 (±0.06) |

Table 1: Objective and subjective evaluation results for models with mel-spectrogram contrastive loss (Mel CL) and mel-spectrogram waveform contrastive loss (Mel-Wave CL). Models are trained on the full training set. CI is the 95% confidence interval of the MOS score.
Figure 3: **Illustration of our multi-tasking frameworks.** GAN-based Vocoder models [3, 4] follow an adversarial network (**top**) consisting of a generator that generates raw waveforms from mel-spectrograms and a discriminator that aims to distinguish real from generated waveform samples. To incorporate the auxiliary contrastive learning task, we propose a multi-tasking (**bottom**) framework, which we set the contrastive task as additional learning objectives along with the original GAN optimization objectives. This framework applies to both contrastive learning methods described in section 2.1 and 2.2.
### Results
We present the results of models trained on full data with the multi-tasking framework in Table 1. Below, Mel CL refers to the mel-spectrogram contrastive learning in section 2.1, and Mel-Wave CL refers to the mel-spectrogram waveform contrastive learning in section 2.2. For V1 models, the baseline performs slightly better than the proposed methods by margins of 0.002 on MAE, 0.025 on MCD, and 0.01 on MOS. For V3 models, on the objective tests, we observe that the model trained with mel-spectrogram contrastive loss has comparable performance with the baseline, while the one trained with mel-spectrogram waveform contrastive loss achieves the best scores on both metrics. The results show that our proposed methods have at least comparable performance to the baseline HiFi-GAN when training on the full dataset. On the subjective tests, the V3 model with Mel CL achieves the highest MOS score, 0.03 above the V3 baseline. The model with Mel-Wave CL has a MOS score similar to the baseline on the full dataset. Overall, when trained on the full dataset, the proposed methods have limited gains on top of the baseline.
To investigate how each model performs under data limitation, we train the three models on 20% of the dataset and evaluate them with the same validation set. We present the results in Table 2. With less data, the baseline HiFi-GAN V3 suffers a significant performance degradation across all metrics, including 0.371 on MCD and 0.22 on MOS. Meanwhile, the V3 model trained with Mel CL experiences an increase of 0.194 on MCD and a drop of 0.18 on MOS. The V3 model trained with Mel-Wave CL has an increase of 0.251 on MCD and a drop of only 0.05 on MOS. It suggests Mel-Wave CL is most resistant to data insufficiency. The two proposed methods have comparable scores on the objective evaluation, but the model with Mel-Wave CL obtains a significantly higher score on the subjective test, 0.16 higher than the V3 baseline. The findings align with our hypothesized alleviation of discriminator overfitting by Mel-Wave CL, which is a more severe problem on the small training dataset. Both of the proposed methods perform substantially better than the baseline by 0.07 and 0.16 respectively.
A similar trend exists in the HiFi-GAN V1 experiments, where Mel-Wave CL achieves the best scores and the least performance drop on all metrics. One slightly surprising finding is that the larger model V1 often experiences a smaller performance drop compared to the smaller model V3 when trained on 20% data. Typically, a larger model is expected to be more prone to overfitting when trained on less data, which should lead to a larger performance drop. In this specific case, however, HiFi-GAN V1 has a larger generator but the same discriminator as HiFi-GAN V3 [4], which is our suspected reason for the finding. Overall, the results show the benefits of additional supervision signals from contrastive learning in data-limited situations and the superior performance of Mel-Wave CL on a small dataset.
## 4 Conclusion
This paper describes our proposed contrastive learning framework to improve GAN vocoders. Our results show the efficacy of using contrastive learning as an auxiliary task that facilitates vocoder training without adding more data or modifying the model architecture. We demonstrate that the proposed framework is especially beneficial when training on limited data, as it extracts additional supervision signals and reduces discriminator overfitting.
For future work, we plan to repeat the experiments on different model architectures and datasets to test our method's generalizability. In particular, we want to test its extension to multi-speaker datasets, another domain where data insufficiency is critical. We will also explore other metrics to evaluate the discriminator overfitting problem more holistically.
| Vocoder models have recently made great progress in generating high-quality audio comparable to human quality while significantly reducing memory requirements and inference time. However, these data-dependent generative models require large-scale audio data to learn good representations. In this paper, we show that applying contrastive learning methods to vocoder training improves the perceptual quality of the model without modifying its architecture or adding more data. We use mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model and improve its performance under data-limited conditions. By extending the task to include waveforms, we improve the model's multi-modality comprehension and address the discriminator overfitting problem. The additional tasks are optimized simultaneously with the GAN training objectives. Our results |
2307.16404 | Nonvolatile Magneto-Thermal Switching in MgB2 | Ongoing research explores thermal switching materials to control heat flow.
Specifically, there has been interest in magneto-thermal switching (MTS)
materials based on superconductors, which only exhibited switching behavior
when a magnetic field was applied. However, a recent report highlighted
nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux
trapping. In this study, we focused on flux trapping in a type-II
superconductor MgB2. Magnetization and thermal conductivity measurements under
magnetic fields were conducted on polycrystalline MgB2. We confirmed that
magnetic flux was indeed trapped in MgB2 even after demagnetization.
Additionally, we observed nonvolatile MTS in MgB2 as well as Sn-Pb solders.
These results suggest that the nonvolatile MTS may be a widespread
characteristic of superconducting materials with flux trapping. | Hiroto Arima, Yoshikazu Mizuguchi | 2023-07-31T04:59:19 | http://arxiv.org/abs/2307.16404v1 | # Nonvolatile Magneto-Thermal Switching in MgB\({}_{2}\)
###### Abstract
Ongoing research explores thermal switching materials to control heat flow. Specifically, there has been interest in magneto-thermal switching (MTS) materials based on superconductors, which only exhibited switching behavior when a magnetic field was applied. However, a recent report highlighted nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux trapping. In this study, we focused on flux trapping in a type-II superconductor MgB\({}_{2}\). Magnetization and thermal conductivity measurements under magnetic fields were conducted on polycrystalline MgB\({}_{2}\). We confirmed that magnetic flux was indeed trapped in MgB\({}_{2}\) even after demagnetization. Additionally, we observed nonvolatile MTS in MgB\({}_{2}\) as well as Sn-Pb solders. These results suggest that the nonvolatile MTS may be a widespread characteristic of superconducting materials with flux trapping.
The recent advancements in electronic device technology have spurred research into thermal switching materials, which enable control of heat flow through external parameters[1; 2]. Recent progress has been made in the development of thermal switching materials, where the control of thermal conductivity (\(\kappa\)) is achieved through the application of electric[3] and magnetic fields[4; 5]. Among these materials, superconductors have received particular attention in magneto-thermal switching (MTS) research [6; 7]. Here, we introduce an index to assess the effectiveness of MTS known as the MTS ratio (MTSR). The MTSR is calculated as the ratio of the change in \(\kappa\) between the presence and absence of a magnetic field. The MTSR is expressed as [\(\kappa(H)\) - \(\kappa(0\) Oe)] / \(\kappa(0\) Oe). It is widely recognized that, in the normal state, heat is carried by charge carriers, whereas in the superconducting state, heat transport by Cooper pairs is negligible. Consequently, the phase transition from the superconducting state to the normal state results in an increase in \(\kappa\). Recent studies reported MTSR of 650 % for Nb[6] and over 1000 % for high purity 5N-Pb[7]. However, previously reported MTS using superconductors had a limitation: \(\kappa(H)\) returned to its initial value \(\kappa(0\) Oe) when the magnetic field was reduced to zero, indicating that MTS was effective only in the presence of a magnetic field. In the most recent discovery reported in arXiv: 2307.05957 (preprint)[8], a nonvolatile MTS, which retains the altered \(\kappa(H)\) even when the magnetic field is completely removed, has been identified. Surprisingly, this nonvolatile MTS material was discovered in commercially available Sn-Pb solders. The nonvolatile MTSR is defined as [\(\kappa\) (0 Oe, demagnetized) - \(\kappa(0\) Oe, initial)]/\(\kappa\) (0 Oe, initial), and it has been determined that the nonvolatile MTSR of flux-core-free Sn45-Pb55 solder was 150 %. The origin of nonvolatile MTS in Sn-Pb solders is attributed to the presence of magnetic flux trapped in the solder even after the applied magnetic field is removed, resulting in a partial loss of superconducting bulkiness at \(H=0\) Oe. While magnetic flux trapping in Sn-Pb solders is relatively rare due to both Sn and Pb being type-I superconductors, magnetic flux trapping after demagnetization is commonly observed in type-II superconductor samples.
In this study, our primary focus is on exploring the occurrence of nonvolatile MTS in type-II superconductors, with particular emphasis on MgB\({}_{2}\), which has been studied for its flux trapping properties[9; 10]. MgB\({}_{2}\) was discovered in 2001 and stands out among intermetallic superconductors for having the highest superconducting transition temperature \(T_{\rm SC}\sim 39\) K under ambient pressure [11]. This compound exhibits a unique characteristic as a multi-gap superconductor, with multiple conduction bands and independent superconducting gaps present on the Fermi surface[12; 13]. Shortly after its discovery, it was observed that grain boundaries in MgB\({}_{2}\) could
serve as effective pinning centers, contributing to high critical current density (\(J_{\rm c}\)) in superconducting materials[14; 15; 16; 17]. Consequently, extensive research has been conducted to investigate the relationship between magnetic flux trapping at grain boundaries and \(J_{\rm c}\).
Until now, the association between magnetic flux trapping and nonvolatile MTS has solely been reported in Sn-Pb solders. To gain a deeper understanding of this phenomenon, it is essential to explore other materials. MgB\({}_{2}\) presents an appealing platform for investigating nonvolatile MTS due to the existing body of research on flux trapping effects at grain boundaries[9]. While previous studies have conducted thermal conductivity measurements under magnetic field on MgB\({}_{2}\)[18; 19], there has been no specific focus on nonvolatile MTS. In this study, magnetization measurements and thermal conductivity measurements under magnetic fields were conducted for commercial MgB\({}_{2}\). Notably, nonvolatile MTS was also observed in MgB\({}_{2}\).
Polycrystalline MgB\({}_{2}\) used in this experiment was a commercially available powder sample (99%, KOJUNDO). Before the measurements, the powder sample underwent a high-pressure sintering process. In this experiment, high-pressure sintering was performed at relatively low temperatures to suppress grain growth. The specific conditions for this high-pressure sintering entailed a pressure of 3 GPa and a temperature of 400 \({}^{\circ}\)C, sustained around 30 minutes. The crystal structure was examined through powder X-ray diffraction employing the Cu-K\(\alpha\) radiation using the \(\theta\)-2\(\theta\) method (Miniflex-600 RIGAKU). The Rietveld refinement of the XRD data was performed using the RIETAN-FP package[20]. The scanning electron microscope (SEM, TM3030, Hitachi High-Tech) was used for microstructure observation. The thermal conductivity was measured using a Physical Property Measurement System (PPMS, Quantum Design) equipped with a thermal transport option (TTO). The measurement employed a four-probe steady-state method, incorporating a heater, two thermometers, and a base-temperature terminal. For the thermal conductivity measurements of MgB\({}_{2}\), a cylindrical sample with a diameter of 4.61 mm and a height of 4.10 mm was employed. The magnetization measurements were carried out using a superconducting quantum interference device (SQUID) magnetometry technique, employing the Magnetic Property Measurement System (MPMS3, Quantum Design) in a VSM (vibrating sample magnetometry) mode. In this experiment, thermal conductivity measurements were conducted on a high-pressure sintered MgB\({}_{2}\) sample within a week. Subsequently, the sample was crushed, and further analyses including XRD and magnetization measurements, and SEM imaging were performed. All the experiments were carried out using the same batch of sample.
Figure 1 illustrates the XRD patterns obtained from the high-pressure sintered MgB\({}_{2}\) sample.
In the high-pressure sintered sample, the presence of MgB\({}_{4}\) and MgO were detected as an impurity, alongside the main MgB\({}_{2}\) peaks. The reliability factor, denoted as \(R_{\rm wp}\), was determined to be \(R_{\rm wp}=3.7\) %, and the goodness-of-fit indicator, represented by \(S\), was calculated as \(S=1.8\). The results of Rietveld refinement indicated that the sample composition consisted of approximately 90 % MgB\({}_{2}\), 5 % MgB\({}_{4}\), and 5% MgO. The as-purchased MgB\({}_{2}\) powder contained a similar amount of MgB\({}_{4}\) and MgO. The discrepancy with the nominal purity of 99% MgB\({}_{2}\) is likely a result of certain compounds not being accounted for in the chemical analysis. Furthermore, the XRD profile exhibited broadening, implying lattice strain induced by the high-pressure sintering process.
Figure 2 shows the SEM image of the high-pressure sintered MgB\({}_{2}\). Numerous granular grains were observed in the structure of the high-pressure sintered MgB\({}_{2}\), with the majority of the grain sizes measuring less than approximately 5 \(\mu\)m.
Figure 3 (a) illustrates the temperature dependence of the magnetization \(4\pi M\) measured at 10 Oe under both zero-field-cooling (ZFC) and field-cooling (FC) conditions. The magnetization measurement under ZFC demonstrates a large shielding signal below \(T_{\rm SC}\sim 39\) K. The difference between ZFC and FC measurements is a characteristic behavior commonly observed in type-II superconductors. The temperature dependence of \(4\pi M\) exhibited broadening, which has also been reported in previous studies on high-pressure sintered MgB\({}_{2}\)[17]. The exact cause of this broadening is not yet clear, but the inhomogeneity of the crystals likely plays a role, as suggested by the broad profile observed in the XRD measurement. Figure 3 (b) depicts the temperature dependence of \(4\pi M\) measured at 10 Oe after FC at three different fields : 1000 Oe, 10000 Oe, and 70000 Oe. In all cases, \(4\pi M\) exhibited ferromagnetic-like behavior below \(T_{\rm SC}\), similar to the findings of previously reported hydrogen-rich superconductors[21] and Sn-Pb solders[8], implying the presence of trapped magnetic flux at grain boundaries of MgB\({}_{2}\). The value of magnetization at 1.8 K increased as the field increased from 1000 Oe to 10000 Oe, but it did not change further with the application of a higher magnetic field. This suggests that the amount of trapped magnetic flux increases with the applied magnetic field, but there is a threshold where the trapped magnetic flux saturates. To further discuss, we show the \(4\pi M\)-\(H\) curves at 2.5 K and 4.0 K in Figs. 3(c) and 3(e), respectively. These curves display the distinct shape commonly observed in type-II superconductors, which signifies the presence of flux trapping in the material. As depicted in Figures 3(d) and 3(f), the inner magnetic flux density (\(B\)) given by \(B=H+4\pi M\) near 0 Oe is displayed at 2.5 K and 4.0 K. The results at 2.5 K and 4.0 K showed similarities: immediately after the zero-field-cooling, the initial magnetic flux density of MgB\({}_{2}\) was \(B=0\). However, upon applying a magnetic field to
MgB\({}_{2}\), \(B\) did not return to its initial value when the applied field reached \(H\) = 0, due to the magnetic flux trapping. The magnetic flux density trapped at \(H\) = 0 Oe was 500 G for both temperatures.
Figure 4 (a) depicts the temperature dependence of \(\kappa\) in both a zero magnetic field and a magnetic field of 10000 Oe. In the absence of a magnetic field, \(\kappa\) decreased as the temperature decreased. The observed variation in the slope of \(\kappa\) at approximately 10 K was consistent with previous measurements on polycrystalline MgB\({}_{2}\)[22]. Furthermore, \(\kappa\) at 50 K in this experiment was approximately 3.5 W/Km, which aligns with the order of magnitude reported in previous studies, where values ranged from 5 W/Km[23] to 9 W/Km[22]. It is noted that thermal conductivity is a sensitive indicator of grain boundaries, and therefore, the discrepancy with previous studies is attributed to the sample dependence. When a magnetic field of 10000 Oe was applied, a similar trend in \(\kappa\) was observed, but the decrease in \(\kappa\) was suppressed. This can be attributed to the suppression of the superconducting state in MgB\({}_{2}\) under the magnetic field. Figures 4(b) and 4(c) illustrate the magnetic field dependence of \(\kappa\) at 2.5 K and 4 K, respectively. When the MgB\({}_{2}\) was zero-field-cooled to 2.5 K, the initial \(\kappa\) in the absence of magnetic field was 6.9 mW/Km. When a magnetic field was applied, \(\kappa\) increased and reached a value of 14.0 mW/Km at 10000 Oe. As the magnetic field gradually decreased from 10000 Oe, \(\kappa\) showed a decrease. However, the value at 0 Oe deviated from the initial value, indicating nonvolatile MTS. Upon further reduction of the magnetic field, a minimum value of \(\kappa\) was observed, followed by an increase in \(\kappa\). Similar trends were observed when the magnetic field was increased from -10000 Oe. As mentioned earlier, the presence of approximately 500 G of trapped magnetic flux in MgB\({}_{2}\) after demagnetization partially suppressed the superconducting state and prevented \(\kappa\) from returning to its initial value. The nonvolatile MTSR observed in MgB\({}_{2}\) at 2.5 K in this experiment was 18 %, which is smaller than that of flux-core-free Sn45-Pb55 solder[8]. Furthermore, nonvolatile MTS was also observed at 4.0 K, although the nonvolatile MTSR decreased compared to that at 2.5 K, reaching 15 %.
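To make the definitions concrete, the short calculation below applies them to the 2.5 K values quoted above. The variable names are ours, and \(\kappa\)(0 Oe, demagnetized) is inferred from the reported 18 % ratio rather than read directly from the data.

```python
# kappa values for MgB2 at 2.5 K quoted above (in mW/Km)
kappa_initial = 6.9        # zero-field-cooled, H = 0 Oe
kappa_at_field = 14.0      # H = 10000 Oe

# MTSR = [kappa(H) - kappa(0 Oe)] / kappa(0 Oe)
mtsr = (kappa_at_field - kappa_initial) / kappa_initial
print(f"MTSR at 10000 Oe: {mtsr:.0%}")                      # roughly 103 %

# nonvolatile MTSR = [kappa(0 Oe, demagnetized) - kappa(0 Oe, initial)] / kappa(0 Oe, initial)
# with the reported value of 18 %, kappa after demagnetization is roughly
kappa_demagnetized = kappa_initial * (1 + 0.18)
print(f"kappa(0 Oe, demagnetized): {kappa_demagnetized:.1f} mW/Km")   # about 8.1 mW/Km
```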
The primary discovery of this study is the confirmation of nonvolatile MTS occurring in the magnetic flux trapped at the grain boundaries of the type-II superconductor MgB\({}_{2}\). This finding diverges from prior research, which predominantly focused on composites such as Sn-Pb solders. Notably, the phenomenon of flux trapping at grain boundaries has been observed not only in MgB\({}_{2}\) but also in other type-II superconductors, including cuprate superconductors and iron-based superconductors [24]. This suggests that the trapping of flux at grain boundaries is a widespread occurrence in various types of type-II superconducting materials. In this study, the maximum value of the nonvolatile MTSR achieved for MgB\({}_{2}\) remained relatively small at 18 % at 2.5 K. To
further enhance the nonvolatile MTSR, potential methods include controlling the grain boundary size to increase the trapped magnetic flux and regulating the thermal conductivity in the normal conducting region. However, further systematic investigations are required in this regard. Recent advancements in machine learning have contributed to the elucidation of heat conduction mechanisms in grain boundaries and nanopolycrystals [25]. Given that nonvolatile MTS is a relatively new phenomenon, it is crucial to not only investigate the thermal conductivity under magnetic field in various materials but also consider theoretical approaches that utilize machine learning to gain a deeper understanding of nonvolatile MTS.
The motivation for this study was derived from the discovery of nonvolatile MTS induced by magnetic flux trapping in Sn-Pb solders. Drawing inspiration from this phenomenon, our research focused on investigating the magnetic field dependence of thermal conductivity in type-II superconductor MgB\({}_{2}\), a material renowned for its ability to trap magnetic flux at grain boundaries. Through our experiments, we successfully observed nonvolatile MTS in MgB\({}_{2}\) and identified magnetic flux trapping as the underlying mechanism. Moving forward, it is imperative to extend this research to encompass other type-II superconductors with effective pinning centers. Such endeavors will contribute to a deeper understanding of nonvolatile MTS at a fundamental level and facilitate improvements in both the nonvolatile MTSR and the operational temperature range, thereby paving the way for potential engineering applications.
## Acknowledgment
We thank O. Miura and K. Uchida for their support with the experiments and fruitful discussions on the results. This work was partly supported by JST-ERATO (JPMJER2201), TMU Research Project for Emergent Future Society, and Tokyo Government-Advanced Research (H31-1).
| Ongoing research is exploring thermal switching materials to control heat flow. Specifically, there has been interest in magneto-thermal switching (MTS) materials based on superconductors, which exhibit switching behavior only when a magnetic field is applied. However, a recent report highlighted nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux trapping. In this study, we focused on flux trapping in the type-II superconductor MgB2. Magnetization and thermal conductivity measurements under magnetic fields were conducted on polycrystalline MgB2. We confirmed that magnetic flux was indeed trapped in MgB2 even after demagnetization. Additionally, we observed nonvolatile MTS in MgB2 as well as in Sn-Pb solders. These results suggest that nonvolatile MTS may be a widespread characteristic of superconducting materials with flux trapping. |
2309.08382 | Double Domain Guided Real-Time Low-Light Image Enhancement for
Ultra-High-Definition Transportation Surveillance | Real-time transportation surveillance is an essential part of the intelligent
transportation system (ITS). However, images captured under low-light
conditions often suffer from poor visibility with various types of degradation, such as
noise interference and vague edge features, etc. With the development of
imaging devices, the quality of the visual surveillance data is continually
increasing, like 2K and 4K, which imposes stricter requirements on the
efficiency of image processing. To satisfy the requirements on both enhancement
quality and computational speed, this paper proposes a double domain guided
real-time low-light image enhancement network (DDNet) for ultra-high-definition
(UHD) transportation surveillance. Specifically, we design an encoder-decoder
structure as the main architecture of the learning network. In particular, the
enhancement processing is divided into two subtasks (i.e., color enhancement
and gradient enhancement) via the proposed coarse enhancement module (CEM) and
LoG-based gradient enhancement module (GEM), which are embedded in the
encoder-decoder structure. It enables the network to enhance the color and edge
features simultaneously. Through the decomposition and reconstruction on both
color and gradient domains, our DDNet can restore the detailed feature
information concealed by the darkness with better visual quality and
efficiency. The evaluation experiments on standard and transportation-related
datasets demonstrate that our DDNet provides superior enhancement quality and
efficiency compared with the state-of-the-art methods. Besides, the object
detection and scene segmentation experiments indicate the practical benefits
for higher-level image analysis under low-light environments in ITS. | Jingxiang Qu, Ryan Wen Liu, Yuan Gao, Yu Guo, Fenghua Zhu, Fei-yue Wang | 2023-09-15T13:16:24 | http://arxiv.org/abs/2309.08382v1 | Double Domain Guided Real-Time Low-Light Image Enhancement for Ultra-High-Definition Transportation Surveillance
###### Abstract
Real-time transportation surveillance is an essential part of the intelligent transportation system (ITS). However, images captured under low-light conditions often suffer from poor visibility with various types of degradation, such as noise interference and vague edge features, etc. With the development of imaging devices, the quality of the visual surveillance data is continually increasing, like 2K and 4K, which imposes stricter requirements on the efficiency of image processing. To satisfy the requirements on both enhancement quality and computational speed, this paper proposes a double domain guided real-time low-light image enhancement network (DDNet) for ultra-high-definition (UHD) transportation surveillance. Specifically, we design an encoder-decoder structure as the main architecture of the learning network. In particular, the enhancement processing is divided into two subtasks (i.e., color enhancement and gradient enhancement) via the proposed coarse enhancement module (CEM) and LoG-based gradient enhancement module (GEM), which are embedded in the encoder-decoder structure. It enables the network to enhance the color and edge features simultaneously. Through the decomposition and reconstruction on both color and gradient domains, our DDNet can restore the detailed feature information concealed by the darkness with better visual quality and efficiency. The evaluation experiments on standard and transportation-related datasets demonstrate that our DDNet provides superior enhancement quality and efficiency compared with the state-of-the-art methods. Besides, the object detection and scene segmentation experiments indicate the practical benefits for higher-level image analysis under low-light environments in ITS. The source code is available at [https://github.com/QuJX/DDNet](https://github.com/QuJX/DDNet).
Intelligent transportation system (ITS), transportation surveillance, low-light image enhancement, ultra-high-definition (UHD), double domain guidance.
## I Introduction
With the rapid growth of intelligent transportation system (ITS), more and more visual sensors are employed for transportation surveillance. However, when the imaging device is under low-light environments, the acquired images always suffer from poor sharpness, low contrast, and undesirable noise [1]. The poor imaging quality makes it difficult to see the captured scenes clearly and brings great challenges to higher-level image analysis, such as object detection [2, 3, 4] and scene segmentation [5, 6, 7]. Even though some imaging devices attempt to enlighten the darkness with extra artificial light such as infrared and ultraviolet flashes [8], the cost and the poor quality are the main limitations. Therefore, an effective low-light image enhancement method is necessary for nocturnal transportation surveillance. Moreover, with the development of imaging and parallel computational devices, the resolution of the captured visual surveillance data is continually increasing, from the standard definition (SD, 480p, 720p), the high definition (HD, 1080p), to the ultra-high definition (UHD, 4K). The corresponding image processing algorithms have also been widely investigated under multiple transportation scenes, e.g., parking lot [9], waterway [10], and airport surveillance [11], etc. The trade-off between visibility enhancement and computational complexity is a major problem to be solved in current transportation applications [12].
### _Motivation_
Real-time transportation surveillance has two main requirements for low-light image enhancement: effectiveness and efficiency. Specifically, the main targets of transportation surveillance are vehicles [13], pedestrians [14], vessels [15], etc. It is thus necessary to enlighten the darkness effectively with better noise suppression and feature preservation. For traditional low-light enhancement methods, the illumination
Fig. 1: The illustration of our DDNet for real-time low-light transportation surveillance under different practical scenes.
is mainly improved by enhancing the contrast globally (e.g., histogram equalization (HE) [16]), which only improves visual perception without effective noise suppression. Compared with traditional methods, learning methods are robust to noise due to the strong learning ability of deep neural networks, and they can also improve computational efficiency thanks to GPU acceleration. However, in transportation scenes, edge features, which are especially important for higher-level image analysis like vehicle detection [13], pedestrian detection [18], and scene segmentation [19], are rarely considered in previous low-light image enhancement methods [17]. In practical applications, the frame rate of most transportation surveillance cameras is less than 30 FPS [20], which is thus the basic efficiency requirement of real-time image processing methods. However, most previous low-light image enhancement methods cannot satisfy this requirement [21]. Therefore, in most cases, the UHD images are first resized to smaller scales for lower computational complexity. Undoubtedly, such resizing severely degrades image quality. As shown in Fig. 2, the resizing operation causes significant detail loss, blurring the vehicle license plates. Many methods have achieved real-time processing on UHD images, like Zero [22], SCI [23], and UHDFour [24], but the results are unsatisfactory in ITS scenes.
To achieve effective real-time low-light image enhancement in UHD transportation surveillance, we propose a double domain guided network (DDNet). It achieves superior noise suppression and brightness enhancement by enhancing the feature maps in the color and gradient domains simultaneously. Experiments on running time demonstrate the efficiency of the implementation on UHD images. Furthermore, the object detection and scene segmentation experiments indicate the practical improvement for higher-level image analysis. In general, this paper provides an effective and efficient method to improve transportation surveillance under low-light environments.
### _Contributions_
In this paper, we propose a real-time low-light image enhancement network for UHD transportation surveillance, which achieves competitive enhancement quality and computational efficiency. The main contributions of the proposed method can be summarized as follows:
* We propose a double domain guided low-light image enhancement network (DDNet), aided by Laplacian of Gaussian (LoG)-based gradient information. It effectively improves the quality of images captured under low-light conditions while preserving most details in both the color and gradient domains.
* We design the LoG-based gradient enhancement module (GEM) and the coarse enhancement module (CEM) embedded in the encoder-decoder structure, which enhance the color and gradient domain features effectively. Besides, a joint loss function is proposed to constrain the enhancement of different domains separately.
* The quantitative and qualitative evaluation experiments compared with the state-of-the-arts are conducted on standard and transportation-related datasets. Experimental results show that our DDNet significantly improves the enhancement performance. Besides, the running time satisfies the requirements of real-time UHD transportation surveillance. The object detection and scene segmentation experiments indicate the improvement of our DDNet for higher-level visual tasks in ITS.
The rest of this paper is organized as follows. The recent studies on low-light image enhancement are reviewed in Section II. In Section III, we introduce the details of our DDNet. Numerous experiments on standard and transportation-related datasets are conducted in Section IV to evaluate the enhancement performance and practical benefits for transportation surveillance. Conclusion and future perspectives are finally given in Section V.
## II Related Work
In this section, we briefly introduce the previous low-light image enhancement methods (i.e., traditional and learning methods) and their applications in ITS.
### _Traditional Methods_
The traditional methods employ mathematical models to enhance low-light images. Histogram equalization (HE) [16] flattens the histogram and expands the dynamic range of intensity to improve the brightness of the image. However, it is challenging for HE-based methods to discriminate between noise and clear information. Excessive noise corrupts the histogram distribution, making it harder to get reliable information from low-light backgrounds. Retinex theory [25] and related methods [26, 27, 28] decompose the low-light image into the reflectance and illumination components to obtain the underlying normal-light image and to achieve a better balance between brightness enhancement and noise suppression. However, Retinex-based methods have two major drawbacks. First, insufficient brightness enhancement in complex scenes results in unsatisfactory enhanced images. Besides, they have difficulty in balancing noise suppression and edge feature preservation. Ying _et al_. [29, 30] suggested a camera response model to improve the effect of low-light image enhancement. Dong [31] and DeHz [32] enhanced the low lightness based on the atmospherical scattering model. SRRP [33] kept the smoothness of the original illumination to achieve high-quality image
Fig. 2: The comparison between the low-light enhancement results on UHD images in transportation surveillance. From left to right: (a) raw 4K low-light image, (b) enhanced result after resizing the image to 1080P, and (c) 4K image enhancement. It is obvious that the resizing operation causes significant detail loss on UHD images.
enhancement. However, they failed to simultaneously achieve satisfactory detail preservation, illumination enhancement, and computational efficiency for real-time UHD transportation surveillance.
### _Learning Methods_
In recent years, deep learning [34] has achieved widespread success in diverse computer vision tasks, such as object detection, scene segmentation, and low-light image enhancement. Based on the Retinex theory, many methods employed CNNs to formulate the decomposition and enhancement of low-light images, e.g., KinD [35], RetinexNet [36], RUAS [37], Uretinex-net [38] and LR3M [39]. Meanwhile, many multi-branch networks [40, 41, 42, 43, 44, 45] were designed to tackle different subtasks in low lightness enhancement, e.g., noise reduction and color restoration. In addition to supervised training, EnlightenGAN [46] and DRBN [47] enlightened the darkness with semi-supervised networks. LLFormer [48] used a vision transformer to achieve UHD low-light image enhancement. Despite considerable efforts, the running time of most previous works is not suitable for real-time UHD transportation surveillance. Besides, edge feature restoration, which is particularly important in transportation scenes, has rarely been considered. Lu _et al_. [49] proposed a gradient prior-aided neural network employing Laplacian and Sobel filters to guide the enhancement. However, these filters are sensitive to noise interference, which is harmful to image quality enhancement. In this paper, we employ the robust LoG operator to extract the gradient information and enhance it in the network to obtain better edge features.
### _Applications in Transportation System_
Efficient low-light image enhancement methods are necessary for nocturnal surveillance in ITS. Therefore, many efforts have been devoted to overcoming the restriction of poor illumination. For instance, a CycleGAN-based image enhancement method is proposed for railway inspections [50], and an attention-guided lightweight generative adversarial network is designed for maritime video surveillance [51]. Guo _et al_. [52] enlightened the darkness in maritime transportation scenes with a lightweight neural network. Besides, [53] and [54] have demonstrated the benefits of low-light enhancement for promoting the accuracy of higher-level image analysis tasks in ITS.
## III Double Domain Guided Low-Light Image Enhancement Network
In this section, we first introduce the Laplacian of Gaussian Operator (LoG) in Section III-A. The architecture of DDNet and the implementation details of the self-calibrated convolutions are then presented in Sections III-B and III-C. The joint loss function is introduced in Section III-D.
### _Laplacian of Gaussian Operator_
Transportation surveillance under low-light environments suffers from low brightness along with vague edge features, which causes serious problems for higher-level visual tasks in ITS [55]. Therefore, it is necessary to take the restoration of edge features into consideration [56]. The Laplace operator is the sum of the second-order partial derivatives of the gray image function in the horizontal and vertical directions [57]. It responds to areas where the intensity changes rapidly and can be used to extract image edge features. The Laplacian operator \(L(u,v)\) corresponding to the intensity value \(I\) of the image pixel can be given as follows
\[L(u,v)=\frac{\partial^{2}I}{\partial u^{2}}+\frac{\partial^{2}I}{\partial v^{ 2}}. \tag{1}\]
A single image can be represented by a discrete set of pixel values. The gradient feature map can thus be generated through a second-order derivative discrete convolutional kernel \(K_{L}\), which approximates the Laplacian operator, i.e.,
\[K_{L}=\left[\begin{array}{ccc}0&+1&0\\ +1&-4&+1\\ 0&+1&0\end{array}\right]. \tag{2}\]
However, the images captured in low-light environments commonly contain unwanted noise. The Laplacian operator's sensitivity to noise makes it challenging to accurately extract gradient features from low-light images. To this end, we first reduce the interference of noise on the image by Gaussian smoothing filtering, which can be expressed as follows
\[G_{\sigma}(u,v)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{u^{2}+v^{2}}{2\sigma ^{2}}\right), \tag{3}\]
where \(\sigma\) is the Gaussian standard deviation. Benefiting from the associative property of the convolutional operation, we obtain a hybrid filter by convolving the Gaussian smoothing filter and Laplacian filter to generate LoG-based gradient features. The 2-D LoG function centered on zero with Gaussian standard deviation \(\sigma\) is given by
\[LoG(u,v)=-\frac{1}{\pi\sigma^{4}}\left[1-\frac{u^{2}+v^{2}}{2\sigma^{2}} \right]e^{-\frac{u^{2}+v^{2}}{2\sigma^{2}}}. \tag{4}\]
The convolutional kernel of LoG is small, and the kernel parameters are pre-calculated, which brings little computational burden. In this work, the convolutional kernel parameters of LoG can be given as follows
\[K_{LoG}=\left[\begin{array}{ccccc}0&0&+1&0&0\\ 0&+1&+2&+1&0\\ +1&+2&-16&+2&+1\\ 0&+1&+2&+1&0\\ 0&0&+1&0&0\end{array}\right]. \tag{5}\]
Fig. 3: The examples of the enhanced results on gradient domain, from left to right: (a) low-light images, (b) LoG-based gradient feature map, (c) GEM-enhanced gradient map, and (d) final enhanced images.
In the network, we first generate the gradient map of the low-light image via the LoG-based operator, which will then be enhanced in the GEM, as shown in Fig. 3.
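As a concrete illustration of this step, the following PyTorch sketch applies the fixed 5\(\times\)5 kernel of Eq. (5) to a grayscale version of the input to obtain the LoG-based gradient map that is later concatenated with the low-light image. The grayscale conversion (a simple channel mean) and the function names are our assumptions; the kernel values follow Eq. (5).

```python
# Sketch of the LoG-based gradient-map extraction: a fixed (non-learnable)
# convolution with the 5x5 kernel of Eq. (5), applied to a grayscale image.
import torch
import torch.nn.functional as F

K_LOG = torch.tensor([[0.,  0.,   1.,  0., 0.],
                      [0.,  1.,   2.,  1., 0.],
                      [1.,  2., -16.,  2., 1.],
                      [0.,  1.,   2.,  1., 0.],
                      [0.,  0.,   1.,  0., 0.]]).view(1, 1, 5, 5)

def log_gradient_map(rgb: torch.Tensor) -> torch.Tensor:
    """rgb: (B, 3, H, W) tensor in [0, 1]; returns the LoG gradient map (B, 1, H, W)."""
    gray = rgb.mean(dim=1, keepdim=True)          # simple grayscale conversion (assumed)
    return F.conv2d(gray, K_LOG.to(rgb.device), padding=2)

low_light = torch.rand(1, 3, 256, 256)            # dummy low-light image
grad_map = log_gradient_map(low_light)            # later concatenated with the image as network input
```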
### _Network Architecture_
An ordinary neural network cannot simultaneously and accurately generate the normal-light image and gradient feature map from the low-light image. We thus use a multi-stage architecture to perform fusion-decomposition-fusion on the color and gradient domains. For the sake of better understanding, Fig. 4 depicts the architecture of our DDNet. Specifically, we first concatenate low-light images and their corresponding LoG-based gradient feature maps and feed them into the network. The proposed architecture includes six self-calibrated convolutions with attention modules (ScCAM) in the peripheral en-decoder, GEM, and CEM, respectively. As introduced in Section III-C, the ScCAMs leverage spatial attention to identify valuable information locations within the feature maps, which are then utilized for self-calibration convolutions. This enables the convolutional modules to extract more important features without incurring additional computational costs. Additionally, the feature maps share similar structures, e.g., the same size (width and height) and intensity range (\([0,255]\)), allowing ScCAM to effectively extract and enhance the spatial features in the gradient and color domains simultaneously. Therefore, the potential spatial features of the gradient and color domains are effectively mined and enhanced during the en-decoding in GEM and CEM. The outputs of GEM and CEM, as well as the outputs of the previous encoders, are then fed to the final feature fusion decoder, which reconstructs the normal-light image based on the fused feature map. It is noted that during the training process, the enhanced gradient and color maps are generated by their respective decoders, which are constrained by individual loss functions to guarantee the restoration of both gradient and color information, as introduced in Section III-D. Due to the comprehensive enhancement on double domains with GEM and CEM, the proposed DDNet restores the low-light image with clear edges and natural colors.
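The dual-branch data flow described above can be summarized by the rough structural sketch below. Block counts, channel widths, down/up-sampling, and skip connections are deliberately simplified and are our placeholders (plain 3\(\times\)3 convolutions stand in for the six ScCAM-based en-decoders); only the overall wiring, concatenating the image with its LoG gradient map and producing an enhanced gradient map, a coarse color map, and a fused final output, follows the text.

```python
# Very rough structural sketch of DDNet's dual-branch wiring (not the actual model):
# placeholder convolutions stand in for the ScCAM-based encoder-decoders.
import torch
import torch.nn as nn

def block(c):                                  # placeholder for an ScCAM-based en-decoder
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.PReLU())

class DDNetSketch(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(4, width, 3, padding=1)         # RGB + 1-channel LoG gradient map
        self.encoder = block(width)
        self.gem = block(width)                                # gradient enhancement branch
        self.cem = block(width)                                # coarse (color) enhancement branch
        self.grad_head = nn.Conv2d(width, 1, 3, padding=1)     # enhanced gradient map (for L_Lap)
        self.coarse_head = nn.Conv2d(width, 3, 3, padding=1)   # coarse enhanced image (for L_Coarse)
        self.fusion = nn.Sequential(nn.Conv2d(3 * width, width, 1), block(width),
                                    nn.Conv2d(width, 3, 3, padding=1))  # final fusion decoder

    def forward(self, low, grad_map):
        feat = self.encoder(self.stem(torch.cat([low, grad_map], dim=1)))
        g, c = self.gem(feat), self.cem(feat)
        final = self.fusion(torch.cat([feat, g, c], dim=1))
        return self.grad_head(g), self.coarse_head(c), final
```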
### _ScCAM_
To reduce the computational parameters, the majority of deep learning-based lightweight low-light enhancement networks extract hierarchical features progressively. However, this strategy leads to the insufficient utilization of low-frequency information, which results in poor performance on image detail restoration. Meanwhile, the self-calibrated convolutions (SCCs) perform satisfactorily in a variety of low-level and higher-level vision tasks [58]. SCCs can efficiently extract multi-domain and multi-scale feature information to guide the enhancement processing without additional computational effort. In this section, we propose the ScCAM to construct the encoder-decoder structures. It mainly consists of two parts (i.e., the upper and lower branches), as shown in Fig. 5. In particular, the upper part computes the attention information by introducing the spatial attention module, which can be expressed as follows
\[y_{\text{upper}}=F_{\text{scm}}\left(M\left(F_{\text{sam}}\left(f^{1\times 1}\left(x_{\text{in}}\right)\right);f^{3\times 3}\left(f^{1\times 1}\left(x_{\text{in}}\right)\right)\right)\right), \tag{6}\]
Fig. 4: The flowchart of our double domain guided low-light image enhancement network. The coarse enhancement module (CEM) and LoG-based gradient enhancement module (GEM) are embedded in the encoder-decoder structure to improve the image quality on separate domains. Moreover, the outputs of diversified decoders are constrained by the proposed joint loss function respectively.
Fig. 5: The sketch map of the encoder-decoder structure, which employs the Self-calibrated Convolutions with Attention Module (ScCAM) for spatial and attention feature extraction.
where \(x_{in}\), \(f^{1\times 1}\), \(f^{3\times 3}\), \(F_{sam}\), \(M(\cdot;\cdot)\), and \(F_{scm}\) represent the input of convolutional layer, the convolutional operation with 1\(\times\)1 kernel size, the convolutional operation with 3\(\times\)3 kernel size, the spatial attention module, the multiplication function, and the standard convolution module, respectively. In addition, the lower part uses the standard convolution module to recover the spatial domain information, which can be expressed as follows
\[y_{\text{lower}}=F_{\text{scm}}\left(F_{\text{scm}}\left(f^{1\times 1}\left(x_{ \text{in}}\right)\right)\right). \tag{7}\]
The output features of these two parts are then concatenated together and fed into a 1\(\times\)1 convolution layer for information fusion. To speed up model training, the local residual path is employed to generate the final output feature. The output (\(y_{ScCAM}\)) of ScCAM can thus be yielded by
\[y_{ScCAM}=f^{1\times 1}\left(y_{upper};y_{lower}\right)+x_{in}, \tag{8}\]
where \((\cdot;\cdot)\) represents the concatenation operation.
#### Iii-C1 Spatial Attention Module
In the process of low-light image enhancement, the complexity of scene information increases the difficulty of enhancement. Inspired by the human visual cortex, applying an attention mechanism allows complex scene information to be analyzed more quickly and effectively. The spatial attention module identifies where the valuable information on the feature map is located, which helps the network focus on it more precisely. As shown in Fig. 5, to achieve spatial attention, we first apply average pooling and max pooling along the channel dimension. The pooled feature maps are then concatenated and fed into a convolution layer with a 7 \(\times\) 7 kernel to generate the final spatial attention feature map. The spatial attention function can be expressed as follows
\[F_{sam}(\mathcal{I})=S\left(f^{7\times 7}\left(F_{avg}^{s}(\mathcal{I});F_{ \text{max}}^{s}(\mathcal{I})\right)\right), \tag{9}\]
where \(\mathcal{I}\), \(F_{avg}^{s}\), \(F_{max}^{s}\), \(f^{7\times 7}\), and \(S(\cdot)\) represent the inputs of spatial attention module, average pooling, max pooling, the convolutional operation with 7\(\times\)7 kernel size, and the sigmoid function, respectively.
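A possible PyTorch realization of Eq. (9) is given below: channel-wise average and max pooling, concatenation, a 7\(\times\)7 convolution, and a sigmoid. This is our reading of the description (module and variable names are ours), not the authors' released code.

```python
# Sketch of the spatial attention module of Eq. (9).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)            # F_avg^s: average pooling over channels
        max_pool = x.max(dim=1, keepdim=True).values      # F_max^s: max pooling over channels
        return torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))  # (B, 1, H, W)
```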
#### Iii-C2 Standard Convolution Module
In the standard convolution module, the convolution layer is first employed to guarantee the learning ability. Layer normalization (LN) is independent of batch size, which reduces the computational complexity when calculating normalization statistics. Furthermore, the Parametric Rectified Linear Unit (PReLU) is employed to perform nonlinear activation on the normalized data, which improves the generalization ability of the network in complex low-light scenes. The standard convolution function can be generated as follows
\[F_{scm}(w)=PR(LN(f^{3\times 3}(w))), \tag{10}\]
where \(w\), \(LN(\cdot)\), and \(PR(\cdot)\) represent the inputs of the standard convolution module, layer normalization, and parametric rectified linear unit, respectively.
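Putting Eqs. (6)-(8) and (10) together, a sketch of the standard convolution module and of the full ScCAM block is given below, reusing the SpatialAttention module sketched above. Channel sizes, the use of GroupNorm(1, C) as a per-sample layer normalization over (C, H, W), and whether the two branches share their 1\(\times\)1 convolutions are not specified in the text and are our assumptions.

```python
# Sketch of the standard convolution module (Eq. (10)) and the ScCAM block (Eqs. (6)-(8)).
import torch
import torch.nn as nn

class StandardConvModule(nn.Module):              # F_scm: Conv3x3 -> LN -> PReLU
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(1, channels),             # layer normalization over (C, H, W)
            nn.PReLU(),
        )

    def forward(self, x):
        return self.body(x)

class ScCAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1_u = nn.Conv2d(channels, channels, 1)   # f^{1x1} of the upper branch
        self.conv1_l = nn.Conv2d(channels, channels, 1)   # f^{1x1} of the lower branch
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)  # f^{3x3}
        self.sam = SpatialAttention()                      # from the previous sketch
        self.scm_u = StandardConvModule(channels)
        self.scm_l1 = StandardConvModule(channels)
        self.scm_l2 = StandardConvModule(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)   # f^{1x1} after concatenation

    def forward(self, x):
        u = self.conv1_u(x)
        y_upper = self.scm_u(self.sam(u) * self.conv3(u))           # Eq. (6): M(.;.) is multiplication
        y_lower = self.scm_l2(self.scm_l1(self.conv1_l(x)))         # Eq. (7)
        return self.fuse(torch.cat([y_upper, y_lower], dim=1)) + x  # Eq. (8): fusion + residual
```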
### _Loss Function_
To effectively constrain each component of the DDNet, we propose a joint loss function \(\mathcal{L}_{total}\) consisting of Laplacian-based gradient consistency loss \(\mathcal{L}_{\text{Lap}}\), coarse enhancement loss \(\mathcal{L}_{\text{Coarse}}\), and final enhancement loss \(\mathcal{L}_{\text{Final}}\), which can be expressed as follows
\[\mathcal{L}_{total}=\omega_{1}\mathcal{L}_{\text{Lap}}+\omega_{2}\mathcal{L}_{ \text{Coarse}}+\omega_{3}\mathcal{L}_{\text{Final}}, \tag{11}\]
where \(\omega_{1}\), \(\omega_{2}\), and \(\omega_{3}\) are the weights of each loss, which are set to 0.2, 0.2, and 0.6, respectively. The GEM and CEM are proposed to enhance the gradient and color features, respectively, which are constrained by the \(\ell_{2}\) loss function. The \(\mathcal{L}_{\text{Lap}}\) and \(\mathcal{L}_{\text{Coarse}}\) can be given as follows
\[\mathcal{L}_{\text{Lap}}=\frac{1}{N}\sum_{p=1}^{N}\sum_{i=1}^{1}||\hat{I}_{i}^{ l}(p)-I_{i}^{l}(p)||^{2}, \tag{12}\]
\[\mathcal{L}_{\text{Coarse}}=\frac{1}{N}\sum_{p=1}^{N}\sum_{i=1}^{3}||\hat{I}_{i }^{c}(p)-I_{i}^{c}(p)||^{2}, \tag{13}\]
where \(N\) is the number of pixels, and \(\hat{I}_{i}^{l}(p)\) and \(I_{i}^{l}(p)\) are the \(i\)-th channel values of pixel \(p\) in the enhanced gradient map and the gradient map of the ground truth, respectively. \(\hat{I}_{i}^{c}(p)\) and \(I_{i}^{c}(p)\) represent the corresponding values in the color domain.
To finely fuse the gradient and coarse enhancement features, we use the structural similarity (SSIM) [59] as the constraint of the final enhancement to further refine the learning and mapping, i.e.,
\[\mathcal{L}_{\text{Final}}=1-\sum_{i=1}^{3}ssim(\hat{I}_{i}^{f},I_{i}), \tag{14}\]
where \(\hat{I}_{i}^{f}\) is the final fine enhancement image, and \(I_{i}\) is the ground truth. \(ssim(\cdot,\cdot)\) calculates the structural similarity consisting of the aspects of color, structure, and contrast.
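For reference, the joint loss of Eqs. (11)-(14) can be assembled as in the sketch below. The SSIM term is taken from the third-party pytorch-msssim package (an assumption; any SSIM implementation would do), and averaging SSIM over channels rather than summing them is a simplification of Eq. (14); the 0.2 / 0.2 / 0.6 weights follow the text.

```python
# Sketch of the joint loss of Eqs. (11)-(14).
import torch.nn.functional as F
from pytorch_msssim import ssim   # assumed third-party SSIM implementation

def ddnet_loss(grad_pred, grad_gt, coarse_pred, coarse_gt, final_pred, final_gt,
               w1: float = 0.2, w2: float = 0.2, w3: float = 0.6):
    loss_lap = F.mse_loss(grad_pred, grad_gt)          # Eq. (12): L2 on the gradient domain
    loss_coarse = F.mse_loss(coarse_pred, coarse_gt)   # Eq. (13): L2 on the color domain
    loss_final = 1.0 - ssim(final_pred, final_gt, data_range=1.0)   # Eq. (14), channel-averaged
    return w1 * loss_lap + w2 * loss_coarse + w3 * loss_final       # Eq. (11)
```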
## IV Experiments and Analysis
In this section, the experimental details are first introduced, which include datasets, evaluation metrics, and running
platform. To clearly demonstrate the superiority of DDNet, qualitative and quantitative comparisons with several state-of-the-art methods on standard and transportation-related datasets are then presented. To validate the rationality of the network, we conduct ablation experiments on each module. The experiments on running time, object detection, and scene segmentation are finally conducted, which demonstrate practical contributions of the proposed method to real-time UHD transportation surveillance in ITS.
### _Implementation Details_
#### Iv-A1 Datasets
It is generally difficult to capture real-world low/normal-light image pairs, which brings great challenges for data-driven image enhancement networks. Therefore, to improve the robustness of our DDNet to complex natural environments, we utilize real-captured and synthesized low-light images simultaneously. The most commonly used dataset is LOL [36], which contains 1500 pairs of low-light images. Among them, 500 pairs are captured in real scenes, and the rest are synthesized with the adaptation of the Y channel of the YCbCr image through the interface of the Adobe Lightroom software 1.
Footnote 1: The hyperparameters of the Adobe Lightroom software: Exposure (\(-5+5F\)), Highlights (\(50\min\left\{Y,0.5\right\}+75\)), Shadows (\(-100\min\left\{Z,0.5\right\}\)), Nitrate (\(-75+75F\)), and Whites (\(16(5-5F)\)). It is noted that \(X\), \(Y\), and \(Z\) are variables obeying the uniform random distribution \(\mathcal{U}(0,1)\), and \(F=X^{2}\).
Besides LOL, to improve the enhancement effect on transportation surveillance scenes, we select 1000 clear outdoor images from the PASCAL VOC 2007 [60], COCO [61], and DETRAC [62] datasets and synthesize the low-light images with another method, which multiplies all image pixels by a specific coefficient. The synthesized image \(L(x)\) can be generated by
\[L(x)=C(x)m(x), \tag{15}\]
where \(C(x)\) is the clear image, and \(m(x)\) is the coefficient, which is a random number between 0.1 and 0.9. To prove the generalization ability of DDNet, besides evaluation on the LOL dataset, we also select representative low-light images from DICM [22], LIME [27], MEF [63], and TMDIED dataset for testing.
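A minimal sketch of this synthesis step is given below; whether \(m(x)\) is a single scalar per image or varies spatially is not fully specified, so a per-image scalar drawn from \([0.1,0.9]\) is assumed here.

```python
# Sketch of the darkening synthesis of Eq. (15): L(x) = C(x) * m(x), with m in [0.1, 0.9].
import numpy as np

def synthesize_low_light(clear: np.ndarray, rng=None) -> np.ndarray:
    """clear: HxWx3 uint8 image; returns the synthesized low-light uint8 image."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.uniform(0.1, 0.9)                     # per-image darkening coefficient (assumed scalar)
    return (clear.astype(np.float32) * m).clip(0, 255).astype(np.uint8)

clear = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)   # dummy clear image
low = synthesize_low_light(clear)
```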
#### Iv-A2 Evaluation Metrics
For low-light image enhancement, the evaluation metrics can be broadly classified into two groups: with or without the reference of ground truth. To conduct a more comprehensive analysis of the enhancement effectiveness, we first utilize the peak signal-to-noise ratio (PSNR) [64], structural similarity (SSIM) [59], and learned perceptual image patch similarity (LPIPS) [65] as our reference-based evaluation metrics. Additionally, we have incorporated the natural image quality evaluator (NIQE) [66] and perceptual-based image quality evaluator (PIQE) [67] as our no-reference metrics to quantitatively evaluate the performance of image enhancement across diverse low-light scenarios. It is noteworthy that larger values of PSNR and SSIM, as well as smaller values of NIQE, PIQE, and LPIPS, are indicative of better image quality.
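The two reference-based pixel metrics can be computed, for example, with scikit-image as sketched below (a recent scikit-image version providing the channel_axis argument is assumed); LPIPS, NIQE, and PIQE rely on dedicated learned models or MATLAB implementations and are omitted from this sketch.

```python
# Sketch: reference-based metrics (PSNR and SSIM) for an enhanced/ground-truth pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reference_metrics(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: HxWx3 uint8 images; returns (PSNR in dB, SSIM)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim_val = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    return psnr, ssim_val
```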
#### Iv-A3 Running Platform
In the training period, the Adam optimizer is employed, and DDNet is trained for 100 epochs. The initial learning rate of the optimizer is 0.001, which is multiplied by 0.1 after every 20 epochs. Besides, the experimental network is trained and tested in a Python 3.7 environment using the PyTorch software package. The computational device is a PC with an AMD EPYC 7543 32-Core Processor CPU accelerated by an Nvidia A40 GPU, which has also been widely used in industrial-grade servers (e.g., Advantech SKY-6000 series and Thinkmate GPX servers). The proposed method could thus be easily extended to higher-level visual tasks (e.g., vehicle detection and tracking) in ITS.
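The optimization schedule above maps directly onto standard PyTorch components, as sketched below; the model interface (three outputs, matching the structural sketch in Section III-B) and the batch layout of the data loader are our assumptions.

```python
# Sketch of the training schedule: Adam, lr=1e-3, decayed by 0.1 every 20 epochs, 100 epochs.
import torch

def train_ddnet(model, train_loader, loss_fn, epochs: int = 100, device: str = "cuda"):
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
    for _ in range(epochs):
        for low, grad_map, grad_gt, hr in train_loader:        # assumed batch layout
            low, grad_map = low.to(device), grad_map.to(device)
            grad_gt, hr = grad_gt.to(device), hr.to(device)
            grad_pred, coarse_pred, final_pred = model(low, grad_map)
            loss = loss_fn(grad_pred, grad_gt, coarse_pred, hr, final_pred, hr)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```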
### _Image Quality Assessment_
To assess the quality of low-light image enhancement, we compare DDNet with several state-of-the-art methods,
including HE [16], NPE [26], LIME [27], JIEP [28], CRM [29], Dong [31], BIMEF [30], DeHz [32], RetinexNet [36], MBLLEN [40], KinD [35], EnlightenGAN [46], DLN [41], Zero [22], StableLIVE [42], RUAS [37], LLFlow [43], MTRBNet [44], and SCI [23]. It is noted that the parameters of each model are loaded from the corresponding official model weight files.
#### Iv-B1 Quantitative Analysis
We first compute objective evaluation metrics (PSNR, SSIM, NIQE, PIQE, and LPIPS) for 15 LOL test images. As presented in Table I, LIME outperforms the Retinex-based approach (i.e., NPE) overall, thanks to the noise reduction achieved by BM3D. Furthermore, CRM utilizes a camera response model, which is more effective in extracting information from low-light backgrounds. Zero yields unsatisfactory results in extremely low-light regions. Although DLN utilizes both local and global features of low-light images and exhibits better generalization capabilities, its enhancement effect still falls short. Compared with the state-of-the-arts, our DDNet has an obvious advantage in the objective evaluation indicators with better stability, which benefits from the comprehensive guidance of both the color and gradient domains.
We also made an objective evaluation on other public datasets, including DICM [22], LIME [27], MEF [63], and TMDIED, as illustrated in Tables II and III. Traditional methods perform relatively unevenly because they struggle to deal with nonuniform noise. The learning methods can achieve satisfactory performance on both low-light enhancement and noise suppression, and thus perform better. In addition, due to the decomposition and reconstruction of double-domain features, DDNet can effectively recover the valuable information hidden in the dark with better robustness. Therefore, the enhanced images can better satisfy the complex transportation scenes and achieve the best quantitative evaluation metrics. In Fig. 6, we present the quantitative evaluation results with box plots. The first row shows the NIQE evaluation results, and the second row shows the PIQE evaluation results. The no-reference metrics indicate that our method delivers better image quality compared with the state-of-the-arts.
#### Iv-B2 Visual Analysis
To compare the visual performance of our DDNet with the state-of-the-arts, we first analyze the visual differences in the standard LOL test dataset. As shown in Fig. 7, HE has demonstrated significant improvements in the brightness and contrast of low-light images with rapid computational efficiency. However, it lacks the capability to
Fig. 6: The quantitative comparisons of enhancement methods on different datasets. From left to right: (a) DICM [22], (b) LIME [27], (c) MEF [63], and (d) TMDIED datasets. NIQE (top) and PIQE (bottom) are employed as the quantitative evaluation indices.
suppress the noise and results in color distortion in local areas. NPE and BIMEF exhibit similar visual performance with poor contrast. Although LIME can eliminate noise in localized regions of the image, the BM3D algorithm struggles to distinguish between noise and texture information. CRM produces severely skewed color information in comparison to Retinex-based methods. RetinexNet demonstrates promising color extraction capabilities, but the edge feature is often severely compromised. MBLLEN and KinD can effectively remove unwanted noise information; however, the color naturalness is often unsatisfactory. EnlightenGAN, which employs a weakly-supervised architecture, can achieve low-light enhancement, but it is ineffective in extremely dark areas. Zero is lightweight and efficient, but the enhancement effect is often compromised for the sake of computational speed. DLN suffers from noise interference, which limits its effectiveness. While the StableLIVE recovers a significant amount of valuable information from dark regions, the resulting image is often overexposed, leading to a gray-and-white image with minimal contrast. SCI exhibits unsatisfactory performance when applied to extremely low-light images. By comparison, our proposed DDNet achieves a better balance between brightness enhancement and noise suppression in comparison to the current state-of-the-art methods.
To verify the robustness of the proposed method on low-light transportation surveillance, we also collect UHD low
Fig. 8: The visual comparisons of different enhancement methods on the real-captured UHD low-light images in transportation surveillance. From left to right: (a) Low-light image, restored images generated by (b) HE [16], (c) Dong [31], (d) EnlightenGAN [46], (e) DLN [41], (f) Zero [22], (g) RUAS [37], (h) LLFlow [43], (i) SCI [23], and (j) the proposed DDNet, respectively.
Fig. 7: The visual comparisons of different enhancement methods for three typical images from the LOL dataset [36]. From left to right: (a) Low-light images, restored images, generated by (b) HE [16], (c) NPE [26], (d) LIME [27], (e) CRM [29], (f) Dong [31], (g) BIMEF [30], (h) DeHz [32], (i) RetinexNet [36], (j) MBLLEN [40], (k) KinD [35], (l) EnlightenGAN [46], (m) DLN [41], (m) Zero [22], (o) StableLIVE [42], (p) LLFlow [43], (q) MTRBNet [44], (r) SCI [23], (s) the proposed DDNet, and (o) Ground Truth, respectively.
light images in transportation surveillance for testing2. The comparison of global naturalness and local magnification of the enhanced images is shown in Fig. 8. HE, RUAS, and SCI suffer from overexposure, resulting in an unnatural appearance of luminous objects. Dong and Zero fail to satisfactorily recover the color features. EnlightenGAN and DLN are significantly disturbed by noise. LLFlow performs better, but its computational speed on 4K images cannot meet the requirements of real-time video surveillance. In general, the double domain guided DDNet can achieve both satisfactory enhancement and computational efficiency.
Footnote 2: The real-captured UHD low-light images in transportation surveillance are available at: [https://github.com/QuIX/DDNet](https://github.com/QuIX/DDNet).
#### Iv-B3 Running Time Comparisons
To prove the advantage of DDNet in terms of computational efficiency, we compare the running time together with the objective indicators of the enhancement performance, as shown in Table VII and Fig. 10. It is noted that running times over one second are shown as '--', as they are not worth considering in UHD transportation surveillance due to the poor efficiency. While delivering superior enhancement performance, our method is able to enhance 4K images at over 35 FPS on the experimental platform, which is faster than most of the previous methods, meeting the requirements of UHD transportation surveillance. Although Zero [22] and SCI [23] are faster, their enhancement effect is much worse than ours.
Fig. 10: The trade-off between the running time, NIQE, and PSNR on 4K images (\(3840\times 2160\) pixels). The results show the superiority of our DDNet among the state-of-the-art methods.
Fig. 9: The qualitative results of object detection experiments on low-light transportation surveillance data, which select YOLOv5 and YOLOX [3] as the basic detection methods. From left to right: (a) Low-light images, the enhanced images of (b) KinD [35], (c) EnlightenGAN [46], (d) Zero [22], (e) RUAS [37], (f) LLFlow [43], (g) SCI [23], and (h) the proposed DDNet, respectively. It can be seen that DDNet is more beneficial for detection accuracy improvement due to the enhancement of edge features on the gradient domain.
### _Ablation Study_
In this section, we attempt to verify the necessities of ScCAM and double-domain guidance. The 15 images from the LOL test dataset are utilized as the basic reference. According to the metrics provided in Table IV, the employment of the spatial attention module (SAM) and standard convolution module (SCM) significantly improves the enhancement performance. When both SAM and SCM are employed, PSNR, SSIM, and LPIPS performance are improved by \(1.38\), \(0.015\), and \(0.019\), respectively. The experimental results about double-domain guidance are illustrated in Table V. The objective evaluation performance is the worst when the information of both color and gradient domains is not enhanced. The employment of coarse enhancement module (CEM) and LoG-based gradient enhancement module (GEM) significantly improves the enhancement performance. When both CEM and GEM are employed, PSNR, SSIM, and LPIPS performance are improved by \(0.85\), \(0.009\), and \(0.019\), respectively.
In addition, to verify the balance between the constraints on different domains, we conduct an ablation experiment on the design of the loss function. Specifically, we set the weight of each loss differently in the training period. Table VI presents the quantitative results. Firstly, we fix the weight ratio between \(\omega_{1}\) and \(\omega_{2}\) and adjust the ratio between them and \(\omega_{3}\). We then fix \(\omega_{3}\) at the best obtained value and adjust the ratio between \(\omega_{1}\) and \(\omega_{2}\). The ablation experiment indicates that the current weights can supervise the network better with more satisfactory enhancement results.
### _Improvement of Object Detection in ITS_
In order to further demonstrate the practical benefits of our proposed DDNet in the domain of transportation surveillance, we employ YOLOv5 and YOLOX [3] to detect objects under low-light conditions and compare the detection results with or without the application of image enhancement methods. To conduct our analysis, we have selected experimental images from the COCO [61] and ExDARK [70] datasets. Specifically, we initially selected 1500 transportation-related images from the COCO dataset for the training of our detection networks. Subsequently, we performed evaluation tests on the ExDark dataset. As depicted in Fig. 9, the detection networks exhibit poor performance in dark transportation scenes, often failing to achieve accurate object detection owing to the low contrast and vague edge features. However, following the application of enhancement methods, the detection accuracy is significantly increased. Furthermore, in comparison to state-of-the-art methods, the images enhanced by DDNet demonstrate superior performance, primarily due to the comprehensive recovery of both color and gradient features. These findings provide evidence that DDNet holds practical benefits for low-light transportation surveillance tasks and is beneficial for higher-level visual tasks in ITS when operating under low-light environments.
### _Improvement of Scene Segmentation in ITS_
Scene segmentation is also a typical higher-level visual task in transportation surveillance. To demonstrate the practical improvement of our method for scene segmentation, we conducted a comparison experiment on ACDC [68], a real-captured transportation-related dataset under adverse visual conditions, including low-light, hazy, rainy, etc. We employed DAFormer [69] with the model weights pre-trained on the Cityscapes dataset, which mainly consists of normal-light images. Fig. 11 presents the visual results. As can be observed, in low-light environments, the edge features of objects appear vague, and the color brightness is low, making it challenging for segmentation methods to accurately classify the pixels. Additionally, accurately classifying small
Fig. 11: The detailed results of segmentation experiments on the ACDC dataset [68], which selects DAFormer [69] as the basic segmentation method. The first and third rows are raw images, and the second and fourth rows are the visualized results of scene segmentation. From left-top to right-bottom: (a) Low-light image, and the segmentation results on the enhanced images of (b) HE [16], (c) RetinexNet [36], (d) KinD [35], (e) EnlightenGAN [46], (f) Zero [22], (g) RUAS [37], (h) SCI [23], (i) the proposed DDNet, and (j) Ground Truth, respectively. It is noted that the employed DAFormer is pre-trained on cityscapes dataset. Compared with other methods, our DDNet enables the model pre-trained on normal-light images performing better under low-light conditions.
objects, such as distant pedestrians, is difficult owing to the low contrast. Following the application of low-light image enhancement method, the visibility of low-light scenes is significantly improved. However, most state-of-the-art methods tend to suffer from noise interference and color distortion, leading to erroneous segmentation. Furthermore, it is still challenging to accurately segment small objects due to the vague edge features. In particular, our DDNet effectively recovers the low-light image with better color naturalness and clear edge features, resulting in more accurate classification of challenging pixels in the enhanced images. Overall, our method enables models pre-trained on normal-light images to perform better in low-light conditions.
## V Conclusion and Future Perspectives
This paper proposes a double domain guided real-time low-light image enhancement network (DDNet) for UHD transportation surveillance. Specifically, we adopt the encoder-decoder structure as the main architecture of the learning network, and the original task is divided into two subtasks (i.e., coarse enhancement and Laplacian of Gaussian (LoG)-based gradient enhancement). The coarse enhancement module (CEM) and LoG-based gradient enhancement module (GEM) are proposed and embedded in the encoder-decoder structure, which assist the network in efficiently enhancing the color and gradient features under the constraint of the proposed joint loss function. Through the decomposition and reconstruction of both color and gradient features, our DDNet can perceive the detailed information concealed by the dark background with greater precision. Image quality and running time experiments on standard datasets and UHD low-light images in transportation surveillance demonstrate that our DDNet satisfies the requirements of real-time transportation surveillance. Besides, compared with the state-of-the-arts, the object detection and segmentation experiments prove that our method contributes more to higher-level image analysis tasks under low-light environments in ITS. This mainly benefits from the guidance of both the color and gradient domains.
In conclusion, our work presents a real-time low-light image enhancement method for UHD transportation surveillance in ITS. Although our method obtains promising results in this study, it still faces several challenges, e.g., the lack of adequate real-captured data and the relatively large model size. Future improvements of our method include the following.
* To overcome the shortage of real-captured data, semi-supervised architectures and generative adversarial networks (GANs) will be considered to reduce the dependence of our DDNet on paired datasets.
* Currently, although the proposed method achieves real-time processing for transportation surveillance, the model size is not lightweight enough. In the future, we will consider employing pruning techniques [71] to build more lightweight models.
* To overcome the blurred appearance features of fast-moving objects in real-time transportation surveillance (e.g., vehicles on expressways), we will consider utilizing multi-task learning to achieve image deblurring and enhancement simultaneously.
| Real-time transportation surveillance is an essential component of the intelligent transportation system (ITS). However, images captured under low-light environments often suffer from degraded visibility, such as noise interference and vague edge features. With the development of imaging devices, the quality of visual surveillance data is continually increasing, such as 2K and 4K, which imposes stricter requirements on the efficiency of image processing. To satisfy both enhancement quality and computational speed, this paper proposes a double domain guided real-time low-light image enhancement network (DDNet) for ultra-high-definition (UHD) transportation surveillance. In particular, we design an encoder-decoder structure as the main architecture of the learning network. Specifically, the enhancement processing is divided, via the proposed coarse enhancement module (CEM) and the LoG-based gradient enhancement module (GEM), into two subtasks
2307.16410 | HiREN: Towards Higher Supervision Quality for Better Scene Text Image
Super-Resolution | Scene text image super-resolution (STISR) is an important pre-processing
technique for text recognition from low-resolution scene images. Nowadays,
various methods have been proposed to extract text-specific information from
high-resolution (HR) images to supervise STISR model training. However, due to
uncontrollable factors (e.g. shooting equipment, focus, and environment) in
manually photographing HR images, the quality of HR images cannot be
guaranteed, which unavoidably impacts STISR performance. Observing the quality
issue of HR images, in this paper we propose a novel idea to boost STISR by
first enhancing the quality of HR images and then using the enhanced HR images
as supervision to do STISR. Concretely, we develop a new STISR framework,
called High-Resolution ENhancement (HiREN) that consists of two branches and a
quality estimation module. The first branch is developed to recover the
low-resolution (LR) images, and the other is an HR quality enhancement branch
aiming at generating high-quality (HQ) text images based on the HR images to
provide more accurate supervision to the LR images. As the degradation from HQ
to HR may be diverse, and there is no pixel-level supervision for HQ image
generation, we design a kernel-guided enhancement network to handle various
degradation, and exploit the feedback from a recognizer and text-level
annotations as weak supervision signal to train the HR enhancement branch.
Then, a quality estimation module is employed to evaluate the qualities of HQ
images, which are used to suppress the erroneous supervision information by
weighting the loss of each image. Extensive experiments on TextZoom show that
HiREN can work well with most existing STISR methods and significantly boost
their performances. | Minyi Zhao, Yi Xu, Bingjia Li, Jie Wang, Jihong Guan, Shuigeng Zhou | 2023-07-31T05:32:57 | http://arxiv.org/abs/2307.16410v1 | # HiREN: Towards Higher Supervision Quality for Better Scene Text Image Super-Resolution
###### Abstract
Scene text image super-resolution (STISR) is an important pre-processing technique for text recognition from low-resolution scene images. Nowadays, various methods have been proposed to extract text-specific information from high-resolution (HR) images to supervise STISR model training. However, due to uncontrollable factors (_e.g._ shooting equipment, focus, and environment) in manually photographing HR images, the quality of HR images cannot be guaranteed, which unavoidably impacts STISR performance. Observing the quality issue of HR images, in this paper we propose a novel idea to boost STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to do STISR. Concretely, we develop a new STISR framework, called High-Resolution ENhancement (HiREN) that consists of two branches and a quality estimation module. The first branch is developed to recover the low-resolution (LR) images, and the other is an _HR quality enhancement_ branch aiming at generating high-quality (HQ) text images based on the HR images to provide more accurate supervision to the LR images. As the degradation from HQ to HR may be diverse, and there is no pixel-level supervision for HQ image generation, we design a kernel-guided enhancement network to handle various degradations, and exploit the feedback from a recognizer and text-level annotations as a weak supervision signal to train the HR enhancement branch. Then, a _quality estimation module_ is employed to evaluate the qualities of HQ images, which are used to suppress the erroneous supervision information by weighting the loss of each image. Extensive experiments on TextZoom show that HiREN can work well with most existing STISR methods and significantly boost their performances.
Scene text image super-resolution, scene text recognition, super-resolution, resolution enhancement
## I Introduction
Scene text recognition [1, 2] (STR), which aims at recognizing texts from scene images, has wide applications in scene text based image understanding (_e.g._ auto-driving [3], TextVQA [4], Doc-VQA [5], and ViteVQA [6]). Despite the fact that STR has made great progress with the rapid development of deep learning in recent years, the performance of text recognition from low-resolution (LR) text images is still unsatisfactory [7]. Therefore, scene text image super-resolution (STISR) [8, 9, 7] is gaining popularity as a pre-processing technique to recover the missing details in LR images for boosting text recognition performance as well as the visual quality of the scene texts.
As shown in Fig. 1(a), recent STISR works usually try to directly capture pixel-level (via \(L1\) or \(L2\) loss) or text-specific information from high-resolution (HR) text images to supervise the training of STISR models. For instance, Gradient profile loss [7] calculates the gradient fields of HR images as ground truth for sharpening the boundaries of the super-resolution (SR) images. PCAN [10] is proposed to learn sequence-dependent features and high-frequency information of the HR images to better reconstruct SR text images. STT [8] exploits character-level attention maps from HR images to assist the recovery. [11] and TG [9] extract stroke-level information from HR images through specific networks to provide more fine-grained supervision information. [12, 13, 14] additionally introduce external modules to extract various text-specific clues to facilitate the recovery and use the supervision from HR images to finetune their modules.
Although various techniques that extract information from the HR images have been proposed to improve the recognition accuracy, they all assume that the HR images are completely trustworthy, which is actually not true due to the uncontrollable factors (e.g. shooting equipment, focus, and environment) in manually photographing the HR images. As shown in Fig. 1(c), the HR images may suffer from blurring (the 1st and 2nd cases) and low contrast (the 3rd case), which unavoidably impacts the performance of STISR. In the worst case, these quality issues may cause the failure of recognition on HR images and lead to wrong supervision information. What is worse, the HR quality problem in the real world is by no means negligible, as the recognition accuracy on HR images can be as low as 72.4% (see Tab. II).
Considering the fact that improving the photographing of LR/HR images and eliminating environmental impacts are extremely expensive (if not impossible) in the wild, and applying huge models for extracting more accurate information is also time-consuming and costly, in this paper we propose a novel solution to advance STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to perform STISR. To this end, we develop a new, general and easy-to-use STISR framework called **H**igh-**R**esolution **EN**hancement (HiREN) to improve STISR by providing more accurate supervision. In particular, as shown in Fig. 1(b), besides the typical LR recovery branch, HiREN additionally introduces an HR enhancement branch that aims at improving the quality of HR images and a quality estimation (QE) module to conduct a quality-aware supervision. Here, the
resulting high-quality (HQ) images, instead of the HR images as in existing works, are used to supervise the LR recovery branch. Note that since the degradation from HQ to HR is unknown and there is no explicit supervision for HR enhancement, existing STISR approaches are not able to solve this task. To tackle these problems, on the one hand, we introduce a degradation kernel predictor to generate the degradation kernel and then use this kernel as a clue to enhance variously degraded HR images. On the other hand, we exploit the feedback of a scene text recognizer and text-level annotations as a weak supervision signal to train the HR enhancement branch. What is more, to suppress erroneous supervision information, a quality estimation (QE) module is proposed to evaluate the quality of the HQ images through the normalized Levenshtein similarity [15] between the recognized text and the ground truth, which is then used to weight the loss of each image.
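As an illustration, this quality-aware weighting can be sketched as below: a normalized Levenshtein similarity between the text recognized on the HQ image and the ground-truth label is used as a per-image weight on the supervision loss. The exact normalization used in HiREN may differ from the common form \(1-d(a,b)/\max(|a|,|b|)\) assumed here.

```python
# Sketch: normalized Levenshtein similarity as a per-image quality weight.
def levenshtein_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def quality_weight(recognized: str, ground_truth: str) -> float:
    longest = max(len(recognized), len(ground_truth))
    if longest == 0:
        return 1.0
    return 1.0 - levenshtein_distance(recognized, ground_truth) / longest

# e.g. the per-image supervision loss would then be scaled as:
#   loss_i = quality_weight(recognized_i, label_i) * loss_i
```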
This design offers our method four advantages:
* _General_. Our framework can work with most existing STISR approaches in a plug-and-play manner.
* _Easy-to-use_. After training the HR enhancement branch, our method can be plugged online to the training of existing techniques easily.
* _Efficient_. HiREN does not introduce additional cost during inference. What is more, HiREN can also be deployed offline by caching all the enhanced HR images. This offline deployment does not introduce any additional training cost.
* _High-performance_. Our method can significantly boost the performances of existing methods.
Contributions of this paper are summarized as follows:
* We propose a novel approach for STISR. To the best of our knowledge, this is the first work to consider and exploit the quality of HR images in STISR. That is, different from existing approaches that extract various text-specific information, our work pioneers the exploration of the quality issue of HR images.
* We develop a general, efficient and easy-to-use **H**igh-**R**esolution **EN**hancement (HiREN) framework to boost STISR by improving the supervision information from the HR images.
* We conduct extensive experiments on TextZoom, which show that HiREN is compatible with most existing STISR methods and can significantly lift their performances.
The rest of this paper is organized as follows: Section II surveys related works and highlights the differences between our method and the existing ones; Section III presents our method in detail; Section IV introduces the experimental results of our method and performance comparisons with existing methods; Section V further discusses the quality issues of HR images, error cases and limitations of the proposed method; Section VI concludes the paper while pinpointing some issues for future study.
## II Related Work
In this section, we briefly review the super-resolution techniques and some typical scene text recognizers. According to whether they exploit text-specific information from HR images, recent STISR methods can be roughly divided into two groups: generic super-resolution approaches and scene text image super-resolution approaches.
### _Generic Image Super-Resolution_
Generic image super-resolution methods [16, 17, 18, 19] usually recover LR images through pixel information
Fig. 1: Overview of existing STISR approaches and our method, and examples illustrating the quality problem of HR images. (a) The framework of existing STISR methods; (b) The HiREN framework; (c) Some examples of low-quality HR images and their enhanced results (HQ) by our method, as well as the recognized results. For each case, the 1st row shows HR and HQ images, the 2nd row presents the normalized HR and HQ images to highlight their visual differences, and the 3rd row gives the recognized characters: red indicates incorrectly recognized, and black means correctly recognized.
from HR images captured by pixel loss functions. In particular, SRCNN [20] is a three-layer convolutional neural network. [21] and SRResNet [22] adopt generative adversarial networks to generate distinguishable images. [23] employs convolutional layers, transposed convolution and sub-pixel convolution layers to extract and upscale features. RCAN [24] and SAN [25] introduce attention mechanisms to boost the recovery. Nowadays, transformer-structured approaches [26, 27, 28] are proposed to further advance the task of generic image super-resolution. Nevertheless, these approaches ignore text-specific properties of the scene text images, which leads to low recognition performance when applied to STISR.
### _Scene Text Image Super-Resolution_
Recent approaches focus on extracting various text-specific information from the HR images, which is then utilized to supervise model training. Specifically, [29, 30] calculate text-specific losses to boost performance. [31] proposes a multi-task framework that jointly optimizes recognition and super-resolution branches. [7] introduces TSRN and gradient profile loss to capture sequential information of text images and gradient fields of HR images for sharpening the texts. PCAN [10] is proposed to learn sequence-dependent and high-frequency information of the reconstruction. STT [8] makes use of character-level information from HR images extracted by a pre-trained transformer recognizer to conduct a text-focused super-resolution. [32] proposes a content perceptual loss to extract multi-scale text recognition features to conduct a content aware supervision. TPGSR [12], TATT [13], and C3-STISR [14] extract text-specific clues to guide the super-resolution. In particular, TPGSR is the first method that additionally introduces a scene text recognizer to provide text priors. Then, the extracted priors are fed into the super-resolution to iteratively benefit the super-resolution. TATT [13] introduces a transformer-based module, which leverages global attention mechanism, to exert the semantic guidance of text prior to the text reconstruction process. C3-STISR [14] is proposed to learn triple clues, including recognition clue from a STR, linguistical clue from a language model, and a visual clue from a skeleton painter to rich the representation of the text-specific clue. TG [9] and [11] exploit stroke-level information from HR images via stroke-focused module and skeleton loss for more fine-grained super-resolution. Compared with generic image super-resolution approaches, these methods greatly advance the recognition accuracy through various text-specific information extraction techniques. Nevertheless, they all assume that HR images are completely trustable, which is actually not true in practice. As a result, their extracted supervision information may be erroneous, which impacts the STISR performance. Since HiREN applies these methods to implement the LR recovery branch, to elaborate the differences among various super-resolution techniques in this paper, we give a summary of these methods in Tab. I on three major aspects: how their super-resolution blocks and loss functions are designed, and whether they use iterative super-resolution technique to boost the performance.
### _Scene Text Recognition_
Scene text recognition (STR) [33, 1, 2, 34, 35] has made great progress in recent years. Specifically, CRNN [36] takes CNN and RNN as the encoder and employs a CTC-based [37] decoder to maximize the probabilities of paths that can reach the ground truth. ASTER [38] introduces a spatial transformer network (STN) [39] to rectify irregular text images. MORAN [40] proposes a multi-object rectification network. [41, 42, 43] propose novel attention mechanisms. AutoSTR [44] searches backbone via neural architecture search (NAS) [45]. More recently, semantic-aware [46, 43], transformer-based [47], linguistics-aware [48, 49], and efficient [50, 51] approaches are proposed to further boost the performance. Although these methods are able to handle irregular, occluded, and incomplete text images, they still have difficulty in recognizing low-resolution images. For example, as can be seen in Sec. IV-C, CRNN, MORAN, and ASTER only achieve the recognition accuracy of 27.3%, 41.1% and 47.2% respectively when directly using LR images as input. What is more, finetuning these recognizers is insufficient to accurately recognize texts from LR images, as reported in [7]. Therefore, a pre-processor is required for recovering the details of low-resolution images.
### _Difference between Our Method and Existing STISR Works_
The motivation of HiREN is totally different from that of existing STISR approaches. As described above, existing methods focus on extracting text-specific information from HR images to supervise STISR. On the contrary, HiREN first lifts the quality of HR images, then uses the enhanced images to supervise STISR. This allows HiREN to work with most existing STISR approaches and boost their recognition performances in a general, economic and easy-to-use way.
## III Method
Here, we first give an overview of our framework HiREN, then briefly introduce the LR recovery branch. Subsequently, we present the HR enhancement branch and the quality estimation module in detail, followed by the usage of HiREN.
### _Overview_
We are given a low-resolution (LR) image \(I_{LR}\in\mathbb{R}^{C\times N}\), where \(C\) is the number of channels of the image, \(N=H\times W\) is the collapsed spatial dimension, and \(H\) and \(W\) are the height and width of image \(I_{LR}\). Our aim is to produce a super-resolution (SR)

| Method | Super-resolution block | Loss function \(\mathcal{L}_{LR}\) | Iterative |
|---|---|---|---|
| SRCNN [20] | SRCNN [20] | MSE | \(\times\) |
| SRResNet [22] | SRResNet [22] | MSE | \(\times\) |
| TSRN [7] | SSB [7] | Gradient profile loss [7] | \(\times\) |
| PCAN [10] | PCA [10] | Edge guidance loss [10] | \(\times\) |
| STT [8] | TBSRN [8] | Text-focused loss [8] | \(\times\) |
| TPGSR [12] | SRN [7] | Gradient profile loss [7] | \(\checkmark\) |
| TG [9] | SSB [7] | Stroke-focused loss [9] | \(\times\) |

TABLE I: Differences between typical STISR methods from three aspects: super-resolution block, loss function, and whether this method is iterative or not.
image \(I_{SR}\in\mathbb{R}^{C\times(4\times N)}\) with a magnification factor of \(\times 2\). Fig. 2 shows the architecture of our framework HiREN, which is composed of two major branches: the _LR recovery branch_ \(f_{LR}\) that takes \(I_{LR}\) as input to generate a super-resolution image \(I_{SR}=f_{LR}(I_{LR})\) and a corresponding loss \(\mathcal{L}_{o}\), and the _HR enhancement branch_ \(f_{HR}\) that takes \(I_{HR}\) as input to generate a high-quality (HQ) image \(I_{HQ}=f_{HR}(I_{HR})\) where \(I_{HQ}\in\mathbb{R}^{C\times(4\times N)}\), and a _quality estimation module_ \(f_{QE}\) that takes \(I_{HQ}\) and \(\mathcal{L}_{o}\) as input to compute a quality-aware loss \(\mathcal{L}_{LR}\) to supervise the LR branch:
\[\mathcal{L}_{LR}=f_{QE}(I_{HQ},\mathcal{L}_{o}). \tag{1}\]
During inference, \(f_{HR}\) and \(f_{QE}\) are removed. Thus, HiREN does not introduce extra inference cost.
### _LR Recovery Branch_
In HiREN, the LR recovery branch can be one of the existing STISR approaches. As shown in Fig. 2, these methods usually work in the following way: 1) Start with a spatial transformer network (STN) [39], since in the TextZoom dataset [7] the HR-LR pairs are manually cropped and matched by humans, which may incur several pixel-level offsets. 2) Several super-resolution blocks are used to learn sequence-dependent information of text images. 3) A pixel shuffle module is employed to reshape the super-resolved image. 4) Various loss functions serve as \(\mathcal{L}_{o}\) to extract text-specific information from the ground truth (\(I_{HR}\) in existing works, \(I_{HQ}\) in HiREN) to provide the supervision. To elaborate the differences among the various LR branches tested in this paper, we give a summary of these methods in Tab. I.
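For illustration, the following is a minimal PyTorch sketch of such a generic LR recovery branch. It is only a sketch: the STN and the recurrent (BGRU) part of the super-resolution blocks are omitted, and the module names, channel widths, and input size are assumptions made for the example rather than the architecture of any specific cited method.

```
import torch
import torch.nn as nn

class SimpleSRB(nn.Module):
    """Simplified super-resolution block: two convolutions with a residual connection."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class LRRecoveryBranch(nn.Module):
    """Generic pipeline: head convolution -> stacked blocks -> pixel shuffle upsampling."""
    def __init__(self, in_ch=3, channels=32, n_blocks=5, scale=2):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, 9, padding=4)
        self.blocks = nn.Sequential(*[SimpleSRB(channels) for _ in range(n_blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(channels, in_ch * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),  # reshapes channels into a x2 larger spatial grid
        )

    def forward(self, lr):
        return self.tail(self.blocks(self.head(lr)))

sr = LRRecoveryBranch()(torch.randn(4, 3, 16, 64))  # -> (4, 3, 32, 128)
```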
As the motivation of HiREN is totally different from that of the existing methods, our method can work with most of them and significantly improve their performances.
### _HR Enhancement Branch_
#### III-C1 Overall introduction.
The enhancement of HR images is a challenging task, where the challenges lie in two aspects that will be detailed in the sequel. Formally, the HR image \(I_{HR}\) and the corresponding HQ image \(I_{HQ}\) we are pursuing are connected by a degradation model as follows:
\[I_{HR}=k\otimes I_{HQ}+n, \tag{2}\]
where \(\otimes\) denotes the convolution operation, \(k\) is the degradation kernel, and \(n\) is the additive noise that follows Gaussian distribution in real world applications [52, 53]. Different from the degradation from \(I_{HR}\) to \(I_{LR}\) where the kernel is determined by lens zooming, unfortunately, the degradation \(k\) of \(I_{HQ}\) is unknown. As shown in Fig. 1(c), such degradation can be but not limited to blurring (the 1st and 2nd cases) and low-contrast (the 3rd case). What is more, we also lack pixel-level supervision information of \(I_{HQ}\). These two challenges make existing STISR methods unable to enhance \(I_{HR}\). To cope with the first challenge, here we adopt blind image deblurring techniques [54, 55, 53, 52] to boost the recovery of \(I_{HR}\). Specifically, as shown in Fig. 2, our HR enhancement branch consists of two components: a _kernel predictor_\(P\) and a _kernel-guided enhancement network_\(f_{ke}\). The kernel predictor aims at estimating the degradation kernel \(k\) (_i.e.,_\(k=P(I_{HR})\) where \(k\in\mathbb{R}^{d}\), and \(d\) is the size of the kernel), while the kernel-guided enhancement network takes the predicted kernel and \(I_{HR}\) as input to conduct a kernel-guided enhancement: \(I_{HQ}=f_{ke}(I_{HR},k)\). The predicted kernel is utilized as a clue to strengthen the model's ability to handle various degradation and boost the recovery of HR images. As for the second challenge, we introduce a pre-trained scene text recognizer \(R\) to provide the supervision for generating more recognizable HQ images. And after training the HR enhancement branch \(f_{HR}\), HiREN uses the trained \(f_{HR}\) to generate HQ images, which are exploited for training the LR recovery branch.
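To make the degradation model of Eq. (2) concrete, the sketch below degrades an HQ image with a kernel and additive Gaussian noise. The isotropic Gaussian kernel and the noise level are purely illustrative assumptions; in HiREN the actual kernel is unknown and has to be predicted from the HR image.

```
import torch
import torch.nn.functional as F

def gaussian_kernel(size=9, sigma=2.0):
    # illustrative stand-in for the unknown degradation kernel k
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    k1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(k1d, k1d)
    return k / k.sum()

def degrade(hq, kernel, noise_std=0.01):
    # I_HR = k (x) I_HQ + n, applied channel-wise
    c = hq.shape[1]
    weight = kernel[None, None].repeat(c, 1, 1, 1)
    blurred = F.conv2d(hq, weight, padding=kernel.shape[-1] // 2, groups=c)
    return blurred + noise_std * torch.randn_like(blurred)

hq = torch.rand(1, 3, 32, 128)
hr = degrade(hq, gaussian_kernel())  # degraded "HR" observation
```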
#### III-C2 The kernel predictor.
As shown in Fig. 3, to generate a prediction of the degradation kernel, we first utilize convolution layers to obtain a spatial estimation of the kernel. Then, we employ global average pooling [56] to output the global prediction by evaluating the spatial mean value. Thus, we can
Fig. 2: The framework of HiREN. Red lines are valid only during training.
get the prediction of the kernel of size \(\mathbb{R}^{d}\), in a simple yet effective way.
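A minimal sketch of such a kernel predictor is given below. The number of convolution layers and the channel widths are assumptions for the example; only the overall structure (convolutions producing a spatial estimate, followed by global average pooling to a length-\(d\) kernel code) follows the description above.

```
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    def __init__(self, in_ch=3, d=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, d, 3, padding=1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, hr):
        k_map = self.features(hr)            # (B, d, H, W): spatial estimate of the kernel
        return self.pool(k_map).flatten(1)   # (B, d): global kernel prediction

k = KernelPredictor()(torch.rand(2, 3, 32, 128))  # -> (2, 32)
```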
#### III-C3 The kernel-guided enhancement network.
As shown in Fig. 3, our kernel-guided enhancement network is designed in the following way: 1) Start with an input convolution to change the channel number from \(C\) to \(C^{\prime}\). 2) Repeat \(N\) modified SRB blocks [7]. Each block consists of two convolution layers and one Bi-directional GRU [57] (BGRU) to handle sequential text images. At this step, we first stretch the predicted kernel \(k\) to pixel shape, then concatenate the pixel kernel with the feature map extracted by the convolution layers along the channel dimension. 3) An output convolution is applied to get the final enhanced HQ image \(I_{HQ}\).
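The sketch below illustrates this kernel-guided enhancement. The block internals are simplified, in particular the BGRU is replaced by plain convolutions, so the code is an assumption-laden stand-in; only the kernel stretching, the channel-wise concatenation, and the input/output convolutions follow the description above. The sizes (\(C^{\prime}=32\), \(N=5\), \(d=32\)) match the defaults used later in the experiments.

```
import torch
import torch.nn as nn

class KernelGuidedBlock(nn.Module):
    def __init__(self, channels=32, d=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels + d, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, kernel):
        b, _, h, w = feat.shape
        k_pix = kernel[:, :, None, None].expand(b, -1, h, w)  # stretch kernel to pixel shape
        return feat + self.body(torch.cat([feat, k_pix], dim=1))

class KernelGuidedEnhancer(nn.Module):
    def __init__(self, in_ch=3, channels=32, n_blocks=5, d=32):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.blocks = nn.ModuleList([KernelGuidedBlock(channels, d) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, in_ch, 3, padding=1)

    def forward(self, hr, kernel):
        feat = self.head(hr)
        for blk in self.blocks:
            feat = blk(feat, kernel)
        return self.tail(feat)  # enhanced HQ image

hq = KernelGuidedEnhancer()(torch.rand(2, 3, 32, 128), torch.rand(2, 32))
```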
#### III-C4 Loss functions.
Here, we design the loss functions of the HR enhancement branch \(f_{HR}\). As shown in Fig. 2, there are two loss functions in \(f_{HR}\). The first one is the recognition loss \(\mathcal{L}_{rec}\), which is used to make the enhanced image \(I_{HQ}\) more easily recognizable than \(I_{HR}\). It is provided by a pre-trained recognizer \(R\) and the text-level annotation of \(I_{HR}\). Suppose the encoded text-level annotation is \(p_{GT}\in\mathbb{R}^{L\times|\mathcal{A}|}\), where \(L\) is the max prediction length of recognizer \(R\), and \(|\mathcal{A}|\) denotes the length of the alphabet \(\mathcal{A}\). Then, the recognition loss can be evaluated by
\[\mathcal{L}_{rec}=-\sum_{j=0}^{L}p_{GT}^{j}log(R(I_{HQ})^{j}), \tag{3}\]
which is the cross entropy of \(p_{GT}\) and \(R(I_{HQ})\). Besides the recognition loss, it is essential to keep the style of the enhanced images, which has also been pointed out in a recent work [8]. Though HR images are not trustworthy, pixel information from HR images can help the model to enhance the input images, rather than totally regenerate them, which would be a much more challenging and uncontrollable task. In HiREN, we use the mean-squared-error (MSE) as a pixel loss to keep the style unchanged. Formally, we have
\[\mathcal{L}_{sty}=||I_{HQ}-I_{HR}||_{2}. \tag{4}\]
With the recognition loss Eq. (3) and the style loss Eq. (4), the whole loss function of the HR enhancement branch can be written as follows:
\[\mathcal{L}_{HR}=\alpha\mathcal{L}_{rec}+\mathcal{L}_{sty}, \tag{5}\]
where \(\alpha\) is a hyper-parameter to trade-off the two losses.
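A minimal sketch of the combined loss of Eq. (5) is given below. The recognizer is treated as a black box that already returned per-step character probabilities of shape (B, L, |A|); this interface, the label encoding, and the small epsilon are assumptions made for the example.

```
import torch
import torch.nn.functional as F

def hr_enhancement_loss(hq, hr, rec_probs, label_ids, alpha=0.1, eps=1e-8):
    # recognition loss of Eq. (3): cross entropy between p_GT and R(I_HQ)
    log_p = torch.log(rec_probs + eps)                        # (B, L, |A|)
    rec_loss = F.nll_loss(log_p.transpose(1, 2), label_ids)   # expects (B, |A|, L) and (B, L)
    # style loss of Eq. (4): keep the enhanced image close to the HR input
    sty_loss = F.mse_loss(hq, hr)
    return alpha * rec_loss + sty_loss

hq, hr = torch.rand(2, 3, 32, 128), torch.rand(2, 3, 32, 128)
probs = torch.softmax(torch.randn(2, 26, 37), dim=-1)   # L=26 decoding steps, |A|=37 symbols
labels = torch.randint(0, 37, (2, 26))
loss = hr_enhancement_loss(hq, hr, probs, labels)
```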
### _Quality Estimation Module_
Though we can improve the quality of supervision information with the help of the HR enhancement branch, we cannot guarantee the correctness of the supervision information. Therefore, to suppress wrong supervision information, we design a quality estimation module \(f_{QE}\) to evaluate the qualities of HQ images and weight the losses of HQ images according to their qualities.
Let the original loss of the LR branch be \(\mathcal{L}_{o}\in\mathbb{R}^{B}\), where \(B\) denotes the batch size. We adopt the Levenshtein similarity [15] between the \(i\)-th HQ image's recognition result \(pred_{i}\) of a recognizer \(R\) and the corresponding ground truth \(gt_{i}\) to measure its quality, and then utilize the quality values of all HQ images to compute the final loss:
\[\mathcal{L}_{LR}=\mathcal{L}_{o}[NS(pred_{1},gt_{1}),...,NS(pred_{B},gt_{B})] ^{\top}/B, \tag{6}\]
where \(NS(\cdot,\cdot)\) denotes the Levenshtein similarity, which has the following two advantages: 1) its value falls between 0 and 1; 2) it has a smooth response, thus can gracefully capture character-level errors [58]. These advantages make it suitable to weight the losses of HQ images.
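The sketch below illustrates this quality-aware weighting: each image's loss is multiplied by the normalized Levenshtein similarity between the text recognized on the HQ image and the ground truth. Normalizing the edit distance by the longer string length is one common convention and is an assumption here; [15] may define the similarity slightly differently.

```
def levenshtein(a: str, b: str) -> int:
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def normalized_similarity(pred: str, gt: str) -> float:
    longest = max(len(pred), len(gt))
    return 1.0 if longest == 0 else 1.0 - levenshtein(pred, gt) / longest

def quality_weighted_loss(per_image_losses, preds, gts):
    # Eq. (6): weight each HQ image's loss by its estimated quality
    weights = [normalized_similarity(p, g) for p, g in zip(preds, gts)]
    return sum(l * w for l, w in zip(per_image_losses, weights)) / len(per_image_losses)

loss = quality_weighted_loss([0.8, 1.2], ["texl", "shop"], ["text", "shop"])
```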
### _The Usage of HiREN_
In this section, we introduce the usage of HiREN. As mentioned above, there are two ways to deploy it. One way is called "online", which can be easily implemented by plugging the HR enhancement branch into the training procedure of the LR recovery branch. The online installation algorithm of HiREN is given in Alg. 1. As shown in Alg. 1, the first thing we should do is to develop the HR enhancement branch (_i.e.,_ L4\(\sim\)L10). Specifically, given a STISR dataset \(\mathcal{D}\), we
Fig. 3: The structure of the HR enhancement branch, which consists of two components: (a) the kernel predictor \(P\), and (b) the kernel-guided enhancement network \(f_{ke}\).
first sample HR images and their corresponding text-level annotations from \(\mathcal{D}\) (L5), then generate the enhanced images \(I_{HQ}\) (L6). Finally, the recognition loss and style loss described in Sec. III-C4 are computed to optimize \(f_{HR}\). After that, we plug the developed HR enhancement branch into the training procedure of the LR recovery branch (L11\(\sim\)L16). In particular, after sampling LR and HR images from the dataset \(\mathcal{D}\) (L12), we use the HR enhancement branch to generate the HQ image \(I_{HQ}\) (L13). Finally, the HQ image, rather than the HR image used in typical works, and the SR image are utilized to compute the text-specific loss \(\mathcal{L}_{l}\) to supervise the LR recovery branch (L11\(\sim\)L12).
The other way is called "offline", which can be implemented by caching all the enhanced HQ images. As can be checked in Alg. 2, after developing the HR enhancement branch \(f_{HR}\), we sample all the LR-HR image pairs in the old dataset \(\mathcal{D}\). Then, the corresponding HQ images are generated and then add to the new dataset \(\mathcal{\tilde{D}}\) (L6). During training the LR recovery branch, what we need to do is to sample LR-HQ image pairs to compute the loss \(L_{o}\) for the optimization of the model. Such an installation does not introduce any additional training cost to the LR recovery branch. It is worth mentioning that the HR enhancement branch is removed during inference. That is, HiREN does not introduce any additional inference cost.
```
1:Input: Training dataset \(\mathcal{D}\) and the developed HR enhancement branch \(f_{HR}\)
2:Initialize \(f_{LR}\)
3:\(\mathcal{\hat{D}}=\emptyset\)
4:for\(I_{LR},I_{HR}\sim\mathcal{D}\)do
5:\(I_{HQ}=f_{HR}(I_{HR})\)
6: Add \((I_{HQ},I_{LR})\) to \(\mathcal{\hat{D}}\)
7:while\(f_{LR}\) is not converged do
8:\(I_{HQ},I_{LR}\sim\mathcal{\hat{D}}\)
9:\(I_{SR}=f_{LR}(I_{LR})\)
10: Compute \(\mathcal{L}_{o}\) according to \(I_{SR}\) and \(I_{HQ}\)
11: Optimize \(f_{LR}\) with respect to \(\mathcal{L}_{o}\)
12:return\(f_{LR}\)
```
**Algorithm 2** The offline usage of HiREN.
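As a companion to Algorithm 2, the sketch below outlines the offline deployment in plain Python: every HR image is enhanced and cached once, and the LR recovery branch is then trained against the cached HQ images with its usual loss. The names `f_hr`, `f_lr`, `loss_fn`, and `optimizer` are placeholders for the trained HR enhancement branch, the chosen STISR model, its text-specific loss, and its optimizer.

```
import torch

@torch.no_grad()
def build_hq_cache(dataset, f_hr):
    # enhance each HR image once and store (LR, HQ) pairs (L3-L6 of Algorithm 2)
    cache = []
    for lr_img, hr_img in dataset:
        cache.append((lr_img, f_hr(hr_img[None])[0]))
    return cache

def train_lr_branch(cache, f_lr, loss_fn, optimizer, epochs=1):
    # train the LR recovery branch with HQ images as ground truth (L7-L11 of Algorithm 2)
    for _ in range(epochs):
        for lr_img, hq_img in cache:
            sr = f_lr(lr_img[None])
            loss = loss_fn(sr, hq_img[None])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return f_lr
```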
## IV Performance Evaluation
In this section, we first introduce the dataset and metrics used in the experiments and the implementation details. Then, we evaluate HiREN and compare it with several state-of-the-art techniques to show its effectiveness and superiority. Finally, we conduct extensive ablation studies to validate the design of our method.
### _Dataset and Metrics_
Two groups of datasets are evaluated in this paper: low-resolution scene text dataset TextZoom and regular scene text recognition datasets.
#### IV-A1 Low-resolution scene text dataset
The **TextZoom**[7] dataset consists of 21,740 LR-HR text image pairs collected by lens zooming of the camera in real-world scenarios. The training set has 17,367 pairs, while the test set is divided into three settings based on the camera focal length: easy (1,619 samples), medium (1,411 samples), and hard (1,343 samples).
#### IV-A2 Regular STR datasets
These datasets are used to check the generalization power of our model trained on TextZoom when being adapted to other datasets. In particular, three regular STR datasets are evaluated in our paper to further check the advantage of HiREN: IC15-352 [8], SVT [59], and SVTP [60]. In what follows, we give brief introductions on these datasets.
The **IC15-352** dataset was first introduced in [8]. This dataset consists of 352 low-resolution images collected from the IC15 [61] dataset.
Street View Text (**SVT**) [59] is collected from Google Street View. The test set contains 647 images. Many images in SVT suffer severely from noise, blur, and low resolution.
SVT-Perspective (**SVTP**) [60] is proposed for evaluating the performance of reading perspective texts. Images in SVTP are picked from the side-view images in Google Street View. Many of them are heavily distorted by the non-frontal view angle. This dataset contains 639 images for evaluation.
The major metric used in this paper is the word-level recognition accuracy, which evaluates the recognition performance of STISR methods. Following the settings of previous works [9], we remove punctuation and convert uppercase letters to lowercase letters for calculating recognition accuracy. Besides, _Floating-point Operations Per Second_ (FLOPS) is used to evaluate the computational cost of various methods. Following [9, 32], we only report _Peak Signal-to-Noise Ratio_ (PSNR) and _Structure Similarity Index Measure_ (SSIM) [62] as auxiliary metrics to evaluate the fidelity performance, because of the quality issue of the HR images.
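For clarity, the evaluation conventions described above can be spelled out in a short sketch: predictions and labels are lower-cased and stripped of punctuation before word-level accuracy is computed, and PSNR is included as an auxiliary fidelity metric. The exact punctuation set (Python's `string.punctuation`) is an assumption.

```
import string
import torch

def normalize_text(s: str) -> str:
    return "".join(c for c in s.lower() if c not in string.punctuation)

def word_accuracy(preds, gts):
    hits = sum(normalize_text(p) == normalize_text(g) for p, g in zip(preds, gts))
    return hits / len(gts)

def psnr(img, ref, max_val=1.0):
    mse = torch.mean((img - ref) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

acc = word_accuracy(["Text!", "shop"], ["text", "stop"])  # -> 0.5
```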
### _Implementation Details_
All experiments are conducted on 2 NVIDIA Tesla V100 GPUs with 32GB memory. The PyTorch version is 1.8. The HR enhancement branch is trained using the Adam [63] optimizer with a learning rate of 0.0001. The batch size \(B\) is set to 48. The LR recovery branch is trained with the same optimizer and batch size but a higher learning rate of 0.001, which is suggested in [12]. The recognizer \(R\) used in our method is the one proposed in [8]. The hyper-parameters in HiREN are set as follows: \(\alpha\) is set to 0.1, which is determined through grid search. The number of SRB blocks is set to 5 (_i.e.,_ \(N=5\)) and \(C^{\prime}\) is set to 32, which is the same as in [7]. The size of kernel \(k\) is set to 32 (_i.e.,_ \(d=32\)), which is similar to that suggested in [52]. Our training and evaluation are based on the following protocol: save the model with the best average accuracy during training, with CRNN as the recognizer, and use this model to evaluate the other recognizers (MORAN, ASTER) and the three settings (easy, medium, hard).
### _Performance Improvement on SOTA Approaches_
#### IV-C1 Recognition performance improvement
Here, we evaluate our method on **TextZoom**. Since HiREN is a framework that can work with most existing methods, we plug HiREN to the training of several typical super-resolution methods to
check the universality and effectiveness of HiREN, including one generic method SRCNN [20], two recently proposed STISR methods TSRN [7] and TG [9], and one iterative, clue-guided STISR method TPGSR [12]. To show that HiREN can support various recognizers, we follow previous works [12, 8, 9] and evaluate the recognition accuracy with three recognizers: CRNN [36], MORAN [40] and ASTER [38]. We re-implement these methods to unify hardware, software, and evaluation protocols for fair comparison. Generally, our results are higher than those in the original papers. For example, with CRNN the averaged accuracy of TG is boosted from 48.9% to 49.6%. All the results are presented in Tab. II.
We first check the universality of HiREN. As can be seen in Tab. II, HiREN significantly boosts the recognition performance in almost all the cases, except for one case on TPGSR, which means that HiREN can work well with various existing techniques. As for the performance improvement of HiREN, taking a non-iterative method for example. The state-of-the-art TG [9] achieves 49.6%, 57.6% and 61.2% averaged accuracy respectively with the three recognizers (see the 9th row). After equipping our method HiREN, the accuracy is lifted to 51.1%, 58.6% and 61.7% (increasing by 1.5%, 1.0%, and 0.5%) respectively (see the 10th row). This demonstrates the effectiveness of our method. Results on more datasets and recognizers are given in the supplementary materials to demonstrate its universality.
It is worth mentioning that our HR enhancement branch can also be applied to weakly supervising the enhancement of LR and HR images to lift their recognition accuracies, as shown in the 3rd and 5th rows of Tab. II. This further supports the universality of our technique. The results above show the promising application potential of our method -- it not only works with STISR methods, but also pioneers weakly supervised enhancement of LR and HR text images.
Furthermore, to better demonstrate the universality of HiREN, we conduct more experiments on additional STR datasets and on recently proposed recognizers. We first evaluate our method on three STR datasets, namely IC15-352, SVT, and SVTP. We use the STISR models (TSRN, TG, TPGSR, and our technique applied to them) developed on the TextZoom dataset to evaluate these datasets. The experimental results on IC15-352, SVT, and SVTP are given in Tab. III. As shown in Tab. III, HiREN also works well on them and achieves improved performance in almost all the cases. In particular, the performances of TPGSR on the three datasets are lifted from 66.2%, 77.4%, 62.8% to 66.8%, 78.7%, and 63.6%, respectively, which demonstrates the advantage of HiREN.
Apart from that, we also give the experimental results on more recently proposed recognizers, including SEED [46] and ABINet [48]. The experimental results are given in Tab. IV. As can be checked in Tab. IV, these recent recognizers still find difficulty in recognizing low-resolution text images. For example, SEED and ABINet can only correctly read 45.8% and 61.0% of LR images, which is inferior to the performance of reading HR images (_i.e._, 84.8% and 89.8%). Our method HiREN can also achieve boosted performance on these recognizers in almost all the cases.
#### IV-C2 Fidelity improvement
We also report the results of fidelity improvement (PSNR and SSIM) on major existing methods in Tab. V. Notice that these fidelity metrics have the following limitations. On the one hand, PSNR and SSIM globally measure the similarity between SR image and the ground truth image, including both characters and background. With the goal of lifting the recognition ability and readability of the scene text images, STISR should put more emphasis on recovering characters rather than the background [9, 32]. On the other hand, as pointed out by our paper, HR images are suffering various quality issues. Ergo, it is inappropriate to measure the pixel similarity between erroneous HR images

| Method | CRNN [36] Easy | CRNN Medium | CRNN Hard | CRNN Average | MORAN [40] Easy | MORAN Medium | MORAN Hard | MORAN Average | ASTER [38] Easy | ASTER Medium | ASTER Hard | ASTER Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LR | 37.5% | 21.4% | 21.1% | 27.3% | 56.2% | 35.9% | 28.2% | 41.1% | 64.0% | 42.0% | 31.7% | 47.2% |
| +HiREN | 37.7% | **27.9%** | **23.5%** | **30.2%** | **57.9%** | **38.2%** | **28.7%** | **42.6%** | **66.4%** | **43.4%** | **32.3%** | **48.5%** |
| HR | 76.4% | 75.1% | 64.6% | 72.4% | **89.0%** | 83.1% | 71.1% | 81.6% | 93.4% | 87.0% | 75.7% | 85.9% |
| +HiREN | **77.5%** | **75.4%** | **65.0%** | **72.9%** | 88.8% | **83.7%** | **71.9%** | **82.0%** | **93.5%** | **87.5%** | **76.2%** | **86.3%** |
| SRCNN | 39.8% | 23.4% | 21.7% | 29.0% | 57.7% | 36.1% | 28.5% | 41.8% | 65.5% | 41.9% | 31.7% | 47.5% |
| +HiREN | 41.6% | **24.0%** | **23.7%** | **30.4%** | **61.1%** | **38.6%** | **29.3%** | **44.0%** | **67.5%** | **44.7%** | **32.8%** | **49.5%** |
| TSRN | 52.8% | 39.8% | 31.6% | 42.1% | 64.5% | 49.3% | 36.7% | 51.1% | 69.7% | 54.8% | 41.3% | 56.2% |
| +HiREN | **56.5%** | **44.1%** | **32.2%** | **45.0%** | **68.5%** | **52.5%** | **38.6%** | **54.2%** | **73.5%** | **56.3%** | **39.2%** | **57.4%** |
| TG | 60.5% | 49.0% | 37.1% | 49.6% | 72.0% | 57.6% | 40.0% | 57.6% | 76.0% | 61.4% | 42.9% | 61.2% |
| +HiREN | **62.4%** | **51.2%** | **37.5%** | **51.1%** | **73.4%** | **58.4%** | **41.0%** | **58.6%** | **77.5%** | **61.5%** | **43.0%** | 61.7% |
| TPGSR | 63.1% | 52.0% | 38.6% | 51.8% | **74.9%** | 60.5% | 44.1% | **60.5%** | **78.9%** | 62.7% | 44.5% | 62.8% |
| +HiREN | **63.5%** | **52.7%** | **38.8%** | **52.4%** | 74.7% | **60.9%** | **44.1%** | **60.5%** | 78.3% | **63.5%** | **45.6%** | **63.5%** |

TABLE II: Performance (recognition accuracy) improvement on TextZoom.

| Method | SEED [46] | ABINet [48] |
|---|---|---|
| LR | 45.8% | 61.0% |
| HR | 84.8% | 89.8% |
| TSRN | 56.3% | **64.0%** |
| +HiREN | **56.5%** | 63.8% |
| TG | 60.7% | **66.0%** |
| +HiREN | **60.9%** | 65.9% |
| TPGSR | 61.7% | 67.5% |
| +HiREN | **62.2%** | **68.1%** |

TABLE IV: Performance of recent recognizers on TextZoom.

| Method | IC15-352 | SVT | SVTP |
|---|---|---|---|
| LR | 49.4% | 74.8% | 60.8% |
| TSRN | 48.9% | 72.6% | **61.4%** |
| +HiREN | **52.3%** | **74.8%** | 60.3% |
| TG | 59.1% | 74.2% | 60.2% |
| +HiREN | **61.7%** | **76.5%** | **68.5%** |
| TPGSR | 66.2% | 77.4% | 62.8% |
| +HiREN | **66.8%** | **78.7%** | **63.6%** |

TABLE III: Performance comparison on three STR datasets with CRNN as recognizer.
whose pixels are not trustworthy. Therefore, we only present PSNR and SSIM as auxiliary metrics to roughly draw some conclusions.
Notice that existing methods utilize SR-HR image pairs to calculate PSNR and SSIM. However, as mentioned above, the HR images suffer from quality issues. Hence, we additionally provide the fidelity results of calculating PSNR and SSIM between SR and HQ images. The experimental results are given in Tab. V. As can be seen in Tab. V: 1) A higher PSNR does not mean a higher recognition accuracy. For example, the PSNR of TG in SR-HR is inferior to that of TSRN (_i.e.,_ 21.47 v.s. 21.84) but TG performs better on recognition accuracy (_i.e.,_ 49.6% v.s. 42.1%). The reason lies in that TG is a stroke-focused technique, focusing on recovering fine-grained stroke details rather than the whole image quality including the background, which matters little for recognition. This is consistent with the results in [9]. 2) Compared with the original models, after applying HiREN, the SR-HQ fidelity performance of the new models is boosted in almost all cases. 3) HiREN obtains lower PSNR and SSIM for SR-HR images but improved recognition performance, which supports the quality issue of HR images.
#### IV-C3 Visualization
Here, we visualize several examples in Fig. 4 to better demonstrate the performance of our technique. We can see that HiREN can help the existing methods to recover the blurry pixels better (see the 2nd \(\sim\) 6th cases). In particular, a better "ee" in the 2nd and 3rd cases,'m' in the 4th case, 'f' in the 5th case, and 'e' in the 6th case are obtained by our technique. Besides, in some extremely tough cases where even with the HR images the recognition is hard, HiREN can still achieve better recovery (see the 7th case). These results show the power of HiREN.
#### IV-C4 Training and inference cost
We have discussed the high performance of our technique above. In this section, we provide the results of training and inference costs to show the efficiency of HiREN. Specifically, we take TG and TPGSR

| Method | SR-HR PSNR | SR-HR SSIM (\(\times 10^{-2}\)) | SR-HQ PSNR | SR-HQ SSIM (\(\times 10^{-2}\)) | Avg Acc |
|---|---|---|---|---|---|
| LR | 20.35 | 69.61 | 20.73 | 68.76 | 27.3% |
| TSRN | 21.84 | 76.34 | 21.08 | 74.76 | 42.1% |
| +HiREN | **22.01** | **76.60** | **21.46** | **76.23** | **45.0%** |
| TG | **21.47** | **73.57** | **20.89** | 72.59 | 49.6% |
| +HiREN | 21.12 | 73.43 | 20.84 | **73.78** | **51.1%** |
| TPGSR | **22.05** | **76.71** | 21.05 | **76.77** | 51.8% |
| +HiREN | 21.69 | 75.97 | **21.15** | 76.44 | **52.4%** |

TABLE V: Fidelity and recognition results on major existing methods. The results are obtained by averaging three settings (easy, medium and hard).
Fig. 4: Examples of generated images. Here, GT indicates ground truth. We use CRNN as the recognizer. Red/black characters indicate incorrectly/correctly recognized.

| Method | Training cost | Inference cost |
|---|---|---|
| TG | 19.60 | 0.91 |
| +HiREN (Online) | 20.59 | 0.91 |
| +HiREN (Offline) | 19.60 | 0.91 |
| TPGSR | 7.20 | 7.20 |
| +HiREN (Online) | 8.19 | 7.20 |
| +HiREN (Offline) | 7.20 | 7.20 |

TABLE VI: The training and inference costs of our method. The cost is measured by the FLOPs(G).
as baselines, add HiREN to them, and count their FLOPs during training and inference. The experimental results are presented in Tab. VI. In terms of training cost, we can see that the offline deployment of HiREN does not incur any additional cost. As for the online version, we can see that the additional computational cost caused by HiREN is negligible (_e.g.,_ from 19.60G to 20.59G, only 0.99G). What is more, neither of the two variants introduces any additional inference cost. In conclusion, the offline deployment not only saves training and inference cost, but also significantly boosts the performance. These results validate the efficiency of our method.
### _Ablation Study_
We conduct extensive ablation studies to validate the design of our method. Since our method is designed to enhance HR images during training, the metric used in this section is the recognition accuracy measured by the average accuracy of CRNN on training set, denoted as \(Acc_{train}\).
#### IV-D1 Design of the HR enhancement branch
Here, we check the design of the HR enhancement branch. As mentioned above, two techniques are developed to promote the enhancement of HR images: kernel-guided enhancement network \(f_{ke}\) and the loss \(\mathcal{L}_{HR}\). We conduct experiments to check their effects. The experimental results are presented in Tab. VII. Visualization of the effect of the HR enhancement branch is given in the supplementary materials.
_The effect of the HR enhancement branch._ Comparing the results in the 1st and 7th rows of Tab. VII, we can see that the HR enhancement branch lifts the accuracy from 66.9% to 74.1%, which proves the effect of the branch as a whole.
_The effect of kernel-guided enhancement network._ To check the power of the kernel-guided enhancement network, we design a variant that removes the kernel predictor. Comparing the results of the 2nd and 7th rows in Tab. VII, we can see that the variant without the kernel predictor is inferior to that with the kernel predictor (72.7% v.s. 74.1%). This demonstrates the effectiveness of the proposed kernel-guided enhancement network.
_The design of loss function._ Here, we check the design of the loss function used in the HR enhancement branch. We first remove the recognition loss \(\mathcal{L}_{rec}\) and the style loss \(\mathcal{L}_{sty}\) separately. As can be seen in the 3rd, 4th, and 7th rows in Tab. VII, comparing with the combined one, the performance of using only one single loss is degraded. Next, we check the selection of style loss. Specifically, we consider three candidates (MSE, Charbonnier and L1) for the style loss function. As can be seen in the 5th, 6th, and 7th rows of Tab. VII, MSE loss outperforms Charbonnier loss [64] and L1 loss. The reason lies in that MSE penalizes large errors and is more tolerant to small errors, which is more suitable for HiREN to enhance the blurry or missed character details and keep the style unchanged [65]. Ergo, MSE is selected as the style loss in HiREN.
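For reference, the three style-loss candidates compared in this ablation can be written as the short sketch below; the Charbonnier constant of \(10^{-3}\) is a common choice and an assumption here, not a value taken from [64].

```
import torch

def mse_loss(x, y):
    return ((x - y) ** 2).mean()

def l1_loss(x, y):
    return (x - y).abs().mean()

def charbonnier_loss(x, y, eps=1e-3):
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()
```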
#### IV-D2 Hyper-parameter study
Here, we provide the grid search results of the hyper-parameter \(\alpha\) introduced in HiREN for balancing the two losses. The results are presented in Tab. VIII. As can be seen in Tab. VIII, the best performance is achieved when \(\alpha\)=0.1 and 0.05.
#### IV-D3 The effect of the quality estimation module
Here, we compare the performances of different models with and without the quality estimation module. As can be seen in Tab. IX, without \(f_{QE}\), all methods are degraded, which demonstrates the effect of the quality estimation module.
## V Discussion
In this section, we discuss some issues to better demonstrate the advantages of HiREN and point out some limitations of the proposed method.
### _Which kind of quality issues do HR images have?_
We conduct a visualization study to demonstrate the quality issues of HR images. As can be checked in Fig. 5, HR images suffer from problems including, but not limited to, low contrast (1st, 2nd and 6th cases), blur (3rd and 4th cases) and motion blur (5th case). These unknown degradations obviously threaten the recognition of HR images and subsequently provide erroneous supervision to the recovery of the LR images.
### _How does HiREN lift the quality of supervision information?_
To cope with various quality problems of HR images, HiREN generates HQ images through different strategies. In particular, HiREN makes the texts more prominent to solve low-contrast (e.g. the 1st and 2nd cases in Fig. 5). With respect to the blurry issue, HiREN makes the incorrectly recognized texts more distinguishable (e.g. "e" in the 3rd case and "ri" in the 4th case in Fig. 5). HiREN also tries to reduce the motion blur in the 5th case of Fig. 5. Although in some tough cases, HiREN fails to generate a correct HQ image (e.g. the 6th case in Fig. 5), our quality estimation module weights its loss to a small value to suppress the erroneous supervision information.

| Method | SRCNN | TSRN | TG | TPGSR |
|---|---|---|---|---|
| without \(f_{QE}\) | 30.2% | 44.2% | 51.0% | 51.9% |
| with \(f_{QE}\) | **30.4%** | **45.0%** | **51.1%** | **52.4%** |

TABLE IX: Ablation study on the quality estimation module. The metric is the recognition accuracy of CRNN on the test set of TextZoom.

| ID | Kernel-guided | \(\mathcal{L}_{rec}\) | \(\mathcal{L}_{sty}\) | \(Acc_{train}\) |
|---|---|---|---|---|
| 1 | ✗ | ✗ | ✗ | 66.9 |
| 2 | ✗ | ✓ | MSE | 72.7 |
| 3 | ✓ | ✓ | ✗ | 66.1 |
| 4 | ✓ | ✗ | MSE | 67.4 |
| 5 | ✓ | ✓ | Charb | 67.5 |
| 6 | ✓ | ✓ | L1 | 67.3 |
| 7 | ✓ | ✓ | MSE | 74.1 |

TABLE VII: The ablation studies of the HR enhancement branch. Here, ✗ means the corresponding module is not applied, and Charb denotes the Charbonnier loss [64].

| \(\alpha\) | 0.5 | 0.2 | 0.1 | 0.05 | 0.025 | 0.01 | 0.005 |
|---|---|---|---|---|---|---|---|
| \(Acc_{train}\) | 73.6 | 73.4 | **74.1** | **74.1** | 72.3 | 72.2 | 71.2 |

TABLE VIII: The determination of \(\alpha\). The metric is \(Acc_{train}\).
### _Error Analysis_
In this section, we perform an error analysis of HiREN to provide possible research directions for further works. Concretely, we provide some error cases in Fig. 6 to illustrate the limitations of recent works and HiREN. As can be seen in the 1st\(\sim\)2nd cases, recent methods usually rely on a vocabulary [66], which makes the models guess the blurry pixels via the corpus that can be learned from the training dataset. This degrades the models' ability to recover numbers and punctuation. As a result, although HiREN recovers more characters than the original TPGSR, the word-level recovery still fails. Besides, as shown in the 3rd case, in some tough cases where the LR and HR images are extremely difficult to read, TPGSR and HiREN also fail to effectively do the recovery. This indicates the challenge of STISR.
### _Limitations of HiREN_
On the one hand, HiREN may introduce some noise to the HR images and worsen their quality. However, such noise is very minor compared to the advantage brought by HiREN. Specifically, we find that 9,565 erroneously recognized images in the TextZoom dataset are successfully enhanced by HiREN, which leads to correct recognition results, while only 128 images are deteriorated from correct to wrong. On the other hand, the training of the HR enhancement branch requires the feedback of a scene text recognizer and text-level annotations. This indicates that HiREN still needs some weak supervision information during training.
## VI Conclusion
In this paper, we present a novel framework called HiREN to boost STISR performance. Different from existing works, HiREN aims at generating high-quality text images based on high-resolution images to provide more accurate supervision information for STISR. Concretely, recognizing the difficulty in catching the degradation from HQ to HR and obtaining the supervision information from HR images, we explore degradation kernel-guided super-resolution and the feedback of a recognizer as well as text-level annotations as weak supervision to train a HR enhancement branch. What is more, to suppress erroneous supervision information, a novel quality estimation module is designed to evaluate the qualities of images, which are used to weight their losses. Extensive experiments demonstrate the universality, high-performance and efficiency of HiREN. Our work provides a new solution for the STISR task.
In the future, we will try to explore more advanced models to further advance the proposed technique. On the one hand, we will try to further improve the recovery ability of the HR enhancement branch or address the vocabulary reliance issue. On the other hand, we plan to apply HiREN to self-supervised or unsupervised settings where the recognizer and text-level annotations are not trustworthy or text-level annotations are lacking during training. Last but not least, we will extend the idea of the proposed quality enhancement branch to build a new noisy-learning algorithm for STISR.
Scene text image super-resolution (STISR) is an important pre-processing technique for recognizing text in low-resolution scene images. Various methods have been proposed that extract text-specific information from high-resolution (HR) images to supervise the training of STISR models. However, because of uncontrollable factors in manually photographing the HR images (e.g., shooting equipment, focus, and environment), the quality of the HR images is not guaranteed, which affects STISR performance. Based on this observation about the quality of HR images, this work proposes a new idea for improving STISR. Concretely, this paper develops a new STISR framework called High-Resolution ENhancement (HiREN), which aims to improve the quality of the HR images and to use the enhanced HR images as supervision for STISR. HiREN consists of two branches and a quality
2309.14973 | Linking Network and Neuron-level Correlations by Renormalized Field
Theory | It is frequently hypothesized that cortical networks operate close to a
critical point. Advantages of criticality include rich dynamics well-suited for
computation and critical slowing down, which may offer a mechanism for dynamic
memory. However, mean-field approximations, while versatile and popular,
inherently neglect the fluctuations responsible for such critical dynamics.
Thus, a renormalized theory is necessary. We consider the
Sompolinsky-Crisanti-Sommers model which displays a well studied chaotic as
well as a magnetic transition. Based on the analogue of a quantum effective
action, we derive self-consistency equations for the first two renormalized
Greens functions. Their self-consistent solution reveals a coupling between the
population level activity and single neuron heterogeneity. The quantitative
theory explains the population autocorrelation function, the single-unit
autocorrelation function with its multiple temporal scales, and cross
correlations. | Michael Dick, Alexander van Meegen, Moritz Helias | 2023-09-26T14:46:44 | http://arxiv.org/abs/2309.14973v2 | # Linking network and neuron-level correlations by renormalized field theory
###### Abstract
It is frequently hypothesized that cortical networks operate close to a critical point. Advantages of criticality include rich dynamics well-suited for computation and critical slowing down, which may offer a mechanism for dynamic memory. However, mean-field approximations, while versatile and popular, inherently neglect the fluctuations responsible for such critical dynamics. Thus, a renormalized theory is necessary. We consider the Sompolinsky-Crisanti-Sommers model which displays a well studied chaotic as well as a magnetic transition. Based on the analogue of a quantum effective action, we derive self-consistency equations for the first two renormalized Greens functions. Their self-consistent solution reveals a coupling between the population level activity and single neuron heterogeneity. The quantitative theory explains the population autocorrelation function, the single-unit autocorrelation function with its multiple temporal scales, and cross correlations.
## I Introduction
### Critical Neural Dynamics
Both experiments and models of cortical networks suggest that the brain is operating close to a phase transition Beggs and Plenz [1], Chialvo [2], Priesemann _et al._[3], Fontenele _et al._[4]. Indicators for this phenomenon are for example found in parallel recordings of neuronal cell cultures for which the number of coactive neurons shows power law distributions [1]. The pattern of neuronal activity, in this case referred to as an avalanche, looks identical on several length and time scales, which suggests a continuous phase transition [5]. The transition point of a continuous phase transition is synonymous with fluctuations on all time scales dominating the system's behavior. This makes it difficult to obtain systematic approximations, rendering continuous phase transitions notoriously hard to treat.
More recent work [3] suggests that the measured critical behavior could be due to the inherent sub-sampling in neuronal recordings which are so far only able to capture a fraction of a network's neurons. Even though this work shows that the observed critical exponents are influenced through measurement effects, it still suggests that the system is slightly sub-critical. Closeness to such a transition comes with numerous benefits. Critical slowing down, the effect of increasing and, at the transition point, even diverging decay constants makes a large spectrum of time constants available to the network. This leads to optimal memory capacity as has been shown using stochastic artificial neuronal networks [6; 7], and maximal computational performance [8].
So far it is unclear what phase transition the brain operates close to. However, there are two popular candidates: The first is a transition into a chaotic regime, meaning that infinitesimal changes in the neuron dynamics are progressively amplified [9; 10; 11]. The other is known as avalanche-like criticality [3; 12]. Avalanches can be viewed through the lens of branching processes, treating the propagation of neuronal activity as the children and further descendants of a spontaneously emitted spike. Below criticality each spike has on average less than one child, leading to activity being driven by external input and a quick decay of all child processes. Above criticality each neuron is on average responsible for more than one spike, leading to escalating activity. At the critical point itself, where there is on average one child spike, long transients are possible and complex behavior can emerge.
Both transitions are well studied in isolation in different models, making direct comparisons difficult. For both, the transition to chaos and avalanches, models exist which show critical behavior, but there has not yet been a study of a model supporting both phase transitions.
### Model and Renormalized Theory
We want to pave the way to a comparison of the two phase transitions in this paper. To this end we focus on an adaptation of the popular and simple model by Sompolinsky, Crisanti, and Sommers [11]. It models the activity of \(N\) neurons in a randomly connected recurrent neural network. The activity of a single neuron \(i\) is denoted by \(x_{i}(t)\) and it is governed by the coupled system of stochastic differential equations
\[\dot{x}_{i}+x_{i}=\sum_{j=1}^{N}J_{ij}\phi_{j}+\xi_{i}, \tag{1}\]
where we use the abbreviation \(\phi_{i}\equiv\phi(x_{i})\) and the driving noise \(\xi_{i}\) is assumed to be Gaussian white noise with zero mean and covariance \(\left\langle\xi_{i}(t)\xi_{j}(s)\right\rangle=D\,\delta_{ij}\,\delta(t-s)\). Here \(\phi\) is an arbitrary activation function for most of this paper; in simulations we chose an error function \(\phi(x)=\mathrm{erf}(\frac{\sqrt{\pi}}{2}x)=\int_{0}^{x}\,e^{-\frac{\pi}{4}z^ {2}}\,dz\) where the scaling ensures that the slope at the origin is unity. In the absence of the right hand side in (1), the activity decays exponentially with unit time constant. The right hand side represents the input to the neuron. The first part comes from all other neurons determined via the activation function \(\phi\) and the connectivity \(J\), whose \(N\times N\) weights are distributed according to a Gaussian with mean \(\bar{g}/N\) (which is often set to zero) and variance \(g^{2}/N\). We will refer to \(\bar{g}\) as the mean and \(g^{2}\) as the variance of the connectivity as the factor \(N^{-1}\) is simply chosen such that mean and fluctuations of the input to a neuron do not scale with the total number of neurons. The second source of input is a random white-noise \(\xi_{i}\) with noise intensity \(D\) modeling external input from other brain areas.
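For readers who want to reproduce the simulations of Fig. 1, a minimal sketch of a direct Euler-Maruyama integration of (1) is given below. The network parameters follow the figure caption, whereas the integration step, simulation length, and random seed are arbitrary choices for illustration.

```
import numpy as np
from scipy.special import erf

def simulate(N=1000, g=0.5, gbar=1.0, D=0.1, T=200.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # J_ij i.i.d. Gaussian with mean gbar/N and variance g^2/N
    J = rng.normal(gbar / N, g / np.sqrt(N), size=(N, N))
    x = np.zeros(N)
    steps = int(T / dt)
    traj = np.empty((steps, N))
    for t in range(steps):
        phi = erf(np.sqrt(np.pi) / 2 * x)                 # activation function
        noise = rng.normal(0.0, np.sqrt(D * dt), size=N)  # white noise of intensity D
        x = x + dt * (-x + J @ phi) + noise               # Euler-Maruyama step
        traj[t] = x
    return traj

traj = simulate(N=200, T=50.0)  # smaller network for a quick test
```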
To link this model to the two forms of criticality mentioned above, we consider \(\bar{g}\) as the control parameter for avalanche-like activity; if non-zero and positive it controls the strength by which the average population activity at a certain instant excites and maintains the activity at the next point in time. More formally, in the limit of large \(N\), the parameter \(\bar{g}\) controls a single real outlier eigenvalue of the connectivity matrix, \(\bar{\lambda}\simeq\bar{g}\), with corresponding eigenvector \((1,\ldots,1)\). The latter is a mode in which all neurons act in unison, a cartoon of what happens in a neuronal avalanche. If this eigenvalue \(\bar{\lambda}\) crosses unity, the silent fixed point of the noiseless (\(D=0\)) model becomes unstable in this very direction [13]. The transition to chaos, in contrast, is predominantly controlled by the parameter \(g\). Studying the eigenvalues of the connectivity, \(g\) controls the radius of the bulk of eigenvalues which are uniformly distributed in a circle with radius \(g\) around the origin of the complex plane. Again, a critical point is reached if this radius reaches unity, because then all eigenmodes with \(\mathfrak{R}(\lambda_{i})\simeq 1\) show critically slow dynamics. In the noiseless case \(D=0\) (and for \(\bar{g}=0\)) this point marks the onset of chaotic dynamics [11].
The theoretical approach to the disordered system described by (1) is based on mean-field approximations on auxiliary fields like
\[R(t):= \frac{\bar{g}}{N}\sum_{j=1}^{N}\phi_{j}(t), \tag{2}\] \[Q(s,t):= \frac{g^{2}}{N}\sum_{j=1}^{N}\phi_{j}(s)\phi_{j}(t), \tag{3}\]
since they give a way to obtain an effective low-dimensional set of equations describing the collective behavior. This approach has been used to show a transition to chaos with \(g^{2}\) acting as control parameter [11], which has been studied extensively [9; 14]. As discussed above, the mean \(\bar{g}\) of the connectivity can also take the form of a control parameter [13]: as seen in Figure 1a the network exposes large fluctuations in its population-averaged activity as \(\bar{g}\) approaches the transition point given by \(\bar{g}\) close to unity for \(g<1\). Close to this criticality the fluctuations of \(R\) will influence \(Q\) as can be seen in Figure 1b, leading on average to a larger autocorrelation. In this work we derive an analytical way to analyze the network's behavior close to this transition using \(\bar{g}\) as a control parameter, taking into account the fluctuations of population-averaged activity and its effect on the autocorrelation function.
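The auxiliary fields (2) and (3) can be estimated directly from such a simulation. The sketch below computes \(R(t)\) and a stationary, time-averaged estimate of \(Q\) at different lags; the time averaging is an assumption made for illustration, mirroring the stationary autocorrelation shown in Fig. 1c, and `traj` is assumed to come from the simulation sketch above.

```
import numpy as np
from scipy.special import erf

def auxiliary_fields(traj, g=0.5, gbar=1.0, max_lag=500):
    phi = erf(np.sqrt(np.pi) / 2 * traj)        # (steps, N) outputs phi_j(t)
    R = gbar * phi.mean(axis=1)                 # R(t) = gbar/N * sum_j phi_j(t)
    steps, N = phi.shape
    lags = range(min(max_lag, steps))
    Q = np.array([
        (g ** 2 / N) * np.sum(phi[: steps - lag] * phi[lag:], axis=1).mean()
        for lag in lags
    ])                                          # time-averaged Q(t, t + lag)
    return R, Q
```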
The proper treatment of fluctuations comes with some technical difficulties. Mean-field approaches, albeit being very popular in the field [15; 16; 9; 17], neglect fluctuations of the auxiliary fields. This effect can be seen in Figure 1c, which shows the population averaged autocorrelation simulated for several values of \(\bar{g}\) close to unity compared to the analytical mean-field solution, which corresponds in this case to a network with \(\bar{g}=0\). One clearly sees that the mean-field results (black) are not sufficient to describe the second time constant, which grows with rising \(\bar{g}\) (plotted in shades of red).
One way of taking these fluctuations into account is by the means of Legendre transformation methods [18]. These provide a way to derive a set of self-consistent equations that resum these fluctuations and are therefore able to describe the observed behavior.
### Outline
We will derive a set of self-consistent equations for the mean and the fluctuations of the auxiliary fields (2) and (3). Such self-consistent schemes are commonplace in
Figure 1: **(a)** Population-averaged activity \(R(t)\) for \(\bar{g}=0.5\) (gray) and \(\bar{g}=1\) (red). **(b)** Auxiliary fields \(Q\) and \(R\), proportional to population-averaged output autocorrelation and activity, respectively, binned for each point in time. **(c)** Time-lagged, population-averaged, stationary autocorrelation \(Q(t,t+\tau)\) simulated for different values of \(\bar{g}\) (shades of red) and mean field prediction (black) plotted logarithmically. Remaining network parameters: \(\phi(x)=\mathrm{erf}(\frac{\sqrt{\pi}}{2}x)\), \(N=1000\), \(g=0.5\), and \(D=0.1\).
other fields of physics [18; 19]. These approximations are typically formulated in the language of a field theory. As a first step, we therefore formulate the dynamical equations in this language. Initially we will leave the activation function \(\phi\) general; all we ask of it is to vanish at zero and to possess a Fourier transform. This set of self-consistency equations in particular exposes how the fluctuations of the population-averaged activity \(R\) influence the population-averaged autocorrelation \(Q\), as shown empirically in Figure 1b and Figure 1c. The theory also allows us to compute pairwise correlations averaged across all pairs of neurons in the network. Lastly, the theory proposes that stimulations that excite the population-averaged activity \(R\) also influence the heterogeneity of the response across neurons, as measured by \(Q\).
## II Self-consistent second order statistics
### Action for Auxiliary Fields
First, we translate (1) into the language of field theory. To this end, it is instructive to first look at the noise expectation value of an operator \(G[\mathbf{x}]\) constrained to the dynamics of (1). This can be achieved with help of the Martin-Siggia-Rose-de Dominicis-Janssen formalism [20; 21; 22] (for pedagogic reviews see [23; 24; 25]) and results in
\[\langle G[\mathbf{x}]\rangle_{\mathbf{x}|\mathbf{J}} =\int_{\mathbf{x}}\langle\delta[\hat{\mathbf{x}}+\mathbf{x}-\mathbf{J}\phi(\mathbf{x })-\mathbf{\xi}]\rangle_{\mathbf{\xi}}\,G[\mathbf{x}]\] \[=\int_{\mathbf{x},\mathbf{\tilde{x}}}e^{S_{0}[\mathbf{x},\mathbf{\tilde{x}}]- \mathbf{\tilde{x}}^{\mathrm{T}}\mathbf{J}\phi(\mathbf{x})}\,G[\mathbf{x}]. \tag{4}\]
Here, \(\int_{\mathbf{x}}\) denotes an integral over the trajectories of all neurons and we used \(\delta(x)=\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}e^{\tilde{x}x}\,d\tilde{x}\) for every time step and neuron and defined the action
\[S_{0}[\mathbf{x},\mathbf{\tilde{x}}]:=\mathbf{\tilde{x}}^{\mathrm{T}}\left(\partial_{t}+1 \right)\mathbf{x}+\frac{D}{2}\mathbf{\tilde{x}}^{\mathrm{T}}\mathbf{\tilde{x}} \tag{5}\]
with the short hand notations \(\mathbf{a}^{\mathrm{T}}\mathbf{b}=\sum_{i=1}^{N}\int_{0}^{t}ds\,a_{i}(s)b_{i}(s)\) and \(\mathbf{a}^{\mathrm{T}}\mathbf{M}\mathbf{b}=\sum_{i,j=1}^{N}\int_{0}^{t}ds\,a_{i}(s)M_{ij} b_{j}(s)\).
This allows the definition of a characteristic functional \(Z[\mathbf{l}]\) by setting \(G[\mathbf{x}]=\exp(\mathbf{l}^{\mathrm{T}}\mathbf{x})\). The source \(\mathbf{l}\) in the exponent allows us to take derivatives which in turn yield properly normalized moments after evaluating at the physical value \(\mathbf{l}=0\) of the sources. These sources need not be linear in \(\mathbf{x}\) and could even couple to entirely different quantities. Until we need them we will leave them out and first consider only the partition function.
Eventually, we are interested in self averaging quantities like the mean (2) and the autocorrelation function (3); thus, we further average over realizations of the connectivity \(J_{ij}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(\bar{g}/N,g^{2}/N)\) which only affects the term \(-\mathbf{\tilde{x}}^{\mathrm{T}}\mathbf{J}\phi(\mathbf{x})\) and yields
\[\langle e^{-\mathbf{\tilde{x}}^{\mathrm{T}}\mathbf{J}\phi(\mathbf{x})}\rangle_{\mathbf{J}}= \int_{y}\exp\left(\frac{N}{2}\,y^{\mathrm{T}}Ky+\sum_{i=1}^{N}y^{\mathrm{T}}f[ z_{i}]\right). \tag{6}\]
Here, we introduced the population-averaged auxiliary fields \(R\) defined in (2) and \(Q\) defined in (3) via Hubbard-Stratonovich transformations and their respective response fields \(\tilde{R}\) and \(\tilde{Q}\) analogously to the introduction of \(\mathbf{\tilde{x}}\). Furthermore, we introduced several shorthand notations: First, we denote \(\mathbf{x}\) and \(\mathbf{\tilde{x}}\) in combination as \(\mathbf{z}=(\mathbf{x},\mathbf{\tilde{x}})\) and \(R\), \(\tilde{R}\), \(Q\), and \(\tilde{Q}\) in combination as \(y=(R,\tilde{R},Q,\tilde{Q})\). Second, we abbreviate \(y^{\mathrm{T}}f[z_{i}]=-\tilde{x}_{i}^{\mathrm{T}}R-\tilde{g}\tilde{\phi}_{i} ^{\mathrm{T}}\tilde{R}+\frac{1}{2}\tilde{x}_{i}^{\mathrm{T}}Q\tilde{x}_{i}-g^ {2}\tilde{\phi}_{i}^{\mathrm{T}}\tilde{Q}\phi_{i}\). Third, we define \(K=\left(\begin{array}{cc}\sigma_{x}&0\\ 0&\sigma_{x}\end{array}\right)\) where \(\sigma_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\), leading to \(\frac{1}{2}\,y^{\mathrm{T}}Ky=\tilde{R}^{\mathrm{T}}R+\tilde{Q}^{\mathrm{T}}Q\). In summary, the introduced notation allow us to write
\[\langle\langle G(\mathbf{x})\rangle_{\mathbf{x}|\mathbf{J}}\rangle_{\mathbf{J}}=\int_{y}e^{ \frac{N}{2}\,y^{\mathrm{T}}Ky}\prod_{i=1}^{N}\int_{z_{i}}e^{S_{0}[z_{i}]+y^{ \mathrm{T}}f[z_{i}]}\,G(x_{i})\]
for any factorizing \(G(\mathbf{x})=\prod_{i=1}^{N}G(x_{i})\).
We see that the part of the partition function that describes individual neurons factorizes into \(N\) identical factors. This leaves a partition function for the four auxiliary fields interacting with a single neuron
\[\int_{y}e^{\frac{N}{2}\,y^{\mathrm{T}}Ky}\prod_{i=1}^{N}\int_{z_{i}}e^{S_{0}[z_{ i}]+y^{\mathrm{T}}f[z_{i}]}=\int_{y}\exp\left(N\,S[y]\right),\]
where we defined the action for the auxiliary fields as
\[S[y] :=\frac{1}{2}y^{\mathrm{T}}Ky+\mathcal{W}[y], \tag{7}\] \[\mathcal{W}[y] :=\ln\int_{z}\exp\left(S_{0}[z]+y^{\mathrm{T}}f[z]\right), \tag{8}\]
reducing the dimensionality of the problem from \(N\) neurons to the six fields \(y\) and \(z\). We note that \(\mathcal{W}[y]\) has the form of a cumulant-generating functional for \(f[z]\).
### Mean-Field Phase Diagram
As the lowest order (mean-field) approximation one can treat the path integrals \(\int_{y}\) in saddle point approximation, replacing the auxiliary fields with their most likely values obtained from the condition \(\delta S[y]/\delta y_{i}\overset{!}{=}0\), which yields [7; 13; 25; 14]
\[y^{*} =(R^{*},\tilde{R}^{*},Q^{*},\tilde{Q}^{*})\] \[=(\bar{g}\mu_{\phi},0,g^{2}C_{\phi\phi},0),\]
with
\[\mu_{\phi}(t) =\langle\phi(t)\rangle,\] \[C_{\phi\phi}(t,s) =\langle\phi^{2}(s,t)\rangle,\]
where \(\langle\ldots\rangle\) is the measure determined by the action (7) and \(\phi^{2}(s,t):=\phi(t)\phi(s)\).
We are now left with a path integral for a single neuron and its response field, which corresponds to the stochastic differential equation
\[\dot{x}+x=\xi+\eta, \tag{9}\]
where \(\eta\) is a Gaussian Process with
\[\langle\!\langle\eta(t)\rangle\!\rangle =\bar{g}\mu_{\phi}(t), \tag{10}\] \[\langle\!\langle\eta(s)\eta(t)\rangle\!\rangle =g^{2}C_{\phi\phi}(s,t), \tag{11}\]
where we use \(\langle\!\langle\ldots\rangle\!\rangle\) to denote cumulants (connected correlation functions). For an error function as the nonlinearity these expectations can be calculated analytically in terms of statistics of the neuron activity [26]. Thus (9) can be solved efficiently in a self-consistent manner.
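As an illustration, here is a minimal numerical sketch of one generic way to solve the mean-field problem (9)-(11) self-consistently by sampling: an ensemble of independent replica neurons is driven by a Gaussian process \(\eta\) with mean \(\bar{g}\mu_{\phi}\) and covariance \(g^{2}C_{\phi\phi}\), the statistics of \(\phi(x)\) are re-measured, and the loop is iterated. This is not the authors' implementation (which exploits the analytical expectations for the error-function nonlinearity), only a sketch under the stated assumptions.

```python
import numpy as np
from scipy.special import erf

phi = lambda x: erf(np.sqrt(np.pi) / 2 * x)

def mean_field_selfconsistency(gbar=0.5, g=0.5, D=0.1, T=20.0, dt=0.05,
                               M=1000, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    mu_phi = np.zeros(steps)                 # current guess for <phi(t)>
    C_phiphi = np.zeros((steps, steps))      # current guess for <phi(t) phi(s)>
    for _ in range(n_iter):
        cov = g**2 * C_phiphi + 1e-8 * np.eye(steps)      # jitter keeps the Cholesky stable
        L = np.linalg.cholesky(cov)
        eta = gbar * mu_phi + rng.standard_normal((M, steps)) @ L.T   # Gaussian process, (10)-(11)
        x = np.zeros((M, steps))
        for t in range(1, steps):            # Euler-Maruyama for dx/dt = -x + eta + xi, eq. (9)
            xi = np.sqrt(D * dt) * rng.standard_normal(M)
            x[:, t] = x[:, t - 1] + dt * (-x[:, t - 1] + eta[:, t - 1]) + xi
        ph = phi(x)
        mu_phi = ph.mean(axis=0)             # update <phi(t)>
        C_phiphi = (ph.T @ ph) / M           # update <phi(t) phi(s)>
    return mu_phi, C_phiphi

mu_phi, C_phiphi = mean_field_selfconsistency()
print("stationary <phi>:", mu_phi[-1], " equal-time <phi phi>:", C_phiphi[-1, -1])
```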
For the case of vanishing noise (\(D=0\)) the saddle-point approximation recovers the phase diagram from [13, Fig. 1B] (see also Figure 2 a): The system exhibits a transition from a state with a vanishing order parameter \(R=0\) to a state with a broken symmetry where \(|R|>0\) at a critical value \(\bar{g}=\bar{g}_{c}\). For the case with noise (\(D>0\)), the point of transition in addition depends on the noise amplitude \(\bar{g}=\bar{g}_{c}(g,D)\); see (32) for an explicit expression for \(D=D(\bar{g}_{c},g)\) which can be solved for \(\bar{g}_{c}=\bar{g}_{c}(g,D)\).
### Equations of State to 1-loop Order
We are especially interested in the transition to structured activity \(|R|>0\) driven by the mean connectivity \(\bar{g}\). We expect this transition to be accompanied by fluctuations of the auxiliary field (2) and thus aim to derive a description treating population wide fluctuations systematically. The population level activity is captured by the auxiliary fields. It is thus natural to introduce sources for these fields and for their square to measure their fluctuations. This leads us to the definition of a moment generating functional
\[Z[j,k]=\int_{y}e^{N\mathcal{W}[y]+j^{\mathrm{T}}y+\frac{1}{2}y^{\mathrm{T}}k^ {\mathrm{T}}y}, \tag{12}\]
which yields the first and second moment of \(y\) upon differentiation by \(j\) and \(k\), respectively, at the physically relevant value of the sources \(j=0\) and \(k=N\,K\), by comparison to (7). Our aim is to obtain self-consistency equations for the first two moments. It is therefore helpful to define an ensemble where these two moments are fixed. This is achieved by performing a second order Legendre transform to the effective action
\[\Gamma[\alpha_{1},\alpha_{2}] =\mathrm{extr}_{j,k}j^{\mathrm{T}}\alpha_{1}+\frac{1}{2}k^{ \mathrm{T}}\alpha_{2}-\ln Z[j,k]\] \[=\mathrm{extr}_{j,k}-\ln\int_{y}e^{N\mathcal{W}[y]+j^{\mathrm{T} }(y-\alpha_{1})+\frac{1}{2}k^{\mathrm{T}}(y^{2}-\alpha_{2})},\]
which fixes the system's first two moments \(\alpha_{1},\alpha_{2}\) of the auxiliary fields \(y\); here \(k^{\mathrm{T}}y^{2}\) is meant as a bilinear form in \(y\). The equations of state then yield self-consistency equations
\[\frac{\delta\Gamma}{\delta\alpha_{1}} =j=0, \tag{13}\] \[\frac{\delta\Gamma}{\delta\alpha_{2}} =\frac{1}{2}k=\frac{1}{2}N\,K.\]
Below, we will perform a fluctuation expansion of \(\Gamma\). To ensure that only connected diagrams appear in the expansion [18], we describe the system via its cumulants \(\beta_{1}=\alpha_{1}\) and \(\beta_{2}=\alpha_{2}-\alpha_{1}^{2}\) and define an effective action in these new coordinates (see Appendix IV.2)
\[\Gamma[\beta_{1},\beta_{2}]=\mathrm{extr}_{\hat{j},k}-\ln\,\int_{y}e^{N \mathcal{W}[y]+j^{\mathrm{T}}(y-\beta_{1})+\frac{1}{2}k^{\mathrm{T}}[(y- \beta_{1})^{2}-\beta_{2}]},\]
where \(\hat{j}:=j+k\beta_{1}\). Following Vasiliev [18] we here use the notation of \(\alpha_{n}\) for the \(n\)-th moment and \(\beta_{n}\) for the \(n\)-th cumulant. We thus have \((\beta_{1})_{1}=\langle\!\langle R\rangle\!\rangle=:R^{*}\) and \((\beta_{1})_{3}=\langle\!\langle Q\rangle\!\rangle=:Q^{*}\). The other two components of \(\beta_{1}\) are zero, as they are cumulants of response fields. For \(\beta_{2}\) we will use the notation \(\beta_{ij}=(\beta_{2})_{ij}\) as it comes up frequently. So we have \(\beta_{11}\) as the autocorrelation of \(R\), \(\beta_{12}\) and \(\beta_{21}\) as its response functions and again \(\beta_{22}=0\) as a cumulant of only response fields. The equations of state (13) in the new coordinates take the form
\[\frac{\delta\Gamma[\beta_{1},\beta_{2}]}{\delta\beta_{1}} =j+\beta_{1}k=\beta_{1}NK, \tag{14}\] \[\frac{\delta\Gamma[\beta_{1},\beta_{2}]}{\delta\beta_{2}} =\frac{1}{2}k=\frac{1}{2}NK. \tag{15}\]
Writing the problem in this way uses the yet unknown fluctuation-corrected self-consistent values for the first
Figure 2: Mean-field phase diagram spanned by \(\bar{g}\) and \(g\) for (**a**) the noiseless case (\(D=0\)) and (**b**) noise-driven dynamics (\(D=0.1\)). The red shading quantifies the absolute population activity \(|R|\), which is the order parameter for ferromagnetic activity and the gray shading quantifies the dynamic variability \(Q\), which for \(D=0\) is the order parameter indicating the onset of chaotic activity. The black curves show where these values become nonzero. The dynamic variability \(Q\) does not vanish in the presence of noise.
and second order statistics which become accessible via the equations of state.
Solving the equations of state is difficult in general but as \(S[y]\propto N\) a loop-wise expansion becomes meaningful. Up to one-loop order and neglecting additive constants we get by expanding \(\mathcal{W}[y]=\mathcal{W}[\beta_{1}]+\frac{1}{2}(y-\beta_{1})^{\mathrm{T}} \mathcal{W}^{(2)}[\beta_{1}](y-\beta_{1})\) and performing the resulting Gaussian integral over the fluctuations \(\delta y=y-\beta_{1}\)
\[\Gamma_{\text{1-loop}}[\beta_{1},\beta_{2}]= -N\,\mathcal{W}[\beta_{1}]+\frac{1}{2}k^{\mathrm{T}}\beta_{2}\] \[+\frac{1}{2}\ln\det(-N\mathcal{W}^{(2)}[\beta_{1}]-k).\]
Note that the terms linear in the fluctuations do not contribute to one-loop order. Using the stationarity condition \(\frac{\delta}{\delta k}\Gamma_{\text{1-loop}}[\beta_{1},\beta_{2}]=0\), we obtain \(\beta_{2}=(-N\mathcal{W}^{(2)}[\beta_{1}]-k)^{-1}\) which simplifies the effective action to
\[\Gamma_{\text{1-loop}}[\beta_{1},\beta_{2}]= -N\,\mathcal{W}[\beta_{1}]-\frac{1}{2}N\,\mathcal{W}^{(2)}[\beta _{1}]^{\mathrm{T}}\beta_{2}\] \[+\frac{1}{2}\ln\det(\beta_{2}),\]
where we suppressed the inconsequential constant \(-\frac{1}{2}\mathrm{tr}\,\mathbb{I}\). Up to one-loop order and evaluated at their true value \(j=0\) and \(k=N\,K\) the first equation of state (14) reads
\[\frac{\delta\Gamma_{\text{1-loop}}[\beta_{1},\beta_{2}]}{\delta \beta_{1}}= -N\,\mathcal{W}^{(1)}[\beta_{1}]-\frac{1}{2}N\mathcal{W}^{(3)}[ \beta_{1}]^{\mathrm{T}}\beta_{2}\] \[= \beta_{1}N\,K. \tag{16}\]
The second equation of state (15) is
\[\frac{\delta\Gamma_{\text{1-loop}}[\beta_{1},\beta_{2}]}{\delta \beta_{2}}= -\frac{1}{2}N\,\mathcal{W}^{(2)}[\beta_{1}]+\frac{1}{2}\beta_{2}^{-1}\] \[= \frac{1}{2}NK. \tag{17}\]
The derivatives of \(\mathcal{W}\) evaluated at \(y=\beta_{1}\) by (8) take the form of the cumulants of \(f[z]\) taken with the measure
\[P[z]\propto e^{S_{0}[z]+\beta_{1}^{\mathrm{T}}f[z]}. \tag{18}\]
Two things are important to note about this measure. First, \(\beta_{1}\) has only two non-vanishing components. This means we get \(\beta_{1}^{\mathrm{T}}f[z]=-R^{\star\mathrm{T}}\tilde{x}+\frac{1}{2}\tilde{x} ^{\mathrm{T}}Q^{\star}\tilde{x}\) which is at most quadratic in \(z\), as both terms containing \(\phi(x)\) vanish. Therefore, the measure (18) is Gaussian which greatly simplifies the calculations. Second, this measure is not determined by the fluctuation-corrected statistics but the saddle point values of the auxiliary fields: \(R^{\star}\) and \(Q^{\star}\). To avoid confusion, we will use the subscript \(\star\) for cumulants taken with measure (18).
### Evaluating the 1-loop Equations of State
Because the third and fourth entries of \(f\) consist of two parts with two time arguments, we separate the different contributions to a cumulant by commas. A quick example of this necessity is the comparison between \(\langle\!\langle f_{4}(s,t)[z]\rangle\!\rangle_{\star}\) and \(\langle\!\langle f_{2}[z](s),f_{2}[z](t)\rangle\!\rangle_{\star}\): without a separator they look identical, \(\langle\!\langle\phi(s)\phi(t)\rangle\!\rangle_{\star}\) (neglecting prefactors), which is of course misleading.
With this notation we now close the self-consistency loop by solving the equations of state for the cumulants. We will start by solving (17) for \(\beta_{2}\) which appears linearly,
\[\left(\beta_{2}^{-1}\right)_{i,j}=N\,K_{i,j}+N\,\langle\!\langle f[z]_{i},f[z]_ {j}\rangle\!\rangle_{\star}. \tag{19}\]
Working under the assumptions of a point-symmetric activation function and of \(\langle x\rangle=0\), we have \(\langle\!\langle\phi(x)\rangle\!\rangle_{\star}=0\) as well as \(\langle\!\langle\phi^{3}(x)\rangle\!\rangle_{\star}=0\) and \(\langle\!\langle\phi,\tilde{x}\tilde{x}\rangle\!\rangle_{\star}=0\); the latter is the response of the mean \(\langle\phi\rangle\) to a perturbation of the variance of \(x\). Taking into account that any expectation value solely composed of powers of \(\tilde{x}\) must vanish, we see that \(\langle\!\langle f[z]_{i},f[z]_{j}\rangle\!\rangle_{\star}=0\) if \(i\in\{1,2\}\) and \(j\in\{3,4\}\) or vice versa. Due to the block-diagonal shape of \(K\), \(\beta_{2}^{-1}\) is block-diagonal as well. We can therefore invert these blocks independently. The upper left block of (19) takes the form
\[(\beta_{2}^{-1})_{11}(t,s) =N\,\langle\!\langle\tilde{x}(t),\tilde{x}(s)\rangle\!\rangle_{ \star}=0\] \[(\beta_{2}^{-1})_{12}(t,s) =N\delta(t-s)+N\bar{g}\,\langle\tilde{x}(t)x(s)\rangle_{\star} \langle\phi^{\prime}\rangle_{\star}\] \[(\beta_{2}^{-1})_{21}(t,s) =N\delta(t-s)+N\bar{g}\,\langle\tilde{x}(s)x(t)\rangle_{\star} \langle\phi^{\prime}\rangle_{\star}\] \[(\beta_{2}^{-1})_{22}(t,s) =N\bar{g}^{2}\,\langle\!\langle\phi(t),\phi(s)\rangle\!\rangle_{ \star},\]
which we can rewrite in momentum-space
\[(\beta_{2}^{-1})_{12}(\omega) =N-N\bar{g}\frac{\langle\phi^{\prime}\rangle_{\star}}{1+i\omega},\] \[(\beta_{2}^{-1})_{21}(\omega) =N-N\bar{g}\frac{\langle\phi^{\prime}\rangle_{\star}}{1-i\omega},\] \[(\beta_{2}^{-1})_{22}(\omega) =N\bar{g}^{2}\,\langle\!\langle\phi,\phi\rangle\!\rangle_{\star} (\omega).\]
Here we used the results from Appendix IV.3 to rewrite \(\langle\!\langle\tilde{x}\phi\rangle\!\rangle_{\star}=\langle\phi^{\prime} \rangle_{\star}\langle\tilde{x}\tilde{x}\rangle_{\star}\) and the Fourier representation of the response functions \(\langle\tilde{x}x\rangle_{\star}(\omega)=-1/(1+i\omega)\), i.e., the response of a neuron to a \(\delta\) perturbation with regard to the measure (18), which has the same form as for isolated neuron, because the additional term \(\beta_{1}^{\mathrm{T}}f(z)\) in the action corresponds to an additional input which does not affect the response. Finally, we invert this matrix (greatly simplified due to \((\beta_{2}^{-1})_{11}(t,s)=0\)) to find
\[\beta_{12}(\omega) =\left((\beta_{2}^{-1})_{12}(\omega)\right)^{-1}\] \[=N^{-1}\,\frac{1+i\omega}{1-\bar{g}(\phi^{\prime})+i\omega}\] \[\beta_{21}(\omega) =\beta_{12}(-\omega)\] \[\beta_{11}(\omega) =\beta_{12}(\omega)\,(\beta^{-1})_{22}(\omega)\,\beta_{21}(\omega)\] \[=\frac{1+\omega^{2}}{(1-\bar{g}(\phi^{\prime}))^{2}+\omega^{2}}\, \frac{\bar{g}^{2}}{N}\langle\!\langle\phi,\phi\rangle\!\rangle_{\star}(\omega).\]
Here we see the first clear sign of the emerging large time constant in \(\beta_{11}(\omega)\). When \(\bar{g}\) approaches \(\langle\phi^{\prime}\rangle^{-1}\) a
pole emerges at \(\omega=0\). This implies that \(\beta_{11}(\tau)\), the autocorrelation of the population-averaged activity, decays ever more slowly to zero as a function of the time lag \(\tau=t-s\) and thus acquires a large time constant. We can also see that \(\beta_{22}=0\), as it should be, since it is the second cumulant of \(\tilde{R}\), which is a response field. By the same argument \(\beta_{44}=\langle\!\langle\tilde{Q}^{2}\rangle\!\rangle\) must vanish. This implies that one could apply the same method to invert the lower right block; here we refrain from doing so because our main interest lies in studying the effect of fluctuations of the population-averaged activity \(R\), which is described by the upper left block.
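The emergence of the slow time scale can be made explicit numerically. The sketch below (illustration only; it uses an assumed value for \(\langle\phi^{\prime}\rangle_{\star}\) and omits the factor \((\bar{g}^{2}/N)\langle\!\langle\phi,\phi\rangle\!\rangle_{\star}(\omega)\)) transforms the prefactor \((1+\omega^{2})/((1-\bar{g}\langle\phi^{\prime}\rangle)^{2}+\omega^{2})\) back to the time domain and confirms that the decay rate of the slow tail is \(1-\bar{g}\langle\phi^{\prime}\rangle\), i.e., the time constant diverges as \(\bar{g}\to\langle\phi^{\prime}\rangle^{-1}\).

```python
import numpy as np

phi_prime_mean = 0.8          # assumed illustrative value of <phi'>_*
n, dt = 2**16, 0.01
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
tau = np.arange(n) * dt

for gbar in (0.9, 1.1, 1.2):
    eps = 1.0 - gbar * phi_prime_mean
    F = (1.0 + omega**2) / (eps**2 + omega**2)      # frequency-space prefactor of beta_11
    f_tau = np.real(np.fft.ifft(F)) / dt            # back to the time domain
    tail = f_tau[100:2000]                          # skip the delta-like peak at tau = 0
    slope = np.polyfit(tau[100:2000], np.log(np.abs(tail)), 1)[0]
    print(f"gbar={gbar}: fitted decay rate {-slope:.3f}  vs  1 - gbar*<phi'> = {eps:.3f}")
```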
Next, we solve for the mean via the first equation of state (16) which takes the form
\[(K\beta_{1})_{i}=-\langle\!(f_{i}[z])\!\rangle_{*}-\frac{1}{2}\! \sum_{l,m\in\mathbb{4}}\!\langle\!(f_{i}[z],f_{l}[z],f_{m}[z])\!\rangle_{*} \beta_{lm}, \tag{20}\]
where \(\mathbb{4}=\{1,2,3,4\}\). Note that the multiplication with \(K\), defined in (12), does nothing but switch the indices \(1\leftrightarrow 2\) and \(3\leftrightarrow 4\). In principle, (20) determines all mean values of the population dynamics. We are, however, especially interested in corrections to \(R\) and \(Q\), the auxiliary fields used in mean field. Thus, we consider the cases \(i=2\) and \(i=4\). The first shows that the correction to the population activity \(R\) caused by its own fluctuations \(\beta_{11}\) is mediated by \(\langle(\phi\tilde{x}\tilde{x})\rangle\propto\langle\phi^{\prime\prime}\rangle\) (for details see Appendix IV.3), which vanishes in the paramagnetic regime. This means that there is no influence of fluctuations of \(R\) on the transition to the ferromagnetic state. For the second we need the product of \(\beta_{2}\) and the third cumulant of \(f\). This leads to 16 different combinations of \(l\) and \(m\). As discussed above, \(\beta_{2}\) has several vanishing entries: the off-diagonal blocks and the autocorrelations of the response fields, \(\beta_{22}\) and \(\beta_{44}\). This already reduces the number of terms from 16 to 6. Additionally, the term involving
\[\langle\!(f_{4}[z],f_{3}[z],f_{3}[z])\!\rangle_{*}\propto\langle\!(\phi^{2}, \tilde{x}^{2},\tilde{x}^{2})\!\rangle_{*}\]
vanishes. This can be shown by the methods from Appendix IV.3, which work similarly to Wick's theorem to express such moments as polynomials of second cumulants of \(x\) and \(\tilde{x}\); this results in a formula in which every term is at least proportional to \(\langle\!\langle\tilde{x},\tilde{x}\rangle\!\rangle_{*}=0\).
For the fourth component of (20), this leaves us with
\[Q^{*}(s,t)= (\beta_{1}(s,t))_{3}=(K\beta_{1}(s,t))_{4} \tag{21}\] \[= g^{2}\langle\!(\phi^{2}(s,t))\!\rangle_{*}\] (22) \[+\frac{1}{2}g^{2}\int_{u,v}\!\langle\!(\phi^{2}(s,t),\tilde{x}(u ),\tilde{x}(v))\!\rangle_{*}\beta_{11}(u,v)\] (23) \[+g^{2}\bar{g}\int_{u,v}\!\langle\!(\phi^{2}(s,t),\tilde{x}(u), \phi(v))\!\rangle_{*}\beta_{12}(u,v)\] (24) \[-\frac{g^{4}}{2}\int_{u_{1,2},v_{1,2}}\!\langle\!(\phi^{2}(s,t), \tilde{x}^{2}(u_{1},u_{2}),\phi^{2}(v_{1},v_{2}))\!\rangle_{*}\] \[\beta_{34}(u_{1},u_{2},v_{1},v_{2}). \tag{25}\]
This equation lends itself nicely to interpretation using the intuitive picture of a mean-field neuron embedded in a 'bath' of activity due to the network (akin to the cavity method [27]). The first contribution (22) is identical to the mean-field approximation. The next contribution (23) contains \(\beta_{11}\), the autocorrelation of the population averaged activity \(R\). This term can be interpreted as the effect of fluctuations of \(R\) measured by \(\beta_{11}\) contributing to the variance of the input of the representative mean-field neuron. Term (24) shows how a fluctuation of the neuronal activity \(\phi(v)\) is echoed in the network and transmitted back by the response function \(\beta_{12}\) of the bath, affecting the mean input by coupling to \(\tilde{x}(u)\) which, in turn, modifies the variance of the mean-field neuron's input by changing the second moment \(\langle\phi^{2}(s,t)\rangle\). Similarly (25) shows an echo effect: A fluctuation of \(\phi^{2}(v_{1},v_{2})\) propagates through the bath with the response \(\beta_{34}(u_{1},u_{2},v_{1},v_{2})\) to time points \(u_{1},u_{2}\) and causes a change of the variance in the input of the mean-field neuron by coupling to \(\tilde{x}^{2}(u_{1},u_{2})\), which in turn affects \(\langle\phi^{2}(s,t)\rangle\).
## III Results
For this section we consider the regime \(g<1\) and set \(\phi(x)=\operatorname{erf}(\sqrt{\pi}x/2)\), which makes all involved expectation values of \(\phi\) and its derivatives, as they appear in Appendix IV.3, solvable analytically [28; 26] while staying close to the popular choice of a hyperbolic tangent. Furthermore, we only consider the correction (23) due to \(\beta_{11}\), which empirically dominates the other contributions (for an explicit expression for \(Q^{*}\) including the contributions due to \(\beta_{12}\) in linear networks, see Appendix IV.4).
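For reference, the Gaussian expectations that enter for this choice of \(\phi\) have simple closed forms; the snippet below checks the standard Gaussian-integral identities against Monte Carlo sampling (these are textbook results quoted for convenience, not taken from the paper's appendix).

```python
import numpy as np
from scipy.special import erf

# For x ~ N(mu, sigma^2) and phi(x) = erf(sqrt(pi)/2 * x), the standard identities give
#   <phi(x)>  = erf( (sqrt(pi)/2) * mu / sqrt(1 + pi*sigma^2/2) )
#   <phi'(x)> = exp( -(pi/4) * mu^2 / (1 + pi*sigma^2/2) ) / sqrt(1 + pi*sigma^2/2)
rng = np.random.default_rng(1)
mu, sigma = 0.3, 0.7
x = rng.normal(mu, sigma, size=2_000_000)
a = np.sqrt(np.pi) / 2

mc_phi = erf(a * x).mean()
cf_phi = erf(a * mu / np.sqrt(1 + np.pi * sigma**2 / 2))

mc_dphi = np.exp(-np.pi * x**2 / 4).mean()          # phi'(x) = exp(-pi x^2 / 4)
cf_dphi = np.exp(-(np.pi / 4) * mu**2 / (1 + np.pi * sigma**2 / 2)) / np.sqrt(1 + np.pi * sigma**2 / 2)

print(f"<phi>   Monte Carlo {mc_phi:.4f}  closed form {cf_phi:.4f}")
print(f"<phi'>  Monte Carlo {mc_dphi:.4f}  closed form {cf_dphi:.4f}")
```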
Figure 3 shows the autocorrelation for a network close to the phase transition. In the simulation results we observe the critical slowing down already visible in Figure 1c, which our self-consistent theory describes quite well. Above all, we see the emerging time constant corresponding to the decay of the network activity's autocorrelation \(\beta_{11}(t-s)=\langle\!\langle R(t)\,R(s)\rangle\!\rangle\). The autocorrelation function also features two different time scales: the fast time scale dominates the initial decay for time lags close to zero; this part is identical to the mean-field result neglecting fluctuations. The second time scale dominates the behavior of the autocorrelation function at large time lags. It is caused by the fluctuations of \(R\) as quantified by \(\beta_{11}\).
Figure 4 shows how a network's response to constant input changes close to the transition for different values of \(g\). The population activity of a network with no variance in its connections (\(g=0\)) behaves like a capacitor. For \(g>0\), the increase of the population activity due to the transient input is suppressed compared to \(g=0\). This highlights that close to the transition to the chaotic regime, a rise in the population-averaged activity \(R\) is counteracted by the increase of the variance measured by \(Q\); formally this can be seen from the fact that the effective slope
of the noise-averaged activation function (cf. (27)) decreases with increasing \(Q\), which in turn reduces the positive feedback that controls the dynamics of \(R\) by (10). This stronger variability and the coupling of \(R\) and \(Q\) can be seen in Figure 4c in the higher curvature for larger \(g\). Our theory captures the resulting slightly elevated average of \(Q\), as can be seen by the analytical crosses indicating \(Q(\tau=0)\) lying slightly above the parabolas' low points.
Direct access to \(Q\) and the fluctuations of \(R\) also allows us to conveniently calculate the pairwise averaged cross-correlation of the output
\[C^{x}_{\phi\phi}(t-s):=\frac{1}{N^{2}}\sum_{i\neq j}\phi_{i}(s)\phi_{j}(t)= \frac{\beta_{11}(s,t)}{\bar{g}^{2}}-\frac{Q(s,t)}{Ng^{2}} \tag{26}\]
as can be seen in Figure 5. Equation (26) highlights the large time constant present in the cross-correlation, induced by the network-level correlation \(\beta_{11}\), which was also shown by Clark _et al._ [29] using cavity methods.
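A direct way to compare (26) with simulations is to estimate its left-hand side from the recorded activity. The following sketch (illustrative; synthetic data stands in for an actual simulation such as the one sketched earlier) uses the identity \(\sum_{i\neq j}\phi_i(s)\phi_j(t)=\big(\sum_i\phi_i(s)\big)\big(\sum_j\phi_j(t)\big)-\sum_i\phi_i(s)\phi_i(t)\).

```python
import numpy as np

def pairwise_crosscorr(phi_it, max_lag=200):
    """Empirical estimator of C^x_{phi phi}(tau) from activity phi_it of shape (N, T)."""
    N, T = phi_it.shape
    pop_sum = phi_it.sum(axis=0)                          # sum_i phi_i(t)
    C = np.empty(max_lag)
    for lag in range(max_lag):
        s, t = slice(0, T - lag), slice(lag, T)
        cross = (pop_sum[s] * pop_sum[t]).mean()          # includes the i = j terms
        auto = (phi_it[:, s] * phi_it[:, t]).sum(axis=0).mean()
        C[lag] = (cross - auto) / N**2                    # remove i = j, normalize by N^2
    return C

# usage with synthetic data standing in for a simulation:
rng = np.random.default_rng(2)
phi_it = np.tanh(rng.normal(0, 0.5, size=(500, 5000)))
C = pairwise_crosscorr(phi_it)
print("C(0) =", C[0], " C(100) =", C[100])
```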
## IV Discussion
In this paper we investigated the critical behavior close to the structured (ferromagnetic) regime of the Sompolinsky-Crisanti-Sommers model with non-zero mean connectivity and noise. After first reproducing the phase diagram using (dynamical) mean-field theory [14], we derive a self-consistent set of equations to one-loop order, systematically taking corrections of order \(1/N\) into account. Our theory explains the emergence of long time scales in the decay of the population-averaged autocorrelation function \(Q\), which we show to be caused by fluctuations of the population-averaged activity \(R\). The theory furthermore links these network-level effects to pairwise correlations on the single-neuron scale. We thus successfully bridge between the emerging large timescales of the autocorrelation on the single-neuron scale and finite-size effects on the network level. Lastly, our analytical results explain how fluctuations of the population-averaged activity lead to a higher population-averaged autocorrelation, showing a correlation in the two auxiliary fields that span the phase space of recurrent networks and are conventionally studied in mean-field theory.
With regard to the study of criticality in neuronal networks, we have provided a model that features two critical transitions. First, the transition between the regular
Figure 4: (**a**) Transient of \(R\) in response to a stimulation provided as common input of \(0.01\) to each neuron (additive constant on right hand side of (1)) within the time span indicated by the shaded region; \(\bar{g}=1\) and different values of \(g\) (colors given in legend) (**b**) Transient of \(Q\) under same conditions as in a. (**c**) 2D histogram of \(Q\) over \(R\) with crosses at the zero time lag predicted as \(Q^{\star}(t,t)\) from theory (21). Other parameters as in Figure 1.
Figure 5: Population-averaged cross-correlation \(C^{x}_{\phi\phi}(\tau)\) over time lag, as given by (26) (black), compared to simulation (red) for \(\bar{g}=0.5\). Other parameters as in Figure 1.
Figure 3: Time-lagged population-averaged autocorrelation \(Q(t,t+\tau)\) (3) simulated (red) and self-consistent solution (21) (black), together with the autocorrelation \(\langle R(t+\tau)R(t)\rangle=\beta_{11}(\tau)\) of the population-averaged activity \(R\) (2) (dashed; self-consistent in black, empirical in red), plotted logarithmically for \(\bar{g}=1.0\). Other parameters as in Figure 1.
regime and the chaotic phase, which is predominantly controlled by the amount of disorder in the connectivity quantified by \(g\) and, in the absence of driving noise, indicated by the order parameter \(Q\). This transition has been studied extensively in many previous works [7; 30; 11]. Our analysis here focuses on the "ferromagnetic" transition mainly controlled by the parameter \(\bar{g}\), for which \(R\) plays the role of an order parameter. Our theory explicitly demonstrates critical slowing down of the dynamics at the point of the continuous phase transition and allows the computation of the time scale. The theory, moreover, exposes that the two transitions cannot be studied in isolation, because we find a tight interplay of the two order parameters: fluctuations of \(R\) directly affect the order parameter \(Q\), in particular the latter inherits the slow temporal decay from the critical fluctuations of the former. Also vice versa, the response of \(R\) is found to be multi-phased, which appears to be caused by the back influence of \(Q\) on \(R\).
On the side of network theory, the proposed method of second order Legendre transform to obtain a renormalized theory in the form of a set of self-consistency equations for the first and second order statistics of the population activity may be useful to study other network properties. For example, within the framework of Bayesian inference [31; 32], one cornerstone of contemporary theory of deep neuronal networks [33; 34; 35], the presented theory may be useful to compute the network prior. An interesting feature in this regard is that the neurons in our renormalized theory do not decouple, in contrast to the case of the large \(N\)-limit for deep and recurrent networks with centered prior distributions on the weights [36]. We hope that the presented framework will be useful to understand the functional consequences of this finding and that it will open the door to studying the finite-size properties of recurrent stochastic networks in continuous time in general.
###### Acknowledgements.
We are grateful for helpful discussions with Andrea Crisanti in the early stages of this project. This project has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 945539 (Human Brain Project SGA3); the Helmholtz Association: Young investigator's grant VH-NG-1028; the German Federal Ministry for Education and Research (BMBF Grant 01IS19077A to Julich); Open access publication funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491111487. MD received funding as Vernetzungsdoktorand: "Dynamic characteristics of reservoir computing"
| The activity of cortical networks is hypothesized to lie very close to a critical point. The advantages of criticality are rich dynamics well suited for computation and critical slowing down, which provides a mechanism for dynamic memory. However, mean-field approximations, while being a versatile and common method, neglect the fluctuations that are needed to describe critical dynamics. A renormalized theory is therefore required.
We consider the Sompolinsky-Crisanti-Sommers model, which displays a well-studied chaotic as well as a magnetic transition. Following the analogy to the underlying quantum effective action, we derive self-consistency equations for the first two renormalized Green's functions. Their self-consistent solution reveals an interplay between the population-level activity and the single-neuron heterogeneity. This quantity |
2307.16376 | When Large Language Models Meet Personalization: Perspectives of
Challenges and Opportunities | The advent of large language models marks a revolutionary breakthrough in
artificial intelligence. With the unprecedented scale of training and model
parameters, the capability of large language models has been dramatically
improved, leading to human-like performances in understanding, language
synthesizing, and common-sense reasoning, etc. Such a major leap-forward in
general AI capacity will change the pattern of how personalization is
conducted. For one thing, it will reform the way of interaction between humans
and personalization systems. Instead of being a passive medium of information
filtering, large language models present the foundation for active user
engagement. On top of such a new foundation, user requests can be proactively
explored, and user's required information can be delivered in a natural and
explainable way. For another thing, it will also considerably expand the scope
of personalization, making it grow from the sole function of collecting
personalized information to the compound function of providing personalized
services. By leveraging large language models as general-purpose interface, the
personalization systems may compile user requests into plans, calls the
functions of external tools to execute the plans, and integrate the tools'
outputs to complete the end-to-end personalization tasks. Today, large language
models are still being developed, whereas the application in personalization is
largely unexplored. Therefore, we consider it to be the right time to review
the challenges in personalization and the opportunities to address them with
LLMs. In particular, we dedicate this perspective paper to the discussion of
the following aspects: the development and challenges for the existing
personalization system, the newly emerged capabilities of large language
models, and the potential ways of making use of large language models for
personalization. | Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen | 2023-07-31T02:48:56 | http://arxiv.org/abs/2307.16376v1 | # When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities
###### Abstract
The advent of large language models marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training and model parameters, the capability of large language models has been dramatically improved, leading to human-like performances in understanding, language synthesizing, and common-sense reasoning, etc. Such a major leap-forward in general AI capacity will fundamentally change the pattern of how personalization is conducted. For one thing, it will reform the way of interaction between humans and personalization systems. Instead of being a passive medium of information filtering, like conventional recommender systems and search engines, large language models present the foundation for active user engagement. On top of such a new foundation, user's requests can be proactively explored, and user's required information can be delivered in a natural, interactable, and explainable way. For another thing, it will also considerably expand the scope of personalization, making it grow from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging large language models as a general-purpose interface, the personalization systems may compile user's requests into plans, call the functions of external tools (e.g., search engines, calculators, service APIs, etc.) to execute the plans, and integrate the tools' outputs to complete the end-to-end personalization tasks. Today, large language models are still being rapidly developed, whereas the application in personalization is largely unexplored. Therefore, we consider it to be the right time to review the challenges in personalization and the opportunities to address them with large language models. In particular, we dedicate this perspective paper to the discussion of the following aspects: the development and challenges for the existing personalization system, the newly emerged capabilities of large language models, and the potential ways of making use of large language models for personalization.
Large Language Models, Personalization Systems, Recommender Systems, Tool-learning, AIGC
## 1 Introduction
The emergence of large language models [1], which have demonstrated remarkable progress in understanding human expression, is profoundly impacting the AI community. These models, equipped with vast amounts of data and large-scale neural networks, exhibit impressive capabilities in comprehending human language and generating text that closely resembles our own. Among these abilities are reasoning [2], few-shot learning [3], and the incorporation of extensive world knowledge within pre-trained models [1]. This marks a significant breakthrough in the field of artificial intelligence, leading to a revolution in our interactions with machines. Consequently, large language models have become indispensable across various applications, ranging from natural language processing and machine translation to creative content generation and chatbot development. The introduction of ChatGPT, in particular, has gained significant attention from the human community, prompting reflections on the transformative power of large language models and their potential to push the boundaries of what AI can achieve. This disruptive technology holds the promise of transforming how we interact with and leverage AI in countless domains, opening up new possibilities and opportunities for innovation. As these language models continue to advance and evolve, they are likely to shape the future of artificial intelligence, empowering us to explore uncharted territories and unlock even greater potential in human-machine collaboration.
Personalization, the art of tailoring experiences to individual preferences, stands as an essential and dynamic connection that bridges the gap between humans and machines. In today's technologically-driven world, personalization plays a pivotal role in enhancing user interactions and engagements with a diverse array of digital platforms and services. By adapting to individual preferences, personalization systems empower machines to cater to each user's unique needs, leading to more efficient and enjoyable interactions. Moreover, personalization goes beyond mere content recommendations; it encompasses various facets of user experiences, encompassing user interfaces, communication styles, and more. As artificial intelligence continues to advance, personalization becomes increasingly sophisticated in handling large volumes of interactions and diverse user
intents. This calls for the development of more advanced techniques to tackle complex scenarios and provide even more enjoyable and satisfying experiences. The pursuit of improved personalization is driven by the desire to better understand users and cater to their ever-evolving needs. As technology evolves, personalization systems will likely continue to evolve, ultimately creating a future where human-machine interactions are seamlessly integrated into every aspect of our lives, offering personalized and tailored experiences that enrich our daily routines.
Large language models, with their deep and broad capabilities, have the potential to revolutionize personalization systems, transforming the way humans interact and expanding the scope of personalization. The interaction between humans and machines can no longer be simply classified as active or passive, as with traditional search engines and recommendation systems. These large language models go beyond simple information filtering and offer a diverse array of additional functionalities. Specifically, user intent will be actively and comprehensively explored, allowing for more direct and seamless communication between users and systems through natural language. Unlike traditional technologies that relied on abstract and less interpretable ID-based information representation, large language models enable a more profound understanding of users' accurate demands and interests. This deeper comprehension paves the way for higher-quality personalized services, meeting users' needs and preferences in a more refined and effective manner. Moreover, the integration of various tools is greatly enhanced by the capabilities of large language models, significantly broadening the possibilities and scenarios for personalized systems. By transforming user requirements into plans, including understanding, generating, and executing them, users can access a diverse range of information and services. Importantly, users remain unaware of the intricate and complex transformations happening behind the scenes, as they experience a seamless end-to-end model. From this point of view, the potential of large language models in personalization is largely unexplored.
This paper addresses the challenges in personalization and explores the potential solutions using large language models. In the existing related work, LaMP [4] introduces a novel benchmark for training and evaluating language models in producing personalized outputs for information retrieval systems. On the other hand, other related surveys [5, 6, 7] focus mainly on traditional personalization techniques, such as recommender systems. From the perspective of learning mechanisms, LLM4Rec [5] categorizes existing approaches into Discriminative LLMs for Recommendation and Generative LLMs for Recommendation. Regarding the adaptation of LLMs for recommender systems in terms of 'Where' and 'How', Li et al. [6] concentrate on the overall pipeline across industrial recommender phases. Fan et al. [7], on the other hand, conduct a review with a focus on pre-training, fine-tuning, and prompting approaches. While these works discuss pre-trained language models like BERT and GPT for ease of analysis, they dedicate limited attention to the emergent capabilities of large language models. This paper aims to fill this gap by examining the unique and powerful abilities of large language models in the context of personalization, and further expands the scope of personalization with tools.
The remainder of this survey is organized as follows: we review personalization and large language models in Section 2 to give an overview of their development and challenges. Then we discuss the potential roles of large language models for personalization starting from Section 3, moving from the simple utilization of emergent capabilities to the complex integration with other tools. We also discuss the potential challenges when large language models are adapted for personalization.
## 2 Background Overview
### _Personalization Techniques_
Personalization, a nuanced art that tailors experiences to the unique preferences and needs of individual users, has become a cornerstone of modern artificial intelligence. In this section, we explore the captivating world of personalized techniques and their profound impact on user interactions with AI systems. We will delve into three key aspects of personalization: recommender systems, personalized assistance, and personalized search. These techniques not only enhance user satisfaction but also exemplify the evolution of AI, where machines seamlessly integrate with our lives, understanding us on a profound level. By tailoring recommendations, providing customized assistance, and delivering personalized search results, AI systems have the potential to create a truly immersive and individualized user experience.
#### 2.1.1 Recommender Systems
Recommender systems play a pivotal role in personalization, revolutionizing the way users discover and engage with content. These systems aim to predict and suggest items of interest to individual users, such as movies, products, or articles, based on their historical interactions and preferences.
Regarding the development of recommender systems, they have evolved significantly over the years, with collaborative filtering [8, 9] being one of the earliest and most influential approaches. Collaborative filtering relies on user-item interaction data to identify patterns and make recommendations based on users with similar preferences. Traditional solutions, such as matrix factorization [10] and user/item-based approaches [11], extract potentially interesting items based on the idea that users who have shown similar preferences in the past are likely to have similar preferences in the future. While effective, collaborative filtering has limitations, such as the "cold start" problem for new users and items. To address these limitations, content-based filtering [12] emerged, which considers the content of items to make recommendations. It leverages the features and attributes of items to find similarities and make personalized suggestions. These features can be grouped into user-side information, such as user profiles, item-side information [13, 14], such as item brands and item categories, and interaction-based information [15], such as reviews and comments. However, content-based filtering may struggle to capture complex user preferences and discover diverse recommendations restricted by the limited feature representations.
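As a concrete illustration of the collaborative-filtering idea discussed above, the following is a minimal matrix-factorization sketch (illustrative only; production systems add biases, negative sampling, tuned regularization schedules, etc.).

```python
import numpy as np

def matrix_factorization(ratings, num_users, num_items, dim=16, lr=0.05, reg=0.02,
                         epochs=50, seed=0):
    """Fit user/item latent factors by SGD so that their dot product approximates ratings."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, dim))   # user latent factors
    Q = 0.1 * rng.standard_normal((num_items, dim))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:                       # SGD over observed (user, item, rating)
            err = r - P[u] @ Q[i]
            pu, qi = P[u].copy(), Q[i].copy()
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q

# toy usage: 3 users, 4 items
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (1, 2, 1.0), (2, 0, 4.0), (2, 3, 5.0)]
P, Q = matrix_factorization(ratings, num_users=3, num_items=4)
print("predicted rating of user 2 for item 1:", P[2] @ Q[1])
```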
In recent years, deep learning has gained significant attention in the field of recommender systems due to its ability to model complex patterns and interactions in user-item data [16]. Deep learning-based methods have shown
promising results in capturing sequential, temporal, and contextual information, as well as extracting meaningful representations from large-scale data. With the introduction of deep networks, high-order interactions between features of users and items are well captured to extract user interest. Deep learning-based methods offer approaches to capture high-order interactions by employing techniques like attention mechanisms [17, 18] and graph-based networks [19] to mine complex relationships between users and items. These methods have been shown to enhance recommendation performance by considering higher-order dependencies and inter-item relationships. Another area of deep learning-based recommender systems is sequential recommenders, specifically designed to handle sequential user-item interactions, such as user behavior sequences over time. Self-Attention [20] and Gated Recurrent Units (GRUs) [21] are popular choices for modeling sequential data in recommender systems. These models excel in capturing temporal dependencies and context, making them well-suited for tasks like next-item recommendation and session-based recommendation. Sequential-based models can take into account the order in which items are interacted with and learn patterns of user behavior that evolve over time. Furthermore, the rise of language models like BERT has further advanced recommender systems by enabling a better understanding of both natural language features and user sequential behaviors [22]. These language models can capture deep semantic representations and world knowledge, enriching the recommendation process and facilitating more personalized and context-aware recommendations. Overall, the application of deep learning techniques in recommender systems has opened new avenues for research and innovation, promising to revolutionize the field of personalized recommendations and enhance user experiences.
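As a concrete illustration of the sequential recommenders mentioned above, the following is a minimal GRU-based next-item scorer in PyTorch (a generic sketch, not the model of any specific cited work).

```python
import torch
import torch.nn as nn

class GRURecommender(nn.Module):
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)  # 0 = padding id
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items + 1)

    def forward(self, item_seq):                      # item_seq: (batch, seq_len) of item ids
        h, _ = self.gru(self.item_emb(item_seq))
        return self.out(h[:, -1])                     # next-item scores from the last hidden state

model = GRURecommender(num_items=1000)
seqs = torch.randint(1, 1001, (4, 10))                # 4 users with 10 past interactions each
top5 = model(seqs).topk(5, dim=-1).indices            # top-5 next-item candidates
print(top5)
```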
#### 2.1.2 Personalized Assistance
Personalization Assistance refers to the use of artificial intelligence and machine learning techniques to tailor and customize experiences, products, or content based on individual preferences, behavior, and characteristics of users. By analyzing individual preferences, behaviors, and characteristics, it creates a personalized ecosystem that enhances user engagement and satisfaction. In contrast to traditional recommender systems, which rely on predicting user interests passively, personalized assistance takes a more proactive approach. It ventures into the realm of predicting users' next intentions or actions by utilizing contextual information, such as historical instructions and speech signals. This deeper level of understanding enables the system to cater to users' needs in a more anticipatory and intuitive manner. At the core of this capability lies the incorporation of cutting-edge technologies like natural language processing (NLP) and computer vision. These advanced tools empower the system to recognize and interpret user intentions, whether conveyed through spoken or written language, or even visual cues. Moreover, the potential of personalized assistance extends beyond static recommendations to dynamic and context-aware interactions. As the system becomes more familiar with a user's preferences and patterns, it adapts and refines its recommendations in real-time, keeping pace with the ever-changing needs and preferences of the user.
Conversational Recommender Systems mark a remarkable stride forward in the realm of personalized assistance. By engaging users in interactive conversations, these systems delve deeper into their preferences and fine-tune their recommendations accordingly. Leveraging the power of natural language understanding, these conversational recommenders adeptly interpret user queries and responses, culminating in a seamless and engaging user experience. Notable instances of personalized assistance products, such as Siri and Microsoft Cortana, have already proven their effectiveness on mobile devices. Additionally, the integration of large language models like ChatGPT further elevates the capabilities of conversational recommenders, promising even more enhanced user experiences. As this technology continues to progress, we can anticipate its growing significance across diverse industries, including healthcare, education, finance, and entertainment. While the growth of conversational recommenders and personalized assistance promises immense benefits, it is imperative to develop these products responsibly. Upholding user privacy and ensuring transparent data handling practices are essential to maintain user trust and safeguard sensitive information.
### _Large Language Models_
Language models perform the probabilistic modeling for the generation of natural language, i.e., presented with one specific context, the language models make predictions for the words which are to be generated for the future steps. Nowadays, the language models are mostly built upon deep neural networks, where two features need to be emphasized. First of all, the majority of language models are based on transformers or its close variations [23]. Such types of neural networks are proficient at modeling context dependency within natural languages, and exhibit superior and consistently improved performances when being scaled up. Secondly, the language models are pre-trained at scale with a massive amount of unlabeled corpus. The pre-trained models are further fine-tuned with task-oriented data so as to adapt to different downstream applications.
There has been tremendous progress on language models in recent years, where the emergence of large language models, represented by GPT-3, marks an important milestone for the entire AI community. Large language models (LLMs), as the name suggests, are massively scaled-up derivatives of conventional language models. In particular, the backbone networks and the training data have been greatly magnified. For one thing, although there is no specific criterion for the minimum number, a typical LLM usually consists of no less than several billion and up to trillions of model parameters, which is orders of magnitude larger than before. For another thing, the pre-training is conducted on much larger unsupervised corpora, with hundreds of billions or trillions of tokens carefully filtered from sources like Common Crawl, GitHub, Wikipedia, Books, ArXiv, etc. The impact of scaling is illustrated by the scaling laws [24, 25], which numerically uncover the power-law relationship between model size, data volume, training scale, and the growth of model performance.
The scaling-up of networks and training data leads to a leap forward in large language models' capabilities. They
not only become more proficient at conventional skills, like understanding people's intent and synthesizing human-like language, but also possess capabilities that are rarely exhibited by smaller models. Such a phenomenon is referred to as the emergent abilities of LLMs, where three representative capabilities are frequently discussed. One is the in-context learning capability, where LLMs may quickly learn from the few-shot examples provided in the prompt. Another is the instruction-following capability. After being fine-tuned on diversified tasks in the form of instruction tuning, LLMs become proficient at following human instructions. Thus, they may handle different tasks presented in an ad-hoc manner. Last but not least, LLMs are found to be able to conduct step-by-step reasoning. With certain types of prompting strategies, like Chain-of-Thought (CoT), LLMs may iteratively approach the final answer of complex tasks, like mathematical word problems, by breaking down the tasks into sub-problems and figuring out plausible intermediate answers for each of the sub-problems.
Thanks to their superior capabilities in understanding, reasoning, and generation, large language models, especially the chat models produced by instruction tuning, are presented as fundamental building blocks for many personalization services. One direct scenario is conversational search and recommendation. Once built upon large language models, the search and recommendation systems will be able to engage with users through interaction, present outputs in a verbalized and explainable way, receive feedback from the user and make adjustments on top of the feedback, etc. The above changes will bring about a paradigm shift for personalization services, from passively making search and recommendation to proactively figuring out users' needs and seeking out their preferred items. In a broader scope, LLMs may go beyond simply making personalized search and recommendation and act as personalized assistants that help users with their task completion. The LLMs may take note of users' important information within their memory, make personalized plans based on memorized information when new demands are raised, and execute the plans by leveraging tools like search engines and recommendation systems.
Yet, we have to confront the reality that applying LLMs for personalization is not a trivial problem; there are quite a few open challenges. Firstly, personalization calls for the understanding of user preferences, which is domain-specific knowledge rather than the commonsense knowledge learned by LLMs. The effective and efficient adaptation of LLMs for personalized services remains to be resolved. Besides, the LLMs could memorize users' confidential information while providing personalized services, which raises concerns about privacy protection. The LLMs are learned from Internet data; due to exposure bias, it is almost inevitable that unfair predictions are made for minorities. To address the above challenges, benchmarks and evaluation datasets are needed by the research communities. However, such resources are far from complete at present. To fully support personalization with LLMs, methodological and experimental frameworks need to be systematically established for all these perspectives.
## 3 LLMs for Personalization
In the following sections, we delve into the potential of large language models for personalization, examining their evolution from simple use cases, like utilizing world knowledge as features, to more intricate integration with other tool modules to act as agents. Specifically, we focus on the progression of emergent capabilities, starting from basic world knowledge and understanding user intent, and advancing to high-level reasoning abilities. We explore how large language models can contribute to constructing a knowledge base that enriches common-sense knowledge about various items. Additionally, we discuss how the understanding capability of large language models can empower content interpreters and explainers for in-depth analysis of interactions. Furthermore, we observe attempts to leverage the reasoning ability of large language models for system reasoners to provide recommendation results. These increasingly sophisticated capabilities enable complex utilization of large language models with other tool modules, enabling them to better comprehend user intentions and fulfill user instructions. Consequently, we also explore the integration of large language models with other tools for personalization, including tool learning, conversational agents, and personalized content creators. The overview of this chapter is depicted in Figure 1. Our comprehensive survey aims to provide a deeper understanding of the current landscape, shedding light on the opportunities and challenges associated with incorporating large language models into personalization.
## 4 LLMs as Knowledge Base
Knowledge bases provide rich information with semantics, attracting increasing attention to their usage in recommender systems. Particularly, knowledge graphs, where nodes represent entities and edges represent relations in a heterogeneous information graph, are a common format of knowledge bases and are introduced as side information to enhance the performance of recommenders. Knowledge graphs help understand the mutual relations between users and items and also provide better explainability for recommenders. Existing methods that incorporate knowledge graphs in recommender systems can be classified into three main groups: embedding-based methods, path-based methods, and unified methods. Embedding-based methods, such as CKE [14], DKN [26], KSR [27], and SHINE [28], utilize semantic representations of users and items. These methods aim to capture the underlying semantic relationships between entities in the knowledge graph, which can improve the quality of recommendations. Path-based approaches, such as Hete-MF [29], SemRec [30], RuleRec [31], and EIUM [32], exploit the semantic connectivity information present in the knowledge graph to regularize the user and item representations. These methods consider the paths between users and items in the graph and leverage them to incorporate interpretability into the recommendation process. Unified methods, such as RippleNet [33], KGCN [34], KGAT [35], AKUPM [36], and IntentGC [37], refine the representations of entities in the knowledge graph by leveraging embedding propagation techniques. These methods propagate the embeddings of entities through the graph structure,
allowing information to flow across connected entities and refining the representations accordingly.
However, the knowledge graphs adopted in recommender systems are limited and of low usability. Reviewing the various knowledge graph datasets for recommender systems, covering the domains of movies, books, news, products, etc., these datasets are still significantly sparse compared to the vast amount of human knowledge, in particular lacking facts, because constructing a knowledge graph requires expensive supervision. Building a comprehensive and accurate knowledge graph would be a complex and resource-intensive task, including data collection, integration, and cleaning to assure data quality and consistency. Limited by the high cost of labelling knowledge graphs, missing entities or relations are common. The user preferences for these entities or paths may be ignored, and the recommendation performance suffers.
The ability of large language models to retrieve factual knowledge and serve as explicit knowledge bases [38, 39, 40, 41, 42, 43, 44, 45, 46] has been widely discussed, which presents an opportunity to construct more comprehensive knowledge graphs within recommender systems. Tracing back to the work of [38], large language models have shown an impressive capacity for storing factual information, such as entities and common sense, and this commonsense knowledge can be reliably transferred to downstream tasks.
**Existing methods in knowledge graphs fall short of handling incomplete KGs [47] and constructing KGs from text corpora [48]**, and many researchers attempt to leverage the power of LLMs to solve these two tasks, i.e., knowledge completion [49] and knowledge construction [50]. **For knowledge graph completion**, which refers to the task of inferring missing facts in a given knowledge graph, recent efforts encode text or generate facts for knowledge graphs. MTL-KGC [51] encodes text sequences to predict the plausibility of triples. MEMKGC [52] predicts the masked entities of a triple. StAR [53] utilizes Siamese textual encoders to separately encode the entities. GenKGC [54] uses decoder-only language models to directly generate the tail entity. TagReal [55] generates high-quality prompts from external text corpora. AutoKG [48] directly adopts LLMs, such as ChatGPT and GPT-4, and designs tailored prompts to predict the tail entity. As for the other important task, **knowledge graph construction**, which refers to creating a structured representation of knowledge, LLMs can be applied throughout the construction process, including entity discovery [56, 57], coreference resolution [58, 59], and relation extraction [60, 61]. LLMs can also achieve end-to-end construction [62, 50, 42, 63, 55] to build KGs directly from raw text. LLMs further enable knowledge distillation for constructing knowledge graphs: symbolic-kg [64] distills commonsense facts from GPT-3 and then fine-tunes a small student model to generate knowledge graphs. These models have demonstrated the capacity to store large volumes of knowledge, providing a viable option for improving the scope and depth of knowledge graphs. Furthermore, these advancements have prompted research into the direct transfer of knowledge stored in LLMs to knowledge graphs, eliminating the need for human supervision. This line of research sheds light on the possibility of automating knowledge graph completion with cutting-edge large language models.
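To make the prompt-based completion idea concrete, the following is a minimal sketch of tail-entity prediction in the spirit of AutoKG [48]. The `call_llm` helper is a hypothetical stand-in for any chat-completion API, and the candidate-restricted prompt format is an illustrative assumption rather than the prompt used in the cited work.

```python
# Minimal sketch of prompt-based knowledge graph completion (tail-entity
# prediction) in the spirit of AutoKG [48]. `call_llm` is a hypothetical
# stand-in for any chat-completion API and must be supplied by the reader.
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat-completion endpoint."""
    raise NotImplementedError

def complete_triple(head: str, relation: str, candidates: List[str]) -> str:
    """Ask the LLM to pick the most plausible tail entity for (head, relation, ?)."""
    prompt = (
        "You are completing a knowledge graph for a movie recommender system.\n"
        f"Incomplete triple: ({head}, {relation}, ?)\n"
        f"Candidate tail entities: {', '.join(candidates)}\n"
        "Answer with the single most plausible tail entity."
    )
    return call_llm(prompt).strip()

# Example usage (hypothetical data):
# complete_triple("Inception", "directed_by", ["Christopher Nolan", "James Cameron"])
```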
By leveraging the capabilities of LLMs, recommender systems can benefit from a more extensive and up-to-date knowledge base. Firstly, missing factual information can be completed to construct more extensive knowledge graphs, so that the relations between entities can be extracted for better recommenders. Secondly, in contrast to the previously exclusive in-domain data, the large language model itself contains plenty of cross-domain information that can support cross-domain recommendations, such as recommending appropriate movies based on the user's favorite music. To sum up, the stored knowledge can be utilized to enhance recommendation accuracy, relevance, and personalization, ultimately improving the overall performance of recommender systems. Existing work [65] prompts large language models to generate factual knowledge about movies to enhance the performance of CTR prediction models. To better utilize the factual knowledge, a _Knowledge Adaptation_ module is adopted for better contextual information extraction.
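The sketch below illustrates this knowledge-augmentation pattern under stated assumptions: an LLM is prompted for item-level factual knowledge, the generated text is encoded by a frozen text encoder, and a small adaptation network projects the embedding into the CTR feature space. The prompt wording, encoder choice, and module sizes are illustrative assumptions, not the implementation in [65].

```python
# Hedged sketch of LLM-generated factual knowledge feeding a CTR model,
# loosely following the knowledge-augmentation idea in [65]. The prompt,
# text encoder, and CTR backbone are assumptions, not the authors' code.
import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    """Projects an LLM-knowledge embedding into the CTR feature space."""
    def __init__(self, text_dim: int = 768, ctr_dim: int = 64):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(), nn.Linear(256, ctr_dim)
        )

    def forward(self, knowledge_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(knowledge_emb)

def build_knowledge_prompt(item_title: str) -> str:
    # Hypothetical prompt asking the LLM for factual item attributes.
    return (f"List the key factual attributes (genre, director, themes, audience) "
            f"of the movie '{item_title}' in one short paragraph.")

# Assumed pipeline:
#   knowledge_text = call_llm(build_knowledge_prompt(title))
#   knowledge_emb  = text_encoder(knowledge_text)            # e.g. a frozen BERT
#   ctr_input      = torch.cat([id_features, KnowledgeAdapter()(knowledge_emb)], dim=-1)
```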
It is worth noting that the **hallucination** problem of large language models can be a challenge when applied to recommendation tasks. The inherent nature of large language models can introduce ambiguity or inaccurate provenance [66]. This issue can emerge as the introduction of extraneous information or even noise into the recommendation process. Large language models may generate responses that, while syntactically correct, lack informative context or relevance. According to KoLA [67], a benchmark for evaluating the world knowledge of LLMs, even the top-ranked GPT-4 achieves only 0.012 in Precision and 0.013 in Recall
Fig. 1: The Overview of LLM for Personalization
on the task of _Named Entity Recognition_, which falls far short of the performance (0.712 in Precision and 0.706 in Recall) of the task-specific model PL-Marker [68]. Such a finding suggests that common-sense knowledge is still far from being sufficiently captured by LLMs. Aggregating results that contain irrelevant or deceptive information can damage the usefulness of the recommender system.
## 5 LLMs as Content Interpreter
Content-based recommenders provide an effective solution for mitigating the sparse-feedback issue in recommender systems. By leveraging the attributes and characteristics of items, these systems achieve a more profound understanding of item properties, facilitating accurate matching with user preferences. However, the content features used in content-based recommendation may also exhibit sparsity, and relying solely on recommendation supervision signals, such as clicks and browsing, might not fully exploit the potential benefits of these features. To overcome this challenge, language models emerge as powerful foundational algorithms that act as content interpreters for processing textual features. Their utilization enhances the effectiveness of recommender systems by effectively understanding and interpreting textual content, leading to improved recommendations.
### _Conventional Content Interpreter_
Conventional content interpreters include statistical models, neural networks, and advanced NLP networks, as summarized in Figure 2. These approaches primarily focus on transforming content information, such as textual data, into feature embeddings to facilitate the recommendation process.
Statistical models like TF-IDF, Minimum Description Length (MDL) [69], and bag-of-words have been traditionally used to encode textual data such as news articles and documents into continuous value vectors. However, with the advancement of deep learning techniques, researchers have explored various neural network architectures to learn more expressive content representations. Instead of relying solely on statistical embeddings, some approaches initialize the vectors with bag-of-words representations and then employ autoencoder-based models to learn more powerful representations. For example, CDL [16] combines the latent vectors obtained from autoencoders with the original ID embeddings to enhance content representations. CRAE [70] introduces a collaborative recurrent autoencoder that captures the word order in texts, enabling the modeling of content sequences in collaborative filtering scenarios. Dong et al. [71] propose a stacked denoising autoencoder that reconstructs item/user ratings and textual information simultaneously, allowing for the joint modeling of collaborative and textual knowledge. CVAE [72] introduces a collaborative variational autoencoder that learns probabilistic textual features. While autoencoders are effective in learning low-dimensional representations from text data, they may struggle to capture semantic information effectively [73]. In some cases, approaches like doc2vec [74] are used to construct content embeddings [75, 76] and learn hidden representations. Okura et al. [77] evaluate different network architectures, including word-models and GRU networks, for representing user states.
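As a concrete illustration of the statistical end of this spectrum, the following self-contained example encodes item texts with TF-IDF and ranks items by cosine similarity to a user's liked item; the item texts are toy data used purely for illustration.

```python
# Minimal example of a statistical content interpreter: TF-IDF item
# representations matched by cosine similarity. Item texts are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

item_texts = [
    "space opera about interstellar travel and black holes",
    "romantic comedy set in Paris",
    "documentary on deep sea exploration and marine life",
]

vectorizer = TfidfVectorizer(stop_words="english")
item_vecs = vectorizer.fit_transform(item_texts)       # sparse item embeddings

# Represent the user by an item they liked (item 0) and rank all items.
user_vec = item_vecs[0]
scores = cosine_similarity(user_vec, item_vecs).ravel()
ranking = scores.argsort()[::-1]
print(ranking)   # item 0 first (itself), then the most similar items
```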
Following the advancements in neural natural language processing (NLP) models, more sophisticated architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Neural Attention models have been employed as content interpreters to extract contextual information and capture user preferences. These models take sentence inputs, such as news titles, reviews, or comments, and transform them into word embedding matrices using random initialization or word2vec embeddings [78]. Various architectures, including CNNs, attention networks, and hybrid models, are utilized to learn representations of sentences. For example, NPA [79] and LSTUR [80] incorporate attention mechanisms to determine the importance of words after CNN layers. NRMS [81] and CPRS [82] utilize multi-head self-attention networks to learn word representations. These models are effective in capturing long-context dependencies and understanding the semantic information in the text. In addition to text modeling, language models are also used as content interpreters to capture user interests based on their historical interactions. For instance, WE3CN [83] employs a 3D CNN to extract temporal features from the historical data. DKN [26] utilizes an attention mechanism to aggregate historical information related to candidate items. DAN [84] proposes an attention-based LSTM to capture richer hidden sequence features. These models leverage different neural architectures to enhance the representation of text in the context of recommendation systems. It is worth noting that these models still have limitations in terms of depth and the ability to effectively generalize semantic information.
### _Language Model based Content Interpreter_
In recent years, there has been a growing interest in incorporating more powerful pre-trained language models, such as BERT and GPT, into recommendation systems. These language models have shown exceptional performance in various natural language processing tasks and have sparked researchers' inspiration to leverage them for capturing deep semantic representations and incorporating world knowledge in recommendation systems. However, applying pre-trained language models to recommendation tasks presents two main challenges. Firstly, there is a misalignment of goals between general-purpose language models and the specific objectives of recommendation systems. To address this, researchers have proposed approaches that fine-tune the pre-trained models or design task-specific pre-training tasks to adapt them to recommendation tasks. For example, U-BERT [85] employs BERT as a content interpreter and introduces masked opinion token prediction and opinion rating prediction as pre-training tasks to better align BERT with recommendation objectives. Similarly, other works [86, 87, 88, 89, 90] have utilized pre-trained BERT to initialize the news encoder for news recommendation, enhancing the representation of textual features. The pre-trained model, ERNIE, is also utilized to enhance the representation ability of queries and documents [91, 92]. The second challenge is reducing the online inference latency caused by pre-trained language models, which can be computationally expensive. Researchers have explored techniques such as knowledge distillation and model optimization to obtain lightweight and efficient models suitable for online services. For instance,
CTR-BERT [93] employs knowledge distillation to obtain a cache-friendly model for click-through rate prediction, addressing the latency issue.
Moreover, pre-trained language models have been applied beyond mainstream recommendation tasks. They have been integrated into various recommendation scenarios, including tag recommendation [94], tweet representations [95], and code example recommendation [96], to enhance the representation of textual features in those specific domains. Additionally, some recent works [97, 98, 99, 100] have explored using only textual features as inputs to recommendation models. Building on the universality of natural language, this paradigm leverages pre-trained language models to alleviate cold-start problems and facilitate cross-domain recommendations. ZESREC [97] uses BERT to obtain universal continuous representations of item descriptions for zero-shot recommendation. UniSRec [98] focuses on cross-domain sequential recommendation and employs a lightweight MoE-enhanced module to incorporate the fixed BERT representation into the recommendation task. VQ-Rec [99] further aligns the textual embeddings produced by pre-trained language models to the recommendation task with the help of vector quantization. Fu et al. [101] explore layerwise adaptor tuning to achieve parameter-efficient transferable recommendations.
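A minimal sketch of this text-only, zero-shot paradigm is shown below: item descriptions are encoded with a frozen pre-trained sentence encoder, a user vector is formed from the user's history, and candidates are ranked by similarity. The specific encoder name and the toy descriptions are assumptions, not the setup of the cited works.

```python
# Hedged sketch of a ZESREC-style zero-shot recommender: frozen text encoder
# over item descriptions, user vector = mean of history embeddings.
# The model name is an assumption; any sentence encoder could be substituted.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # frozen text encoder

history_desc = ["A heist thriller with dream sequences",
                "A sci-fi epic about space travel"]
candidate_desc = ["A romantic comedy in New York",
                  "A mind-bending science-fiction mystery"]

user_vec = encoder.encode(history_desc, normalize_embeddings=True).mean(axis=0)
cand_vecs = encoder.encode(candidate_desc, normalize_embeddings=True)

scores = cand_vecs @ user_vec        # dot-product scores over unit-norm candidates
print(np.argsort(-scores))           # candidate ranking, best first
```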
While pre-trained language models empower text understanding with the benefit of capturing world knowledge, the development of pre-trained large language models brings emergent abilities in reasoning and generalization. TALLRec [102] explores the ability of large language models for sequential recommendation. The authors observe that original language models perform poorly in zero-shot and few-shot scenarios, while recommendation-specific instruction-tuned language models demonstrate superior performance in few-shot learning and cross-domain generalization. Similarly, Kang et al. [103] propose a similar instruction-tuning method for rating-prediction recommendation tasks based on the T5 backbone. They find that the tuned language models, which are data-efficient, outperform traditional recommenders. PALR [104] further enhances the construction pipeline of recommendation-specific instruction tuning: it first employs large language models to generate reasoning as additional features based on the user's behavior history; next, a small set of candidates is retrieved using any existing model based on the user profile; finally, to adapt general-purpose language models to the recommendation task, the generated reasoning features, user interaction history, and retrieved candidates are converted into natural language instruction data used to fine-tune a language model. Existing instruction-tuning methods of language models for recommendation scenarios typically focus on a single type of recommendation task, limiting the full utilization of language models' strong generalization ability. InstructRec [105] addresses this limitation by formulating recommendation as an instruction-following procedure. It designs various instruction templates to accommodate different recommendation tasks and employs GPT-3.5 to generate high-quality instruction data based on users' historical data and the templates. The language models fine-tuned on this instruction data can effectively handle a wide range of recommendation tasks and cater to users' diverse information requirements.
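To make the instruction-tuning recipe more tangible, the following sketch converts raw interaction logs into instruction/input/output records of the kind such pipelines consume. The field names, template wording, and toy data are illustrative assumptions, not the exact templates of TALLRec [102] or InstructRec [105].

```python
# Hedged sketch of building instruction-tuning examples from interaction
# logs, in the spirit of recommendation-specific instruction tuning.
# The template and field names are illustrative assumptions.
import json

def to_instruction_example(history, candidate, label):
    return {
        "instruction": "Given the user's watching history, decide whether the "
                       "user will enjoy the candidate movie. Answer Yes or No.",
        "input": f"History: {', '.join(history)}\nCandidate: {candidate}",
        "output": "Yes" if label == 1 else "No",
    }

example = to_instruction_example(
    history=["The Matrix", "Inception", "Interstellar"],
    candidate="Blade Runner 2049",
    label=1,
)
print(json.dumps(example, indent=2))
# Such JSON records can then be fed to a standard supervised fine-tuning loop.
```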
## 6 LLMs as Explainer
In addition to valuing the suggestions made by a recommendation model, users are also interested in comprehensible justifications for these recommendations [106, 107]. This is crucial as most recommender systems are black boxes whose inner workings are inscrutable to human understanding [108], diminishing user trust. Taking drug recommendation as an example, it is unacceptable to simply recommend drugs with good curative effects while failing to explain why they are effective. To this end, explainable recommendations aim to couple high-quality suggestions with accessible explanations. This not only helps to improve the model's transparency, persuasiveness, and reliability, but also facilitates the identification and rectification of potential errors through insightful explanations. These benefits have been extensively documented in recent work [109, 110, 111, 112].
Fig. 2: The development of content interpreter in recommendation.
For instance, [110] conducted a study that involved addressing 40 difficult tasks and evaluating the impact of explanations in zero-shot and few-shot scenarios. Their findings demonstrated that explanations have a positive effect on model performance by establishing a connection between examples and interpretation.
Traditional approaches mainly focus on template-based explanations, which can be broadly categorized into item-based, user-based, and attribute-based explanations [113]. Item-based explainable methods relate recommendations to familiar items [114], explaining that _the recommended item bears similarity to others the user prefers_; such explanations are prevalent on platforms like Amazon [115] and Netflix [116]. However, due to their collaborative nature, they may underperform in personalized recommendations requiring diversity and can struggle to efficiently identify relevant items in industrial settings with vast item catalogs. In contrast, user-based explanations [117] leverage social relationships to make recommendations by explaining that _users with similar interests also favor the recommended item_. The social nature of these explanations makes them more persuasive, encouraging users to try the recommendations. However, the variance in user preferences may render this approach less impactful in gauging actual preference. Lastly, attribute-based explanations focus on highlighting the attributes of recommended items that users might find appealing, essentially conveying _"these features might interest you"_. This method demands customization according to each user's interests, yielding higher accuracy and satisfaction. Thus, attribute-based explanations are at the forefront of research [118, 119, 120, 121, 122].
Obviously, such explanations typically employ pre-defined and formulaic formats, such as explanations based on similar items or friends. Although capable of conveying essential information, such inflexible formats may diminish user experience and satisfaction by lacking adaptability and personalization [106]. For this reason, natural language generation approaches have received increasing attention. Early work [123, 124, 122] mainly relied on recurrent neural networks (e.g., LSTM [125], GRU [126]). Limited by the models' expressiveness, these approaches often suffer from insufficient diversity. With the excellent performance of Transformer-based models in various natural language tasks, some work attempts to integrate Transformer-based models into explainable recommendation. [127] uses the position vectors corresponding to user (item) IDs to predict explanation tokens. Subsequent work [128] has shown that the generated explanations may fail to justify the user's preference, instead synthesizing irrelevant descriptions. Therefore, Ni et al. [129] used such information as guided input to BERT to obtain controllable justifications. Considering that such auxiliary information is not always available in real-world scenarios, ExBERT [128] only requires historical explanations written by users and utilizes a multi-head self-attention based encoder to capture the relevance between these explanations and user-item pairs. Recently, MMCT [130], EG4Rec [131], and KAER [132] have further carried out finer-grained modeling of information such as visual images, time series, and emotional tendencies to obtain higher-quality interpretations.
Due to the limited expressive power of traditional language models, natural language generation methods are prone to long-range dependence problems [128]; that is, with long text inputs they tend to generate explanations that lack diversity and coherence. In addition, these explanation methods are tightly coupled with specific recommendation models (e.g., NETE [124]) or directly design a new recommendation model (e.g., NRT [122], PETER [127]), and they are often powerless when faced with existing advanced recommendation models, which limits their generalizability. This is also a flaw of template-based methods. Notably, in industrial settings, recommendation algorithms frequently involve not just a single model but a cascade or integration of multiple models, and these elaborate combinations further exacerbate the difficulty of deciphering recommendations.
LLMs' remarkable generative ability in language tasks makes them ideal for tackling the aforementioned challenges [133]. Firstly, leveraging extensive training data, LLMs adeptly harness human language, encompassing context, metaphors, and complex syntax. This equips them to craft customized explanations that are precise, natural, and adaptable to various user preferences [124, 127, 134], mitigating the limitations of conventional, formulaic explanations. Secondly, the unique in-context learning capabilities of LLMs, such as zero-shot prompting, few-shot prompting, and chain-of-thought prompting, enable them to gather real-time user feedback during interactions and furnish recommendation outcomes together with their corresponding interpretations, fostering bidirectional human-machine alignment. A recent study [135] has demonstrated the potential of LLMs in elucidating the intricacies of complex models, as evidenced by GPT-4 autonomously interpreting the function of each of GPT-2's neurons given appropriate prompts and the corresponding neuron activations. This showcases an innovative approach to interpreting deep learning-based recommendation models. It is critical to highlight that this interpretation technique is agnostic to the model's architecture, distinguishing it from traditional interpretations that
TABLE I: LLMs for Content Interpreter

| Approach | Task | LLM backbone | Tuning Strategy | Datasets |
| --- | --- | --- | --- | --- |
| TALLRec [102] | Sequential recommendation | LLaMA-7B | Instruction tuning & fine-tuning | MovieLens-100K, BookCrossing |
| LLMs-Rec [103] | Rating prediction | Flan-T5-Base, Flan-T5-XXL | Fine-tuning | MovieLens-1M, Amazon Book |
| PALR [104] | Item recommendation | LLaMA-7B | Instruction tuning | MovieLens-1M, Amazon Beauty |
| InstructRec [105] | Sequential recommendation, personalized search | Flan-T5-XL | Instruction tuning | Amazon-Games, CDs |
are bound to specific algorithms. Thus, recommendation interpretations founded on LLMs pave the way for a versatile and scalable interpretational framework with broader applicability.
Although LLMs have inherently significant advantages in recommendation explanation, it is imperative to recognize potential issues. Firstly, akin to recommendation models, LLMs are essentially black boxes that are difficult for humans to understand: we cannot identify which concepts their explanations are based on [136]. Also, the explanations given may be insincere; that is, they may be inconsistent with the model's recommendation behaviors. Some recent developments [137, 111] utilize chains of thought to prompt reasoning for improved interpretability; however, the opacity of each reasoning step remains a concern, and [138] has questioned whether chain-of-thought prompting may yield unfaithful explanations. Secondly, the extensive data utilized by LLMs may encompass human biases and erroneous content [139]. Consequently, even if the explanation aligns with the model's recommendation behavior, both the explanation and the recommendation could be flawed. Monitoring and calibrating these models to ensure fairness and accuracy in explainable recommendation is essential. Lastly, generative models exhibit varying levels of proficiency across different tasks, leading to inconsistencies in performance: identical semantic cues could yield disparate recommendation explanations. This inconsistency has been substantiated by recent studies [140, 141] focusing on LLMs' robustness. Addressing these issues calls for exploring techniques to mitigate or even circumvent low-reliability explanatory behavior, and investigating how LLMs can be trained to consistently generate reliable recommendation explanations, especially under adversarial conditions, is a worthwhile avenue for further research.
## 7 LLMs as Common System Reasoner
With the development of large language models, it has been observed that LLMs exhibit reasoning abilities [142, 2] when they are sufficiently large; reasoning is fundamental to human intelligence for decision-making and problem-solving. By providing the models with a 'chain of thoughts' [111], such as prompting with _'let us think about it step by step'_, large language models exhibit emergent reasoning abilities and can arrive at conclusions or judgments according to evidence or logic. Accordingly, for recommender systems, large language models are capable of reasoning that helps user-interest mining, thus improving performance.
### _Making Direct Recommendations_
In-context learning [148, 149, 150, 151, 152, 153, 154] is one of the emergent abilities that differentiate LLMs from previous pre-trained language models: given a natural language instruction and task demonstrations, LLMs generate the output by completing the word sequence without training or tuning [3]. In in-context learning, the prompt consists of a task instruction and/or several input-output pairs demonstrating the task, followed by a test input on which the LLM is required to make a prediction. Each input-output pair is called a _shot_. This emergent ability enables prediction on new cases without tuning, unlike conventional machine learning.
In the realm of recommender systems, numerous studies have explored the performance of zero-shot/few-shot learning with large language models, covering common recommendation tasks such as rating prediction and ranking prediction. These studies evaluate the ability of language models to provide recommendations without explicit tuning, as summarized in Table II, where all methods adopt in-context learning to act as direct recommenders. The general process is illustrated in Figure 3. Accordingly, we have the following findings:
* The aforementioned studies primarily focus on evaluating zero-shot/few-shot recommenders using open-domain datasets, predominantly in domains such as movies and books. Large language models are trained on extensive open-domain corpora, enabling them to possess a significant amount of common-sense knowledge, including information about well-known movies. However, when it comes to private-domain data, such as e-commerce products or specific locations, the ability of zero-shot recommenders lacks validation and is expected to be challenging.
* Current testing methods necessitate the integration of additional modules to validate the performance of zero-shot recommenders for specific tasks. In particular, for ranking tasks that involve providing a list of items in order of preference, a candidate generation module is employed to narrow down the pool of items [145, 146]. Generative models like gpt-3.5-turbo produce results in a generative manner rather than relying on recall from existing memories, thus requiring additional modules to implement ID-based item recommendations.
* From the perspective of recommendation performance, zero-shot recommenders exhibit some capability, and few-shot learners perform better than zero-shot recommenders. However, there still exists a substantial gap when compared to traditional recommendation models, particularly fine-tuned large language models designed specifically for recommendation, such as P5 [155] and MoRec [156]. This highlights that large language models do not possess a significant advantage in personalized modeling.
Another important emergent ability is the _'step by step'_ reasoning, where LLMs can solve complex tasks by utilizing prompts including previous intermediate reasoning steps, called the 'chain of thoughts' strategy [111]. Wang and Lim [145] design a three-step prompt, namely NIR, to capture user preferences, extract the most representative movies and rerank the items after item filtering. Such a multi-step reasoning strategy significantly improves recommendation performance.
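The following is a minimal sketch of such a multi-step prompting strategy (preference summarization, representative-item extraction, reranking), loosely following the NIR idea [145]. The `call_llm` helper and the prompt wording are illustrative assumptions rather than the cited prompts.

```python
# Hedged sketch of a three-step prompting strategy for reranking,
# loosely following NIR [145]. `call_llm` is a hypothetical chat wrapper.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion API here

def three_step_rerank(watched, candidates):
    # Step 1: summarize the user's preferences from the watch history.
    prefs = call_llm("Summarize the preferences of a user who watched: "
                     + ", ".join(watched))
    # Step 2: extract the most representative items from the history.
    representative = call_llm("Given these preferences:\n" + prefs +
                              "\nSelect the five most representative movies from: "
                              + ", ".join(watched))
    # Step 3: rerank the candidate list conditioned on the previous steps.
    reranked = call_llm("User preferences:\n" + prefs +
                        "\nRepresentative movies:\n" + representative +
                        "\nRerank the following candidates from most to least relevant: "
                        + ", ".join(candidates))
    return reranked
```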
### _Reasoning for Automated Selection_
Automated Machine Learning (AutoML) is widely applied in recommender systems to eliminate costly manual setup through trial and error. The search space in recommender systems can be categorized into (1) embedding sizes, (2) features, (3) feature interactions, and (4) model architectures. Embedding size search, such as [157, 158, 159, 160], seeks an appropriate embedding size for each feature to avoid resource overconsumption. Feature search consists of raw feature search [161, 162] and synthetic feature search [163, 164], which select a subset from the original or crossed features to retain the informative ones and reduce both computation and space cost. Feature interaction search, such as [165, 166, 167, 168, 169], automatically filters out feature interactions that are not helpful. Model architecture search, such as [170, 171, 172, 173], expands the search space to entire architectures. The search strategy has shifted from discrete reinforcement learning, which iteratively samples architectures for training and is time-consuming, to differentiable search, which adaptively selects architectures within one-shot learning to circumvent the computational burden and converge more efficiently. The evaluation of each sampled architecture then acts as the signal to adjust the selections. That is, there is a decision maker that memorizes the results of previous architecture choices and analyzes them to give the next recommended choice.
Emergent LLMs have excellent memorization and reasoning capabilities that can serve automated learning. Several works have attempted to validate the potential of automated machine learning with LLMs. As a preliminary attempt, GPT-NAS [174] takes advantage of the generative capability of LLMs: network architectures are formulated as character sequences, so that new architectures can be generated directly by generative pre-trained models. NAS-Bench-101 [175] is utilized for pre-training, and the state-of-the-art results are used for fine-tuning. The generative pre-trained models produce reasonable architectures, which reduces the search space for the subsequent genetic algorithms that search for optimal architectures. The relatively advanced reasoning ability is further evaluated in GENIUS [176], where GPT-4 is employed as a black-box agent to generate potentially better-performing architectures according to previous trials, including the tried architectures and their evaluation performance. According to the results, GPT-4 can generate good network architectures, showing its potential for more complicated tasks. Yet it is too
TABLE II: Zero/few-shot learners of LLMs for RS

| Approach | LLM backbone | Task | Metric | Datasets | ICL | CoT |
| --- | --- | --- | --- | --- | --- | --- |
| [143] | gpt-3.5-turbo | Rating prediction; sequential recommendation; direct recommendation; explanation generation; review summarization | RMSE, MAE; HR, NDCG; HR, NDCG; BLEU-4, ROUGE, human eval; BLEU-4, ROUGE, human eval | Amazon Beauty | ✓ | |
| [144] | text-davinci-002, text-davinci-003, gpt-3.5-turbo | Point-wise, pair-wise, list-wise ranking | NDCG, MRR | MovieLens-1M, Amazon-Book, Amazon-Music | ✓ | ✓ |
| [103] | Flan-U-PALM, gpt-3.5-turbo | Rating prediction; ranking prediction | RMSE, MAE; ROC-AUC | MovieLens-1M, Amazon-Books | ✓ | ✓ |
| [145] | text-davinci-003 | Reranking | NDCG, HR | MovieLens 100K | ✓ | ✓ |
| [146] | gpt-3.5-turbo | Reranking | NDCG | MovieLens-1M, Amazon-Games | ✓ | ✓ |
| [147] | gpt-3.5-turbo | Reranking | Precision | MIND | ✓ | |
Fig. 3: An Example of zero/few-shot learning for direct recommenders
difficult for LLMs to directly make decisions on challenging technical problems by prompting alone. To balance efficiency and interpretability, one approach is to integrate LLMs into certain search strategies, where a genetic algorithm guides the search process and LLMs generate the candidate crossovers. LLMatic [177] and EvoPrompting [178] use code-LLMs as mutation and crossover operators in a genetic NAS algorithm. During evolution, each generation decides with a certain probability whether to perform crossover or mutation to produce new offspring, and the crossover and mutation results are generated by prompting LLMs. Such a solution integrates the LLM into the genetic search algorithm and achieves better performance than direct reasoning, as sketched below.
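The sketch below shows one way an LLM can act as a crossover operator inside an evolutionary search loop, in the spirit of EvoPrompting [178]. The `call_llm` and `evaluate` helpers, the prompt wording, and the replacement rule are illustrative assumptions, not the cited algorithm's exact procedure.

```python
# Hedged sketch of an LLM used as a crossover operator inside a genetic
# architecture-search loop. `call_llm` and `evaluate` are hypothetical stubs.
def call_llm(prompt: str) -> str:
    raise NotImplementedError          # any code-capable LLM

def evaluate(arch: str) -> float:
    raise NotImplementedError          # train-and-validate the candidate architecture

def evolve(population, generations=5):
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:2]
        # Ask the LLM to combine the two best-performing architectures.
        child = call_llm(
            "Here are two well-performing model architectures:\n"
            f"A: {parents[0]}\nB: {parents[1]}\n"
            "Propose a new architecture that combines their strengths. "
            "Return only the architecture description."
        )
        population = scored[:-1] + [child]   # replace the weakest individual
    return max(population, key=evaluate)
```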
The research mentioned above brings valuable insights to the field of automated learning in recommender systems. However, there are several challenges that need to be addressed. Firstly, the search space in recommender systems is considerably more complex, encompassing diverse types of search space and facing significant volume issues. This complexity poses a challenge in effectively exploring and optimizing the search space. Secondly, compared to the common architecture search in other domains, recommender systems lack a strong foundation of knowledge regarding the informative components within the search space, especially the effective high-order feature interactions. Unlike well-established network structures in other areas, recommender systems operate in various domains and scenarios, resulting in diverse and domain-specific components. Addressing these challenges and advancing the understanding of the search space and informative components in recommender systems will pave the way for significant improvements in automated learning approaches.
## 8 LLMs as Conversational Agent
Conversational recommender system (CRS) is a specialized type of recommendation tool that aims to uncover users' interests and preferences through dialogue, enabling personalized recommendations and real-time adjustment of recommendation strategies based on user feedback. Compared to traditional recommender systems, conversational recommender systems have the advantage of real-time understanding of user intents and the ability to adapt recommendations based on user feedback. Typically, a conversational recommender system consists of two main components: a dialogue module and a recommendation module. In this section, we will primarily focus on discussing the dialogue module, which plays a crucial role in facilitating effective user-system interactions and understanding user preferences.
In a conversational recommender system, the dialogue module typically takes the form of a dialogue system. Dialogue systems can generally be classified into two main categories: chit-chat and task-oriented. The former focuses on open-domain question answering, and two major methods are commonly employed: generative and retrieval-based methods. Generative methods [179, 180, 181] utilize a sequence-to-sequence model structure to generate responses, while retrieval-based methods [182, 183, 184] transform the task of generating responses into a retrieval problem by searching for the most relevant response in a response database based on the dialogue context. In conversational recommender systems, task-oriented dialogue systems are more often required, as they are specifically designed to assist users in accomplishing specific tasks. For task-oriented dialogue systems, a common approach [185, 186] is to treat the response generation as a pipeline and handle it separately using four components: dialogue understanding [187, 188], dialogue state tracking [189, 190, 191], dialogue policy learning [192, 185], and natural language generation [193, 194]. Another approach is to employ an end-to-end method [195, 196, 197], training an encoder-decoder model to handle all the processing steps collectively. The first approach suffers from scalability issues and lacks synergy between the components, while the second approach requires a substantial amount of supervised data for training.
Based on the classification of dialogue systems, common approaches in conversational recommender systems can also be divided into two categories: attribute-based QA (question-answering) and generative methods. The attribute-based QA approach [185, 198, 199, 200] utilizes a pipeline method within the dialogue system. In each dialogue turn, the system needs to decide whether to ask the user a question or provide a recommendation. The decision-making process, particularly regarding which attribute to ask about, is typically handled by a policy network. On the other hand, generative methods do not explicitly model the decision-making process. Instead, they often employ an end-to-end training approach, where a sequence-to-sequence model generates output directly from a shared vocabulary of words and items. Whether the generated output is chit-chat, a question, or a recommendation is implicitly determined during the generation process. Compared to attribute-based QA methods, generative methods [201, 202, 203, 204] appear to be simpler and more scalable. However, they require a large amount of supervised training data. With the advancement of pre-trained language models (PLMs) in the field of natural language processing, particularly models like BERT [205] and GPT [206], the capabilities of pre-trained models in language understanding and generation have become increasingly powerful. Researchers have found that fine-tuning pre-trained models with a small amount of supervised data can yield impressive results on specific tasks. This discovery has led to the application of PLMs in generative conversational recommender systems. For example, DialoGPT [197] achieved promising dialogue intelligence by fine-tuning GPT-2 on dialogue data collected from platforms like Reddit. Subsequently, BARCOR [202], RecInDial [204], and UniCRS [203] utilized DialoGPT for constructing conversational recommender systems, with variations in their action decision strategies. While PLMs reduce the dependency of generative dialogue models on extensive data, the fine-tuning process still incurs significant computational time and requires the collection of high-quality domain-specific training data due to the large parameter space of the models.
With the increase in model parameters and training data, the intelligence and knowledge capacity of models continue to improve. OpenAI has been expanding model parameters and training data while employing techniques such as RLHF (Reinforcement Learning from Human Feedback) and instruction tuning to further fine-tune GPT-3 [3]. This has led to the emergent abilities of models like InstructGPT [207] and subsequent models like ChatGPT, which exhibit incredible
intelligence and have opened the doors to new intelligent dialogue systems based on large language models (LLMs). Furthermore, Google's Bard and Meta's LLaMA [208] are large language dialogue models that have demonstrated remarkable conversational performance. The Vicuna model, for instance, fine-tunes the open-source LLaMA model on dialogue corpora shared by users of ChatGPT, with the team claiming it achieves over 90% of ChatGPT's capability. This series of successive LLM introductions has brought new insights to conversational recommender systems. Because extensive open-domain corpora are used during LLM training, LLMs possess inherent conversational recommendation capabilities and can provide reasonable recommendations in open domains such as movies, music, and games.
However, there are still significant **challenges** in building an enterprise-level CRS. The first challenge is the lack of awareness of large models about private-domain data. It is well known that most of the training data for LLMs, such as GPT-3, comes from publicly available sources on the internet. As a result, these models may lack visibility into the data that resides within information platforms, making their modeling and understanding of such data relatively poor. To address this challenge, two approaches are currently being explored: fine-tuning [197] and tool learning [209, 210]. Fine-tuning involves tuning the LLM using private domain-specific dialogue data. There are two major concerns in this approach. First, massive high-quality domain-specific dialogue data is required to tune the extremely large model. However, in most recommendation scenarios, data primarily consists of explicit or implicit user-item interactions, which may lack conversational context. Therefore, generating high-quality dialogue data from interaction data is a key concern in this approach. In RecLLM [210] and iEvaLM [211], researchers have proposed using LLMs to construct a user simulator for generating conversational data. Besides, the fine-tuning technique plays a crucial role in determining the ultimate quality of LLMs: a well-designed and effective fine-tuning strategy can lead to significant improvements in the model's performance and capabilities, such as the instruction tuning and RLHF proposed in InstructGPT [3]. Tool learning is another approach to address this challenge; its main idea is to treat traditional recommendation models, such as Matrix Factorization (MF) and DeepFM, as tools to be utilized. For a more detailed explanation of tool learning, please refer to Section 9. Since recommendation models are domain-specific, the LLM can leverage these models to obtain recommendation results and present them to users in the response. In this approach, there are two main technical points: the construction of the tool model and the engineering of prompts to guide the LLM in properly utilizing the tool. First of all, conventional recommendation models generally use ID or categorical features as input, while users usually express their requirements or preferences in natural language during conversations. Therefore, unstructured text features should be taken into consideration in tool construction. In Chat-REC [209], a conventional recommendation model and a text-embedding-based model (text-embedding-ada-002) are used as tools. RecLLM [210] adopts a language-model-enhanced dual-encoder model and several text retrieval methods as the recommendation engine. On the other hand, despite the strong intelligence and reasoning capabilities of LLMs, effectively harnessing these abilities requires well-crafted prompts for guidance. For instance, the chain-of-thought prompting proposed by Wei et al. [111] can trigger an LLM to reason and engage in step-by-step thinking, which benefits its tool-using capability. Subsequent studies such as ToT [212], Plan-and-Solve [213], and ReAct [214] have proposed more advanced prompt-design techniques to guide LLMs to engage in deeper thinking and tool planning.
The second challenge lies in memory and comprehension in long conversations. Due to the input constraints of LLMs, models like ChatGPT support a maximum of 4096 tokens in a single call, including both input and output. In multi-turn dialogue scenarios, longer dialogue contexts often risk exceeding this token limit. The simplest approach to tackle this challenge is to trim the dialogue by discarding earlier turns. However, in conversational recommender systems, users may express a significant amount of personal information and interests in the early stages of the conversation, and omitting such information directly impacts the accuracy of recommendations. To address this issue, several relevant works have proposed solutions. MemPrompt [215] enhances the prompt by incorporating a memory module, enabling GPT-3 to possess stronger long-dialogue memory capability. Similarly, RecLLM [210] leverages the LLM to extract user profiles and store them as factual statements in user memory. When processing user queries, relevant facts are retrieved based on text similarity, as illustrated in the sketch below.
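The following sketch shows one way such an external user memory can work: facts extracted from earlier turns are stored, and the most relevant one is retrieved by embedding similarity when a new query arrives. The encoder choice and the stored facts are illustrative assumptions, not the cited systems' implementations.

```python
# Hedged sketch of an external user-memory module for long conversations,
# similar in spirit to MemPrompt [215] and RecLLM's user profile memory.
# The sentence encoder and example facts are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

class UserMemory:
    def __init__(self):
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        self.facts, self.vecs = [], []

    def add(self, fact: str):
        # Store a fact extracted from an earlier dialogue turn.
        self.facts.append(fact)
        self.vecs.append(self.encoder.encode(fact, normalize_embeddings=True))

    def retrieve(self, query: str) -> str:
        # Return the stored fact most similar to the current query.
        q = self.encoder.encode(query, normalize_embeddings=True)
        scores = np.stack(self.vecs) @ q
        return self.facts[int(scores.argmax())]

memory = UserMemory()
memory.add("The user is vegetarian and dislikes horror movies.")
memory.add("The user's favorite director is Hayao Miyazaki.")
print(memory.retrieve("Recommend an animated film for tonight."))
```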
## 9 Tool-Learning and its Applications in Recommendation
### _LLM-based Tool Learning_
Tool learning is an emerging research field that aims to enhance task-solving capabilities by combining specialized tools with foundational models, which has been understood by [216] as two perspectives:
1. **Tool-augmented learning** treats specialized tools as assistants in order to improve the quality and accuracy of tasks, or **Tool for AI**;
2. **Tool-oriented learning** focuses more on training models to effectively use tools, controlling and optimizing tool-applying processes, or **AI for Tool**.
Tool learning has found applications in various fields, and this section primarily focuses on tool learning paradigms based on large language models (LLMs). While recent works often involve a combination of these two perspectives, we do not specifically categorize each work into one type. LLMs, such as GPT, are well-suited for tool learning applications [217]. With their powerful natural language processing capabilities, LLMs can break down complex tasks into smaller sub-tasks and convert them into executable instructions. Specialized tools allow LLMs to access knowledge that is beyond their own understanding. By integrating specialized tools, LLMs can better understand and address complex problems, offering more accurate and efficient solutions.
LLMs are commonly applied as controllers to select and manage various existing AI models to solve complex tasks,
relying on user input and language interfaces to specify tasks and summarize results. They act as the central component, responsible for comprehending problem statements and deciding which actions to execute, and they aggregate the outcomes based on the results of the executed actions. For example, HuggingGPT [227] leverages existing models from the Hugging Face community to assist in task-solving. Visual ChatGPT [228] combines visual foundation models such as BLIP [231] and Stable Diffusion [232] with LangChain to handle complex visual tasks, while TaskMatrix.AI [229] extends the capabilities of Visual ChatGPT by maintaining a unified API platform, enabling input from multiple modalities and generating more complex task solutions. In contrast, AutoGPT operates as an agent that autonomously understands specific targets through natural language and performs all processes in an automated loop, without requiring mandatory human input. WebGPT [223] introduces a text-based interactive web-browsing environment, where LLMs learn to emulate the complete process of human interaction with a web browser using behavior cloning and rejection sampling techniques. In ReAct [214], by leveraging an intuitive prompt, LLMs learn to alternately generate reasoning paths and task-specific actions when solving a specific task; the execution of actions is delegated to corresponding tools, and external feedback obtained from these tools is utilized to validate and further guide the reasoning process. The motivation behind Toolformer [230] aligns closely with ReAct; however, it goes a step further by combining diverse tools within a single model. This integration provides the model with flexible decision-making abilities and improved generalization, achieved through a simple yet effective self-supervised method. In contrast to prior works, LATM [233] takes a novel approach by empowering LLMs to directly generate tools. It achieves a division of labor within the task-solving process by employing LLMs at different scales as tool maker, tool user, and dispatcher. LATM is entirely composed of LLMs, enabling the self-generation and self-utilization of tools.
TABLE III: LLM-based tool learning approaches

| Approach | Tool usage | LLM backbone | Task |
| --- | --- | --- | --- |
| Re3 [218] | LLM | gpt3-instruct-175B, gpt3-instruct-13B | Long story generation |
| PEER [219] | LLM | LM-Adapted T5 | Edits, citations, quotes |
| MetaLM [220] | Pretrained encoders with diverse modalities | Transformer (pretrained from scratch) | Language-only tasks, vision-language tasks |
| Atlas [221] | Dense retriever | T5 | Knowledge-intensive language tasks, massively-multitask language understanding, question answering, fact checking |
| LaMDA [222] | Retriever, translator, calculator | Decoder-only Transformer | Dialog |
| WebGPT [223] | Web browser | gpt-3 | Question answering |
| Mind's Eye [224] | Physics engine, text-to-code LM | gpt-3, PaLM | Reasoning |
| PAL [225] | Python interpreter | Codex (code-davinci-002) | Mathematical, symbolic, and algorithmic reasoning |
| SayCan [226] | Robots | PaLM | Real-world robotic tasks |
| HuggingGPT [227] | AI models in the Hugging Face community | gpt-3.5-turbo, text-davinci-003, gpt-4 | Image classification, image captioning, object detection, etc. |
| Auto-GPT | Web browser | gpt-3.5-turbo, text-davinci-003, gpt-4 | User-specified tasks |
| Visual ChatGPT [228] | Visual foundation models, customized models with unified API form | text-davinci-003 | Visual customized tasks |
| ReAct [214] | Wikipedia API | PaLM-540B | Question answering, fact verification |
| Toolformer [230] | Calculator, Q&A system, search engine, translation system, calendar | GPT-J | Downstream tasks |
### _Applications in Personalization Scenarios_
Recently, LLMs have demonstrated impressive abilities in leveraging internal world knowledge and common sense reasoning to accurately understand user intent from dialogues. Moreover, LLMs can communicate with users fluently in natural language, offering a seamless and delightful user experience. These advantages make LLMs an appealing choice as recommendation agents to enhance the personalized experience.
However, despite the impressive memory capacity of LLMs, they face challenges in memorizing specific knowledge in private and specialized domains without sufficient training. For instance, storing the item corpus and all user profiles in a recommender system can be challenging for LLMs. This limitation can result in LLMs generating inaccurate or incorrect responses and makes it difficult to control their behavior within a specific domain. Furthermore, LLMs face the challenge of the _temporal generalization problem_ as external knowledge continues to evolve and change over time. To address these issues, various tools can be utilized to augment LLMs and enhance their effectiveness as recommendation agents.
**Search engine.** Search engines are widely employed to provide external knowledge to LLMs, reducing LLMs' memory burden and alleviating the occurrence of hallucinations in LLMs' responses. BlenderBot 3 [23] uses specific datasets to fine-tune a series of modules, enabling LLMs to learn to invoke the search engine at the appropriate time and extract useful knowledge from the retrieval results. LaMDA [22] learns to use a toolset that includes an IR system, a translator, and a calculator through fine-tuning to generate more factual responses. RETA-LLM [23] is a toolkit for retrieval-augmented LLMs. It disentangles IR systems and LLMs entirely, facilitating the development of in-domain LLM-based systems. [22] shows a case of applying LaMDA to content recommendation. Preconditioned on a few role-specific dialogues, LaMDA can play the role of a music recommendation agent.
**Recommendation engine.** Some works have attempted to alleviate the memory burden of LLMs by equipping them with a recommendation engine as a tool, enabling LLMs to offer recommendations grounded in the item corpus. The recommendation engine in Chat-REC [20] is divided into two stages, retrieval and reranking, which aligns with typical recommendation system strategies. In the retrieval stage, LLMs utilize traditional recommendation systems as tools to retrieve 20 items from the item corpus as a candidate set. Subsequently, the LLMs themselves rerank the candidate set. LLMs' commonsense reasoning ability, coupled with the world knowledge stored within them, allows them to provide explanations for the sorting results. The recommendation engine tool used in RecLLM [21] is highly similar to that in Chat-REC and is also divided into retrieval and reranking stages. RecLLM provides several practical solutions for large-scale retrieval, such as a Generalized Dual Encoder Model and Concept-Based Search.
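A minimal sketch of this retrieve-then-rerank pattern is given below: a conventional recommender narrows the corpus to a small candidate set, and the LLM reranks it and explains the result. The `retrieval_model` interface, the `call_llm` helper, and the prompt wording are hypothetical stand-ins, not Chat-REC's or RecLLM's actual code.

```python
# Hedged sketch of a retrieve-then-rerank pipeline with an LLM reranker.
# `retrieval_model` and `call_llm` are hypothetical stand-ins.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def recommend(user_profile: str, retrieval_model, item_corpus, k: int = 20):
    # Stage 1: a conventional recommender retrieves a small candidate set.
    candidates = retrieval_model.top_k(user_profile, item_corpus, k)
    # Stage 2: the LLM reranks the candidates and explains the top choice.
    prompt = (
        f"User profile: {user_profile}\n"
        f"Candidate items: {', '.join(candidates)}\n"
        "Rerank the candidates from most to least suitable and briefly explain "
        "why the top item fits this user."
    )
    return call_llm(prompt)
```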
**Database.** Databases are also utilized as tools to supply additional information to LLMs. To better cope with the cold-start problem for new items and alleviate the temporal generalization problem of LLMs, Chat-REC [20] utilizes a vector database to provide information about new items that the LLMs are unaware of. When encountering new items, LLMs can access information about them based on the similarity between the user's request embedding and the item embeddings in the database. User profiles can also help LLMs better understand the user's intent. RecLLM [21] employs a user profile module as a tool to deposit meaningful and enduring facts about users, exposed during historical conversations, in user memory, and to retrieve a single fact related to the current dialogue when necessary.
Although some works have applied the concept of tool learning to personalization systems, there are still interesting and promising research topics that deserve exploration.
1) **Fine-tuning models for better tool use.** In-context learning has shown promise in teaching LLMs how to effectively use tools with a small number of demonstrations, as shown in Chat-REC and RecLLM. However, LLMs often struggle to learn strategies for handling complex contexts from limited demonstrations. Fine-tuning is a viable option for improving tool use, but it requires sufficient training data and effective techniques. RecLLM further fine-tunes some of its modules using synthetic data generated by a user simulator through the RLHF [20] technique. Investigating methods to obtain sufficient training data and developing tailored fine-tuning techniques for recommendation systems is a worthwhile research direction.

2) **Developing a more powerful recommendation engine.** Traditional recommendation systems often rely on collaborative filtering signals and item-to-item transition relationships for recommendations. However, with LLMs as the foundation models, user preferences can be reflected through natural language and even images. Therefore, developing a recommendation engine that supports multimodal data is a crucial research direction. Additionally, the recommendation engine should be capable of adjusting the candidate set based on user preferences or feedback (such as querying movies of a specific genre or disliking an item in the recommendation set).

3) **Building more tools.** To provide LLMs with more authentic and personalized information, the development of additional tools is crucial. For example, APIs for querying knowledge graphs [23] or accessing users' social relationships can enhance the knowledge available to LLMs, enabling more accurate and tailored recommendations.
## 10 LLMs as Personalized Content Creator
Traditional recommender systems focus on suggesting existing items based on user preferences and historical data, where the displayed content has already been generated and is simply retrieved. However, with advances in techniques and platforms for content creators, personalized content creation has attracted more and more attention: more appealing content is custom-generated to match the user's interests and preferences, especially in the realm of online advertising [23]. Common content includes visual and semantic elements [23, 24, 25], such as titles, abstracts, descriptions, copywriting, ad banners, thumbnails, and videos. One widely discussed topic is text ad generation, where the ad title and ad description
are generated with personalized information. Earlier works adopt the pre-defined templates [241, 242, 238] to reduce the extensive human effort, which, however, often fail to fully meet the user's interests and preferences. More recent data-driven methods have emerged, which incorporate user feedback as rewards in the reinforcement learning framework to guide the generation process [243, 244, 245, 246]. Furthermore, the incorporation of pre-trained language models has played a significant role in improving the generation process for multiple content items [247, 248, 249, 250]. This integration helps refine the content generation models and improve their ability to meet user preferences effectively.
As recommender systems and large language models continue to evolve, a promising technique that would bring new opportunities is the integration of AI Generated Content (AIGC). AIGC [251] involves the creation of digital content, such as images, music and natural language through AI models, with the aim of making the content creation process more efficient and accessible. Earlier efforts in this field focused on deep-learning-based generative models, including Generative Adversarial Networks (GANs) [252], Variational AutoEncoders (VAEs) [253], Normalizing Flows [254], and diffusion-based models [255] for high-quality image generation. As the generative model evolves, it eventually emerges as the transformer architecture [23], acting as the foundational blocks for BERT [256] and GPT [206] in the field of NLP, and for Vision Transformer (ViT) [257] and Swin Transformer [258] in the field of CV. Moreover, the scope of generation tasks expanded from uni-modal to multi-modal tasks, including the representative model CLIP [259], which can be used as image encoders with multi-modal prompting for generation. The multi-modal generation has become an essential aspect of AIGC, which learns the multimodal connection and interaction, typically including vision language generation [259], text audio generation [260], text graph generation [261], text Code Generation [262]. With the emergence of large language models, nowadays AIGC is achieved by extracting the human intention from instructions and generating the content according to its knowledge and intention. Representative products, including ChatGPT [263], DALL-E-2 [264], Codex [265] and Midjourney [266], have attaining significant attention from society. With the growth of data and model size, the model can learn more comprehensive information and thus leading to more realistic and high-quality content creators.
Recall to the personalized content creator, the large language models would bring opportunities from the following points. Large language models would further extend the capabilities of the pre-trained model, allowing for better reasoning of user personalized intent and interest. Previous methods [248, 249] depending on tailored pre-training models may be enhanced to better improve the reasoning abilities and few-shot prompting. Secondly, _Reinforcement Learning from Human Feedback_ (RLHF) strategy can be applied to fine-tune models to better capture the user intent information, similar to existing RL-based framework [244] for text ad generation. Last but not least, the powerful generative abilities of large language models empower realistic creation thanks to the availability of sufficient cross-modal knowledge bases. The work [267] more specifically proposes a recommendation paradigm based on ChatGPT, where the generation process receives feedback and multiple rounds of conversions to better capture the user explicit preferences. Compared to previous training paradigms, more explicit expressions of user interest can be understood by the large language models and converted into corresponding instructions to guide the generation of content, significantly alleviating the problem of extremely sparse feedback.
However, there are two major security and privacy risks for personalized content creators. One of the concerns is the reliability of models like ChatGPT in terms of factuality, as indicated in the work [268]. While these models generate content that appears reasonable, there is a risk of distributing misleading or inaccurate information, which can weaken the truthfulness of internet content. This concern becomes particularly crucial in personalized recommendations, where the model may inadvertently promote misleading information tailored to the user's interests. The second concern revolves around data privacy, encompassing both user profiles and long-term human interaction histories. In the case of large language models, these interaction histories are collected or shared, potentially leading to the large models memorizing sensitive user data. Previous work [269] has demonstrated that large language models, especially GPT-2 [270], memorize and leak individual training examples. This emphasizes the need for strict user approval and careful handling of annotator data to mitigate privacy risks. It is crucial to develop new techniques that prioritize privacy preservation during the training process.
## 11 Open Challenges
### _Industrial Challenges_
Personalization services, particularly with recommender systems, are complex industrial products that face numerous challenges when implemented in real-world scenarios. We will now summarize the key challenges as follows:
**Scaling computational resources** Existing large language models, such as BERT and GPT, demand significant computational power for training and inference. This includes high memory usage and time consumption. Fine-tuning these models to align them with personalization systems, which has shown promising results for improved personalization performance, can be computationally intensive. Several efficient finetuning strategies, e.g., option tuning in M6-Rec [156], Lora [271], QLora [272], have been developed to address this issue and pave the way for more efficient tuning.
**Significant Response time** Achieving efficient response times is crucial for online serving and greatly impacts the personalized user experience. Response time includes both the inference phase of large language models and the concurrent user requests in large numbers. The introduction of large language models can result in considerable inference time, posing a challenge for real-world deployment. One approach is to pre-compute the embeddings of intermediate outputs from language models, storing and indexing them in a vector database, particularly for methods that utilize large language models as textual encoders. Other approaches, such as distillation and quantization, aim to strike a balance between performance and latency.
### _Laborious Data Collection_
Large language models are widely known to leverage extensive amounts of open-domain knowledge during their training and fine-tuning processes. These knowledge sources include well-known references such as Wikipedia, books, and various websites [3]. Similarly, when applied in recommender systems, these models often rely on representative open-domain datasets such as MovieLens and Amazon Books. While this type of open-domain knowledge contains a wealth of common-sense information, personalized tasks require access to more domain-specific data that is not easily shareable. Additionally, the nature of user feedback in personalized tasks can be complex and sparse, often accompanied by noisy feedback. Collecting and filtering this data, in contrast to acquiring common-sense knowledge, presents challenges. It incurs higher labor costs and introduces additional training redundancy due to the need for extensive data processing and filtering. Furthermore, designing appropriate prompts to instruct or fine-tune large language models is crucial for aligning them with the distribution of in-domain inputs in personalization tasks. By carefully tailoring the prompts, researchers and practitioners can guide the model to produce outputs that better cater to personalized applications, thereby maximizing performance and effectiveness.
### _Long Text Modeling_
Large language models have a limitation on the maximum number of input tokens they can handle, typically constrained by the context window size, e.g., 4096 for ChatGPT. This poses challenges when dealing with long user behavior sequences, which are common in modern recommender systems. Careful design is necessary to generate effective and appropriate prompt inputs within this limited length. In the case of conversations with multiple rounds, accumulating several rounds of dialogue can easily exceed the token limit of models. The current approach in handling long conversations is to truncate the history, keeping only the most recent tokens. However, this truncation discards valuable historical information, potentially harming the model performance. To address these challenges, several techniques can be employed. One approach is to prioritize and select the most relevant parts of the user behavior sequence or conversation history to include in the prompt. This selection can be based on various criteria such as recency, importance, or relevance to the task at hand. Another technique involves summarizing or compressing the lengthy input while preserving essential information. This can be achieved through techniques like extractive summarization or representing the long sequence in a condensed form. Moreover, architectural modifications, such as hierarchical or memory-augmented models, can be explored to better handle long sequences by incorporating mechanisms to store and retrieve relevant information efficiently.
In addition, collaborative modeling of long text data and recommendation tasks is an emerging and pressing challenge. In conventional personalization systems, item ID information along with other categorical information is commonly used for modeling feature interactions and user preferences. With the rise of large language models, there would be a growing trend toward leveraging textual information more extensively. Textual data provides unique insights about items or users, making it valuable for modeling purposes. From the perspective of modeling, dealing with long text data requires more attention and complexity compared to categorical data, not to mention the need to match the modeling of user interests. From the perspective of implementation, reforming the entire pipeline becomes necessary to accommodate the requirements of efficient latency. Efficiently processing and incorporating long text data into recommendation models and serving them in real-time present technical challenges.
### _Interpretability and Explainability_
While large language models provide good reasoning capabilities, they are notorious for the nature of the 'black box', which is highly complex and non-linear in their enormous size and layered architecture, making it challenging to comprehend the internal workings and understand the generation process of recommendations. Without a deep understanding of how the model operates, it becomes challenging to detect and address biases or ensure fair and ethical recommendations. Once transparency about the internal mechanisms is lacking, users struggle to trust and accept the decisions made by the system. Users often desire understandable explanations for recommended choices. Addressing the challenge of model interpretability and explainability requires research involving natural language processing, explainable AI, human-computer interaction, and recommendation systems. The development of techniques that unveil the inner workings of language models, facilitate the generation of meaningful and accurate interpretations, and enable robust evaluation methods is the main focus. By providing transparent and interpretable recommendations, users can establish trust, understand the reasoning behind the recommendations, and make informed decisions.
### _Evaluation_
Conventional personalization systems typically rely on task-specific metrics such as ranking-oriented metrics, NDCG, AUC, and Recall to evaluate model performance. However, with the integration of large language models into recommender systems, the evaluation tools and metrics undergo significant changes. Traditional metrics may not sufficiently capture the performance of recommender systems powered by large language models, which introduce novel capabilities and generate recommendations in a different manner and require the development of new evaluation tools.
One crucial aspect of evaluation is considering user preferences in large language model-powered systems, which requires a user-centric approach. Metrics such as user satisfaction, engagement, and overall experience become essential considerations. For example, Liu's work [143] proposes a crowdsourcing task to assess the quality of generated explanations and review summaries, providing a way to evaluate the effectiveness of the generated content. Additionally, user satisfaction surveys and feedback questionnaires can serve as valuable options.
Another perspective to consider is the health of the system, which involves evaluating novelty and assessing factors like diversity, novelty, serendipity, and user retention
rates. These metrics help evaluate the freshness of recommendations and the long-term effects of large language models.
Furthermore, it is crucial to assess the interpretability and fairness of recommendations. The interpretability assessment focuses on measuring the clarity, understandability, and transparency of recommendations. Simultaneously, the fairness evaluation aims to address potential biases in personalized results. By prioritizing fairness, we strive to create personalized experiences that are equitable and inclusive for all users. Both of these evaluations are essential to enhance the overall user experience and build confidence in the personalized recommendations delivered by the system.
### _Trade-off between Helpfulness, Honesty, Harmlessness_
When large language models are employed for personalization, some of their disadvantages would be magnified. Striving for a more honest and harmless system may come at the expense of system performance.
First of all, the accuracy and factuality of the system must be ensured. Although large language models can generate seemingly reasonable content, there is a risk of disseminating misleading or inaccurate information. This becomes even more critical when incorporating user feedback, as the model may mimic user behaviors in an attempt to appear honest. However, this imitation can result in biased guidance for users, offering no real benefits.
Secondly, in terms of harmlessness, concerns regarding privacy, discrimination, and ethics arise. While large language models have the potential to provide highly personalized recommendations by leveraging user data, privacy, and data security become paramount. Unlike open-domain datasets, the privacy of individual data used for training should be rigorously protected, with strict user permissions for sharing their personal information. For discrimination, large language models may inevitably reflect biases inherent in the training data, leading to discriminatory recommendations. Considering the biased user and item distribution, which is much more significant in recommender systems with the long-tail effect, where biased user and item distribution can lead to decisions that favor majority choices, resulting in discrimination against certain users. The final concern revolves around ethical considerations. Harmful messages, if clicked by users unconsciously, can guide large language models toward generating similar harmful content. However, when assisting in personalized decision-making, it is essential for large language models to have the capability to minimize exposure to harmful messages and guide users in a responsible manner. Approaches like constructing a Constitutional AI [273], where critiques, revisions, and supervised Learning are adopted for better training large language models, may offer valuable insights.
By addressing these concerns, safeguarding privacy, mitigating discrimination, and adhering to ethical guidelines, recommender systems can leverage the power of large language models while ensuring user trust, fairness, and responsible recommendations.
## 12 Conclusion
In conclusion, the emergence of large language models represents a significant breakthrough in the field of artificial intelligence. Their enhanced abilities in understanding, language analysis, and common-sense reasoning have opened up new possibilities for personalization. In this paper, we provide several perspectives on when large language models adapt to personalization systems. We have observed a progression from utilizing low-level capabilities of large language models to enhance performance, to leveraging their potential in complex interactions with external tools for end-to-end tasks. This evolution promises to revolutionize the way personalized services are delivered. We also acknowledge the open challenges that come with the integration of large language models into personalization systems.
| 大規模言語モデルの登場は、人工知能における革命的な進歩をもたらします。 unprecedented scale のトレーニングとモデルパラメータの規模により、大規模言語モデルの能力は劇的に向上し、人間のように理解、言語合成、常識的思考などの能力を獲得しました。このような大きな進歩は、一般AI能力の向上に大きく貢献し、パーソナライズ化の方式を変化させます。第一に、人間とパーソナライズ化システム間の相互作用方式が改革されるでしょう。情報フィルタリングの passive な手段ではなく、大規模言語モデルは積極的にユーザーとのエンゲージメントを促す基盤となります。この新たな基盤の上にユーザーの要望が積極的に調査され、必要な情報が自然で説明可能に提供されます。第二に、パーソナライズ化の範囲も大幅に拡大し、個人情報収集の単なる機能から、パーソナライズ化サービスを提供する複合的な機能へと発展するでしょう。大規模 |
2304.00044 | On The Theory of Ring Afterglows | Synchrotron and inverse Compton emission successfully explain the observed
spectra of gamma-ray burst (GRB) afterglows. It is thought that most GRBs are
products of extremely relativistic outflows and the afterglow marks the
interaction of that ejecta with the surrounding matter. Faster decay of
afterglow light curves at late times is indicative of non-spherical geometries,
and are usually interpreted as evidence for jet geometry. Recent numerical
simulations have shown that ring-like geometries are also permissible for
relativistic outflows. We therefore extend the standard theory of afterglow
evolution to ring geometries. An analytic prescription for the light curves and
spectra produced by relativistic toroidal blast waves is presented. We compare
these to their spherical and jet-like counterparts, and show that ring
afterglows decay faster than spherical outflows but not as fast as jets. | Marcus DuPont, Andrew MacFadyen, Re'em Sari | 2023-03-31T18:02:12 | http://arxiv.org/abs/2304.00044v1 | # On The Theory of Ring Afterglows
###### Abstract
Synchrotron and inverse Compton emission successfully explain the observed spectra of gamma-ray burst (GRB) afterglows. It is thought that most GRBs are products of extremely relativistic outflows and the afterglow marks the interaction of that ejecta with the surrounding matter. Faster decay of afterglow light curves at late times is indicative of non-spherical geometries, and are usually interpreted as evidence for jet geometry. Recent numerical simulations have shown that ring-like geometries are also permissible for relativistic outflows. We therefore extend the standard theory of afterglow evolution to ring geometries. An analytic prescription for the light curves and spectra produced by relativistic toroidal blast waves is presented. We compare these to their spherical and jet-like counterparts, and show that ring afterglows decay faster than spherical outflows but not as fast as jets.
Gamma-Ray Bursts (629) -- Light curves (918) -- Relativistic Fluid Dynamics (1389) +
Footnote †: journal: ApJL
0000-0002-8861-7885]Marcus DuPont
0000-0002-4880-0885]Andrew MacFadyen
0000-0002-0788-0885]Re'em Sari
## 1 Introduction
The physics accounting for the variability and wide range in observed luminosities of gamma-ray bursts (GRBs), and the nature of their central engine are topics of deep debate. However, it is widely accepted that the dominant processes responsible for the X-ray, optical, and radio afterglow radiation are the synchrotron and inverse Compton mechanisms operating behind the blast wave that the GRB launches into the surrounding medium. Such radiative processes are expected to be applicable to just about any sufficiently relativistic outflow. This paved the way for the success of using the Blandford & McKee (1976) (BM) solution for modelling GRB afterglows and for distinguishing between isotropic and jet-like asymmetric outflows modelled as BM solutions truncated to within polar angle \(\theta_{0}\)(see Piran, 2004, and references therein). Thus far, only afterglows for spherical and jet-like outflows have been considered and it is generally believed that most GRBs are caused by jetted relativistic outflows. Currently, the key indicators cited as evidence for GRB jets are: (a) the existence of an achromatic break in the afterglow light curve either due to lateral jet spreading (Rhoads, 1999; Sari et al., 1999) or an off-axis viewing of universal structured jets (e.g., Zhang & Meszaros, 2002; Rossi et al., 2002); (b) observed net polarizations that arise from asymmetric, relativistically beamed outflows (e.g., Gruzinov & Waxman, 1999; Sari, 1999; Yonetoku et al., 2011; Mandarakas et al., 2023); (c) extremely large energetics which require the outflow to be sufficiently collimated since the average _isotropic_ energy of \(10^{55}\) erg released by GRBs is much larger than what is physically allowed by a spherical explosion of a massive star (Taylor et al., 2004; Kumar & Zhang, 2015); (d) and measurements of proper motion of the flux centroid (Czerny et al., 1997; Taylor et al., 2004; Mooley et al., 2018).
Insofar as shown by the the previous conditions and observations, many GRBs are only constrained to be _asymmetric_ outflows, but we argue they are not necessarily jet-like. This stance is valid since the current GRB afterglow catalogue is quite varied and many of them show breaks which do not fit the jet theory. Recently, it has been shown that relativistic outflows can have ring-like geometries, e.g. from the "ellipsar" mechanism (DuPont et al., 2022). Motivated by the result of DuPont et al. (2022), we consider in this Letter the dynamics and observational signatures of expanding relativistic rings, though we remain agnostic about the source and ener
gies of said rings. Our work on ring afterglows is motivated by the many time-domain surveys in progress or being planned (Barthelmy et al., 2005; Shappee et al., 2014; Chambers et al., 2016; Kochanek et al., 2017; Ivezic et al., 2019; Bellm et al., 2019), which observe a wide array of astrophysical transients -- outside of just GRBs -- that expanding relativistic rings might help explain. These transients might include X-ray flashes (XRFs), Super Luminous Supernovae (SLSNe), trans-relativistic supernovae, and Fast Blue Optical Transients (FBOTs). Therefore, we are motivated to ask how the afterglow of expanding relativistic rings differs from their spherical and jet-like counterparts.
In this Letter, we calculate the light curves and spectra due to expanding relativistic rings. We invoke the same recipes described in Sari et al. (1998) and Sari et al. (1999), which have been successful at modeling many observed GRB afterglows. We derive temporal scalings for the relevant frequencies and spectral flux and comment on their differences from the spherical and jet-like afterglows.
This Letter is organized as follows: Section 2 describes the mathematical formalism for the dynamics and synchrotron radiation from the relativistic ring, Section 3 describes the resultant light curves of the ring-like outflows, and Section 4 discusses the relevance of our work.
## 2 Formalism
### Blast wave evolution
In the early phase of evolution before the expanding blast wave begins to decelerate, if it is expanding in a medium with density obeying \(\rho=Ar^{-k}\), it has kinetic energy
\[E\approx\Gamma^{2}M=\frac{A}{3-k}\Gamma^{2}r^{3-k}\Omega, \tag{1}\]
where \(M\) is the swept up mass, \(\Gamma=(1-\beta^{2})^{-1/2}\) is the Lorentz factor of the bulk flow, \(\beta\) is velocity in units of \(c\), \(A\) is the mass-loading parameter, and \(\Omega\) is the solid angle of the blast wave which obeys
\[\Omega=\begin{cases}4\pi\sin(\theta_{0})&\text{ring},\\ 8\pi\sin^{2}(\theta_{0}/2)&\text{jets},\\ 4\pi&\text{sphere},\end{cases} \tag{2}\]
where \(\theta_{0}\) is the half-opening angle of the blast wave(s) such that \(\Omega\to 4\pi\) as \(\theta_{0}\to\pi/2\). For small opening angles, \(\Omega_{\text{ring}}\approx 4\pi\theta_{0}\), which is a factor \(2/\theta_{0}\) larger than its double-sided jet-like counterpart, making relativistic rings more likely to be observed, as compared to a jet with the same opening angle. An illustration of the asymmetric geometries considered is shown in Figure 1. As evident from Figure 1 and from Equation 2, the solid angle for a ring complements the solid angle of a jet to \(4\pi\) if \(\theta_{\text{ring}}=\pi/2-\theta_{\text{jet}}\).
Using conservation of energy, as the relativistic ring slows down such that \(\Gamma\sim\theta_{0}^{-1}\), one finds \(\Gamma\propto r^{-(3-k)}\). This happens after an observer time of:
\[t_{\text{b}}\approx[\zeta E_{\text{iso}}/4\pi A]^{1/\zeta}(\theta_{0}+\theta_{ \text{obs}})^{2(1+\zeta)/\zeta}, \tag{3}\]
where \(E_{\text{iso}}\) is the isotropic-equivalent energy and \(\zeta\equiv 3-k\). Before this break time, the afterglow from rings and from jets are identical due to a lack of causal connectivity. The crux of this Letter is that after this break time, light curves from jets and from rings diverge and their distinguishing features are discernible in the current GRB catalogue. We will explore the previous point in later sections.
As the blast wave evolves, an observer sees photons at a time
\[t=t^{\prime}(1-\vec{\beta}\cdot\hat{n})=t^{\prime}(1-\beta\mu), \tag{4}\]
where \(t^{\prime}\) is the time in the emitter frame, \(\hat{n}\) is a unit vector pointing from the observer to the emitting patch, and \(\mu\equiv\cos\theta\). Hereafter, all primed quantities signify values in the emitter frame. Assuming \(\Gamma\gg 1\) and the observer is nearly perfectly oriented with the emitting patch (i.e., \(\mu\approx 1-\theta^{2}/2\)), we have
\[t\approx\frac{t^{\prime}}{2\Gamma^{2}}[1+(\Gamma\theta)^{2}]\approx\frac{r}{2 \Gamma^{2}}[1+(\Gamma\theta)^{2}], \tag{5}\]
where we have used \(t^{\prime}\approx r\) for sufficiently relativistic flows which lie on the light sphere. Since the radiation is beamed into a typical angle \(1/\Gamma\), the quantity \(\Gamma\theta\) is of order unity, simplifying the observer time to \(t\approx r/\Gamma^{2}\). From this, we arrive at the Lorentz factor as a function of observer time for the ring, \(\Gamma\propto t^{-\zeta/(1+2\zeta)}\). Furthermore, the relativistic ring's radial evolution obeys \(r\propto t^{1/(1+2\zeta)}\) after spreading begins.
### Synchrotron Spectrum
In the observer frame, the characteristic peak frequency of the electrons is
\[\nu_{m}=\Gamma\gamma_{e}^{2}\frac{3eB^{\prime}}{16m_{e}}\propto\Gamma^{4} \propto t^{-4\zeta/(1+2\zeta)}, \tag{6}\]
where \(\gamma_{e}\) is the electron Lorentz factor, \(e\) is elementary charge, \(B^{\prime}\) is the magnetic field in the fluid frame, and \(m_{e}\) is the electron mass. Note that we have used the fact that the magnetic field in the down stream transforms from the usual jump condition \(B^{\prime}=\Gamma\sqrt{32\pi\rho\epsilon_{B}}\), where \(\epsilon_{B}\) is fraction of total energy density due to magnetic fields, and the minimum Lorentz factor of the electrons obeys \(\gamma_{e}\propto\Gamma\). In a time \(t^{\prime}\), the electrons cool at a rate
\[\langle P(\gamma_{e})\rangle=\frac{4}{3}\sigma_{T}u^{2}\gamma_{e}^{2}U_{b}. \tag{7}\]
In the above equation, \(\sigma_{T}\) is the Thompson cross section, \([u^{\mu}]=\Gamma(1,\vec{\beta})\) is the four-velocity in units where \(c=1\), and \(U_{b}=B^{2}/8\pi\) is the magnetic energy density. By inverting Equation 7, we solve for the cooling Lorentz factor,
\[\gamma_{c}=\frac{6\pi m_{e}}{\Gamma t^{\prime}\sigma_{T}B^{2}}=\frac{6\pi m_{ e}}{\Gamma^{3}t\sigma_{T}B^{2}}. \tag{8}\]
It then immediately follows that the cooling frequency obeys
\[\nu_{c}=\Gamma\gamma_{c}^{2}\frac{3eB^{\prime}}{16m_{e}}\propto\Gamma^{-4}t^{ -2}\propto t^{-2/(1+2\zeta)}. \tag{9}\]
The spectral flux from a radiating blast wave is given by
\[F_{\nu}=\frac{1+z}{4\pi d_{L}^{2}}\int_{V}\delta^{2}j^{\prime}_{\nu}d^{3}\vec{ x}, \tag{10}\]
where \(z\) is redshift, \(d_{L}\) is luminosity distance, \(\delta=1/\Gamma(1-\vec{\beta}\cdot\hat{n})\) is the Doppler beaming factor with respect to the observer, and \(j^{\prime}_{\nu}\) is the frequency-dependent emissivity. At peak emission, the emissivity is independent of \(\Gamma\) and a highly relativistic flow along the line of sight to the observer gives \(\delta=2\Gamma\), so the peak spectral flux has the scaling
\[F_{\nu,\rm max}\propto r^{3}\Gamma^{2}\propto t^{(3-2\zeta)/(1+2\zeta)}. \tag{11}\]
For completeness, we do not assume that all synchrotron photons escape the plasma on their way to the observer, meaning some are self absorbed. Moreover, the self-absorption frequency is a difficult calculation, but by extrapolating from the Granot et al. (1999a) solution we can arrive at the simple scaling,
\[\nu_{a}\propto E^{1/5}\propto\Gamma^{2/5}r^{\zeta/5}\propto t^{-\zeta/5(1+2 \zeta)}. \tag{12}\]
From this, we now have the necessary ingredients to compute light curves produced by relativistic rings.
## 3 Light curves of relativistic rings
With the necessary constraints derived in the previous section, we now turn to explicit light curve calculations. Hereafter, we compute light curves for a constant density medium (i.e., \(\zeta=3\)) to easily compare with the spherical and jet-like geometries derived in Sari et al. (1999).
Figure 1: Cartoon illustrations of the two types of asymmetric geometries considered in this Letter. The left shows the conical, jet-like outflow along the poles of the source while the right shows the ring-like outflow in the equatorial plane of the source. The half-opening angle, \(\theta_{0}\), is depicted for both geometries as well.
We start with the flux at low enough frequencies, such that some photons are self absorbed. Assuming that the time-averaged source emits at the characteristic \(\nu_{m}\), if \(\nu_{a}\ll\nu_{m}\), then because most of the electrons are emitting at typical synchrotron frequencies much larger than \(\nu_{a}\), the spectral flux is proportional to \(\nu^{2}\) as opposed to \(\nu^{5/2}\)(Katz, 1994). Thus, we have
\[F_{\nu<\nu_{a}}\propto\left(\frac{\nu}{\nu_{a}}\right)^{2}\left( \frac{\nu_{a}}{\nu_{m}}\right)^{1/3}F_{\nu,\max}\propto r^{2}\propto\begin{cases} t^{2/7}&\text{ring,}\\ \text{constant}&\text{jet,}\\ t^{1/2}&\text{spherical,}\end{cases} \tag{13}\] \[F_{\nu_{a}<\nu<\nu_{m}}\propto\left(\frac{\nu}{\nu_{m}}\right)^{ 1/3}F_{\nu,\max}\propto r^{3}\Gamma^{2/3}\propto\begin{cases}t^{1/7}&\text{ring,} \\ t^{-1/3}&\text{jet,}\\ t^{1/2}&\text{spherical,}\end{cases} \tag{14}\]
for the flux below the self-absorption frequency and the intermediate flux between the self-absorption and characteristic frequency, respectively. This indicates that slopes would rise as long as the evolution were spherical or ring-like, but the slopes are different enough to perfectly distinguish between the two geometries. Moreover, there is a stark contrast from the latter geometries when compared with the \(t^{-1/3}\) decay of the jet once it begins spreading. At high frequencies, the light curves follow
\[F_{\nu_{m}<\nu_{c}<\nu}\propto\Gamma^{2}r^{3}\left(\frac{\nu_{c} }{\nu_{m}}\right)^{-(p-1)/2}\left(\frac{\nu}{\nu_{c}}\right)^{-p/2}\propto \begin{cases}t^{-2(3p-1)/7}&\text{ring,}\\ t^{-p}&\text{jet,}\\ t^{-(3p-2)/4}&\text{spherical,}\end{cases} \tag{15}\] \[F_{\nu_{m}<\nu<\nu_{c}}\propto\Gamma^{2}r^{3}\left(\frac{\nu}{ \nu_{m}}\right)^{-(p-1)/2}\propto\begin{cases}t^{-3(2p-1)/7}&\text{ring,}\\ t^{-p}&\text{jet,}\\ t^{-3(p-1)/4}&\text{spherical,}\end{cases} \tag{16}\]
for cooling electrons and for non-cooling electrons, respectively. In Equations 15 & 16, \(p\) is the electron distribution power-law index. Here we witness that ring afterglows possess two distinct cooling breaks analogous to whats been calculated for spherical outflows. Furthermore, our calculation evidences a very clear distinction between afterglows produced by relativistic rings and jets. A graphical depiction of this distinction is shown in Figure 2. We've shown _very_ distinct features such as differences in cooling breaks, and, more importantly, ring afterglows have shallower decay slopes than jets throughout the entirety of their evolution. The consequences of these revelations are discussed in the next section.
## 4 Discussion
We have demonstrated that temporal evolution of ring afterglows is clearly distinct from their spherical and jet-like counterparts. While it is likely that classical GRBs are products of very energetic asymmetric flows, the geometry of said outflow is not well constrained. The jet model has been instrumental in its explanations of steep decays as resulting from highly collimated outflows. Yet, there exist observations which cannot be fit using the jet framework. Some light curves -- such as those produced by GRB 030329 (Stanek et al., 2003) or the more recent GRB 221009A (Williams et al., 2023) -- have very shallow breaks, which are hard to reconcile using top-hat
jet models. In particular, GRB 221009A was reported by Williams et al. (2023) to favor a broken power-law model in the X-ray with flux decay slopes steepening from \(t^{-1.498\pm 0.004}\) to \(t^{-1.672\pm 0.008}\) with a jet break time of \(t_{b,\rm X-ray}\sim 8\times 10^{4}\,\rm s\). The timing of such steepening might be due to a jet with half-opening angle of \(3.5^{\circ}\)(D'Avanzo et al., 2022). However, the light curve does not steepen beyond the decay index \(\alpha\approx 1.7\) -- where \(F_{\nu}\propto t^{-\alpha}\) -- after the break, and observers cannot match this shallow X-ray decay index with what is predicted using a simple on-axis top-hat jet. For typical values \(p\cong 2.4\), the top-hat jets predict \(\alpha>2\), but rings predict \(1.63<\alpha<1.77\), well within the required range for GRB 221009A. Therefore, one can interpret this GRB as stemming from either a more structured jet configuration or an expanding relativistic ring.
The notion of some astrophysical transients being sourced from expanding relativistic rings, rather than jets, have the following implications: (a) the probability of viewing ring afterglows is larger than that of jets by a factor \(2/\theta_{0}\). A blast wave with half-opening angle 0.1 radians, if oriented as a jet, would cover 0.5% of the sky while an expanding ring covers 10%, larger by a factor of 20. As a result, ring geometries, as compared to jet geometries, bring down the required source rates significantly; (b) as demonstrated by DuPont et al. (2022) relativistic rings can be born purely from geometrical and hydrodynamic effects as opposed to the more complex central engines required for producing classical jets; (c) the late-time evolution of the relativistic ring is much more stable than the jet since the spreading of the relativistic ring is effectively one dimensional and is therefore a candidate for light curves with shallower breaks (d) around the time of the ring break, when the emitting patch is no longer locally spherical, the specific intensity (surface brightness) is no longer symmetric about the line of sight to the observer for a general viewing angle (Granot et al., 1999, 2007; van Eerten et al., 2010). This fact can act as useful probe for the underlying dynamics of rings which can be further detailed by direct hydrodynamic simulations in the near future. Detailed analysis of scintillation patterns may be sensitive to the surface brightness distribution (Goodman, 1997) and may help distinguish jets from rings.
This Letter has considered synchrotron emission which is more readily observed in radio and optical frequencies. At higher frequencies, inverse Compton emission may dominate. Adding the inverse Compton component could be done in a similar way to Sari and Esin (2001), or its extension by Nakar et al. (2009) if Klein Nishina corrections are important.
In summary, we've considered the dynamics and observational signatures of expanding relativistic rings which preserve the notion of beaming, can account for shallow breaks as observed in many GRB light curves, and do not require a complex central engine to achieve their geometric configuration. Our investigation is inspired by the work of DuPont et al. (2022), where rings arise naturally, albeit with lower energy than needed to explain cosmological afterglows for the conditions considered in that work. Moreover, while the main focus of this work has been GRBs, we emphasize that the importance of our calculations are the unique features presented by the ring geometry, while the energetics can
Figure 2: Pictorial light curves for the spherical, ring-like, and jet-like blast waves, respectively. The left and right panels show the typical light curve behavior at low (\(\sim\) radio) and high (\(\sim\) optical and X-ray) observed frequencies, respectively. The slopes are segmented in time between the break time, \(t_{b}\), and the times when the break frequencies \(\nu_{m}\) and \(\nu_{c}\) cross the observed frequency \(\nu\). The vertical dashed line for \(t_{c}\) is broken at the jet curve in the left panel since \(\nu_{c}\) is constant for that geometry and it therefore has no corresponding \(t_{c}\). In both frequency bands, we show the divergence in flux decay rate once the break time is reached with the low frequency band showing the clearest separation between the various phases of evolution.
be scaled appropriately and applied to a broader scope of astrophysical transients. Therefore, we suggest that ring-like outflows should be considered when interpreting observations of non-spherical explosions.
| **サイクロトンの放射線と逆コンプトン放射が、γ線バースト(GRB)の残影の観測スペクトルを成功的に説明しています。GRBsがほとんどが非常に重力速度の放出物によるものだと考えられており、残影は放出物が周囲の物質と相互作用したものです。残影の光線曲線の遅時間の速い衰減は非球形幾何学的形状を示しており、これはジェット幾何学的形状の証拠として解釈されることが多いです。近年の数値シミュレーションでは、環状の幾何学も重力速度放出物にとって許容可能です。したがって、残影進化の標準理論を環状幾何学に拡張しました。重力速度の渦状放出物による光線曲線とスペクトルについて、解析的な公式が提示されました。私たちは球形とジェット状の対比と比較し、環状の残 |
2309.16802 | Axisymmetric hybrid Vlasov equilibria with applications to tokamak
plasmas | We derive axisymmetric equilibrium equations in the context of the hybrid
Vlasov model with kinetic ions and massless fluid electrons, assuming
isothermal electrons and deformed Maxwellian distribution functions for the
kinetic ions. The equilibrium system comprises a Grad-Shafranov partial
differential equation and an integral equation. These equations can be utilized
to calculate the equilibrium magnetic field and ion distribution function,
respectively, for given particle density or given ion and electron toroidal
current density profiles. The resulting solutions describe states characterized
by toroidal plasma rotation and toroidal electric current density.
Additionally, due to the presence of fluid electrons, these equilibria also
exhibit a poloidal current density component. This is in contrast to the fully
kinetic Vlasov model, where axisymmetric Jeans equilibria can only accommodate
toroidal currents and flows, given the absence of a third integral of the
microscopic motion. | D. A. Kaltsas, A. Kuiroukidis, P. J. Morrison, G. N. Throumoulopoulos | 2023-09-28T19:11:42 | http://arxiv.org/abs/2309.16802v2 | # Axisymmetric hybrid Vlasov equilibria with applications to tokamak plasmas
###### Abstract
We derive axisymmetric equilibrium equations in the context of the hybrid Vlasov model with kinetic ions and massless fluid electrons, assuming isothermal electrons and deformed Maxwellian distribution functions for the kinetic ions. The equilibrium system comprises a Grad-Shafranov partial differential equation and an integral equation. These equations can be utilized to calculate the equilibrium magnetic field and ion distribution function, respectively, for given particle density or given ion and electron toroidal current density profiles. The resulting solutions describe states characterized by toroidal plasma rotation and toroidal electric current density. Additionally, due to the presence of fluid electrons, these equilibria also exhibit a poloidal current density component. This is in contrast to the fully kinetic Vlasov model, where axisymmetric Jeans equilibria can only accommodate toroidal currents and flows, given the absence of a third integral of the microscopic motion.
## 1 Introduction
Hybrid Vlasov models play an important role in examining the complex behavior of multi-scale plasmas that feature both a fluid bulk and energetic particle populations not amenable to fluid descriptions. One specific branch of hybrid models that has received significant attention, primarily for studying phenomena in ion inertial scales such as turbulence and collisionless reconnection, focuses on electron-ion plasmas where electrons are treated as a fluid while ions are treated kinetically (e.g. [1, 2, 3, 4, 5, 6, 7, 8]). In our recent work ([9]), we employed such a hybrid model, featuring massless isothermal electrons and kinetic ions, to investigate one-dimensional Alfven-BGK (Bernstein-Greene-Kruskal) modes as stationary solutions to the model equations. We demonstrated that the one-dimensional equilibrium equations constitute a Hamiltonian system for a pseudoparticle, which can exhibit integrable or chaotic orbits, depending on the form of the distribution function. A natural extension of this work would be the construction of 2D-equilibria which can be used as reference states for studying reconnection, instabilities and wave propagation, or even macroscopic equilibria of fusion plasmas.
Plasmas in fusion devices like the tokamak, are enriched with significant populations of energetic particles. It is thus expected that the distribution of those particles in the physical and the velocity space might affect macroscopic equilibrium and stability properties. For this reason hybrid models have also found applications in the description of multiscale
dynamics of tokamak plasmas (e.g. [10, 11, 12]). However, despite the utility of hybrid and kinetic descriptions for investigating dynamical processes, there has been limited progress in constructing self-consistent equilibria within the framework of thes models. One important limitation arises from the absence of a third particle constant of motion in the full-orbit Vlasov description. Such an invariant would be crucial for constructing equilibria with characteristics relevant to tokamaks. Efforts to build such equilibria using a fully kinetic Vlasov description for both ions and electrons have been undertaken in [13, 14]. Nevertheless, due to the presence of only one momentum integral of motion for each particle species, specifically the particle toroidal angular momentum, these equilibria exhibit only toroidal current density and plasma rotation. In contrast, the magnetohydrodynamic (MHD) fluid description of toroidal plasma equilibrium can accommodate both toroidal and poloidal currents. Hence, although more fundamental, the kinetic approach appears to have limitations in describing certain classes of equilibria compared to MHD.
To combine the advantages of both descriptions, we turn to the hybrid model mentioned earlier. Even though it lacks a poloidal particle momentum invariant, it can describe equilibria featuring both toroidal and poloidal current densities, thanks to the fluid treatment of electrons, which carry the poloidal current component. It is important to note though that a limitation of the present model for a realistic description of fusion plasmas, is that it treats the entire ion population using the Vlasov equation. This is not the most efficient and effective approach, since there is also a thermal ion component and multiple ion species; thus further model improvements are required. The present model description serves as an initial step toward the development of improved models that will incorporate multi-fluid-kinetic descriptions, as exemplified in [15].
The rest of the paper is structured as follows: in Section 2 we present the hybrid equilibrium model and in Section 3 the axisymmetric equilibrium formulation is developed. In Section 4 we numerically construct particular tokamak-pertinent equilibria presenting various equilibrium characteristics and we conclude by summarising the results in Section 5.
## 2 The hybrid model
The initial hybrid-Vlasov equilibrium system employed in [9], consists of a Vlasov equation for kinetic ions, a generalized Ohm's law derived from the electron momentum equation, the Maxwell equations, and an equation of state for the fluid electrons:
\[\mathbf{v}\cdot\nabla f+\frac{e}{m}\left(\mathbf{E}+\mathbf{v}\times \mathbf{B}\right)\cdot\nabla_{\mathbf{v}}f=0\,, \tag{1}\] \[\mathbf{E}=-\frac{n}{n_{e}}\mathbf{u}\times\mathbf{B}+\frac{ \mathbf{J}\times\mathbf{B}}{en_{e}}-\frac{\nabla P_{e}}{en_{e}}\,,\] (2) \[\mathbf{E}=-\nabla\Phi\,,\quad\nabla\times\mathbf{B}=\mu_{0} \mathbf{J}\,,\] (3) \[\nabla\cdot\mathbf{B}=0\,,\quad\nabla\cdot\mathbf{E}=e(n-n_{e})\,,\] (4) \[P_{e}=n_{e}k_{B}T_{e}\,, \tag{5}\]
where
\[n(\mathbf{x},t)=\int d^{3}v\,f(\mathbf{x},\mathbf{v},t)\,, \tag{6}\] \[\mathbf{u}(\mathbf{x},t)=n^{-1}\int d^{3}v\,\mathbf{v}f(\mathbf{x}, \mathbf{v},t)\,. \tag{7}\]
Note that the ion-kinetic contribution to the current density is given by
\[\mathbf{J}_{k}=\int d^{3}v\,\mathbf{v}f\,, \tag{8}\]
and thus the first term in the right hand side of (2) can be expressed as \(-{\bf J}_{k}\times{\bf B}/(en_{e})\).
In addition to (1)-(5), an energy equation is needed to determine \(T_{e}\). Alternatively, it can be assumed that the electrons are isothermal, i.e. \(T_{e}\) is constant throughout the entire plasma volume, or it can vary with the magnetic flux function \(\psi\) if we consider isothermal magnetic surfaces, i.e., \(T_{e}=T_{e}(\psi)\). An alternative to (5) would be an isentropic closure of the form \(P_{e}=cn_{e}^{\gamma}\), or even anisotropic electron pressure under appropriate conditions for the different components of the electron pressure tensor. Here we consider isothermal electrons \(T_{e}=T_{e0}=const\).
Let us now write the system (1)-(5) in nondimensional form upon indtroducing the following dimensionless quantities
\[\tilde{x}=\frac{x}{R_{0}}\,,\quad\tilde{\mathbf{v}}=\frac{\mathbf{v}}{d_{ i}v_{A}}\,,\] \[\tilde{n}_{e}=\frac{n_{e}}{n_{0}}\,,\quad\tilde{f}=d_{i}^{3}v_{A }^{3}f/n_{0}\,,\] \[\tilde{\bf E}=\frac{{\bf E}}{d_{i}v_{A}B_{0}}\,,\quad\tilde{\bf B }=\frac{{\bf B}}{B_{0}}\,,\] \[\tilde{\bf J}_{k}=\frac{{\bf J}_{k}}{en_{0}d_{i}v_{A}}\,,\quad \tilde{P}_{e}=\frac{P_{e}}{mn_{0}d_{i}^{2}v_{A}^{2}}\,, \tag{9}\]
where \(R_{0}\) and \(B_{0}\) are the characteristic length and magnetic field modulus, respectively. Additionally,
\[v_{A}=\frac{B_{0}}{\sqrt{\mu_{0}mn_{0}}}\,,\quad\Omega=\frac{eB_{0}}{m}\,, \tag{10}\]
are the Alfven speed and the ion cyclotron frequency, respectively and
\[d_{i}=\frac{\ell_{i}}{R_{0}}\,,\quad\ell_{i}=\sqrt{\frac{m}{\mu_{0}n_{0}e^{2} }}\,, \tag{11}\]
is the nondimensional ion skin depth which is typically of the order \(10^{-2}\) in fusion devices. Notice that apart from nondimensionalizing various physical quantities, we've also scaled the nondimensional velocity by a factor of \(d_{i}^{-1}\). The rationale behind this scaling will be clarified in a subsequent explanation. What is important to stress here, is that with careful implementation of this scaling process, there are no inconsistencies in the nondimensionalization of the equations and the recovery of physical units in the final results. In view of (9) the hybrid equilibrium system then can be written in the following nondimensional form:
\[\mathbf{v}\cdot\nabla f+d_{i}^{-2}\left({\bf E}+\mathbf{v}\times{\bf B} \right)\cdot\nabla_{v}f=0\,, \tag{12}\] \[-\nabla\Phi=\frac{1}{n_{e}}\left[(\nabla\times{\bf B}-{\bf J}_{k })-d_{i}^{2}\nabla P_{e}\right]\,,\] (13) \[{\bf E}=-\nabla\Phi\,,\quad\nabla\times{\bf B}={\bf J}\,,\] (14) \[\nabla\cdot{\bf B}=0\,,\quad d_{i}^{2}\beta_{A}^{2}\nabla\cdot E =(n-n_{e})\,,\] (15) \[P_{e}=\kappa n_{e}\,, \tag{16}\]
where
\[\kappa:=\frac{k_{B}T_{e0}}{d_{i}^{2}mv_{A}^{2}}\,. \tag{17}\]
and \(\beta_{A}^{2}=v_{A}^{2}/c^{2}\). Taking the limit \(\beta_{A}^{2}\to 0\) we obtain the quasineutrality condition \(n_{e}=n\), which will be applied in the subsequent analysis.
In Section 3, we will investigate two equilibrium scenarios: one with cold electrons (\(\kappa=0\)) and the other with thermal electrons (\(\kappa=1\)). We opted for the scaled nondimensional particle
velocity \(\tilde{v}=v/(d_{i}v_{A})\) with the goal of attaining tokamak-relevant temperatures making the convenient choice \(\kappa=1\). It can be verified through (17) and using an Alfven speed calculated from tokamak-relevant values for the density and the magnetic field, that \(\kappa=1\) corresponds to \(T_{e0}\sim 10^{8}\,K\).
As a closing note for this section, it is worth highlighting that taking into account (16) and the quasineutrality condition \(n_{e}=n\), Ohm's law (13) can be expressed as follows:
\[-\nabla\Phi=\frac{1}{n}\left(\nabla\times\mathbf{B}-\mathbf{J}_{ k}\right)\times\mathbf{B}-\nabla ln\left(n^{d_{i}^{2}\kappa}\right)\,. \tag{18}\]
## 3 Axisymmetric equilibrium formulation
We consider a plasma configuration with axial symmetry with respect to a fixed axis, where all quantities depend on the coordinates \(r,z\) of a cylindrical coordinate system \((r,\phi,z)\). Note that \(z\) coincides with the axis of symmetry. In this case the divergence-free magnetic field can be written in terms of two scalar functions \(I\) and \(\psi\) as follows:
\[\mathbf{B}=I\nabla\phi+\nabla\psi(r,z)\times\nabla\phi\,, \tag{19}\]
while the corresponding current density is
\[\mathbf{J}=\nabla\times\mathbf{B}=-\Delta^{*}\psi\nabla\phi+ \nabla I\times\nabla\phi\,, \tag{20}\]
where \(\Delta^{*}\) is the Shafranov operator given by
\[\Delta^{*}=r\frac{\partial}{\partial r}\left(\frac{1}{r}\frac{ \partial}{\partial r}\right)+\frac{\partial^{2}}{\partial z^{2}}\,. \tag{21}\]
Next we will consider the three components of (18) along the magnetic field, along the \(\hat{\phi}\) direction and along the \(\nabla\psi\) direction. From the \(\mathbf{B}\) projection we readily obtain
\[\mathbf{B}\cdot\nabla\left[\Phi-ln\left(n^{d_{i}^{2}\kappa}\right) \right]=0\,, \tag{22}\]
thus
\[\Phi-ln\left(n^{d_{i}^{2}\kappa}\right)\eqqcolon G(\psi)\,, \tag{23}\]
where \(G(\psi)\) is an arbitrary function. From this equation we can solve for \(n\) to find
\[n=exp\left[\frac{\Phi-G(\psi)}{d_{i}^{2}\kappa}\right]\,. \tag{24}\]
In the case \(G(\psi)=const.\) we recover the Boltzmann distribution. Next, to take the \(\nabla\phi\) and \(\nabla\psi\) projections we need first to determine the direction of \(\mathbf{J}_{k}\).
According to Jeans' theorem [16, 17], distribution functions of the form \(f=f(C_{1},C_{2},...)\), where \(C_{i}\) are particle constants of motion, are solutions to the Vlasov equation (12). In the absence of collisions, the particle energy \(H\) is itself a first integral of motion. In nondimensional form \(H\) reads:
\[\tilde{H}=\frac{v^{2}}{2}+d_{i}^{-2}\Phi\,, \tag{25}\]
where \(\tilde{H}=H/(d_{i}^{2}mv_{A}^{2})\). Additionally, in the presence of axial symmetry a second constant of motion is the particle toroidal angular momentum
\[\tilde{p}_{\phi}=rv_{\phi}+rA_{\phi}=rv_{\phi}+d_{i}^{-2}\psi\,, \tag{26}\]
where \(\tilde{p}_{\phi}=p_{\phi}/(mR_{0}d_{i}v_{A})\). It remains an open question whether and under what conditions, additional, approximate constants of motion exist within the framework of full-orbit Vlasov description (see [18] and references therein for a discussion on the existence of a third integral of motion in axisymmetric potentials). In certain scenarios, it may be pertinent to consider adiabatic constants, such as the magnetic moment \(\mu\) as explored in [19]. It is worth noting that in the context of the hybrid model and the present analysis, some assumptions made in [19], such as \(p_{\phi}\approx\psi\), can be justified due to the presence of the significant \(d_{i}^{-2}\) factor, especially in systems like the magnetosphere. However, in this paper, which focuses on laboratory plasmas, we will not adopt this assumption. Instead, we will follow the approach outlined in [9] and [20], considering a distribution function in the form of:
\[f=exp(-H)g(p_{\phi})=exp\left[-\frac{v_{r}^{2}+v_{z}^{2}+v_{\phi}^{2}}{2}-\Phi( r,z)\right]g(p_{\phi})\,, \tag{27}\]
or
\[f=exp\left[-\frac{v_{r}^{2}+v_{z}^{2}}{2}-\frac{(p_{\phi}-\psi)^{2}}{2r^{2}}- \Phi(r,z)\right]g(p_{\phi})\,.\]
Note that the tildes have been omitted in \(H\) and \(p_{\phi}\) for convenience. For such a distribution function the kinetic current density (8) will have only a \(\phi\) component. This is because \(v_{r}f\) and \(v_{z}f\) are odd functions with respect to \(v_{r}\) and \(v_{z}\) respectively, while the integration over these variables go from \(-\infty\) to \(+\infty\). Therefore
\[\mathbf{J}_{k}=rJ_{k\phi}\nabla\phi\,,\]
and as a result
\[\mathbf{J}_{k\phi}\times\mathbf{B}=\frac{J_{k\phi}}{r}\nabla\psi\,. \tag{28}\]
Substituting (19), (20), (24) and (28) into (18) we obtain
\[-n\nabla\Phi=-\frac{\Delta^{*}\psi}{r^{2}}\nabla\psi-\frac{I}{r^{2}}\nabla I+ [I,\psi]\nabla\phi-\frac{J_{k\phi}}{r}\nabla\psi-n\nabla\Phi+nG^{\prime}(\psi )\nabla\psi\,, \tag{29}\]
where \([a,b]\coloneqq(\nabla a\times\nabla b)\cdot\nabla\phi\). It is now trivial to see that from the \(\nabla\phi\) projection of (29), we obtain
\[[I,\psi]=0\,,\quad\text{i.e.}\quad I=I(\psi)\,. \tag{30}\]
Finally, the \(\nabla\psi\) projection yields
\[\Delta^{*}\psi+II^{\prime}(\psi)+r^{2}\mathcal{Z}(r,\psi)=0\,, \tag{31}\]
where
\[\mathcal{Z}(r,\psi):=\frac{1}{r}\int d^{3}v\,v_{\phi}f-G^{\prime}(\psi)\int d ^{3}v\,f\,.\]
Equation (31) is a Grad-Shafranov (GS) equation determining the magnetic field through the flux function \(\psi\) in axisymmetric hybrid Vlasov equilibria. Let us now work out the velocity space integrals in (31). The particle density is
\[n=\int d^{3}v\,f=\frac{e^{-\Phi/d_{i}^{2}}}{r}\int_{-\infty}^{+ \infty}dv_{r}\int_{-\infty}^{+\infty}dv_{z}\int_{-\infty}^{+\infty}dp_{\phi} \,e^{-\frac{v_{r}^{2}}{2}-\frac{v_{r}^{2}}{2}-\frac{(p_{\phi}-\psi/d_{i}^{2})^ {2}}{2r^{2}}}g(p_{\phi})\] \[=\frac{2\pi e^{-\Phi/d_{i}^{2}}}{r}\int_{-\infty}^{\infty}dp_{ \phi}\,e^{-\frac{(p_{\phi}-\psi/d_{i}^{2})^{2}}{2r^{2}}}g(p_{\phi})\,. \tag{32}\]
We have shown that \(\Phi=ln(n^{d_{i}^{2}\kappa})+G(\psi)\), therefore
\[n=\left[\frac{2\pi e^{-G(\psi)/d_{i}^{2}}}{r}\int_{-\infty}^{+\infty}dp_{\phi}\,e ^{-\frac{(p_{\phi}-\psi/d_{i}^{2})^{2}}{2r^{2}}}g(p_{\phi})\right]^{\frac{1}{ \kappa+1}}\,. \tag{33}\]
Similarly, for the toroidal component of the kinetic current density we find
\[J_{k\phi}=2\pi e^{-G(\psi)/d_{i}^{2}}\left[\frac{2\pi e^{-G(\psi )/d_{i}^{2}}}{r}\int_{-\infty}^{+\infty}dp_{\phi}\,e^{-\frac{(p_{\phi}-\psi/d_ {i}^{2})^{2}}{2r^{2}}}g(p_{\phi})\right]^{-\frac{\kappa}{\kappa+1}}\times\] \[\times\int_{-\infty}^{+\infty}dp_{\phi}\,\frac{(p_{\phi}-\psi/d_ {i}^{2})}{r^{2}}e^{-\frac{(p_{\phi}-\psi/d_{i}^{2})^{2}}{2r^{2}}}g(p_{\phi})\,. \tag{34}\]
Therefore, the current density depends on two arbitrary functions, i.e. \(G(\psi)\) and \(g(p_{\phi})\). The latter function that determines the ion distribution function, can either be specified a-priori, together with \(G(\psi)\) and then the GS equation (31) can be solved to determine \(\psi\), or can be identified by fixing \(J_{k\phi}\) and \(G(\psi)\). Following the formalism of [9] we can show that the function \(\mathcal{Z}(r,\psi)\) can be derived by a "pseudopotential" function \(V(r,\psi)\) which takes the form
\[V(\psi,r)=d_{i}^{2}(\kappa+1)\left[\frac{2\pi e^{-G(\psi)/d_{i}^{2}}}{r}\int_{ -\infty}^{+\infty}dp_{\phi}\,e^{-\frac{(p_{\phi}-\psi/d_{i}^{2})^{2}}{2r^{2}} }g(p_{\phi})\right]^{\frac{1}{\kappa+1}}=d_{i}^{2}(\kappa+1)n\,. \tag{35}\]
We can easily verify that
\[\mathcal{Z}=\frac{\partial V}{\partial r}\,,\]
thus, the Grad-Shafranov equation can be written in the familiar form
\[\Delta^{*}\psi+II^{\prime}(\psi)+r^{2}\frac{\partial V}{\partial\psi}=0\,. \tag{36}\]
Note that equation (36) is reminiscent of the MHD GS equation with toroidal flow [21], where an effective pressure function associated with the thermodynamic pressure and the plasma flow, appears instead of \(V(r,\psi)\).
To solve Eq. (36) we can specify \(V(\psi,r)\) to be a known mathematical function or it can be inferred by experimental data from the particle density \(n\) or the toroidal current density profile. Note that the particle density and the total toroidal current density can be expressed in terms of \(V\) as follows
\[n=\frac{V}{d_{i}^{2}(\kappa+1)}\,,\quad J_{\phi}=\frac{II^{\prime}}{r}+r\frac {\partial V}{\partial\psi}. \tag{37}\]
Also note that that the electron contribution to \(J_{\phi}\) is given by
\[J_{e\phi}=\frac{II^{\prime}}{r}-rnG^{\prime}(\psi)\,. \tag{38}\]
Knowing \(V\) enables the solution of the partial differential equation (36) to determine \(\psi\) and of the integral equation (35) to determine \(g(p_{\phi})\).
Now, we demonstrate that when the product \(V^{\kappa+1}e^{G(\psi)/d_{i}^{2}}\) can be expressed as a power series expansion of \(\psi\), it becomes possible to determine the function \(g(p_{\phi})\) in terms of Hermite polynomials. To illustrate this, let us invoke that Hermite polynomials \(H_{n}(x)\) serve as coefficients in the following power series expansion [22]
\[e^{-(x-y)^{2}/2}=\sum_{n=0}^{\infty}\frac{e^{-x^{2}/2}}{n!}H_{n}\left(\frac{x} {\sqrt{2}}\right)\left(\frac{y}{\sqrt{2}}\right)^{n}\,, \tag{39}\]
therefore (35) can be written as
\[\left[\frac{V}{d_{i}^{2}(\kappa+1)}\right]^{\kappa+1}e^{G(\psi)/d_{i}^{2}}=\sum_{ n}\frac{2\pi}{n!}\int_{-\infty}^{+\infty}d\zeta\,e^{-\zeta^{2}/2}H_{n}\left(\frac{ \zeta}{\sqrt{2}}\right)\left(\frac{\psi}{d_{i}^{2}\sqrt{2}r}\right)^{n}g(r \zeta)\,, \tag{40}\]
where \(\zeta:=p_{\phi}/r\). As Hermite polynomials form a complete orthogonal basis we can expand \(g(r\zeta)\) as
\[g(r\zeta)=\sum_{m}c_{m}H_{m}\left(\frac{r\zeta}{\sqrt{2}}\right)\,. \tag{41}\]
We now make use of the multiplication theorem for Hermite polynomials [23]
\[H_{m}(\gamma x)=\sum_{\ell=0}^{\lfloor m/2\rfloor}\gamma^{m-2\ell}(\gamma^{2}- 1)^{\ell}\frac{m!}{\ell!(m-2\ell)!}H_{m-2\ell}(x)\,,\quad\forall\gamma\in \mathbb{R}\,, \tag{42}\]
to write
\[g(r\zeta)=\sum_{m}\sum_{\ell=0}^{\lfloor m/2\rfloor}c_{m}\frac{m!}{\ell!(m-2 \ell)!}r^{m-2\ell}(r^{2}-1)^{\ell}H_{m-2\ell}\left(\frac{\zeta}{\sqrt{2}} \right)\,. \tag{43}\]
Substituting (43), the right hand side (rhs) of (40) becomes
\[\sum_{m,n}\sum_{\ell=0}^{\lfloor m/2\rfloor}\frac{2\pi}{n!} c_{m}\frac{m!}{\ell!(m-2\ell)!}r^{m-2\ell}(r^{2}-1)^{\ell}\left( \frac{\psi}{d_{i}^{2}\sqrt{2}r}\right)^{n}\times\] \[\times\int_{-\infty}^{+\infty}d\zeta\,e^{-\zeta^{2}/2}H_{n}\left( \frac{\zeta}{\sqrt{2}}\right)H_{m-2\ell}\left(\frac{\zeta}{\sqrt{2}}\right)\,. \tag{44}\]
Further, exploiting the orthogonality condition
\[\int_{-\infty}^{+\infty}dx\,H_{n}(x)H_{m}(x)e^{-x^{2}}=\sqrt{\pi}2^{n}n!\delta _{mn}\,, \tag{45}\]
we can see that Eq. (40) with rhs given by (44), becomes
\[\left[\frac{V}{d_{i}^{2}(\kappa+1)}\right]^{\kappa+1}e^{G(\psi)/d_{i}^{2}}= \sum_{m}\sum_{\ell=0}^{\lfloor m/2\rfloor}2^{m-2\ell+1}\pi^{3/2}c_{m}\frac{m! }{\ell!(m-2\ell)!}(r^{2}-1)^{\ell}\left(\frac{\psi}{d_{i}^{2}\sqrt{2}}\right)^ {m-2\ell}\,. \tag{46}\]
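As an aside, the two Hermite-polynomial identities used in this derivation, the expansion (39) and the orthogonality relation (45), can be checked numerically; the snippet below is only such a sanity check (physicists' convention, as in (45)), not part of the equilibrium calculation:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite
from scipy.integrate import quad

# Check of (39): e^{-(x-y)^2/2} = sum_n e^{-x^2/2}/n! H_n(x/sqrt(2)) (y/sqrt(2))^n
def expansion(x, y, n_terms=40):
    return sum(np.exp(-x**2 / 2) / factorial(n)
               * eval_hermite(n, x / np.sqrt(2)) * (y / np.sqrt(2))**n
               for n in range(n_terms))

for x, y in [(0.4, 1.2), (-1.5, 0.8), (2.0, -0.6)]:
    assert abs(expansion(x, y) - np.exp(-(x - y)**2 / 2)) < 1e-8

# Check of (45): int H_n(x) H_m(x) e^{-x^2} dx = sqrt(pi) 2^n n! delta_nm
for n, m in [(3, 3), (4, 2), (5, 5)]:
    val, _ = quad(lambda x: eval_hermite(n, x) * eval_hermite(m, x) * np.exp(-x**2),
                  -np.inf, np.inf)
    expected = np.sqrt(np.pi) * 2**n * factorial(n) if n == m else 0.0
    assert abs(val - expected) < 1e-6 * max(1.0, abs(expected))
```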
Our aim is to solve (46) for \(c_{m}\) in order to determine \(g(p_{\phi})\) as an expansion of orthogonal Hermite polynomials (see Eq. (41)). This is possible if the left hand side (lhs) of (46) can be expressed as a power series expansion. In this work we consider the special case
\[\left[\frac{V}{d_{i}^{2}(\kappa+1)}\right]^{\kappa+1}e^{G(\psi)/d_{i}^{2}}=V_{ 0}(r)+V_{1}(r)\psi+V_{2}(r)\psi^{2}\,, \tag{47}\]
and deal with two classes of equilibria corresponding to cold electrons with \(\kappa=0\) and thermal electrons with \(\kappa=1\).
By equations (47) and (46) we see that
\[V_{0}+V_{1}\psi+V_{2}\psi^{2}=c_{0}2\pi^{3/2}+c_{1}\frac{(2\pi)^{3/2}}{d_{i}^ {2}}\psi+c_{2}\frac{4\pi^{3/2}}{d_{i}^{4}}\psi^{2}+c_{2}4\pi^{3/2}(r^{2}-1)\,, \tag{48}\]
and therefore the coefficients \(c_{0}\), \(c_{1}\) and \(c_{2}\) are
\[c_{0} = \frac{V_{0}(r)}{2\pi^{3/2}}-2c_{2}(r^{2}-1)\,, \tag{49}\] \[c_{1} = \frac{d_{i}^{2}V_{1}}{(2\pi)^{3/2}}\,, \tag{50}\] \[c_{2} = \frac{d_{i}^{4}V_{2}}{4\pi^{3/2}}\,. \tag{51}\]
In order for \(c_{0},c_{1},c_{2}\) to be constants we should select \(V_{1}=const.\), \(V_{2}=const.\), and
\[V_{0}(r)=C_{0}+d_{i}^{4}V_{2}(r^{2}-1)\,,\]
where \(C_{0}\) is a constant. Therefore, the ion distribution function in both the cold and thermal electron limits reads as follows
\[f(H,p_{\phi})=\left[c_{0}+\sqrt{2}c_{1}p_{\phi}+c_{2}(2p_{\phi}^{2}-2)\right]e ^{-H}\,. \tag{52}\]
To ensure the positivity of the distribution function (52) it suffices to require \(c_{0}+\sqrt{2}c_{1}p_{\phi}+c_{2}(2p_{\phi}^{2}-2)>0\), \(\forall p_{\phi}\), which holds true for
\[c_{0}>\frac{c_{1}^{2}+8c_{2}^{2}}{4c_{2}}\,,\quad c_{2}>0\,.\]
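This bound can be illustrated with a quick numerical check of the bracket in (52); the coefficient values below are arbitrary and serve only as an example:

```python
import numpy as np

# Bracket appearing in (52): c0 + sqrt(2) c1 p_phi + c2 (2 p_phi^2 - 2)
def bracket(p, c0, c1, c2):
    return c0 + np.sqrt(2) * c1 * p + c2 * (2 * p**2 - 2)

c1, c2 = 0.3, 0.5                              # arbitrary sample values with c2 > 0
c0_min = (c1**2 + 8 * c2**2) / (4 * c2)        # lower bound on c0 stated above
p = np.linspace(-50, 50, 200001)

assert bracket(p, c0_min + 1e-3, c1, c2).min() > 0   # positive everywhere just above the bound
assert bracket(p, c0_min - 1e-1, c1, c2).min() < 0   # fails below the bound
```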
## 4 Tokamak equilibria
To fully define the plasma equilibria we further need to specify the free functions \(I(\psi)\) and \(G(\psi)\). In this work we adopt
\[I(\psi) = (I_{0}+I_{1}\psi+I_{2}\psi^{2})e^{-(\psi-\psi_{a})^{2}/\eta}\,, \tag{53}\] \[G(\psi) = \alpha(\psi-\psi_{a})^{2}\,. \tag{54}\]
Here, \(I_{0}\), \(I_{1}\), \(I_{2}\), \(\eta\), and \(\alpha\) are constants, and \(\psi_{a}\) represents the value of the flux function \(\psi\) at the magnetic axis, corresponding to an elliptic O-point of \(\psi\) where the magnetic field is purely toroidal.
We address the fixed-boundary equilibrium problem within a tokamak-relevant, D-shaped computational domain denoted as \(\mathcal{D}\). In this context, we solve the Grad-Shafranov equation (36) while specifying \(V\) as
\[V=d_{i}^{2}e^{-G(\psi)/d_{i}^{2}}\left[V_{0}(r)+V_{1}\psi+V_{2} \psi^{2}\right]\,, \tag{55}\] \[V=2d_{i}^{2}\left\{e^{-G(\psi)/d_{i}^{2}}\left[V_{0}(r)+V_{1} \psi+V_{2}\psi^{2}\right]\right\}^{1/2}\,, \tag{56}\]
for the \(\kappa=0\) and \(\kappa=1\) cases, respectively. The boundary condition is of Dirichlet type given by \(\psi|_{\partial\mathcal{D}}=0\). For cold electrons the Grad-Shafranov equation (36) takes the familiar form
\[\Delta^{*}\psi+II^{\prime}(\psi)+d_{i}^{2}e^{-G/d_{i}^{2}}\left[(V_{1}+2V_{2}\psi)-\frac{G^{\prime}(\psi)}{d_{i}^{2}}\left(C_{0}-d_{i}^{4}V_{2}+V_{1}\psi+V_{2}\psi^{2}\right)\right]r^{2}\] \[-d_{i}^{4}V_{2}e^{-G/d_{i}^{2}}G^{\prime}(\psi)r^{4}=0\,. \tag{57}\]
Note that a Grad-Shafranov equation of similar structure describes axisymmetric equilibria with incompressible flows of arbitrary direction, as shown in [24]. In the MHD context the \(r^{4}\) term is associated with the non-parallel component of the flow.
We solve both boundary value problems, corresponding to \(\kappa=0\) and \(\kappa=1\), using the Finite Element Method (FEM), which is conveniently implemented in Mathematica. The boundary \(\partial\mathcal{D}\) of the computational domain \(\mathcal{D}\) is defined as a polygon with a large number of vertices. The vertex coordinates can be boundary points extracted from some parametric formula or from experimental data. The boundary is characterized by an inverse aspect ratio \(\epsilon=0.32\), triangularity \(\delta=0.34\), and elongation equal to \(1.6\). The characteristic values of length, magnetic field and number density used for unit recovery are, respectively, \(R_{0}=6.2\,m\), \(B_{0}=5\,T\) and \(n_{0}=2.1\times 10^{19}\,m^{-3}\). The algorithm performs several iterations, because the position of the magnetic axis, which is required for determining the function \(G=\alpha(\psi-\psi_{a})^{2}\), has to be found; the iterations continue until the convergence criterion \(\max(|\psi_{new}-\psi_{old}|)<tol\) is satisfied.
Figure 1: Magnetic surfaces of an equilibrium with cold electrons (blue dashed lines) and an equilibrium with thermal electrons (red solid lines).
Figure 2: Variation of the flux functions \(\psi\) (left) and the \(z\) component of the magnetic field (right) along the \(r\) axis on the equatorial plane \(z=0\). The dashed blue lines correspond to the cold electron equilibrium and the red solid lines correspond to thermal electrons.
For our calculations we have set \(tol=10^{-7}\).
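The structure of this iteration can be sketched as follows. This is only a toy illustration of the outer loop (tracking of the magnetic axis plus the convergence test): `fem_solve` is a damped stand-in update, not an actual FEM Grad-Shafranov solver, and the parameter values are placeholders:

```python
import numpy as np

def fem_solve(psi, psi_axis, alpha=7.5, damping=0.1):
    """Toy stand-in for one FEM solve of (36) with G = alpha*(psi - psi_axis)^2."""
    source = np.exp(-alpha * (psi - psi_axis) ** 2)
    return (1.0 - damping) * psi + damping * source

def solve_equilibrium(shape=(65, 65), tol=1e-7, max_iter=1000):
    psi = np.zeros(shape)
    for iteration in range(max_iter):
        psi_axis = psi.max()                      # current estimate of psi at the magnetic axis
        psi_new = fem_solve(psi, psi_axis)        # one solve with the updated G(psi)
        if np.max(np.abs(psi_new - psi)) < tol:   # convergence criterion max|psi_new - psi_old| < tol
            return psi_new, iteration
        psi = psi_new
    return psi, max_iter

psi, n_iter = solve_equilibrium()
```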
The contours of constant \(\psi\) (magnetic surfaces) for both equilibria are shown in Fig. 1. These equilibria are calculated by solving (36) with the ansatz (53) for \(I(\psi)\). In the case of cold electrons, the function \(V\) is given by (55), while for thermal electrons, \(V\) is specified by (56).
Figure 4: The toroidal rotation velocity profile (left) and the corresponding profile of the \(r\) component of the electric field on \(z=0\).
Figure 5: Particle density profiles for \(\kappa=0\) and \(\kappa=1\) (left panel) and the electron pressure for \(\kappa=1\) (right panel).
Figure 3: The toroidal current density profiles for the two equilibrium classes. The total current density profile is displayed in the left panel, while in the right panel the electron and the ion kinetic contributions are drawn separately.
Figure 6: Variation of the parallel (left panel) and perpendicular (right panel) components of the ion pressure tensor along \(r\) axis on \(z=0\) for both \(\kappa=0\) and \(\kappa=1\).
The values of the free parameters in the functions \(I\) and \(V\) are identical for both cases. For the specific examples presented here, we have chosen \(I_{0}=0.5\), \(I_{1}=10^{-1}\), \(I_{2}=-14\), \(V_{1}=15\), \(V_{2}=1.02\times 10^{4}\), \(\alpha=7.5\), and \(\eta=10\).
The characteristics of the equilibrium can be deduced from Figures 2 to 6, which display variations of various physical quantities of interest along the \(r\) axis on the \(z=0\) plane. Two-dimensional density plots of the same quantities are presented in Figures 7 to 14.
Figure 8: Variation of \(J_{\phi}\) component on the \(r-z\) plane for \(\kappa=0\) (left) and \(\kappa=1\) (right).
Figure 7: Two dimensional density plots for the particle densities in \(\kappa=0\) (left panel) and \(\kappa=1\) (right panel) case.
Figure 9: The variation of the toroidal rotation velocity \(u_{\phi}\) on the plane \(r-z\), for cold (left) and thermal electrons (right).
Notably, the particle density in both equilibria does not vanish at the boundary (Figures 5 and 7), implying that this equilibrium model is suitable for describing internal plasma regions bounded by a closed magnetic surface, which defines the computational domain and does not coincide with the actual plasma boundary.
Additionally, we observe that the toroidal plasma rotation velocity profile exhibits a hollow shape, with significant flow shear and radial electric field (\(E_{r}\)) in the plasma edge (Figs. 4, 9, and 10).
Figure 11: The parallel component (\(P_{\parallel}\)) of the ion pressure tensor for cold (left panel) and thermal (right panel) electrons.
Figure 12: The perpendicular component (\(P_{\perp}\)) of the ion pressure tensor for cold (left panel) and thermal (right panel) electrons.
Figure 10: The variation of the electric field magnitude \(|\mathbf{E}|\) on the plane \(r-z\), for cold (left) and thermal electrons (right).
Such edge sheared flows have been associated with the reduction of radial turbulent transport and the transition to high (H) confinement modes in large tokamaks (e.g., [25, 26]). Moreover, the toroidal current density profile for the \(\kappa=1\) equilibrium shows a reduction in the central region of the plasma (Figs. 3, 8).
In addition, the parallel ion pressure component \(P_{\parallel}\) develops an edge pedestal in the thermal electron case. As a consequence, the effective pressure defined as \((P_{\parallel}+P_{\perp})/2\) also forms a pedestal due to the \(P_{\parallel}\) contribution.
In addition to the previously mentioned physical quantities, we calculate two figures of merit for both cold and thermal electron equilibria: the plasma \(\beta\) and the anisotropy function \(\sigma\) (defined in Appendix A, Eq. (63)). In nondimensional form the expression for calculating the plasma \(\beta\) is:
\[\beta=d_{i}^{2}\frac{P_{e}+\langle P\rangle}{B^{2}}\,, \tag{58}\]
where \(\langle P\rangle:=(P_{rr}+P_{zz}+P_{\phi\phi})/3\), with \(P_{rr},P_{zz},P_{\phi\phi}\) being the diagonal components of the pressure tensor (see Appendix A). The presence of the \(d_{i}^{2}\) factor arises owing to the specific scaling we have adopted for the normalized pressure in Eqs. (9). Figures 13 and 14 illustrate that the plasma \(\beta\) ranges from approximately \(0.5-1.0\%\) and increases from the plasma boundary towards the core, while the ion pressure anisotropy is more pronounced on the low-field side of the configuration.
We conclude our presentation of equilibrium results with Fig. 15, which illustrates the variation of the ion distribution functions as a function of the toroidal velocity component \(v_{\phi}\), for both the \(\kappa=0\) and \(\kappa=1\) cases at two distinct locations: the magnetic axis \((r_{ax},z_{ax})\) and an edge point with coordinates \((r=1.3,z=0.0)\). In both cases, the dependence on \(v_{r}\) and \(v_{z}\) has been eliminated by integrating the distribution functions over the \(v_{r}-v_{z}\) plane. The two distribution functions are presented alongside the corresponding normalized Maxwellian distributions \(f_{0}e^{-v_{\phi}^{2}}\), where \(f_{0}\) is an appropriate normalization constant. In both cases, the distributions exhibit a shift towards positive \(v_{\phi}\), resulting in finite macroscopic toroidal flows. At the edge point \((1.3,0)\), where the toroidal flow appears to reach a maximum, the distributions significantly deviate from the Maxwellian, displaying a bump-on-tail form. The bump arises from ions rotating in the direction opposite to the macroscopic flow.
## 5 Summary
In this work, we have presented the axisymmetric equilibrium formulation of the hybrid Vlasov equilibrium model introduced in [9], featuring massless electrons and kinetic ions. We derived a general form of the Grad-Shafranov equation and outlined a method for determining ion distribution functions in terms of Hermite polynomials based on the knowledge of the total and the electron current density profile. Our formulation allowed us to solve the equilibrium problem for specific choices of the arbitrary functions involved in the Grad-Shafranov equation. The results demonstrate the model's capability to describe plasmas with geometric and profile characteristics relevant to tokamaks. Notably, these equilibria exhibit some features reminiscent of H-mode phenomenology, including strongly sheared edge flows and significant edge radial electric fields. Building upon these results, more refined descriptions of plasma equilibria with kinetic effects stemming from kinetic particle populations are possible. Thus, future research will focus on improving the model to incorporate realistic electron temperature distribution and fluid ion components. An intriguing open question is whether this equilibrium model can be derived through a Hamiltonian energy-Casimir (EC) variational principle, as explored in [15, 27]. Identifying the complete set of Casimir invariants of the dynamical system is crucial for such a variational formulation of the equilibrium problem and for establishing stability criteria within the Hamiltonian framework. Note that, in general, there are not enough Casimirs to recover all the possible classes of equilibria due to rank changing of the Poisson operator (see [28]). However, instead of the EC variational principle, one can apply an alternative Hamiltonian variational method that recovers all equilibria upon utilizing dynamically accessible variations [28].
## Acknowledgements
This work has received funding from the National Fusion Programme of the Hellenic Republic - General Secretariat for Research and Innovation. P.J.M. was supported by the U.S. Department of Energy Contract No. DE-FG05-80ET-53088.
## Appendix A Calculation of the ion pressure tensor components
The ion pressure tensor is defined by
\[\mathbf{P}=\int d^{3}v\,(\boldsymbol{v}-\mathbf{u})(\boldsymbol{v}-\mathbf{u})f\,, \tag{59}\]
where \(\mathbf{u}\) is calculated by (7). In our case \(\mathbf{u}=u_{\phi}\hat{\phi}\). Selecting the \((v_{r},v_{z},v_{\phi})\) basis in velocity space, we calculate below the diagonal pressure components \(P_{rr},P_{zz},P_{\phi\phi}\). Note that the non-diagonal components vanish, \(P_{rz}=P_{r\phi}=P_{z\phi}=0\), owing to the fact that \(f\) is an even function of the velocity components \(v_{r}\) and \(v_{z}\). The diagonal elements are calculated as follows
\[P_{rr} = \int d^{3}v\,v_{r}^{2}f\,, \tag{60}\] \[P_{zz} = \int d^{3}v\,v_{z}^{2}f\,,\] (61) \[P_{\phi\phi} = \int d^{3}v\,(v_{\phi}-u_{\phi})^{2}f\,. \tag{62}\]
An average value of the ion pressure is given by \(\langle P\rangle=Tr(P_{ij})/3\).
It is evident that \(P_{rr}=P_{zz}\) and since the non-diagonal components are zero, the ion pressure tensor is gyrotropic and can be written in the form
\[\mathbf{P}=\sigma\mathbf{B}\mathbf{B}+P_{\perp}\mathbf{I}\,, \tag{63}\]
where
\[\sigma\coloneqq d_{i}^{2}\frac{P_{\parallel}-P_{\perp}}{B^{2}}\,,\]
is an anisotropy function. Note that the factor \(d_{i}^{2}\) appears due to the specific scaling of the particle velocity adopted in (9). The parallel and the perpendicular to \(\mathbf{B}\) components of the pressure tensor can be calculated by the following relations
\[P_{\parallel} = \frac{\mathbf{P:}\mathbf{B}\mathbf{B}}{B^{2}}=\frac{P_{ij}B_{i}B _{j}}{B^{2}}\,, \tag{64}\] \[P_{\perp} = \frac{1}{2}\mathbf{P:}\left(\mathbf{I}-\frac{\mathbf{B}\mathbf{B }}{B^{2}}\right)=\frac{1}{2}P_{ij}\left(\delta_{ij}-\frac{B_{i}B_{j}}{B^{2}} \right)\,. \tag{65}\]
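A short numerical check of the decomposition (63) and the projections (64)-(65) is given below; the field and pressure values are arbitrary, and the \(d_{i}^{2}\) factor is set to one for simplicity:

```python
import numpy as np

B = np.array([0.3, -0.2, 1.1])                 # sample magnetic field vector
B2 = B @ B
P_par, P_perp = 2.0, 0.8                       # assumed parallel and perpendicular pressures
sigma = (P_par - P_perp) / B2                  # anisotropy function with d_i^2 = 1

P = sigma * np.outer(B, B) + P_perp * np.eye(3)                                 # Eq. (63)
P_par_rec = np.einsum('ij,i,j->', P, B, B) / B2                                 # Eq. (64)
P_perp_rec = 0.5 * np.einsum('ij,ij->', P, np.eye(3) - np.outer(B, B) / B2)     # Eq. (65)

assert np.allclose([P_par_rec, P_perp_rec], [P_par, P_perp])
```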
| We compute equilibria of the magnetostatic field and the ion distribution function using a hybrid Vlasov model with kinetic ions and massless fluid electrons, assuming isothermal electrons and a deformed Maxwellian distribution function. The equilibrium system comprises a Grad-Shafranov partial differential equation and an integral equation. These equations can be used to compute the equilibrium magnetic field and the ion distribution function when either the particle density or the ion and electron toroidal current density profiles are prescribed. The resulting solutions describe states characterized by toroidal plasma rotation and toroidal current density. Furthermore, owing to the presence of the fluid electrons, these equilibria also possess poloidal current density components. This is in contrast with the fully kinetic Vlasov model, in which axisymmetric Jeans equilibria
2309.03599 | Chasing Consistency in Text-to-3D Generation from a Single Image | Text-to-3D generation from a single-view image is a popular but challenging
task in 3D vision. Although numerous methods have been proposed, existing works
still suffer from the inconsistency issues, including 1) semantic
inconsistency, 2) geometric inconsistency, and 3) saturation inconsistency,
resulting in distorted, overfitted, and over-saturated generations. In light of
the above issues, we present Consist3D, a three-stage framework Chasing for
semantic-, geometric-, and saturation-Consistent Text-to-3D generation from a
single image, in which the first two stages aim to learn parameterized
consistency tokens, and the last stage is for optimization. Specifically, the
semantic encoding stage learns a token independent of views and estimations,
promoting semantic consistency and robustness. Meanwhile, the geometric
encoding stage learns another token with comprehensive geometry and
reconstruction constraints under novel-view estimations, reducing overfitting
and encouraging geometric consistency. Finally, the optimization stage benefits
from the semantic and geometric tokens, allowing a low classifier-free guidance
scale and therefore preventing oversaturation. Experimental results demonstrate
that Consist3D produces more consistent, faithful, and photo-realistic 3D
assets compared to previous state-of-the-art methods. Furthermore, Consist3D
also allows background and object editing through text prompts. | Yichen Ouyang, Wenhao Chai, Jiayi Ye, Dapeng Tao, Yibing Zhan, Gaoang Wang | 2023-09-07T09:50:48 | http://arxiv.org/abs/2309.03599v1 | # Chasing Consistency in Text-to-3D Generation from a Single Image
###### Abstract
Text-to-3D generation from a single-view image is a popular but challenging task in 3D vision. Although numerous methods have been proposed, existing works still suffer from the inconsistency issues, including 1) semantic inconsistency, 2) geometric inconsistency, and 3) saturation inconsistency, resulting in distorted, overfitted, and over-saturated generations. In light of the above issues, we present **Consist3D**, a three-stage framework Chasing for semantic-, geometric-, and saturation-**Consistent** Text-to-**3D** generation from a single image, in which the first two stages aim to learn parameterized consistency tokens, and the last stage is for optimization. Specifically, the semantic encoding stage learns a token independent of views and estimations, promoting semantic consistency and robustness. Meanwhile, the geometric encoding stage learns another token with comprehensive geometry and reconstruction constraints under novel-view estimations, reducing overfitting and encouraging geometric consistency. Finally, the optimization stage benefits from the semantic and geometric tokens, allowing a low classifier-free guidance scale and therefore preventing over-saturation. Experimental results demonstrate that Consist3D produces more consistent, faithful, and photo-realistic 3D assets compared to previous state-of-the-art methods. Furthermore, Consist3D also allows background and object editing through text prompts.
## 1 Introduction
Recently, text-to-3D generation from a single image has emerged as an active research area, with the goal of personalizing 3D assets using a reference image. This field has been explored extensively in previous literature [14, 15], often relying on shape estimation or few-shot fine-tuning as the prior and score distillation sampling [16] as the optimizer. Even though numerous methods have been proposed, they still suffer from inconsistency issues, for instance: 1) misguided semantics caused by inaccurate shape estimations; 2) distorted geometry caused by overfitting on the reference view; and 3) oversaturated colors caused by score distillation sampling.
Shape estimation methods [12, 13, 14, 15], including point cloud estimation, sketch estimation, etc., aim to aid text-to-3D generation by providing an estimated 3D prior for each novel view. However, they often inaccurately estimate the 3D priors, which results in misguided semantics, especially when a single-view image is the only input, because, in such a situation, there is not enough information for estimating 3D priors. Few-shot fine-tuning methods [13, 14] aim at personalizing a text-to-image generative model with several images sharing a common subject. However, when only one single input image is provided for training, they often lead to geometric inconsistency across novel views because of overfitting on the reference single view. The score distillation sampling method [16] aims to lift the 2D generations to 3D assets. This lifting process needs to ensure that the generation under each viewing angle is stable enough, so the fidelity of saturation is sacrificed for stability by applying a high classifier-free guidance scale. Therefore, current methods for generating 3D assets from a single image face challenges with inconsistency in semantics, geometry, and saturation, often resulting in distorted and over-saturated generations, as illustrated in Fig. 1.
Figure 1: **Inconsistency issues.** (a) Semantic inconsistency: the generated object looks like a box instead of a hat. (b) Geometric inconsistency: a cat’s face appears in the back view of a cat whose face originally points towards the front view. (c) Saturation inconsistency: the rendering of the generated teapot is oversaturated compared with the original teapot’s color.
Enhancing semantic and geometric consistency across seen and unseen views, while remaining robust to inaccurate shape estimations and mitigating color distortion in optimization, is imperative for achieving satisfactory 3D generation results.
In this paper, we present **Consist3D**, a semantic-geometric-saturation **Consistent** approach for photo-realistic and faithful text-to-**3D** generation from a single image. We address the three inconsistency issues (as shown in Fig. 1) by introducing a three-stage framework, including a semantic encoding stage, a geometric encoding stage, and an optimization stage. In the first stage, a parameterized identity token is trained independently of shape priors, enhancing robustness to misguidance and relieving the semantic-inconsistency problem. In the second stage, a geometric token is trained with comprehensive geometry and reconstruction constraints, overcoming single-view over-fitting issues and further enhancing geometric consistency between different views. In the third stage, the optimization process benefits from the semantic token and geometric token, allowing low classifier-free guidance (CFG) scales, therefore addressing the saturation-inconsistency issue and enabling background and object editing through text prompt.
The experiments highlight the strengths of Consist3D in generating high-fidelity 3D assets with robust consistency, while remaining faithful to the input single image and text prompts. As shown in Fig. 2 (a), compared to baseline methods, our generated results exhibit improved consistency and more reasonable saturation. Notably, our approach enables background editing (Fig. 2 (b)) and object editing (Fig. 2 (c)) through text prompt, without changing the input image. We summarize our contribution as follows:
* To our knowledge, we are the first to explore the semantic-geometric-saturation consistency problems in text-to-3D generation, and accordingly, we propose **Consist3D**, an approach for consistent text-to-3D generation, background and object editing from a single image.
* Our Consist3D consists of a three-stage framework, including a semantic encoding stage, a geometric encoding stage, and an optimization stage, and can generate robust, non-overfitted, natural-saturated 3D results under a low classifier-free guidance scale.
* Extensive experiments are conducted. Compared with prior arts, the experimental results demonstrate that Consist3D produces faithful and photo-realistic 3D assets with significantly better consistency and fidelity.
## 2 Related Works
### Personalized Text-to-Image Generation
Text-to-image (T2I) generative models [12, 13, 14, 15] have significantly expanded the ways we can create 2D images with text prompts in the multi-modality field [13]. With text-to-image (T2I) synthesis enhanced by controllable denoising diffusion models [16, 15, 17, 18], personalizing text-to-image generation has become an emerging focus of research, which aims to generate images faithful to a specific subject. This area has seen considerable exploration [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. For example, textual inversion methods (Huang et al. 2023c; Voynov et al. 2023) learn parameterized textual descriptions from a set of images sharing a common subject. This extends T2I tasks to image-to-image generation, essentially realizing subject-driven personalization. To enhance textual inversion efficiently, a number of few-shot fine-tuning approaches have recently emerged. Typically, DreamBooth (Ruiz et al. 2022) learns parameterized adapters (_i.e._, LoRA (Hu et al. 2021)) for the generative network, instead of parameterizing the textual descriptions. In another direction, ControlNet (Zhang and Agrawala 2023), Composer (Huang et al. 2023a), and T2I-Adapter (Mou et al. 2023) offer guidance to diffusion models, facilitating custom-defined constraints over the generation process, and yielding controllable personalization.
### Personalized Text-to-3D Generation
Personalized text-to-3D generation has gained interest by extending successful personalized T2I models, aiming at generating 3D assets from a few images (Raj et al. 2023; Xu et al. 2023). Most current approaches (Metzer et al. 2022; Raj et al. 2023) apply few-shot tuning (e.g., DreamBooth) for personalization and score distillation sampling (SDS) (Poole et al. 2022) for optimization. A generalized DreamFusion approach combines few-shot tuning on a few images for personalization with estimations from Zero-1-to-3 (Liu et al. 2023) as the shape priors, followed by SDS optimization. However, shape priors estimated by Zero-1-to-3 are often view-inconsistent, resulting in low-quality generations. Another work, DreamBooth3D (Raj et al. 2023), enables personalized 3D generation from 3-5 images via joint few-shot tuning and SDS optimization. However, when the number of input views is decreased to 1, overfitting on the limited views leads to reconstruction failures and geometric inconsistency for novel views. Generating personalized 3D assets from only one single input image remains challenging (Cai et al. 2023; Gu et al. 2023a; Deng et al. 2023; Gu et al. 2023b; Xing et al. 2022; Lin et al. 2022). 3DFuse performs one-shot tuning on a single image and uses an estimated point cloud as guidance for a ControlNet for personalization, combined with SDS optimization. However, semantic and geometric inconsistency across views persists, as the point cloud estimation lacks accuracy, and one-shot tuning overfits the given view. This results in blurred, low-fidelity outputs. Score distillation sampling (SDS) optimizes 3D volume representations using pretrained text-to-image models, and was first introduced in DreamFusion (Poole et al. 2022). With the introduction of SDS, high-quality text-to-3D generation has been achieved in many previous works (Lin et al. 2023; Tang et al. 2023; Tsalicoglou et al. 2023; Chen et al. 2023; Wang et al. 2023). The insight of SDS is that, under a high classifier-free guidance (CFG) scale, the generation of the T2I model is stable enough under each text prompt, therefore enabling the 3D volume to converge. However, current works find that a high CFG scale harms the quality of the generations, leading to over-saturated results (Wang et al. 2023).
## 3 Method
### Overview
The input to our approach is a single image \(I_{ref}\) and a text prompt \(y\). We aim to generate a \(\theta\) parameterized 3D asset that captures the subject of the given image while being faithful to the text prompt. To achieve consistent 3D generation in the encoding process, we learn semantic consistency token and geometric consistency token, parameterized by \(\varphi_{1}\) and \(\varphi_{2}\), respectively. Overall, the parameters we need to optimize are \(\varphi_{1},\varphi_{2},\theta\), and the optimization goal can be formulated as follows,
\[\min_{\varphi_{1},\varphi_{2},\theta}\mathcal{L}(g(\theta,c),\epsilon(I_{ref},y,y_{\varphi_{1},\varphi_{2}},c)), \tag{1}\]
where \(\mathcal{L}\) is the loss function, \(c\) is the camera view, \(g\) is a differential renderer, and \(\epsilon\) is the diffusion model used to generate image using both text prompt \(y\) and learned prompt \(y_{\varphi_{1},\varphi_{2}}\) under the given view.
To facilitate the optimization of the parameters \(\varphi_{1},\varphi_{2},\theta\), we adopt two encoding stages and one score distillation sampling stage in our pipeline (Fig. 3). In the first stage, we propose semantic encoding and fine-tune a pretrained diffusion model \(\epsilon_{pretrain}\) to learn a semantic token parameterized by \(\varphi_{1}\), aiming at encapsulating the subject of the given image.
Figure 3: **Pipeline.** Stage I. A single-view image is input to the semantic encoding module, and a semantic token is trained with the semantic loss. Stage II. The single-view image is used to estimate a point cloud that serves as shape guidance conditioning the geometric encoding module, and a geometric token is trained with the warp and reconstruction losses. Stage III. A randomly initialized 3D volume is the input, the two previously trained tokens are utilized together with the tokenized text prompt as the condition, and the 3D volume is optimized into a 3D model faithful to the reference single image.
In the second encoding stage, we propose geometric encoding to learn a geometric token parameterized by \(\varphi_{2}\), with carefully designed geometry and reconstruction constraints. In the score distillation sampling stage, we propose a low-scale optimization for the \(\theta\)-parameterized 3D volume representation, benefiting specifically from the enhanced consistency provided by the proposed tokens.
### Semantic Encoding
The semantic encoding stage aims to learn the semantic token parameterized by \(\varphi_{1}\). The semantic token can be further incorporated with the text prompt to faithfully reconstruct the reference view image \(I_{ref}\) with consistent semantics. Specifically, we use the single image \(I_{ref}\) as the input to do one-shot fine-tuning to obtain the semantic token parameterized by \(\varphi_{1}\) to represent the given image as follows,
\[\begin{split}\min_{\varphi_{1}}\mathcal{L}_{sem}(\varphi_{1}):= \\ \mathbb{E}_{x,\epsilon,t}[w(t)\cdot\|\epsilon_{pretrain}(I_{ref},t,y _{\varphi_{1}})-\epsilon_{t}\|_{2}^{2}],\end{split} \tag{2}\]
where \(y_{\varphi_{1}}\) is a prompt containing the semantic token, \(\epsilon_{pretrain}\) represents the pretrained stable diffusion model, \(\epsilon_{t}\) is the noise scheduled at time step \(t\), and \(w(t)\) is the scaling factor which will be discussed in detail in Section 3.4.
We use the same training setting as DreamBooth [14], which enables few-shot personalization of text-to-image models using multiple reference images of a subject. Specifically, we adopt DreamBooth for one-shot personalization, optimizing \(\varphi_{1}\) by Eq. 2 to identify the single-view image. Notably, with only one image \(I_{ref}\) as the input, naive DreamBooth tends to overfit not only the subject but also the view of the reference image, leading to inconsistent generations under novel views. To address this, we propose the second encoding stage to improve the geometric consistency.
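To make the structure of the objective in Eq. (2) concrete, a minimal PyTorch-style sketch is given below. The `unet` module, the latent, the schedule values and \(w(t)\) are toy stand-ins (the actual model is the pretrained Stable Diffusion network conditioned on the prompt containing the semantic token), so this only illustrates the loss, not the real implementation:

```python
import torch

unet = torch.nn.Conv2d(4, 4, 3, padding=1)        # stand-in for eps_pretrain (the toy ignores t and the prompt)
x0 = torch.randn(1, 4, 64, 64)                     # latent of the reference image I_ref

t = torch.randint(0, 1000, (1,))                   # sampled time step
alpha_t, sigma_t = 0.7, 0.714                      # toy noise-schedule values at step t
w_t = 1.0                                          # scaling factor w(t)

eps = torch.randn_like(x0)                         # scheduled noise eps_t
x_t = alpha_t * x0 + sigma_t * eps                 # noisy latent

loss_sem = w_t * ((unet(x_t) - eps) ** 2).mean()   # || eps_pretrain(...) - eps_t ||^2
loss_sem.backward()                                # gradients reach the fine-tuned (e.g., LoRA) parameters
```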
### Geometric Encoding
In the second stage, we propose geometric encoding (Fig. 4), which aims to solve the overfitting and inconsistency issues by encapsulating warp and reconstruction consistency into what we term geometric token, parameterized by \(\varphi_{2}\).
To achieve warp and semantic consistency, the overall objective \(\mathcal{L}_{geometric}\) combines the two terms \(\mathcal{L}_{warp}\) and \(\mathcal{L}_{rec}\) (Eq. 3). Notably, the consistency token from this encoding stage does not carry standalone semantics. Due to the depth guidance, its semantics are conditioned on the view \(c\), encapsulating the inherent 3D consistency of the generation. By incorporating this token into prompts, we enhance the geometric consistency of the diffusion model outputs across different views.
\[\mathcal{L}_{geometric}(\varphi_{2}\mid c_{I})=\mathcal{L}_{warp}(\varphi_{2} \mid c_{I})+\mathcal{L}_{rec}(\varphi_{2}\mid c_{ref}), \tag{3}\]
where \(c_{I}\) defines the sampled camera view for image \(I\), \(c_{ref}\) is the given input reference view. The warp loss and the reconstruction loss are demonstrated as follows.
**Warp Loss.** The warp loss aims to ensure a consistent transition between two camera views, \(c_{I}\) and \(c_{J}\), with a learnable geometric token parameterized by \(\varphi_{2}\). The loss is formulated as follows,
\[\begin{split}\min_{\varphi_{2}}\mathcal{L}_{warp}(\varphi_{2} \mid c_{I}):=\mathbb{E}_{x,\epsilon,t,c_{I}}[w(t)\cdot\\ \|(\hat{J}_{\epsilon,t,y_{\varphi_{2}}}-\mathcal{W}_{I\to J}(I, D))\cdot M\|_{2}^{2}],\end{split} \tag{4}\]
where \(\hat{J}_{\epsilon,t,y_{\varphi_{2}}}\) is the generated image from the diffusion model under the view \(c_{J}\) guided by the learnable geometric token \(\varphi_{2}\), \(\mathcal{W}_{I\to J}(I,D)\) is the warp operator that transfers the image \(I\) from the view \(c_{I}\) to the view \(c_{J}\) based on the depth map \(D\), and \(M\) is the warp mask indicating the visible points in both views. Note that the warped \(\mathcal{W}_{I\to J}\) is a deterministic function when the two views and the depth map are known.
The novel view image \(\hat{J}_{\epsilon,t,y_{\varphi_{2}}}\) is generated from the input view \(c_{I}\) based on the pretrained diffusion model \(\epsilon_{pretrain}\) as follows,
\[\hat{J}_{\epsilon,t,y_{\varphi_{2}}}=\alpha_{t}I+\sigma_{t}\epsilon_{pretrain} (I,t,y_{\varphi_{2}},D_{J}), \tag{5}\]
where \(\alpha_{t}\) and \(\sigma_{t}\) are predefined parameters in the pretrained diffusion model \(\epsilon_{pretrain}\) conditioned on time step \(t\), \(D_{J}\) is the estimated depth map under view \(c_{J}\). Here, ControlNet [13] is adopted as the pretrained diffusion model with depth map as conditions. With the warp loss, the geometric token enables the diffusion model to have the capability of cross-view generation with the learnable parameter \(\varphi_{2}\).
In the implementation, we use Point-E [10] to generate the 3D point cloud and then obtain the depth map of the input image. Initially, we use the input reference view \(c_{ref}\) as \(c_{I}\), and then sample a neighboring view as \(c_{J}\) with a small view change. After multiple steps, views from 360 degrees will be sampled.
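A sketch of how the warp objective in Eq. (4) can be assembled is shown below; `generate_novel_view` (the depth-conditioned diffusion sample \(\hat{J}\)) and `warp_to_view` (the reprojection \(\mathcal{W}_{I\to J}\)) are hypothetical placeholders standing in for the ControlNet call and the depth-based warping:

```python
import torch

def generate_novel_view(image, depth_j):
    # Placeholder for the depth-conditioned diffusion output J_hat (Eq. (5)).
    return image + 0.01 * torch.randn_like(image)

def warp_to_view(image, depth_i):
    # Placeholder for the deterministic warp W_{I->J}(I, D).
    return image

I = torch.rand(1, 3, 256, 256)                         # image under view c_I
depth_i = torch.rand(1, 1, 256, 256)                   # depth map from the estimated point cloud
depth_j = torch.rand(1, 1, 256, 256)                   # depth map under the neighboring view c_J
mask = (torch.rand(1, 1, 256, 256) > 0.2).float()      # visibility mask M

J_hat = generate_novel_view(I, depth_j)
J_warped = warp_to_view(I, depth_i)
w_t = 1.0
loss_warp = w_t * (((J_hat - J_warped) * mask) ** 2).mean()
```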
**Reconstruction Loss.** The reconstruction loss ensures that the geometric token \(\varphi_{2}\) retains the subject semantics under the reference view \(c_{ref}\) with the reference image \(I_{ref}\) as follows,
Figure 4: **Geometric encoding. We adopt ControlNet with depth guidance for the generation. The training object is \(\mathcal{L}_{warp}\) and \(\mathcal{L}_{rec}\). The \(\mathcal{L}_{warp}\) calculated loss between two neighboring views with warp mask under novel views, and the \(\mathcal{L}_{rec}\) calculated loss between the single input image and the generation with reference mask under reference view.**
\[\begin{split}\min_{\varphi_{2}}&\mathcal{L}_{rec}( \varphi_{2}\mid c_{ref}):=\mathbb{E}_{x,\epsilon,t}[w(t)\cdot\\ &\|\epsilon_{pretrain}(I_{ref}\cdot M_{ref},t,y_{\varphi_{2}},D_{ ref})-\epsilon_{t}\cdot M_{ref}\|_{2}^{2}],\end{split} \tag{6}\]
where \(D_{ref}\) is the depth map image and \(M_{ref}\) is the object mask. This enforces the model to generate the ground truth image when guided by the true depth, ensuring consistent subject identity.
### Low-scale Score Distillation Sampling
In the score distillation sampling stage (Fig. 5), we use prompts \(y_{\varphi_{1},\varphi_{2}}\) with both \(\varphi_{1}\) parameterized semantic token and \(\varphi_{2}\) parameterized geometric token, guided by the depth map \(D_{c}\) under the sampled view \(c\). The aim of this stage is to learn a 3D volume parameterized by \(\theta\). Specifically, we adopt the deformed SDS formulation as follows:
\[\begin{split}\nabla_{\theta}\mathcal{L}_{SDS}(\theta):=\mathbb{E }_{t,\epsilon,c}[w(t)\cdot\\ &(\epsilon_{pretrain}(x_{t},t,y_{\varphi_{1},\varphi_{2}},D_{c})- \epsilon_{t})\frac{\partial g(\theta,c)}{\partial\theta}],\end{split} \tag{7}\]
where the time step \(t\sim\mathcal{U}\left(0.02,0.98\right)\), noise \(\epsilon_{t}\sim\mathcal{N}(0,\mathcal{I})\), and \(g(\theta,c)\) is the rendered image from the 3D volumes parameterized by \(\theta\) under camera view \(c\), \(x_{t}=\alpha_{t}g(\theta,c)+\sigma_{t}\epsilon\).
The scaling factor \(w(t)\) in Eq. 7 allows flexibly tuning the degree of conditionality in classifier-free guidance (CFG) for text-to-image generation [10]. Higher scales impose stronger conditional constraints, while lower scales enable more unconditional generation. In 2D text-to-image, the CFG scale is typically set between 7.5 and 10 to balance quality and diversity.
Typically, high CFG scales (up to 100) are required for text-to-3D optimization, as proposed in DreamFusion. However, excessively high scales can impair image quality and diversity [23]. Our consistency tokens learned in the first two stages enhance semantic and geometric consistency, allowing high-quality distillation even at low scales (\(<25\)). This achieves photo-realistic, naturally saturated 3D generations that faithfully adhere to the subject.
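For illustration, one low-scale score-distillation update following Eq. (7) can be sketched as below; `render` and `eps_pred` are toy placeholders for the differentiable renderer \(g(\theta,c)\) and the depth-conditioned diffusion model, and the schedule values are arbitrary:

```python
import torch

theta = torch.randn(1, 4, 64, 64, requires_grad=True)   # stands for the 3D volume parameters
optimizer = torch.optim.Adam([theta], lr=1e-2)

def render(theta, camera):
    return torch.tanh(theta)                             # toy differentiable renderer g(theta, c)

def eps_pred(x_t, t, cfg_scale=15.0):
    return 0.1 * x_t                                     # toy noise prediction at a low CFG scale

x0 = render(theta, camera=None)
t = torch.randint(20, 980, (1,))
alpha_t, sigma_t, w_t = 0.7, 0.714, 1.0
eps = torch.randn_like(x0)
x_t = alpha_t * x0 + sigma_t * eps

with torch.no_grad():
    eps_hat = eps_pred(x_t, t)

grad = w_t * (eps_hat - eps)        # (eps_pretrain(...) - eps_t); no backprop through the diffusion model
x0.backward(gradient=grad)          # injects the SDS gradient through d g(theta, c) / d theta
optimizer.step()
optimizer.zero_grad()
```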
## 4 Experiments
### Implementation Details
We use Stable Diffusion [14] as the generative model with CLIP [11] text encoder and LoRA [12] as the adapter technique for fine-tuning. As for the representation of the 3D field, we adopt Score Jacobian Chaining (SJC) [23]. Encoding takes half an hour for each of the two stages, and distillation takes another hour. Specifically, Stage I semantic encoding uses \(1k\) optimization steps with LoRA. Stage II geometric encoding uses \(2k\) optimization steps with LoRA. Stage III uses \(10k\) optimization steps for SJC.
### Datasets
We evaluate Consist3D on a wide range of image collections, where the categories include animals, toys, and cartoon characters, and each subject is presented with a single-view capture. The sources of the image collections include in-the-wild images selected from ImageNet, cartoon characters collected from the Internet, and images from the DreamBooth3D dataset. We optimize a 3D asset corresponding to each given single-view image with several different prompts and backgrounds, demonstrating faithful, photo-realistic, and diverse 3D generation results.
### Performance Comparison
**Text-to-3D Generation from a Single Image.** We compare our results with two baselines (_i.e._, DreamFusion [13] and 3DFuse [13]) on the single-image-based text-to-3D generation task, because they are the most related to our method and are representative works in the field of personalized 3D generation. Notably, the original implementation of DreamFusion is a text-to-3D structure.
Figure 5: **Score distillation sampling. A rendered image of a 3D volume is utilized as the input and a depth ControlNet with low CFG scales is utilized for generation. For the text condition, we combine the semantic token and geometric token with tokenized texts, which enables background editing and object editing through prompt.**
Figure 6: **Comparison with baselines (text-to-3D generation from a single image). The first column is the single input image. The following columns are results of 3DFuse, DreamFusion and Consist3D(Ours), separately. DreamFusion cannot correctly synthesize photo-realistic images and thin objects. 3DFuse is strongly troubled by inconsistency issues. However, the generation results of our method are not only faithful to the reference but also natural saturated, with good consistencies.**
Therefore, we initially utilized a single-view image along with DreamBooth for one-shot tuning, and then incorporated the shape prior estimated by Zero-1-to-3 for DreamFusion to produce 3D assets, following the implementation of the official open-source code. Our results are benchmarked against the baselines DreamFusion and 3DFuse, as Fig. 6 shows. In the case of 3DFuse, we adhere to the original configuration established by its authors. We notice that DreamFusion suffers from incorrect shape prior estimations and gives unnatural results (Fig. 6), due to the super-high guidance scale. The generation quality of 3DFuse is strictly limited by the estimated point cloud prior, and even a slightly incorrect depth map can lead to completely wrong generations. Furthermore, its generated objects are not faithful enough to the given reference view, often with blurred edges and overfitting problems. In contrast, our work can generate objects that are faithful to the input image and achieve a more natural and photo-realistic reconstruction, without leaning towards an artistic style.
**Background and Object Editing.** For background editing, we compare our results with 3DFuse, since DreamFusion does not provide an option to edit the background. As Fig. 7 (a) shows, 3DFuse is unable to generate the correct background, even when the background generation option is turned on. In contrast, our model is capable of editing the background of the reconstructed object through diverse prompts. For object editing, we compare our results with 3DFuse as well. As Fig. 7 (b) shows, 3DFuse is also unable to correctly change the object while keeping the input image unchanged, whereas our model can.
### Quantitative Evaluation
We compare our method with two baselines under 12 categories as shown in Tab. 1. The data for these 12 categories include unseen-domain anime from Internet, in-the-wild images from ImageNet, and synthetic datasets from the DreamBooth Dataset. CLIP Score [10] is a measure of image-text similarity. The higher the value, the more semantically the image and text match. We extend it to image-to-image similarity, which measures the similarity between the generated result and a given single-view image. To extend and calculate CLIP Score, we first re-describe the reference image with BLIP-2, and for fairness, remove the background description. Then for the reconstruction results of the 3D generation methods, we sample 100 camera positions, and for each position's sampled image, calculate the CLIP Score with the previously obtained description, and take the average separately for each method. Our method far surpasses the baselines in most categories, which means our generations are more faithful to the reference subject.
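The view-averaged CLIP-score protocol described above can be sketched as follows; `encode_text`, `encode_image`, and the caption are hypothetical placeholders (in practice, the CLIP encoders, the renders of the optimized 3D asset, and the BLIP-2 re-description of the reference image), so the snippet runs standalone on random features:

```python
import torch

def encode_text(caption):
    return torch.randn(1, 512)          # placeholder for CLIP text features of the description

def encode_image(view_index):
    return torch.randn(1, 512)          # placeholder for CLIP image features of a rendered view

def clip_score(img_feat, txt_feat):
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return 100.0 * (img_feat * txt_feat).sum(dim=-1)    # cosine similarity, commonly scaled by 100

txt_feat = encode_text("a photo of a yellow duck")       # hypothetical re-description of the reference image
scores = [clip_score(encode_image(cam), txt_feat) for cam in range(100)]   # 100 sampled camera positions
mean_score = torch.cat(scores).mean().item()
```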
### User Study
We have conducted a user study with 137 participants, which is shown in Table. 2. We have asked the participants to choose their preferred result in terms of saturation consistency, geometric consistency, semantic consistency, overall fidelity, and overall clarity among DreamFusion, 3DFuse, and Consist3D (Ours). The results show that Consist3D generates 3D scenes that the majority of people judge as having better consistency and overall quality than other methods.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c|c} \hline \hline Methods & anya & banana & bird & butterfly & cat & clock & duck & hat & horse & shark & sneaker & sunglasses & Average \\ \hline DreamFusion & **80.71** & 69.80 & 71.93 & 65.76 & 72.76 & 69.38 & 80.34 & 71.10 & 75.40 & 67.31 & 63.46 & 60.59 & 70.71 \\
3DFuse & 70.31 & 71.72 & 73.41 & 75.12 & 67.19 & 64.60 & 78.84 & 62.63 & 68.52 & 71.81 & 67.33 & 74.05 & 70.46 \\ Consist3D (ours) & 80.05 & **77.22** & **80.99** & **82.00** & **72.85** & **71.47** & **84.13** & **87.34** & **77.96** & **72.04** & **73.59** & **77.04** & **78.06** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **CLIP score. The bold values are the highest CLIP Scores in each category. The performance of our method surpasses the baselines comprehensively.**
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \hline Method & saturation & geometric & semantic & fidelity & clarity & Average \\ \hline DreamFusion & 65.40 & 72.20 & 76.07 & 79.29 & 76.10 & 73.81 \\ 3DFuse & 69.87 & 73.51 & 63.58 & 72.51 & 77.54 & 71.40 \\ Consist3D (ours) & **89.73** & **77.29** & **85.35** & **82.20** & **79.36** & **82.79** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **User study. The bold values are the best scores for each criterion. Our method surpasses the baselines in both consistency and quality.**
Figure 7: **Comparison with baselines (background and object editing). (a) 3DFuse cannot correctly generate the background of the dog, while our method generates “sod” and “sofa” properly. (b) With the object “yellow duck” changed to “gray duck”, 3DFuse only generates a duck with small gray wings, while our method changes the whole body to gray successfully.**
### Ablation Studies
**Consistency Tokens.** In Fig. 8 (a), we show ablation studies for the two consistency tokens. First, we test the role of the semantic token on single-image text-to-3D generation, and we find that removing the semantic token causes the generated object’s semantics to be inconsistent with the input image. In addition, we test the role of the geometric token on object editing. We find that removing the geometric token leads to inconsistent generations under novel views, and the generated object could not be edited, which indirectly proves that the geometric token emphasizes the shape’s geometric consistency across different viewing angles, while not over-fitting to the view of the input image.
**Losses.** In Fig. 8 (b), we test the roles of the warp loss \(\mathcal{L}_{warp}\) and the reconstruction loss \(\mathcal{L}_{rec}\) in the geometric encoding stage. With only the loss \(\mathcal{L}_{rec}\) applied, the model cannot generate the correct background, but the object is faithful to the input image. With only the loss \(\mathcal{L}_{warp}\) applied, the model generates a geometrically consistent 3D asset, but the background is not faithful to the input text. With both \(\mathcal{L}_{warp}\) and \(\mathcal{L}_{rec}\) applied, the results are faithful to the input image and text prompt while keeping geometric consistency, with the object and background correctly generated.
**CFG Scales.** In the ablation study for score distillation sampling (Fig. 8 (c)), we vary the CFG scale to 10, 25, and 100, showing that our low-scale distillation improves 3D quality. The consistency tokens reduce the scale requirements for photo-realistic 3D generation. Our method achieves good, diverse results with low scales (below 25).
**Seed Values.** We experiment with fixed prompts and changing random seeds to verify the robustness of our approach, as Fig. 8 (d) shows. The results demonstrate that our approach is not sensitive to random seeds.
## 5 Limitation and Future Work
Our method fails when the point cloud estimation is severely distorted. Moreover, if overly complex background prompts are used, the model may not be able to generate high-detail backgrounds. In future work, we intend to model objects and backgrounds separately to obtain more refined generations.
## 6 Conclusion
We introduce Consist3D, a method to faithfully and photo-realistically personalize text-to-3D generation from a single-view image, with the background and object editable by text prompts, addressing the inconsistency issues in semantics, geometry, and saturation. Specifically, we propose a three-stage framework with a semantic encoding stage, a geometric encoding stage, and a low-scale score distillation sampling stage. The semantic token learned in the first encoding stage encourages Consist3D to be robust to shape estimation, the geometric token learned in the second stage encourages the generation to be consistent across different views, and both tokens are used in the third stage to encourage natural saturation of the 3D generation. Our method outperforms the baselines quantitatively and qualitatively. Experiments on a wide range of images (including in-the-wild and synthetic images) demonstrate that our approach can 1) generate high-fidelity, consistent 3D assets from one image, and 2) change the background or object of the 3D generations by editing the text prompts without changing the input image. Going forward, we plan to incorporate more geometric constraints into token training to further enhance 3D consistency.
Figure 8: **Ablation study.** (a) Consistency Tokens. The first row: without the semantic token, the generation is not faithful to the input image. The second row: without the geometric token, the generation is not consistent across novel views and object editing fails. (b) Losses. Without \(\mathcal{L}_{rec}\) or \(\mathcal{L}_{warp}\), the generation becomes unfaithful to the input image or fails to generate the correct background. (c) CFG Scales. The 3D synthesis of sunglasses and a teapot under CFG scales of 10, 25, and 100, respectively. Results at lower scales are more naturally saturated, while the generations at higher scales tend to have over-saturated colors. (d) Seed Values. The 3D synthesis of a shark and a mushroom under seed values of 0, 1, and 2 demonstrates that our method is robust to different seeds, with the generations only slightly changed. | Text-to-3D generation from a single image is popular but challenging in 3D vision. Although many methods have been proposed, existing works still suffer from inconsistency issues, including 1) semantic inconsistency, 2) geometric inconsistency, and 3) saturation inconsistency, resulting in distorted, overfitted, and over-saturated generations. In light of these issues, we propose Consist3D, a three-stage framework chasing semantic, geometric, and saturation consistency in text-to-3D generation from a single image. The first two stages aim to learn parameterized consistency tokens, and the last stage performs optimization. Specifically, the semantic encoding stage learns a token independent of views and estimations, promoting semantic consistency and robustness. Meanwhile, the geometric encoding
2309.12494 | Evidential uncertainty sampling for active learning | Recent studies in active learning, particularly in uncertainty sampling, have
focused on the decomposition of model uncertainty into reducible and
irreducible uncertainties. In this paper, the aim is to simplify the
computational process while eliminating the dependence on observations.
Crucially, the inherent uncertainty in the labels is considered, the
uncertainty of the oracles. Two strategies are proposed, sampling by Klir
uncertainty, which tackles the exploration-exploitation dilemma, and sampling
by evidential epistemic uncertainty, which extends the concept of reducible
uncertainty within the evidential framework, both using the theory of belief
functions. Experimental results in active learning demonstrate that our
proposed method can outperform uncertainty sampling. | Arthur Hoarau, Vincent Lemaire, Arnaud Martin, Jean-Christophe Dubois, Yolande Le Gall | 2023-09-21T21:26:50 | http://arxiv.org/abs/2309.12494v2 | # Evidential uncertainties on rich labels
###### Abstract
Recent research in active learning, and more precisely in uncertainty sampling, has focused on the decomposition of model uncertainty into reducible and irreducible uncertainties. In this paper, we propose to simplify the computational phase and remove the dependence on observations, but more importantly to take into account the uncertainty already present in the labels, _i.e._ the uncertainty of the oracles. Two strategies are proposed, sampling by Klir uncertainty, which addresses the exploration-exploitation problem, and sampling by evidential epistemic uncertainty, which extends the reducible uncertainty to the evidential framework, both using the theory of belief functions.
Keywords: Active Learning, Uncertainty sampling, Belief Functions
## 1 Introduction
For reasons of efficiency, cost or energy reduction in machine learning or deep learning, one of the important issues is related to the amount of data and in some cases, to the amount of labelled data. Active learning [19] is a part of machine learning in which the learner can choose which observation to label in order to work with only a fraction of the labeled dataset to reduce the labeling cost. For this purpose, the learner uses a strategy that allows it to select only certain observations that will then be labeled. Among all the proposed strategies in the literature [1, 19] one of the best known is sampling by uncertainty [15].
In uncertainty sampling, the learner selects the instances for which it is most uncertain. The measures used to quantify this uncertainty, such as entropy, are up to now probabilistic. In this paper, we propose to use a broader framework of uncertainty that generalizes probabilities.
As proposed in recent papers [10, 11, 18], the uncertainty can be decomposed into two interesting terms: the epistemic and the aleatoric uncertainties. Aleatoric uncertainty arises from the stochastic nature of the event and is therefore not reducible, whereas epistemic uncertainty is related to a lack of knowledge and can be reduced. The proposed calculations depend on the model prediction but also on the observations. In this paper, we suggest removing the direct dependence on the observations and using only the model output to obtain similar results. This representation also addresses the exploration-exploitation
problem in active learning, with the possibility of choosing one or the other, or even a compromise as in [2].
The labeling process is often carried out by humans [7, 17], without distinguishing between a label given by someone who hesitated for a long time and a label given by someone who has no doubt; uncertainty may therefore already exist in the labels. This information is not taken into account in most models and sampling strategies. In the case of supervised classification, several models are now able to handle these uncertain labels [4, 5, 6, 23]. The main objective, in addition to removing the dependence on observations and addressing the exploration-exploitation problem, is to take into account in the sampling the uncertainty already present in the labels.
Given the above, we propose in this paper two uncertainty sampling strategies capable of representing a decomposition of the model uncertainties with regard to the uncertainty already present in the labels.
The first strategy is based upon two different uncertainties, the discord (how self-conflicting the information is) and non-specificity (how ignorant the information is) in the model output. The second strategy extends the epistemic uncertainty to the evidential framework and to several classes, thus simplifying the computation.
The paper is organized as follows: Section 2 introduces some important notions of imperfect labeling and the modeling of these richer labels using the theory of belief functions; the usual uncertainty sampling approach [15] is also recalled. Section 3 describes the separation between aleatoric and epistemic uncertainties. Section 4 presents the two newly proposed strategies, Section 5 shows an application on a real-world dataset, and Section 6 discusses and concludes the article. The experiments performed in this paper are described in the supplementary materials to avoid lengthy explanations, since the purpose of the paper does not lie in this part. Furthermore, uncertainties are mapped onto 2D representations, but the objective is for them to later serve active learning.
## 2 Preliminaries
In this section, we introduce some general knowledge useful to understand the rest of the paper, starting with rich labels, modeled by the theory of belief functions and ending with the classical approach of sampling by uncertainty.
#### 2.0.1 Imperfect labeling -
Most of the datasets used for classification consider hard labels, with a binary membership where the observation is either a member of the class or not. In this paper, we refer to as rich labels the elements of response provided by a source that may include several degrees of imprecision (_i.e._ "_This might be a cat_", "_I don't know_" or "_I am hesitating between dog and cat, with a slight preference for cat_"). Such datasets, offering uncertainty already present in the labels, exist [22] but are not numerous. These labels are called rich in this paper since they provide more information than hard labels and can be modeled using the theory of belief functions.
Theory of belief functions -
The theory of belief functions [3, 20] is used in this study to model uncertainty and imprecision for labeling and prediction. Let \(\Omega=\{\omega_{1},\ldots,\omega_{M}\}\) be the frame of discernment for \(M\) exclusive and exhaustive hypotheses. It is assumed that only one element of \(\Omega\) is true (closed-world assumption) [21]. The power set \(2^{\Omega}\) is the set of all subsets of \(\Omega\). A mass function assigns the belief that a source may have about the elements of the power set of \(\Omega\), such that the sum of all masses is equal to 1.
\[m:2^{\Omega}\rightarrow[0,1],\sum_{A\in 2^{\Omega}}m(A)=1. \tag{1}\]
Each subset \(A\in 2^{\Omega}\) such as \(m(A)>0\) is called a _focal element_ of \(m\). The uncertainty is therefore represented by a mass \(m(A)<1\) on a focal element \(A\) and the imprecision is represented by a non-null mass \(m(A)>0\) on a focal element \(A\) such that \(|A|>1\).
A mass function \(m\) is called _categorical mass function_ when it has only one focal element such that \(m(A)=1\). In the case where \(A\) is a set of several elements, the knowledge is certain but imprecise. For \(|A|=1\), the knowledge is certain and precise.
At the decision level, the pignistic probability \(BetP\) [21] helps decision making on singletons:
\[BetP(\omega)=\sum_{A\in 2^{\Omega},\ \omega\in A}\frac{m(A)}{|A|}. \tag{2}\]
It is also possible to combine several mass functions (beliefs from different sources) into a single body of evidence. If the labels and therefore the masses are not independent, a simple average of the mass functions \(m_{j}\) derived from \(N\) sources can be defined as follows:
\[m(A)=\frac{1}{N}\sum_{j=1}^{N}m_{j}(A),\ \ A\in 2^{\Omega}. \tag{3}\]
There are other possible combinations that are more common than the mean, many of which are listed in [14].
\(\bullet\)**Example 1:** Let \(\Omega=\{Cat,Dog\}\) be a frame of discernment. An observation labeled "Cat" by a source can be modeled in the framework of belief functions by the mass function \(m_{1}\) such as: \(m_{1}(\{Cat\})=1\) and \(m_{1}(A)=0,\ \forall A\in 2^{\Omega}\backslash\{Cat\}\).
\(\bullet\)**Example 2:** An observation labeled "Cat or Dog" by a source can be modeled by the mass function \(m_{2}\) such as: \(m_{2}(\{Cat,Dog\})=1\) and \(m_{2}(A)=0\), \(\forall A\in 2^{\Omega}\backslash\{Cat,Dog\}\).
\(\bullet\)**Example 3:** The average mass function \(\bar{m}\) of \(m_{1}\) and \(m_{2}\) is: \(\bar{m}(\{Cat\})=0.5\), \(\bar{m}(\{Cat,Dog\})=0.5\) and \(\bar{m}(A)=0\) for all other subsets \(A\) in \(2^{\Omega}\). Its pignistic probability \(BetP\), used for decision making is: \(BetP(\{Cat\})=0.75\) and \(BetP(\{Dog\})=0.25\).
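To make these notions concrete, the short Python sketch below (our illustration, not code from the paper) encodes the mass functions of Examples 1 and 2, applies the averaging combination of equation (3), and computes the pignistic transform of equation (2); the printed values match Example 3.

```python
OMEGA = ("Cat", "Dog")

def average_masses(masses):
    """Eq. (3): simple average of N (possibly dependent) mass functions."""
    focal = set().union(*(m.keys() for m in masses))
    return {A: sum(m.get(A, 0.0) for m in masses) / len(masses) for A in focal}

def betp(m, omega=OMEGA):
    """Eq. (2): pignistic probability of each singleton."""
    return {w: sum(mass / len(A) for A, mass in m.items() if w in A)
            for w in omega}

m1 = {frozenset({"Cat"}): 1.0}                 # Example 1: "Cat"
m2 = {frozenset({"Cat", "Dog"}): 1.0}          # Example 2: "Cat or Dog"
m_bar = average_masses([m1, m2])               # Example 3

print({tuple(sorted(A)): v for A, v in m_bar.items()})
# {('Cat',): 0.5, ('Cat', 'Dog'): 0.5}
print(betp(m_bar))
# {'Cat': 0.75, 'Dog': 0.25}
```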
#### 2.1.1 Uncertainty sampling -
Active learning iteratively builds a training set by selecting the best instances to label. The principle is, for a given performance or a given budget, to label as few observations as possible. Among all the strategies proposed in the literature [19] one of the best known methods is uncertainty sampling [13], where the function that defines the instances to be labeled maximizes the uncertainty related to the model prediction as described below.
Let \(\mathcal{U}\) be the uncertainty to label a new observation \(x\) for a given model and \(\Omega=\{\omega_{1},\ldots,\omega_{M}\}\) the set of the \(M\) possible classes. The uncertainty \(\mathcal{U}\) can be calculated in several ways, a classical approach is to use Shannon's entropy:
\[\mathcal{U}(x)=-\sum_{\omega\in\Omega}p(\omega|x)\text{log}[p(\omega|x)], \tag{4}\]
with \(p(\omega|x)\) the probability for \(x\) to belong to the class \(\omega\), given by the model. Other uncertainty criteria exist, it is common to use the least confidence measure:
\[\mathcal{U}(x)=1-\max_{\omega\in\Omega}[p(\omega|x)]. \tag{5}\]
Measuring the uncertainty of a model to predict the class of some observations can be useful to find the areas of uncertainty in a space.
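As a minimal illustration (ours, not the authors' code), the snippet below computes the two criteria of equations (4) and (5) from a model's posterior probabilities and selects the instance to query.

```python
import numpy as np

def entropy_uncertainty(p, eps=1e-12):
    """Eq. (4): U(x) = -sum_w p(w|x) log p(w|x)."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def least_confidence(p):
    """Eq. (5): U(x) = 1 - max_w p(w|x)."""
    return float(1.0 - np.max(p))

def query_index(posteriors, criterion=entropy_uncertainty):
    """Uncertainty sampling: pick the instance the model is least certain about."""
    return int(np.argmax([criterion(p) for p in posteriors]))

posteriors = [[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]]   # three unlabeled candidates
print(query_index(posteriors))                      # -> 2
print(query_index(posteriors, least_confidence))    # -> 2
```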
Figure 1 represents three two-dimensional datasets, the classes are perfectly separated.
Given the model and one of the uncertainty criteria, we can compute the uncertainty of any point in space. For each dataset, the areas of uncertainty of the model are represented, with more red for more uncertainty. It is remarkable that these uncertainty areas can be compared to the decision boundaries of the model. Often, the closer the observation is to the decision boundary, the less confident the model is about its prediction.
Uncertainty sampling consists of choosing the observation for which the model is least certain of its prediction. This is one of the bases of active learning,
Figure 1: Three 2-class datasets with areas of model uncertainty.
however, other methods make it possible to extract more information about this uncertainty, which leads to the decomposition into epistemic and aleatoric uncertainties.
## 3 On the interest and limits of epistemic and aleatoric uncertainties for active learning
In this section, we introduce additional elements to decompose the uncertainty of the model so it can focus, in active learning, on the observations that will make it rapidly gain in performance.
The uncertainty \(\mathcal{U}(x)\) can be separated into two uncertainties [9], one reducible and the other irreducible. The example\({}^{1}\) of Figure 2 shows these two types of uncertainties: in Figure 2a the result of a coin toss is uncertain and it is not possible to gather more knowledge to predict whether the coin will land heads or tails; this ignorance is called aleatoric uncertainty. In Figure 2b either heads or tails is written in Finnish; this is an uncertainty that can be resolved by learning the language, and it is called epistemic uncertainty.
Footnote 1: This example is taken from Eyke Hullermeier’s talk “Representation and Quantification of Uncertainty in Machine Learning” at the LFA2022 conference. In our example the word tails is written in Finnish, the word heads is called “Kruuna”.
Being able to model these two uncertainties can help delimit where it is more interesting to provide knowledge and where it is useless. The total uncertainty \(\mathcal{U}(x)\) is often represented as the sum of the epistemic uncertainty \(\mathcal{U}_{e}(x)\) and the aleatoric uncertainty \(\mathcal{U}_{a}(x)\): \(\mathcal{U}(x)=\mathcal{U}_{e}(x)+\mathcal{U}_{a}(x)\).
For a two-class problem \(\Omega=\{0,1\}\), it is proposed in [18] to model this uncertainty (here under the [15] formalism) by computing the plausibility \(\pi\) of belonging to each of the two classes with the following formula, according to a probabilistic model \(\theta\):
\[\begin{split}\pi(1|x)&=\sup_{\theta\in\Theta}\,\min[ \pi_{\Theta}(\theta),p_{\theta}(1|x)-p_{\theta}(0|x)],\\ \pi(0|x)&=\sup_{\theta\in\Theta}\,\min[\pi_{\Theta} (\theta),p_{\theta}(0|x)-p_{\theta}(1|x)],\end{split} \tag{6}\]
with \(\pi_{\Theta}(\theta)\) depending on the likelihood \(L(\theta)\) and the maximum likelihood \(L(\hat{\theta})\):
\[\pi_{\Theta}(\theta)=\frac{L(\theta)}{L(\hat{\theta})}. \tag{7}\]
Figure 2: Representation of aleatoric and epistemic uncertainties through the tossing of a coin and the word “heads” or “tails” written in Finnish.
The epistemic uncertainty is then high when the two classes are very plausible while the aleatoric uncertainty is high when the two classes are implausible:
\[\begin{split}\mathcal{U}_{e}(x)&=\min[\pi(1|x),\pi(0|x )],\\ \mathcal{U}_{a}(x)&=1-\max[\pi(1|x),\pi(0|x)].\end{split} \tag{8}\]
This calculation depends not only on the prediction of the model but also on the observations. To summarize, the fewer observations there are in a region, or the fewer decision elements there are to strongly predict a class, the higher the plausibility of the two classes, and the more reducible (and thus epistemic) the uncertainty is by adding knowledge.
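For readers who prefer code, the sketch below is an illustration under simplifying assumptions (a one-parameter logistic model, a discretised parameter grid, and synthetic data), not the authors' implementation; it computes the degrees of support of equation (6), the normalised likelihood of equation (7), and the resulting epistemic and aleatoric uncertainties of equation (8).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=40)
y = (X + 0.5 * rng.normal(size=40) > 0).astype(int)   # noisy 1-D two-class sample

thetas = np.linspace(-5.0, 5.0, 201)                   # discretised parameter space Theta

def p1(theta, x):                                      # p_theta(1|x) for a logistic model
    return 1.0 / (1.0 + np.exp(-theta * x))

def log_likelihood(theta):
    p = np.clip(p1(theta, X), 1e-12, 1.0 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

log_L = np.array([log_likelihood(t) for t in thetas])
pi_theta = np.exp(log_L - log_L.max())                 # Eq. (7): L(theta) / L(theta_hat)

def epistemic_aleatoric(x):
    support_1 = p1(thetas, x) - (1.0 - p1(thetas, x))  # p_theta(1|x) - p_theta(0|x)
    pi1 = np.max(np.minimum(pi_theta, support_1))      # Eq. (6)
    pi0 = np.max(np.minimum(pi_theta, -support_1))
    return min(pi1, pi0), 1.0 - max(pi1, pi0)          # Eq. (8): (epistemic, aleatoric)

for x in (-3.0, 0.0, 3.0):
    print(x, epistemic_aleatoric(x))
```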
An example is shown in Figure 3: a two-class dataset is shown in Figure 3a and the areas of model uncertainty are shown in Figure 3b according to the uncertainty sampling presented in the previous section. A horizontal line can be distinguished where the model uncertainty is highest. However, the sample represented in Figure 3a shows that part of the uncertainty can be removed more easily by adding observations. In the same figure, three different datasets show how the sample can evolve by adding observations. Whatever the final distribution, the uncertainty on the left is not very reducible, while the uncertainty on the right can be modified by adding knowledge.
These two uncertainties can be calculated using equation (8), and are shown in Figure 4. The aleatoric, and therefore irreducible, uncertainty is represented in Figure 4a and the epistemic, reducible, uncertainty is represented in Figure 4b. The total uncertainty is then the sum of the two (Figure 4c). The goal here is to use only the epistemic uncertainty, to know the areas where the model can learn new knowledge and where it will have more impact.
Figure 3: Sample with areas of uncertainty according to the uncertainty sampling and three possible datasets based on the observations available in (a).
Using epistemic uncertainty as a sampling strategy is not reductive since it provides similar areas of uncertainty to those used previously, where epistemic and aleatoric uncertainty are indistinguishable.
Such information can be useful to find areas of reducible uncertainty, but it is not compatible with richer labels that also contain uncertainty. The way to compute this epistemic uncertainty is also dependent on the observations in addition to the model (_i.e._ the method could be oversimplified as: the model defines its zones of uncertainty, in which we look for the location with the smallest number of observations to define the reducible uncertainty). Furthermore, the exploration-exploitation problem is not fully addressed. This leads to the next section, in which two uncertainty sampling strategies for rich labels are proposed; they are also extended to several classes.
## 4 Richer labels and multiple classes
In this section, we propose two uncertainty sampling strategies, with a simplified calculation phase, able to deal with richer labels and no longer directly dependent on the observations but only on the model prediction\({}^{2}\). We also propose a natural extension for a number of classes higher than two. The first method uses discord and non-specificity to map uncertainty in order to address the exploration-exploitation problem. The second method extends the epistemic and aleatoric uncertainties to rich labels, also simplifying the computation phase.
Footnote 2: The uncertainty is no longer directly dependent on the observations, but the model still is.
From there, a label can be uncertain and imprecise, which means that additional information on ignorance is represented. Figure 5 shows how these labels are represented in this document: the darker the dot, the less ignorance the label contains (_e.g. I'm sure this is a dog_); the lighter the dot, the more ignorance it contains (_e.g. I have no idea between dog and cat_).
### Discord and non-specificity: Klir uncertainty
In the framework of belief functions, discord and non-specificity are tools that allow uncertainty to be modeled. We propose to use Klir's representation [12] for uncertainty sampling; some bridges can be drawn with epistemic and aleatoric uncertainty.
Figure 4: Areas of uncertainty on the dataset of Figure 3a for epistemic and aleatoric uncertainties.
#### 3.1.2 Discord
is here applied to the output of a model capable of making an uncertain and imprecise prediction\({}^{3}\). It represents the amount of conflicting information in the model's prediction and is calculated with the following formula:
Footnote 3: The Evidential \(K\)-nearest Neighbors model [5] is considered to illustrate the examples, which may vary depending on the model used.
\[D(m)=-\sum_{A\subseteq\Omega}\,m(A)\,\log_{2}(BetP(A)), \tag{9}\]
with \(m\) a mass function, or the output of the model (see section 2).
Figure 6 represents three different cases where the discord varies, from high discordance where labels around the central point (the observation to label) highly disagree 6a, to low discordance where each of the labels is in agreement 6c.
#### 3.1.3 Non-Specificity
allows to quantify the degree of ignorance of the model, the higher it is, the more imprecise the response of the model, it is calculated with:
\[N(m)\equiv\sum_{A\subseteq\Omega}m(A)\,\log_{2}(|A|). \tag{10}\]
The same Figure 6 also represents three different cases of non-specificity, in 6d the non-specificity is low as there are relevant sources of information next to the observation to be labelled, in 6e the non-specificity increases the further away the elements are from the observation and in 6f the non-specificity is also high because the nearby sources of information are themselves ignorant.
#### 3.1.4 Klir uncertainty
is then derived from discord and non-specificity, it is used here for uncertainty sampling by adding the two previous formulas:
\[\mathcal{U}_{m}(x)=N(x)+D(x), \tag{11}\]
with \(N(x)\) and \(D(x)\) respectively the non-specificity and the discord of the model in \(x\). Klir [12] proposes to use the same weight for discord and non-specificity, but in [4] a parameter \(\lambda\in[0,1]\) is introduced, which allows more weight to be given to non-specificity (we propose to use it for more exploration) or to discord (for more exploitation):
\[\mathcal{U}_{m}(x)=\lambda N(x)+(1-\lambda)D(x). \tag{12}\]
Figure 5: Observations on two dimensions with their rich labels, the darker the point, the more certain and precise its label.
Note that this uncertainty is naturally extended to \(|\Omega|\geq 2\) classes.
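The following sketch (our illustration, not the authors' code) computes discord, non-specificity, and the \(\lambda\)-weighted Klir uncertainty of equations (9)-(12) directly from a predicted mass function, which is all the proposed sampling strategy needs.

```python
import math

def pignistic(m):
    """BetP on singletons; BetP(A) is the sum of the singleton values in A."""
    singles = {}
    for A, mass in m.items():
        for w in A:
            singles[w] = singles.get(w, 0.0) + mass / len(A)
    return lambda A: sum(singles[w] for w in A)

def discord(m):
    """Eq. (9): D(m) = -sum_A m(A) log2 BetP(A)."""
    bp = pignistic(m)
    return -sum(mass * math.log2(bp(A)) for A, mass in m.items() if mass > 0)

def non_specificity(m):
    """Eq. (10): N(m) = sum_A m(A) log2 |A|."""
    return sum(mass * math.log2(len(A)) for A, mass in m.items() if mass > 0)

def klir(m, lam=0.5):
    """Eq. (12): lam > 0.5 favours exploration, lam < 0.5 favours exploitation."""
    return lam * non_specificity(m) + (1.0 - lam) * discord(m)

m = {frozenset({"Cat"}): 0.4, frozenset({"Dog"}): 0.3,
     frozenset({"Cat", "Dog"}): 0.3}          # an uncertain and imprecise prediction
print(discord(m), non_specificity(m), klir(m, lam=0.9))
```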
This formulation of the uncertainty has the advantage of identifying the total uncertainty as well as the reducible one, but also of taking into account the uncertainty already present in the labels and of being adjustable for more exploration or exploitation. Figure 7 shows a dataset with two areas of uncertainty (Figure 7a): on the right an area with a lack of data, and on the left an area where labels are more ignorant. Uncertainty sampling, using Shannon's entropy (4) or the least confidence measure (5), is not able to see either of these two areas (Figure 7b). The epistemic uncertainty (8) is able to distinguish the uncertainty related to the arrangement of the observations in space (_i.e._ the uncertainty on the right) but not the uncertainty related to the ignorance of the sources (Figure 7c).
The proposal of using Klir uncertainty for sampling (discord and non-specificity) makes it possible to represent each of these uncertainties. Figure 8 shows the areas of non-specificity (Figure 8a), of discord (Figure 8b) and of Klir uncertainty (Figure 8c).
Klir uncertainty can then be used for uncertainty sampling in active learning; it is also possible to vary the result for more exploration or more exploitation by modifying \(\lambda\). Figure 9 shows the areas of uncertainty for different values of \(\lambda\), from more discord on the left to more non-specificity on the right.
Figure 6: Three degrees of discord and three degrees of non-specificity in the center.
Figure 7: An imperfectly labeled dataset (a) with the areas of uncertainty according to uncertainty sampling and epistemic uncertainty.
We have proposed here to use Klir's uncertainty in sampling, which makes it possible to represent uncertainty areas related to rich labels that were previously unseen in active learning. The method is no longer dependent on the observations, but only on the prediction of the model, and the exploration-exploitation problem is addressed thanks to the \(\lambda\) parameter. Even though discord may recall aleatoric (non-reducible) uncertainty and non-specificity may recall epistemic (reducible) uncertainty, these notions are not quite equivalent. Therefore, in the following section we also propose an extension of epistemic (and aleatoric) uncertainty for rich labels and for several classes.
### Evidential epistemic uncertainty
We propose here to extend the notion of epistemic uncertainty to rich labels, by removing the dependence on observations, simplifying the computational phase, and allowing the model to detect new areas of uncertainty.
The epistemic uncertainty can be extended to rich labels by using the notion of plausibility within the framework of belief functions. It represents the total evidence that does not support the complementary event for a class \(\omega\) or more generally for an element \(A\in 2^{\Omega}\). The plausibility \(Pl\) defines the belief that could be allocated to \(A\):
\[Pl(A)=\sum_{A\cap B\neq\emptyset}m(B). \tag{13}\]
Figure 8: Areas of uncertainty corresponding to the dataset of Figure 7a according to the non-specificity, the discord and the total uncertainty defined by Klir.
Figure 9: Areas of Klir uncertainty, modifying the amount of non-specificity and discord. With \(\lambda=0.1\), more discord is taken into account, with \(\lambda=0.5\), discord and non-specificity are used as much and with \(\lambda=0.9\), more non-specificity is taken into account.
The plausibility being the consistent evidence, the belief function \(Bel\) defines the total evidence directly supporting \(A\):
\[Bel(A)=\sum_{B\subseteq A,B\neq\emptyset}m(B). \tag{14}\]
We have \(Pl(A)=1-Bel(\bar{A})\). Analogous to equation (8) and for two classes \(\Omega=\{0,1\}\) the epistemic uncertainty is maximal when both classes are highly plausible. The proposed evidential epistemic and aleatoric uncertainties are defined as follows:
\[\begin{split}\mathcal{U}_{e}(x)&=\min[Pl(1|x),Pl(0 |x)],\\ \mathcal{U}_{a}(x)&=1-\max[Pl(1|x),Pl(0|x)].\end{split} \tag{15}\]
The equation for the aleatoric uncertainty can be rewritten depending on the belief \(Bel\):
\[\mathcal{U}_{a}(x)=\min[Bel(1|x),Bel(0|x)]. \tag{16}\]
The sum of the epistemic and aleatoric uncertainties is then the total evidential uncertainty: \(\mathcal{U}(x)=\mathcal{U}_{e}(x)+\mathcal{U}_{a}(x)\). However, when the number of classes exceeds 2 the equation of the epistemic uncertainty cannot be simplified by the minimum plausibility:
\[\begin{split}\mathcal{U}_{e}(x)&\neq\min([Pl( \omega|x)|\omega\in\Omega]),\\ \mathcal{U}_{a}(x)&\neq 1-\max([Pl(\omega|x)|\omega\in \Omega]).\end{split} \tag{17}\]
It is preferable to first define the uncertainty related to one of the classes \(\omega\), rewritten with the belief \(Bel\) to avoid having to manipulate \(\bar{\omega}\):
\[\begin{split}\mathcal{U}_{e}(\omega|x)&=\min[Pl( \omega|x),Pl(\bar{\omega}|x)]\\ &=\min[Pl(\omega|x),1-Bel(\omega|x)].\end{split} \tag{18}\]
The evidential extension of the epistemic and aleatoric uncertainties for \(|\Omega|\geq 2\) classes is then:
\[\begin{split}\mathcal{U}_{e}(x)&=\sum_{\omega\in \Omega}\min[Pl(\omega|x),1-Bel(\omega|x)],\\ \mathcal{U}_{a}(x)&=\sum_{\omega\in\Omega}\min[Bel( \omega|x),1-Pl(\omega|x)].\end{split} \tag{19}\]
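A direct transcription of these formulas (a minimal sketch, not the authors' code) is given below: plausibility and belief are computed from the predicted mass function, and the class-wise sums of equation (19) give the evidential epistemic and aleatoric uncertainties.

```python
def plausibility(m, A):
    """Eq. (13): Pl(A) = sum of m(B) over focal sets B intersecting A."""
    return sum(mass for B, mass in m.items() if A & B)

def belief(m, A):
    """Eq. (14): Bel(A) = sum of m(B) over non-empty focal sets B included in A."""
    return sum(mass for B, mass in m.items() if B and B <= A)

def evidential_uncertainties(m, omega):
    """Eq. (19): evidential epistemic and aleatoric uncertainties, |Omega| >= 2."""
    u_e = u_a = 0.0
    for w in omega:
        A = frozenset({w})
        pl, bel = plausibility(m, A), belief(m, A)
        u_e += min(pl, 1.0 - bel)    # reducible part for class w
        u_a += min(bel, 1.0 - pl)    # irreducible part for class w
    return u_e, u_a

omega = ("red", "green", "blue")
m = {frozenset({"red"}): 0.2, frozenset({"green"}): 0.1,
     frozenset({"red", "green"}): 0.6, frozenset(omega): 0.1}
print(evidential_uncertainties(m, omega))
```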
The example in Figure 10 shows a dataset of three classes with a zone of ignorance for some labels (between the green and red classes). Probabilistic (4)-(5) and epistemic (8) uncertainties cannot model the imprecision present in the labels; this less complete uncertainty zone is represented in Figure 10b.
The previous uncertainty resulting from the sum of the discord and the non-specificity is presented in Figure 11. It manages both exploration (Figure 11a) and exploitation (Figure 11b) to give a better representation of the uncertainty (Figure 11c).
Figure 11: Areas of uncertainty corresponding to the dataset of Figure 10a according to the non-specificity, the discord and the total Klir uncertainty.
Figure 12: Areas of uncertainty corresponding to the dataset of Figure 10a according to the evidential epistemic uncertainty for the green, red and blue classes.
Figure 10: On the left, a sample of a dataset of three classes with an area of ignorance (labeled with imprecision) and on the right areas of uncertainty according to non-evidential uncertainty sampling.
Figure 13: Areas of uncertainty for evidential epistemic and aleatoric uncertainties, according to the dataset of Figure 10a.
The extension of the epistemic uncertainty, also introduced in this paper, is presented in the following experiments. First, the evidential epistemic areas of uncertainty for each of the three classes are presented in Figure 12. Then, the resulting evidential epistemic uncertainty of the model is deduced from equation (19) in Figure 13, along with the evidential aleatoric and total uncertainties.
## 5 Sampling on real world dataset
Some datasets have been labeled in an uncertain and imprecise way by users during crowdsourcing campaigns [22]. We therefore have access to genuinely imperfectly labeled datasets with rich labels. Conventional methods for computing model uncertainty do not take into account the degrees of imprecision of these rich labels. The two proposed methods are illustrated on Credal Dog-2, one of these datasets. Figure 14 shows the dataset on the first two components of a Principal Component Analysis. This is a two-class dataset, represented in Figure 14a with true classes and in Figure 14b with the uncertain and imprecise rich labels given by contributors. Darker dots indicate higher certainty, and vice versa.
Figure 15 shows the result of the first proposed method, sampling by Klir uncertainty, on the dataset with rich labels. The non-specificity is presented in Figure 15a and can be interpreted as the ignorance zones of the model. Discord is represented in Figure 15b, and the total uncertainty in Figure 15c is the sum of the two; it is this latter information that is used to sample on the model uncertainty.
The second proposed method, the extension of epistemic uncertainty, which is a reducible uncertainty applied to evidential reasoning, is presented in Figure 16. The irreducible aleatoric evidential uncertainty (Figure 16a) is presented along with the reducible epistemic evidential uncertainty (Figure 16b). The total uncertainty (Figure 16c) is the sum of the reducible and irreducible uncertainties. For active learning, it is not the total uncertainty but the reducible epistemic uncertainty that is used.
## 6 Discussion & Conclusion
The calculation of the (non-evidential) epistemic uncertainty is demanding and not always easily accessible. Depending on the observations, it requires going through several computation phases: likelihood estimation, maximum likelihood, and optimization.
In this paper, we have proposed two new uncertainty sampling strategies and a new way to represent them. With the two proposed methods, the use of Klir uncertainty and the extended evidential epistemic uncertainty, a simple calculation on the output of the model is enough to obtain the uncertainties. The objective is also to take into account the uncertainty present in richer labels, which is currently not possible. The first strategy is based on Klir's uncertainty, combining discord (how self-conflicting the information is) and non-specificity (how ignorant the information is) in the model output. The second strategy extends epistemic (reducible) uncertainty to the evidential framework and to several classes, simplifying the computational phase.
This simplicity obviously has a counterpart: the model must be able to deliver a mass function, in order to represent uncertainty and imprecision in the output. Such models exist but are not numerous; among them are the much-quoted Evidential \(K\)-Nearest Neighbors [5], Evidential Decision Trees [4, 6], Evidential Random Forests and even Evidential Neural Networks [23]. The proposed methods are compatible with probabilistic models (since a probability is a special mass function) but the full depth of evidential modeling would be lost.
The novelty of this work lies in the representation of new information for uncertainty sampling, rather than in performance comparison. The next step is to apply these models to active learning, where the learning model has access to a very limited number of labeled observations, and must choose the most relevant observations to label in order to increase performance. The ability of the model to define these areas of uncertainty, and to categorize these uncertainties, is then relevant information.
Figure 16: Areas of evidential epistemic uncertainty corresponding to 14b.
Figure 14: Credal Dog-2 dataset, Brittany breed is in green and Beagle in red.
Figure 15: Areas of uncertainty corresponding to the dataset of Figure 14b according to the non-specificity, the discord and the total Klir uncertainty. | Recent studies in active learning, and in particular in uncertainty sampling, have focused on decomposing model uncertainty into reducible and irreducible uncertainties. In this paper, the aim is to simplify the computational process and to remove the dependence on observations. In particular, the uncertainty inherent in the labels, namely the uncertainty of the oracles, is taken into account. Two strategies are proposed: sampling by Klir uncertainty, which addresses the exploration-exploitation dilemma, and sampling by evidential epistemic uncertainty, which extends the concept of reducible uncertainty within the evidential framework; both rely on the theory of belief functions. Experimental results in active learning show that the proposed method can outperform uncertainty sampling. |
2310.20223 | STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction | As the development of cities, traffic congestion becomes an increasingly
pressing issue, and traffic prediction is a classic method to relieve that
issue. Traffic prediction is one specific application of spatio-temporal
prediction learning, like taxi scheduling, weather prediction, and ship
trajectory prediction. Against these problems, classical spatio-temporal
prediction learning methods including deep learning, require large amounts of
training data. In reality, some newly developed cities with insufficient
sensors would not hold that assumption, and the data scarcity makes predictive
performance worse. In such situation, the learning method on insufficient data
is known as few-shot learning (FSL), and the FSL of traffic prediction remains
challenges. On the one hand, graph structures' irregularity and dynamic nature
of graphs cannot hold the performance of spatio-temporal learning method. On
the other hand, conventional domain adaptation methods cannot work well on
insufficient training data, when transferring knowledge from different domains
to the intended target domain.To address these challenges, we propose a novel
spatio-temporal domain adaptation (STDA) method that learns transferable
spatio-temporal meta-knowledge from data-sufficient cities in an adversarial
manner. This learned meta-knowledge can improve the prediction performance of
data-scarce cities. Specifically, we train the STDA model using a
Model-Agnostic Meta-Learning (MAML) based episode learning process, which is a
model-agnostic meta-learning framework that enables the model to solve new
learning tasks using only a small number of training samples. We conduct
numerous experiments on four traffic prediction datasets, and our results show
that the prediction performance of our model has improved by 7\% compared to
baseline models on the two metrics of MAE and RMSE. | Maoxiang Sun, Weilong Ding, Tianpu Zhang, Zijian Liu, Mengda Xing | 2023-10-31T06:52:56 | http://arxiv.org/abs/2310.20223v1 | # STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction
###### Abstract
With the development of cities, traffic congestion has become an increasingly pressing issue, and traffic prediction is a classic method to relieve that issue. Traffic prediction is one specific application of spatio-temporal prediction learning, like taxi scheduling, weather prediction, and ship trajectory prediction. For these problems, classical spatio-temporal prediction learning methods, including deep learning, require large amounts of training data. In reality, some newly developed cities with insufficient sensors do not satisfy that assumption, and the data scarcity makes predictive performance worse. In such a situation, learning on insufficient data is known as few-shot learning (FSL), and the FSL of traffic prediction remains challenging. On the one hand, the irregularity and dynamic nature of graph structures undermine the performance of spatio-temporal learning methods. On the other hand, conventional domain adaptation methods cannot work well on insufficient training data when transferring knowledge from different domains to the intended target domain. To address these challenges, we propose a novel spatio-temporal domain adaptation (STDA) method that learns transferable spatio-temporal meta-knowledge from data-sufficient cities in an adversarial manner. This learned meta-knowledge can improve the prediction performance of data-scarce cities. Specifically, we train the STDA model using a Model-Agnostic Meta-Learning (MAML) based episode learning process, a model-agnostic meta-learning framework that enables the model to solve new learning tasks using only a small number of training samples. We conduct numerous experiments on four traffic prediction datasets, and our results show that the prediction performance of our model improves by 7% compared to baseline models on the two metrics of MAE and RMSE.
Meta-Learning, Transfer-Learning, GCN, MAML, GAN, GRU
## I Introduction
Applications such as traffic prediction [1, 2], traffic scheduling [3, 4], congestion prediction [5], and automated vehicles [36] have been studied on massive data using machine learning methods, all falling under the umbrella of Intelligent Transportation Systems (ITS) [32, 33]. In scenarios with abundant data, the mentioned applications, primarily based on traditional machine learning approaches, exhibit robust performance. The challenge arises when data is insufficient, as existing machine learning algorithms usually require a significant amount of data to train models effectively. The reliance on large volumes of data can be limiting, especially in real-world scenarios where data availability is insufficient. This constraint underscores the need for exploring more efficient and effective learning approaches that can perform well even in data-scarce conditions. The reasons for the poor performance of traditional machine learning in scenarios with insufficient data are twofold. Firstly, they often fail to capture the spatio-temporal correlations between different cities, making it difficult to achieve better results from prior knowledge. Secondly, a small amount of data makes it challenging to achieve thorough gradient descent, leading to the phenomenon of overfitting.
Recently, there has been some related work on how to improve the efficiency of spatio-temporal graph learning. Yu [31] proposed a general traffic prediction method which can extract spatio-temporal features efficiently, but it performs poorly in the few-shot setting. Yao [37] first used auxiliary graphs for few-shot learning, but due to the limited parameters of the model, they could only use the model for classification problems and could only obtain transfer knowledge from one source graph. A summary of the existing research indicates that the majority is centered around node classification problems. In our case, we are tackling a node regression problem for traffic prediction. The current models show deficiencies in terms of low robustness and accuracy, rendering them less appropriate for traffic prediction scenarios.
In this paper, we address the challenge of few-shot learning on traffic graphs by transferring meta-knowledge between different cities. Two main challenges need to be overcome: (i) Capturing the spatio-temporal domain-invariant correlations between the source and target cities. This requires an efficient method to capture the spatial and temporal characteristics of different cities at each time step, without incurring expensive computation costs. (ii) Training a model's parameters in a way that allows for fast adaptation to new tasks with only a small number of gradient updates. Overcoming these challenges is crucial to achieving improved few-shot learning performance on traffic graphs.
In response to the above challenges, we propose an effective and novel framework called the **S**patio-**T**emporal **D**omain **A**daptation **Meta**-Learning (**STDA-Meta**) framework, which consists of a spatio-temporal adversarial domain adaptation module (**STDA**) and a meta-learning framework (**Meta**). Specifically, STDA addresses the first challenge. Inspired by GANs [9], it distinguishes the spatio-temporal features from the source and target cities at each time step via adversarial classification, so as to capture spatio-temporal transferable features in an efficient manner. Meanwhile, the Model-Agnostic Meta-Learning (MAML) [10] framework is employed to optimize the model's updates, so that a small number of gradient updates leads to fast learning on a new task.
In summary, the main contributions of our work are as follows:
* We apply domain adaptation in spatio-temporal few-shot prediction, focusing on urban traffic data. Our method fully exploits the inherent structural similarities found within the graph representations of different urban traffic datasets. Generative Adversarial Network (GAN) based data augmentation for graph few-shot learning achieves a significant improvement of predictive results, which has been rigorously substantiated through extensive ablation experiments.
* We merge Model-Agnostic Meta-Learning (MAML) into domain adaptation, and prove its substantial advantages in few-shot learning. By integrating MAML, rapid adaptation to new tasks on minimal data can be achieved, significantly improving the efficiency of domain adaptation. Such a promising solution to addressing few-shot challenges across various domains has been validated through extensive experiments on real-world datasets from different cities.
* Our proposed method proves effective on traffic speed datasets from different cities, and accurately forecasts the traffic speed of cities with limited available data. Experimental results show that our model outperforms baseline models by 7% in terms of prediction accuracy.
## II Related Work
In this section, we review relevant work from two aspects. The first is spatio-temporal graph learning, which has been widely adopted in traffic prediction. However, when data is scarce, their performance degrade due to overfitting. To address this issue, graph few-shot learning, as the second type discussed here, developed currently. Through the prior knowledge of graph structure and the similarity between different graphs, such methods have shown potential advantages.
### _Spatio-Temporal Graph Learning_
Before deep-learning based approaches became popular, statistical methods were widely used in the field for spatio-temporal graph learning. Classical statistical methods such as ARIMA (Autoregressive Integrated Moving Average model) [6, 7, 8], VAR (Vector autoregressive models) [11], and HA (Historical Average) [12] were used to predict traffic. The advantage of statistical methods is that they depend on a single factor, are easily implemented, and quickly compute results. However, with the increasing popularity of deep learning, researchers have achieved better prediction results by using the ability of LSTM (long short-term memory) [13] and GRU (Gated Recurrent Unit) [14] to model complex functions and capture the dynamic time relationships. To capture spatial features, researchers divide the highway network into grids and use CNN [15] to extract the spatial relationship between adjacent toll stations for prediction results. However, CNN maps the traffic flow prediction problem in a non-Euclidean space to a Euclidean space, which results in a loss of spatial information. More recently, with the popularity of GCN [16], some works such as [17, 18] combine GCN and LSTM to improve the performance of traffic prediction.
In brief, deep learning based spatio-temporal graph learning methods rely on a large amount of training data, and struggle to generalize to scenarios with insufficient data, such as traffic prediction problems.
### _Graph Few-Shot Learning_
Few-shot learning has been widely adopted in various fields, such as natural language processing, computer vision, and robotics. It is an effective approach to learning from insufficient labeled data, which is often encountered in real-world applications. Among the existing few-shot learning methods, meta-learning-based models have shown promising results in addressing the problem of few-shot learning.
Few studies have explored the application of few-shot learning in the field of traffic prediction, especially in the context of spatio-temporal graph learning. Recently, there has been some progress in this direction. For example, the Meta-GNN model [21] and RALE [22] have been proposed to handle few-shot learning problems in the context of graph neural networks. These models use meta-learning techniques to learn to adapt to new tasks with few labeled examples.
Furthermore, GDN [23] is another few-shot learning model that can be used for anomaly detection on traffic graphs. This model can utilize knowledge from auxiliary networks to improve the robustness of the anomaly detection task.
Despite these promising results, the application of graph few-shot learning to the traffic domain has been limited in scope and may not be sufficiently robust for real-world traffic problems. Most existing methods focus on node classification rather than the node regression problem required for traffic prediction. Therefore, more effort is needed in graph few-shot learning for traffic prediction to handle node regression tasks in real-world scenarios.
## III Problem Formulation
### _Traffic Prediction on Road Graphs_
Traffic prediction is a fundamental problem in spatio-temporal prediction, where the aim is to predict traffic status such as traffic speed, traffic flow, traffic demand, and travel time. Historical data from the previous \(H\) time steps is
utilized to predict future traffic status for the next \(M\) time steps. It can typically be expressed as the maximum log-probability of the future traffic status given the historical traffic data, as in the following formula. Here, \(v_{t}\in\mathbb{R}^{n}\) represents the observed traffic status for the whole traffic network at time step \(t\), where \(n\) is the total number of observation nodes in the network.
\[\hat{v}_{t+1},\ldots,\hat{v}_{t+M}=\operatorname*{arg\,max}_{v_{t+1},\ldots,v_{t+M}}\log P\left(v_{t+1},\ldots,v_{t+M}\mid v_{t-H+1},\ldots,v_{t}\right). \tag{1}\]
To capture spatial dependencies, we adopt the Graph Attention Network (GAT), which can perform a weighted summation of adjacent node features using an attention mechanism.
Compared to the Graph Convolutional Network (GCN) [25, 26], the core difference of GAT is how to collect and aggregate the feature representation of neighbor nodes with distance 1. GAT replaces the fixed standardized operations in GCN with an attention mechanism. Essentially, GAT simply replaces the normalized function of the original GCN with a neighbor node feature aggregation function using attention weights.
In our approach, we first apply a linear transformation to each node feature, and then calculate the attention score \(e_{ij}\) between adjacent nodes as in formula 3. Here, \(W\in\mathbb{R}^{d\times O}\) is the weight matrix and \(N_{i}\) is the set of neighbor nodes of node \(v_{i}\). The attention mechanism is a mapping \(\mathbb{R}^{O}\times\mathbb{R}^{O}\rightarrow\mathbb{R}\). The attention score is then normalized across all choices of \(j\) using the softmax function as in formula 4.
\[e_{ij}=\text{ attention }\left(Wz_{i}^{tp},Wz_{j}^{tp}\right),j\in\mathcal{ N}_{i} \tag{3}\]
\[\alpha_{ij}=\operatorname{softmax}_{j}\left(e_{ij}\right)=\frac{\exp\left(e_ {ij}\right)}{\sum_{k\in\mathcal{N}_{i}}\exp\left(e_{ik}\right)} \tag{4}\]
In order to obtain a richer representation, we execute the attention mechanism \(K\) times independently, and employ averaging to obtain the spatial meta-knowledge of node \(v_{i}\) as in formula 5. Thus, we obtain the spatial meta-knowledge of a city, denoted as \(\mathbf{Z}^{sp}=(z_{1}^{sp},z_{2}^{sp},\cdots,z_{N}^{sp})\in\mathbb{R}^{N \times d^{\prime}}\).
\[z_{i}^{sp}=\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j\in\mathcal{N}_{i}} \alpha_{ij}W^{k}z_{j}^{tp}\right) \tag{5}\]
To integrate spatio-temporal features, we take \(\mathbf{Z}^{sp}\) as input to the temporal feature extractor, and we derive the spatio-temporal knowledge of one city \(\mathbf{Z}^{stp}=\left(z_{1}^{stp},z_{2}^{stp},\cdots,z_{N}^{stp}\right)\in \mathbb{R}^{N\times d^{\prime}}\).
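A compact PyTorch sketch of the attention aggregation in formulas 3-5 is given below; it is an illustration with assumed layer shapes and a plain linear scoring function, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """K independent attention heads over neighbours, averaged as in formula 5."""
    def __init__(self, d_in, d_out, heads=4):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(heads)])
        self.a = nn.ModuleList([nn.Linear(2 * d_out, 1, bias=False) for _ in range(heads)])

    def forward(self, z_tp, adj):
        # z_tp: (N, d_in) temporal features; adj: (N, N) adjacency with self-loops
        heads_out = []
        for W, a in zip(self.W, self.a):
            h = W(z_tp)                                            # linear transform
            N = h.size(0)
            pair = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                              h.unsqueeze(0).expand(N, N, -1)], dim=-1)
            e = a(pair).squeeze(-1)                                # formula 3: scores e_ij
            e = e.masked_fill(adj == 0, float("-inf"))             # restrict to neighbours
            alpha = F.softmax(e, dim=-1)                           # formula 4
            heads_out.append(alpha @ h)
        return torch.sigmoid(torch.stack(heads_out).mean(dim=0))   # formula 5: averaged heads

N, d_in, d_out = 6, 16, 8
adj = (torch.rand(N, N) > 0.5).float()
adj.fill_diagonal_(1.0)
z_sp = SpatialAttention(d_in, d_out)(torch.randn(N, d_in), adj)
print(z_sp.shape)   # torch.Size([6, 8])
```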
### _Spatio-Temporal Feature Discriminator Module_
The Spatio-Temporal Feature Discriminator (\(G_{st}\)) module aims to distinguish the spatio-temporal features from the source and target cities at each time step. Through adversarial classification, spatio-temporal transferable features can be captured in an efficient manner.
We fetch \(P\) random days of historical data from the source cities, where each day contains \(k\) time intervals before the current time interval of the source cities. The input urban flow is \(\mathbb{X}^{S}=\left\{\mathcal{X}^{(S,1)};\mathcal{X}^{(S,2)};\ldots;\mathcal{ X}^{(S,P)}\right\}\). Next, each crowd flow \(\mathcal{X}^{(S,i)}\) of the source city at day \(i\) is fed into the spatio-temporal embedding (ST-E) module. We can obtain the initial spatio-temporal feature set of the source city \(Z^{S}=\left\{z_{1}^{S},\ldots,z_{P-1}^{S},z_{P}^{S}\right\}\in\mathbb{R}^{P \times N\times d^{\prime}}\). Similarly, we can
Fig. 1: The Proposed STDA-Meta Framework that consists of STDA Module and Inference Module.
obtain the spatio-temporal feature set of the target city, i.e., \(Z^{T}=\left\{z_{1}^{T},\ldots,z_{P-1}^{T},z_{P}^{T}\right\}\in\mathbb{R}^{P\times N \times d^{\prime}}\) in the same way.
Based on the obtained \(Z^{S}\) and \(Z^{T}\), we further label these \(P\times k\) samples (i.e., spatio-temporal features) as 1 or 0, where 1 is assigned to source city data as real data while 0 is assigned to target city data as fake data. Then, as depicted in Figure 1, we introduce a spatio-temporal feature discriminator \(G_{st}\) built with fully connected (FC) layers. Specifically, we mix the labeled \(Z^{S}\) and \(Z^{T}\) as the input of \(G_{st}\). \(G_{st}\) then distinguishes whether the features come from the source or the target city. Its gradient training function is defined as formula 6.
\[\nabla W_{d}\frac{1}{P\times k}\left[\log G_{st}\left(z_{(i)}^{s}\right)+\log\left(1-G_{st}\left(STE\left(z_{(i)}^{t}\right)\right)\right)\right] \tag{6}\]
In terms of \(G_{st}\) training, we ascend along the gradient in formula 6 to obtain better classification performance. In contrast, ST-E aims to make the spatio-temporal features of the two cities as close as possible, so it descends along the gradient for training as in formula 7. Here, \(W_{e}\) denotes the parameters of ST-E.
\[\nabla W_{e}\frac{1}{P\times k}\log\left(1-G_{st}\left(STE\left(z_{(i)}^{t}\right)\right)\right) \tag{7}\]
Through the adversarial objective, the discriminator \(G_{st}\) is optimized to classify the input spatio-temporal features into different domains, while the feature extractor ST-E is optimized in the opposite direction. Then, we define the spatio-temporal domain loss function as formula 8. BCELoss is widely used in classification tasks. The smaller \(\mathcal{L}_{st}\) is, the closer the spatio-temporal distributions of the source and target cities are.
\[\mathcal{L}_{st}=\mathrm{BCELoss}\left(1,G_{st}\left(STE\left(z_{(i)}^{t} \right)\right)\right) \tag{8}\]
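The schematic training step below (our sketch with toy feature extractors and random inputs, not the released code) mirrors the adversarial objective of formulas 6-8: the discriminator is pushed to separate source from target features, while the extractor is pushed to make the target features indistinguishable from the source ones.

```python
import torch
import torch.nn as nn

d = 32
extractor = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))             # stands in for ST-E
disc = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # G_st
bce = nn.BCELoss()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_e = torch.optim.Adam(extractor.parameters(), lr=1e-3)

x_src, x_tgt = torch.randn(64, d), torch.randn(64, d)   # toy spatio-temporal inputs

# discriminator step: source features -> 1 ("real"), target features -> 0 ("fake")
z_s, z_t = extractor(x_src), extractor(x_tgt)
loss_d = bce(disc(z_s.detach()), torch.ones(64, 1)) + \
         bce(disc(z_t.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# extractor step: L_st of formula 8 pushes target features towards the source domain
loss_st = bce(disc(extractor(x_tgt)), torch.ones(64, 1))
opt_e.zero_grad(); loss_st.backward(); opt_e.step()
print(float(loss_d), float(loss_st))
```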
### _Inference Module_
We design the inference module for traffic prediction tasks. In that module, the input is the time series of the data-scarce city \(v_{t-M+1}^{l},\ldots,v_{t}^{l}\), and the output is the one-step prediction \(\hat{v}\). As shown in Figure 1, in the inference module the parameters of the ST-E sub-module come from the STDA module, and the output layer adopts the loss \(\mathcal{L}_{p}\) defined in formula 9.
**Prediction Loss \(\mathcal{L}_{p}\)**. Here, the Root Mean Squared Error(RMSE) is adopted as the prediction loss function. It has been widely used in various spatio-temporal prediction studies. Here, \(v\) is the ground truth and \(\hat{v}\) is the predicted result.
\[\mathcal{L}_{p}=\mathrm{RMSE}(|v-\hat{v}|) \tag{9}\]
**Overall Loss \(\mathcal{L}_{overall}\)**. Considering the spatio-temporal domain loss \(\mathcal{L}_{st}\) and the prediction loss \(\mathcal{L}_{p}\), we define their weighted combination as formula 10. Here \(\lambda\) controls the weight of the spatio-temporal adaptation ability.
\[\mathcal{L}_{\text{overall}}\,=\lambda\mathcal{L}_{st}+\mathcal{L}_{p} \tag{10}\]
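In code, the two losses combine as in the small sketch below (our illustration; the domain loss value is a placeholder, and \(\lambda=1.5\) follows the experimental settings reported later).

```python
import torch

def prediction_loss(v, v_hat):                 # formula 9: RMSE between truth and prediction
    return torch.sqrt(torch.mean((v - v_hat) ** 2))

def overall_loss(l_st, v, v_hat, lam=1.5):     # formula 10: lambda * L_st + L_p
    return lam * l_st + prediction_loss(v, v_hat)

v, v_hat = torch.randn(32, 6), torch.randn(32, 6)         # 6-step predictions for a batch
print(float(overall_loss(torch.tensor(0.3), v, v_hat)))   # 0.3 stands in for L_st
```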
### _STDA-Meta Learning Process_
To optimize the adaptation process in few-shot learning and enhance the adaptive ability of the model, STDA-Meta adopts a learning process that follows the Model-Agnostic Meta-Learning (MAML) based episode learning process. Specifically, STDA-Meta trains the spatio-temporal graph learning model in two stages: the base-model stage and the adaptation stage. In the base-model stage, STDA-Meta imitates the adaptation process to fine-tune the adaptation process in few-shot learning and optimize the adaptive ability. Different spatio-temporal group features are sampled into large datasets to form a batch task for the MAML-based episode, denoted as \(\mathcal{T}_{ST}\). Each batch task \(\mathcal{T}_{i}\in\mathcal{T}_{ST}\) includes \(K_{S}\) support sets \(S_{i}\) and \(K_{Q}\) query sets \(Q_{i}\).
In the adaptation stage, STDA-Meta updates the training parameters in the target domain through several gradient descent steps. Specifically, STDA-Meta first samples data from the source datasets to generate batches of task sets \(\mathcal{T}_{ST}\), where each task \(\mathcal{T}_{i}\in\mathcal{T}_{ST}\) belongs to one single city and is divided into a support set \(\mathcal{ST}_{i}\) and a query set \(\mathcal{QT}_{i}\), with \(\mathcal{ST}_{i}\cap\mathcal{QT}_{i}=\emptyset\). When learning a task \(\mathcal{T}_{i}\), STDA-Meta considers a joint loss function that combines the prediction error loss \(\mathcal{LT}_{i}\) as formula 11. Here, \(\mathcal{LT}_{i}\) is the root mean square error between the multi-step prediction and the ground truth of the support set \(\mathcal{ST}_{i}\).
\[\mathcal{LT}_{i}=\frac{1}{|\mathcal{ST}_{i}|}\sum_{(x_{j},y_{j})\in\mathcal{ST}_{i}}\left(f_{\theta}\left(x_{j}\right)-y_{j}\right)^{2}, \tag{11}\]
To achieve the optimal model parameter \(\theta^{*}\), STDA-Meta adopts the MAML optimization algorithm. Specifically, the base-model stage of STDA-Meta involves iteratively fine-tuning the model on a small set of examples sampled from the support set of each task. Then, the adapted model is tested on the corresponding query set, and the parameters are updated by back-propagating the prediction error through the adapted model. This process is repeated for each task, and the model with the lowest prediction error on the query set is selected as the final model. Overall, the MAML-based episode learning process enables STDA-Meta to optimize the adaptation process in few-shot learning and enhance the adaptive ability of the model.
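The episode below is a compact sketch under simplified assumptions (a toy fully-connected predictor instead of the full STDA model, random tensors instead of traffic data, hypothetical shapes), not the authors' implementation; it illustrates the MAML-style training of the base-model stage: one inner gradient step per task on its support set, followed by a meta-update from the query losses.

```python
import torch

def init(*shape):
    return (0.1 * torch.randn(*shape)).requires_grad_()

params = {"w1": init(12, 32), "b1": torch.zeros(32, requires_grad=True),
          "w2": init(32, 6),  "b2": torch.zeros(6, requires_grad=True)}   # H=12 -> M=6

def forward(p, x):
    return torch.relu(x @ p["w1"] + p["b1"]) @ p["w2"] + p["b2"]

mse = torch.nn.MSELoss()
alpha, beta = 0.01, 1e-3                                 # task / meta learning rates
meta_opt = torch.optim.Adam(params.values(), lr=beta)

tasks = [((torch.randn(8, 12), torch.randn(8, 6)),       # (support, query) pairs
          (torch.randn(8, 12), torch.randn(8, 6))) for _ in range(5)]

meta_loss = 0.0
for (xs, ys), (xq, yq) in tasks:
    loss_support = mse(forward(params, xs), ys)          # formula 11 on the support set
    grads = torch.autograd.grad(loss_support, list(params.values()), create_graph=True)
    fast = {k: v - alpha * g for (k, v), g in zip(params.items(), grads)}  # inner step
    meta_loss = meta_loss + mse(forward(fast, xq), yq)   # evaluate on the query set

meta_opt.zero_grad()
(meta_loss / len(tasks)).backward()                      # meta-update of the initial parameters
meta_opt.step()
print(float(meta_loss) / len(tasks))
```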
## V Methodology
**Datasets.** We utilized four widely used open-source traffic datasets from different cities, including METR-LA, PEMS-BAY, Didi-Chengdu, and Didi-Shenzhen [28, 29]. These
datasets have been widely adopted in related studies and contain valuable information for traffic speed prediction. Table I summarizes the static information of the datasets, including the number of nodes, edges, interval, and time span.
Nodes here is the number of traffic sensors or detectors in the road network. Edges represent the links between nodes in the network. Interval refers to the time interval between two consecutive data points in the dataset. Time span represents the total time period covered by the dataset. Please see Table I for the detailed information of the datasets.
**Data Processing.** During the data processing stage, the datasets are preprocessed by normalizing them using the Z-Score method. To evaluate the ability of STDA-Meta for spatio-temporal knowledge transfer, the datasets are partitioned into three subsets, namely training, validation, and testing sets. As an example, we consider PEMS-BAY as the target city, and select only three days of data (which is a very small subset compared to the complete data of other cities) as the target data for that city. The remaining data from other cities, namely METR-LA, Didi-Chengdu, and Didi-Shenzhen, are used as the source cities for meta-training. The test set for PEMS-BAY consists of data from the remaining days.
**Experimental Settings.** All experiments are compiled and tested on a server (CPU:Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz, GPU:NVIDIA Corporation TU104GL [Tesla T4]). In order to more fully test the performance of our framework, we predict the traffic speed in the next 6 time steps with 12 historical time steps. Some important parameters are set as follows: task learning rate \(\alpha=0.01\), meta-training rate \(\beta=0.001\), task batch number \(\|\mathcal{T}\|=5\) and sum scale factor of \(\mathcal{L}_{overall}\)\(\lambda=1.5\). Metrics Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), widely used in traffic prediction, are evaluated here as formula 12. Here, \(y_{i}\) represents the actual value, \(y_{p}\) represents the predicted value, and n represents the number of observations.
\[\begin{split}\text{MAE }&=\frac{1}{n}\sum\left|y_{i}-y_{p}\right|\\ \text{RMSE }&=\sqrt{\frac{1}{n}\sum\left(y_{i}-y_{p}\right)^{2}}\end{split} \tag{12}\]
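For completeness, the two metrics as they are usually computed (a straightforward numpy version with made-up values for illustration):

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

y_true, y_pred = [62.0, 55.0, 48.0], [60.0, 50.0, 51.0]   # e.g. speeds in km/h, illustrative only
print(mae(y_true, y_pred), rmse(y_true, y_pred))
```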
**Baselines** We compare our framework STDA-Meta with the following two types of baselines.
1). Classic spatio-temporal graph learning methods. We choose LSGCN [30] and STGCN [31] as they are remarkable models for crowd flow prediction. Here, we only use target city data for model training.
2). Transfer learning methods. These methods first pre-train the deep models on the source city and then fine-tune models on the target city. We use LSGCN [30] and STGCN [31] as the base-model and we call them LSGCN-FT and STGCN-FT. AdaRNN [38] is a state-of-the-art transfer learning framework for non-stationary time series.
**Performance Comparison** In Table 2, each row represents a different method categorized into three types: Spatio-temporal graph learning methods, Transfer Learning methods, and variants of our proposed approach (STDA-Meta). We compared these methods across two distinct experimental settings. One setting involved using METR-LA, Didi Chengdu, and Didi Shenzhen as source cities, with PEMS-BAY as the target city. The other setting utilized PEMS-BAY, Didi Chengdu, and Didi Shenzhen as source cities, with METR-LA as the target city.
The performance of spatio-temporal graph learning methods was relatively poor due to the scarcity of samples in the target city, which constrained their ability to learn effective models. These methods typically demand a substantial volume of data for model training, and when the target city has limited samples, their performance is notably affected.
Although the fine-tuned baseline models exhibited improved performance compared to the non-transfer models, they still lagged behind the more advanced transfer learning methods. Fine-tuning can moderately adjust the model to the target city's data, but its effectiveness is limited by the availability and diversity of data in that city.
Although AdaRNN performs well in transfer learning, particularly for forecasting time series with limited samples, it has difficulty capturing spatial information in the complex scenario of network-wide traffic prediction. This limitation makes it slightly inferior to STDA-Meta, especially when predicting traffic across the entire road network and diverse geographical regions. In contrast, STDA-Meta leverages its meta-learning strategy to efficiently utilize information from the source cities and to capture the intricate traffic patterns of the target city, and thus maintains superior performance over AdaRNN in overall traffic prediction.
It is noteworthy that STDA-Meta consistently outperformed all baseline models in both experimental settings. This can be attributed to STDA-Meta's superior ability to harness information from the source cities, which facilitates efficient learning for the target city: by quickly adapting to the limited target-city data through meta-learning, our model achieved a 7% improvement in predictive performance over the baseline models.
**Ablation Study** As the results above show, RMSE and MAE increase slightly after removing either Domain Adaptation (DA) or MAML (Model-Agnostic Meta-Learning), but both variants still outperform the baseline models. We further elaborate on the performance impact of DA and MAML; the results are presented in Table 2, where we discuss the performance of STDA-Meta w/o DA and STDA-Meta w/o Meta.
1) The performance impact of DA. The decrease in performance when removing Domain Adaptation (DA) can be attributed to its role in facilitating effective domain transfer. DA enables the model to adapt to the target domain's characteristics and data distribution. Without DA, the model struggles to adapt to the target domain, and slight increases appear in both RMSE and MAE.
2) The performance impact of MAML. The decrease in performance when removing MAML (Model-Agnostic Meta-Learning) can be explained as follows. MAML enables the model to quickly adapt to new tasks or domains and enhances its generalization capability, which is particularly crucial in few-shot learning scenarios. Removing MAML eliminates this mechanism for rapid adaptation, and performance declines accordingly.
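For illustration, the sketch below shows a generic first-order MAML-style episode update of the kind removed in this ablation; the linear model, task sampler, and loss are placeholders rather than the actual STDA-Meta components.

```python
import copy
import torch
import torch.nn as nn

# Placeholder predictor: 12 history steps -> 6 future steps (stands in for the spatio-temporal model).
model = nn.Linear(12, 6)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # beta: meta-training rate
inner_lr = 1e-2                                            # alpha: task learning rate
loss_fn = nn.MSELoss()

def sample_task_batch(num_tasks=5):
    # Placeholder sampler: each task yields (support, query) pairs drawn from one source city.
    return [((torch.randn(32, 12), torch.randn(32, 6)),
             (torch.randn(32, 12), torch.randn(32, 6))) for _ in range(num_tasks)]

meta_opt.zero_grad()
for (sx, sy), (qx, qy) in sample_task_batch():
    fast = copy.deepcopy(model)                            # task-specific copy of the initialization
    inner = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    inner.zero_grad()
    loss_fn(fast(sx), sy).backward()                       # inner loop: adapt on the support set
    inner.step()
    grads = torch.autograd.grad(loss_fn(fast(qx), qy), fast.parameters())
    for p, g in zip(model.parameters(), grads):            # first-order approximation of the meta-gradient
        p.grad = g.clone() if p.grad is None else p.grad + g
meta_opt.step()                                            # outer loop: update the shared initialization
```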
In summary, the experimental results highlight the crucial roles of Domain Adaptation and Model-Agnostic Meta-Learning. Removing either component weakens the model, although the variants still outperform the baselines, which demonstrates the effectiveness and necessity of both key components.
**Hyperparameter Analysis**. We conducted a series of experiments to determine the optimal value of the hyperparameter \(\lambda\) in our model. By adjusting \(\lambda\) from 0.5 to 2.0 and repeating the experiments, we obtained the results presented in Figure 2. These results show that the weight ratio of the two loss functions in \(\mathcal{L}_{overall}\) is an important factor, and the choice of \(\lambda\) significantly impacts the model's prediction performance. Specifically, we observed that increasing the value of \(\lambda\) often improves predictive results, which highlights the importance of the domain adaptation (DA) module in optimizing the model. Our findings also indicate that incorporating domain adaptation into the learning objective can significantly improve performance in data-scarce scenarios. Adjusting the hyperparameter \(\lambda\) to balance the two loss functions is a simple yet effective way to achieve better results.
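As a concrete reading of this weighting, the overall objective can be sketched as a \(\lambda\)-weighted sum of the prediction loss and the domain adaptation loss; the exact placement of \(\lambda\) here is our assumption rather than a statement of the implementation.

```python
import torch

def overall_loss(pred_loss, da_loss, lam=1.5):
    # Assumed form: L_overall = L_pred + lambda * L_DA, with lambda = 1.5 in our experiments.
    return pred_loss + lam * da_loss

print(overall_loss(torch.tensor(0.8), torch.tensor(0.2)))  # tensor(1.1000)
```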
Furthermore, we observed that the choice of \(\lambda\) is related to the time scale. When the time scale is relatively long, an appropriate \(\lambda\) value is crucial: predictive accuracy tends to increase initially as \(\lambda\) grows and then decreases, which indicates that such hyperparameter tuning is significant for long time-series prediction.
In summary, tuning this key hyperparameter is essential to achieving optimal performance with our method.
## VII Discussion and Future Work
In this paper, we propose a novel Meta-Learning framework, namely STDA-Meta, for few-shot traffic prediction. The STDA module, which is based on city-level meta knowledge, enhances the effectiveness of spatio-temporal representations across multiple datasets. STDA-Meta integrates a MAML-based episode learning process to learn easily adaptable model parameters through gradient descent. Extensive experimental results on traffic speed data demonstrate the superiority of STDA-Meta over the baseline methods.
In fact, the proposed STDA-Meta framework is not only limited in traffic prediction but can be applied to other few-shot scenarios involving spatio-temporal graph learning, such as residential electric load forecasting and taxi demand prediction in different cities. In future, we aim to further exploit and extend STDA-Meta in other few-shot learning tasks to enhance predictive performance. | 都市開発に伴い、交通渋滞はますます深刻な問題となり、交通予測は、その問題解決のための古典的な方法となっています。交通予測は、タクシー配車、気象予測、配送経路予測など、 spatio-temporal予測学習の1つの具体的な適用方法です。これらの問題に対して、深層学習を含む古典的な spatio-temporal予測学習方法には、大量のトレーニングデータが必要です。現実には、センサーが少ない新規に開発された都市では、この前提が成立しない可能性があり、データ不足は予測性能を悪化させる要因となります。このような状況では、少数のショット学習 (FSL) と呼ばれる、データ不足に対する学習方法が知られています。交通予測のFSLは課題となっています。一つには、グラフ構造の不規則性とグラフのダイナミックな性質は、 spatio-temporal 学習方法の性能を阻害します。もう一つには、データ不足の少ない学習データに対して、従来のドメイン適 |
2309.13952 | VidChapters-7M: Video Chapters at Scale | Segmenting long videos into chapters enables users to quickly navigate to the
information of their interest. This important topic has been understudied due
to the lack of publicly released datasets. To address this issue, we present
VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters
in total. VidChapters-7M is automatically created from videos online in a
scalable manner by scraping user-annotated chapters and hence without any
additional manual annotation. We introduce the following three tasks based on
this data. First, the video chapter generation task consists of temporally
segmenting the video and generating a chapter title for each segment. To
further dissect the problem, we also define two variants of this task: video
chapter generation given ground-truth boundaries, which requires generating a
chapter title given an annotated video segment, and video chapter grounding,
which requires temporally localizing a chapter given its annotated title. We
benchmark both simple baselines and state-of-the-art video-language models for
these three tasks. We also show that pretraining on VidChapters-7M transfers
well to dense video captioning tasks in both zero-shot and finetuning settings,
largely improving the state of the art on the YouCook2 and ViTT benchmarks.
Finally, our experiments reveal that downstream performance scales well with
the size of the pretraining dataset. Our dataset, code, and models are publicly
available at https://antoyang.github.io/vidchapters.html. | Antoine Yang, Arsha Nagrani, Ivan Laptev, Josef Sivic, Cordelia Schmid | 2023-09-25T08:38:11 | http://arxiv.org/abs/2309.13952v1 | # VidChapters-7M: Video Chapters at Scale
###### Abstract
Segmenting long videos into chapters enables users to quickly navigate to the information of their interest. This important topic has been understudied due to the lack of publicly released datasets. To address this issue, we present VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters in total. VidChapters-7M is automatically created from videos online in a scalable manner by scraping user-annotated chapters and hence without any additional manual annotation. We introduce the following three tasks based on this data. First, the video chapter generation task consists of temporally segmenting the video and generating a chapter title for each segment. To further dissect the problem, we also define two variants of this task: video chapter generation given ground-truth boundaries, which requires generating a chapter title given an annotated video segment, and video chapter grounding, which requires temporally localizing a chapter given its annotated title. We benchmark both simple baselines and state-of-the-art video-language models for these three tasks. We also show that pretraining on VidChapters-7M transfers well to dense video captioning tasks in both zero-shot and finetuning settings, largely improving the state of the art on the YouCook2 and ViTT benchmarks. Finally, our experiments reveal that downstream performance scales well with the size of the pretraining dataset. Our dataset, code, and models are publicly available at [https://antoyang.github.io/vidchapters.html](https://antoyang.github.io/vidchapters.html).
## 1 Introduction
As online media consumption grows, the volume of video content available is increasing rapidly. While searching for specific videos is already a challenging problem, searching within a long video is an even _less_ explored task. Manual navigation can often be time consuming, particularly for long videos. A compelling solution for organizing content online is to segment long videos into _chapters_ (see Figure 1). Chapters are contiguous, non-overlapping segments, completely partitioning a video. Each chapter is also labeled with a short description of the chapter content, enabling users to quickly
Figure 1: **A video with user-annotated chapters in VidChapters-7M: the video is temporally segmented into chapters, which are annotated with a chapter title in free-form natural language.**
navigate to areas of interest and easily replay different parts of a video. Chapters also give _structure_ to a video, which is useful for long videos that contain inherently listed content, such as listicles [96], instructional videos [64], music compilations and so on.
Given the plethora of content already online, our goal is to explore automatic solutions related to video chaptering - generating chapters automatically, and grounding chapter titles temporally in long videos. While the benefits of automatically chaptering videos are obvious, data for this task is scarce. Video captioning datasets (such as WebVid-10M [5] and VideoCC [66]) consist of short videos (10s in length), and hence are unsuitable. Web datasets consisting of longer videos (HowTo100M [64], YT-Temporal-1B [118]) come with aligned speech transcripts (ASR), which are only weakly related to visual content, and if used as chapter titles would tend to over-segment videos. Moment retrieval [24, 33] or dense video captioning [42, 127] datasets are perhaps the most useful, but do not focus on creating explicit _structure_, and instead describe low-level actions comprehensively. Such datasets are also manually annotated, and hence not scalable and small in size (see Table 1).
To remedy this, we curate VidChapters-7M, a large-scale dataset of user-annotated video chapters automatically scraped from the Web. Our dataset consists of 7M chapters for over 817K videos. Compared to existing datasets, videos in VidChapters-7M are long (23 minutes on average) and contain rich chapter annotations consisting of a starting timestamp and a title per chapter. Our dataset is also diverse, with 12 different video categories having at least 20K videos each, which itself is the size of existing dense video captioning datasets [29, 36, 42, 127]. On top of this dataset we also define 3 video tasks (see Figure 2): (i) _video chapter generation_ which requires temporally segmenting the video and generating a chapter title for each segment; (ii) _video chapter generation given ground-truth boundaries_, which requires generating a chapter title given an annotated video segment; and (iii) _video chapter grounding_, which requires temporally localizing a chapter given the chapter title. All three tasks involve parsing and understanding _long_ videos, and multi-modal reasoning (video and text), and hence are valuable steps towards story understanding.
For all three tasks, we implement simple baselines as well as recent, state-of-the-art video-text methods [45, 101, 114]. We find that the tasks are far from being solved, demonstrating the value of this problem. Interestingly, we also show that our video chapter generation models trained on VidChapters-7M transfer well to dense video captioning tasks in both zero-shot and finetuning settings, largely improving the state of the art on the YouCook2 [127] and ViTT benchmarks [36]. Moreover, we show that pretraining using both speech transcripts and chapter annotations significantly outperforms the widely used pretraining method based only on speech transcripts [65, 114, 118]. This demonstrates the additional value of our dataset as a generic video-language _pretraining_ set. Interestingly, we also find that the transfer performance scales with the size of the chapter dataset.
In summary, our contributions are:
* We present VidChapters-7M, a large-scale dataset of user-annotated video chapters obtained from the Web consisting of 817K videos and 7M chapters;
* Based on this dataset, we evaluate a range of simple baselines and state-of-the-art video-language models on the tasks of video chapter generation with and without ground-truth boundaries, and video chapter grounding;
* We show that video chapter generation models trained on VidChapters-7M transfer well to dense video captioning tasks in both zero-shot and finetuning settings, largely improving the state of the art on the YouCook2 [127] and ViTT benchmarks [36], outperforming prior pretraining methods based on narrated videos [114], and showing promising scaling behavior.
Our dataset, code and models are publicly available on our website [1].
Figure 2: **Illustration of the three tasks defined for VidChapters-7M.**
## 2 Related Work
**Large-scale vision-language datasets.** The development of powerful multi-modal models [3; 15; 23; 37; 38; 46; 48; 49; 50; 54; 61; 62; 72; 85; 87; 90; 94; 99; 105; 115; 116; 129] has been made possible by pretraining on large-scale image-caption datasets scraped from the Web such as SBU [68], Conceptual Captions [82], Conceptual-12M [12], LAIT [71], Wikipedia-ImageText [86], RedCaps [18] and LAION-5B [78]. Similarly, many strong video-language models [2; 27; 30; 41; 45; 47; 52; 53; 58; 65; 80; 81; 88; 89; 91; 97; 100; 107; 110; 111; 112; 126] have been pretrained on Web-scraped video-text datasets. These datasets are largely composed of short videos paired with captions, e.g. WebVid-10M [5] and VideoCC [66], or narrated videos with speech transcripts aligned over time (ASR), e.g. HowTo100M [64], YT-Temporal-1B [117; 118] and HD-VILA-100M [108]. Our proposed VidChapters-7M dataset is also downloaded from the Web, via a scalable pipeline without the need for expensive manual annotation. Unlike these datasets, VidChapters-7M consists of long videos with user-annotated chapters aligned over time (see Table 1), which significantly differ from ASR (see Section 3.3). Furthermore, most videos in VidChapters-7M _also_ contain ASR. Finally, VidChapters-7M is also related to the recent ChapterGen dataset [10], which also consists of user-annotated chapters. However, ChapterGen is several orders of magnitude smaller than VidChapters-7M (10K vs 817K videos) and is not open-sourced at the time of writing.
**Video tasks.** The video chapter generation task requires temporally segmenting the video into chapters, hence is related to video shot detection [76; 77; 84], movie scene segmentation [14; 75], temporal action localization [13; 16; 59; 83; 120; 121] and temporal action segmentation [8; 21; 26; 43; 55; 104]. However, unlike these tasks, video chapter generation also requires generating a free-form natural language chapter title for each segment. Hence this task is also related to video captioning [25; 57; 63; 69; 98; 102; 125], video title generation [123; 4; 119], generic event boundary captioning [103] and dense video captioning [42; 101; 128]. Most related to video chapter generation, the dense video captioning task requires temporally localizing and captioning all events in an untrimmed video. In contrast, video chapter generation requires temporally _segmenting_ the video (i.e. the start of the chapter \(i+1\) is the end of chapter \(i\), and the chapters cover the full video), and involves generating a chapter title that is substantially shorter than a video caption. We study in more detail the transfer learning between these two tasks in Section 4.4. Finally, the video chapter grounding task is related to temporal language grounding [33; 34; 44; 45; 67; 113; 122; 124]. However, we here focus on localizing a chapter starting point and not a start-end window. Furthermore, most temporal language grounding methods represent the video only with visual inputs, while we also exhibit the benefits of using speech inputs for localizing chapters in videos (see Section 4.3).
## 3 VidChapters-7M: a large-scale dataset of user-chaptered videos
Our goal is to build a large and diverse set of videos annotated with temporarily localized chapter information, consisting of chapter titles and chapter start times. In detail, chapters are contiguous, non-overlapping segments, completely partitioning a video. However manual annotation of chapters is time consuming and expensive and therefore hard to scale. Hence we automatically scrape chapter information from videos available online, as explained in Section 3.1. Then, we perform several processing steps on this data, e.g., to extract speech transcripts, as described in Section 3.2. The
\begin{table}
\begin{tabular}{l|c|c|c|c} Dataset & Number of videos & Video duration (min) & Number of descriptions & Annotations \\ \hline HowTo100M [64] & 1M & 7 & 136M & Speech transcripts \\ YT-Temporal-1B [118] & **19M** & 6 & \(\sim\)**900M** & Speech transcripts \\ HD-VILA-100M [108] & 3M & 7 & 103M & Speech transcripts \\ \hline ActivityNet Captions [42] & 20K & 3 & 100K & Dense Captions \\ YouCook2 [127] & 2K & 6 & 15K & Dense Captions \\ ViTT [36] & 8K & 4 & 56K & Dense Captions \\ Ego4D [29] & 10K & **23** & 4M & Dense Captions \\ \hline VidChapters-7M (Ours) & 817K & **23** & 7M & **Speech transcripts + User-annotated Chapters** \\ \end{tabular}
\end{table}
Table 1: **Comparison of VidChapters-7M with existing datasets**. We consider open-sourced video datasets that contain dense natural language descriptions aligned over time. VidChapters-7M is much larger than current dense video captioning datasets. Compared to datasets with ASR (top 3 rows), it is smaller in the total number of videos but contains longer videos with richer annotations (chapters).
outcome is VidChapters-7M, a dataset of 817K videos with 7M chapter annotations provided by real users online. Finally, we analyze VidChapters-7M in Section 3.3. Details are given next.
### Data collection
Since early 2020, YouTube users can create chapters for uploaded videos by annotating them in the YouTube description. The YouTube API, however, currently does not enable explicit search for user-chaptered videos. Hence, our data collection procedure consists of: (i) Collecting a large and diverse set of video candidates (characterized by their 11-character YouTube video ID), which do not necessarily contain user-annotated chapters; (ii) For all video candidates, downloading the video description, automatically selecting videos with user-annotated chapters, extracting video chapters and downloading corresponding videos. We next describe the individual steps in more detail.
**Video candidates.** We start from a large pool of video candidates built from the YT-Temporal-180M dataset [117], which was constructed to be more diverse than prior large video datasets such as HowTo100M [64]. Note that while the released YT-Temporal-180M dataset consists of only 5M videos, the authors collected a larger set of candidates by using YouTube's recommendation algorithm to suggest related videos. We obtained this extended list of 92 million video IDs directly from the authors.
**Extracting chapters from descriptions.** In the description, chapters typically constitute a block with consecutive lines following the format "<Timestamp>: <Chapter Title>" or "<Chapter Title>: <Timestamp>", where the chapter title is written in free-form natural language and its corresponding start timestamp is written in MM:SS format. The video should contain at least two timestamps listed in ascending order. Hence we download the descriptions for all video candidates and use standard regular expression operations to verify whether a given description contains user-annotated chapters and extract them if so. Note that some videos contain chapters that are automatically generated by YouTube algorithms, however, these generated chapters do not appear in the descriptions and, hence, are excluded by our procedure for data collection. Also note that the video content is only downloaded for user-chaptered videos, which is convenient for both the downloading speed and storage constraints. Finally, we obtain 817K user-chaptered videos, making up 0.9% of all video candidates.
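A simplified version of this extraction step can be written with standard regular expressions, as sketched below; the pattern only covers the "<Timestamp> <Chapter Title>" form (with MM:SS or HH:MM:SS timestamps) and is illustrative rather than the exact production pipeline.

```python
import re

TIMESTAMP_LINE = re.compile(r"^\s*((?:\d{1,2}:)?\d{1,2}:\d{2})\s*[-:]?\s*(.+)$")

def to_seconds(ts):
    parts = [int(p) for p in ts.split(":")]
    return sum(p * 60 ** i for i, p in enumerate(reversed(parts)))

def extract_chapters(description):
    chapters = []
    for line in description.splitlines():
        m = TIMESTAMP_LINE.match(line)
        if m:
            chapters.append((to_seconds(m.group(1)), m.group(2).strip()))
    # Keep the video only if at least two timestamps are listed in ascending order.
    starts = [s for s, _ in chapters]
    return chapters if len(chapters) >= 2 and starts == sorted(starts) else []

desc = "New camera review\n0:00 Intro\n1:30 Unboxing\n12:45 Final thoughts"
print(extract_chapters(desc))  # [(0, 'Intro'), (90, 'Unboxing'), (765, 'Final thoughts')]
```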
### Data processing
We describe below how we process the previously obtained user-chaptered videos to facilitate building efficient video chapter generation models. For reproducibility, we publicly release the resulting speech transcripts and the code for extracting visual features.
**ASR extraction.** We observed that most user-chaptered videos contain speech. Hence, for all videos, we extract speech transcripts aligned in time with the video content (ASR) by applying the Whisper-Large-V2 model [73] on the audio track, using faster-whisper [40] backend for computational efficiency. We found that the Whisper model provides higher-quality ASR compared to the YouTube API ASR service on several data samples from VidChapters-7M. We further use WhisperX [6] to derive accurate word-level timestamps which we use to segment the speech transcript into sentences. For example, the Whisper-Large-V2 model extracts speech segments like _"Right, we're gonna do the Synthetics Dirty Race. No we're not. [...] So we're gonna put two t-shirts and two pairs of jeans in the"_ with timestamps 20.478s and 50.465s, and the corresponding first sentence output by WhisperX is _"Right, we're gonna do the Synthetics Dirty Race."_ with timestamps 20.538s and 29.26s.
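A minimal sketch of this transcription step with the faster-whisper backend is shown below; the file path and device settings are illustrative, and the WhisperX word-level alignment step is omitted.

```python
from faster_whisper import WhisperModel

# Illustrative configuration; compute_type and device may differ from the actual setup.
model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("video_audio.wav")

for seg in segments:
    print(f"{seg.start:.2f}s -> {seg.end:.2f}s: {seg.text.strip()}")
```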
**Visual feature extraction.** Training end-to-end deep learning models from RGB inputs on minutes-long videos is computationally expensive. Hence we extract visual features with CLIP ViT-L/14 backbone [20; 72] at resolution pixels and 1 FPS. This model has been trained to map images to text descriptions with a contrastive loss on 400M Web-scraped image-text pairs.
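The sketch below illustrates 1 FPS CLIP feature extraction using the Hugging Face implementation of CLIP ViT-L/14; the frame decoding and preprocessing details are assumptions and may differ from the exact setup used to produce the released features.

```python
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_features_at_1fps(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % int(round(fps)) == 0:   # keep roughly one frame per second
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        return model.get_image_features(**inputs)   # one 768-d embedding per sampled frame

feats = clip_features_at_1fps("video.mp4")   # "video.mp4" is a hypothetical path
```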
### Data analysis
The result of the previously described pipeline is VidChapters-7M, a dataset of 817,076 user-chaptered videos containing 6,813,732 chapters in total. We randomly split VidChapters-7M in training, validation, and testing splits with 801K, 8.2K, and 8.2K videos, respectively. We analyze VidChapters-7M below and give examples of annotated videos, more statistics, as well as a datasheet in Appendix Sections A, C, and F, respectively.
**Statistics.** VidChapters-7M is highly diverse and contains 4,894,855 distinct chapter titles. On average, a video contains 8.3 chapters, start times of adjacent chapters are separated by 142.0 seconds, a chapter title contains 5.4 words and a video lasts 1354 seconds. The most represented video category (in YouTube's glossary) is HowTo & Style, making up 17.0% of total videos. The distributions for the number of chapters per video, the video chapter duration, the length of the chapter title, and the video category are illustrated in Figure 3, and further show the diversity of VidChapters-7M, e.g., there are 12 different video categories with at least 20K videos in VidChapters-7M.
**ASR vs Chapters.** 97.3% of videos in VidChapters-7M contain speech transcripts (ASR). However, user-annotated chapters significantly differ from speech transcripts: on average, a video with ASR contains 269.8 speech sentences (vs 8.3 chapter titles), a speech sentence lasts 3.9 seconds (vs 142.0 seconds for chapters) in the video and contains 11.5 words (vs 5.4 words for chapters).
**Biases.** Using the langdetect [17] language detection tool, we find that 92.9%/93.9% of total videos in VidChapters-7M have their chapter titles/ASR in English. However, as shown in Figure 3 (bottom right), the distribution of chapter languages includes a long tail of languages, e.g., 13 languages appear in more than 1K videos of VidChapters-7M. We also use GenBit [79] to measure gender bias in the chapters and ASR. We observe that the percentage of female/male/non-binary gendered words is 19.7%/39.7%/40.7% for the chapters, and 11.6%/35.6%/52.8% for the ASR.
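The language statistics above can be reproduced approximately with the langdetect tool, as in this small sketch (the example titles are made up).

```python
from collections import Counter
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0          # make language detection deterministic

chapter_titles = ["Intro", "Déballage du produit", "How to install the battery"]
languages = Counter()
for title in chapter_titles:
    try:
        languages[detect(title)] += 1
    except Exception:             # very short or non-linguistic titles can fail
        languages["unknown"] += 1
print(languages)
```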
**Ethical considerations.** We employ several techniques to identify harmful visual or language content. We use a classifier [78] built on top of the previously extracted CLIP features to detect not-safe-for-work (NSFW) visual content (such as pornographic and sexualized content). Moreover, we use a language model [31] to detect toxic content in chapter titles and speech transcripts. These processes flag 5,716 (0.70%) visually NSFW videos, 355 (0.04%) videos with toxic chapter titles and 1,368 (0.17%) videos with toxic ASR. We assume the relatively low number of flagged videos is due to the regulations performed by the Web platform used to collect our dataset. Following [78], we refrain from removing these samples to encourage research in fields such as dataset curation and tag them instead. Note that these automated filtering techniques are not perfect and that harmful content may pass.
Figure 3: **Statistics of the VidChapters-7M dataset.**
**Manual assessment of the quality of annotations.** While chapter titles are manually written and uploaded by real users, sometimes chapter titles are not informative about the content of the video at the corresponding timestamps. To assess the quality of chapter title annotations in our dataset, we inspected a random sample of 100 videos in VidChapters-7M. For each video, we checked if the titles are related to the content of the video chapter and if so which video modalities (ASR, visual or raw audio) they are related to, or if they only refer to the structure of the video (e.g. chapter titles like "step 1", "step 2" etc). Results are presented in Table 2, and show that 83% of videos have chapters related to one or multiple modalities of the video, 14% of videos have chapters only referring to the structure of the video, and 3% of videos have chapters unrelated to the video content.
## 4 Experiments
In this Section, we present the results of models on VidChapters-7M for the full video chapter generation task in Section 4.1, the task of video chapter generation given ground-truth boundaries in Section 4.2 and the video chapter grounding task in Section 4.3. Finally, we study transfer learning from video chapter generation to dense video captioning tasks in Section 4.4.
**Evaluation metrics.** To evaluate the quality of the generated chapter titles (without their positions), we use standard metrics used for visual captioning: BLEU [70] (B), CIDEr [95] (C), METEOR [7] (M) and ROUGE-L [56] (RL). To evaluate video chapter generation as a whole, including the locations of the generated chapters, we follow standard protocols used for dense video captioning, given the similar nature of the two tasks. We use the standard evaluation tool [42] which calculates matched pairs between generated events and the ground truth across IoU thresholds of {0.3, 0.5, 0.7, 0.9}, and compute captioning metrics over the matched pairs. However, these metrics do not take into account the story of the video and give high scores to methods generating many redundant chapters. Hence for an overall evaluation, we also use SODA_c [22] (S) which first tries to find a temporally optimal matching between generated and reference chapters to capture the story of a video, then computes METEOR scores for the matching and derives F-measure scores from the METEOR scores to penalize redundant chapters. To separately evaluate chapter localization, we report the recall (R@Ks, R@K) and the precision (P@Ks, P@K) across various thresholds in terms of the distance to the ground-truth start time or IoU with the ground-truth start-end window. We also report the average recall (R) and average precision (P) across IoU thresholds of {0.3, 0.5, 0.7, 0.9}.
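As an illustration of the localization metrics, the following sketch computes recall at a start-time distance threshold; it reflects our reading of R@Ks, and the official evaluation script may handle matching differently.

```python
import numpy as np

def start_time_recall(gt_starts, pred_starts, threshold=3.0):
    """Fraction of ground-truth chapter starts with a prediction within `threshold` seconds."""
    gt = np.asarray(gt_starts, dtype=float)
    pred = np.asarray(pred_starts, dtype=float)
    if len(gt) == 0 or len(pred) == 0:
        return 0.0
    dists = np.abs(gt[:, None] - pred[None, :])   # (num_gt, num_pred) pairwise distances
    return float(np.mean(dists.min(axis=1) <= threshold))

print(start_time_recall([0, 60, 180], [2, 75, 178], threshold=5))  # 2/3 of starts are matched
```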
**Implementation details.** Unless stated otherwise, for all models, we use the speech transcripts (ASR) and visual features extracted as explained in Section 3.2. By default, each model is taken from the corresponding official implementation, and all model hyper-parameters are set according to the original papers. We use the Adam optimizer [39] for training and select the final model based on the best validation performance. Our experiments are run on 8 NVIDIA A100 80GB GPUs. More details are included in Appendix Section D.
### Video chapter generation
In this Section, we study the task of video chapter generation that requires temporally segmenting the video and generating a chapter title for each segment.
\begin{table}
\begin{tabular}{l|c} Type of chapter titles & Percentage \\ \hline Speech and visual & 49 \\ Audio and visual & 2 \\ Speech-only & 26 \\ Visual-only & 3 \\ Audio-only & 3 \\ \hline Structure-only & 14 \\ \hline Unrelated & 3 \\ \end{tabular}
\end{table}
Table 2: **Manual assessment of the informativeness of chapter titles in the VidChapters-7M dataset over a random sample of 100 videos.** Video chapter titles can be based on speech and vision; audio and vision; vision, audio or speech alone; or only on the structure of the video (_e.g._ ”step 1”, ”step 2” etc). In a small number of cases, video chapters are unrelated to the video content.
**Models.** For the video chapter segmentation subtask, we evaluate two zero-shot approaches (i.e., that are not trained on VidChapters-7M): speech text tiling [32], which detects subtopic shifts based on the analysis of lexical co-occurrence patterns, and a visual scene change detection algorithm [92] based on the sum of absolute differences. To derive zero-shot baselines for the full video chapter generation task, we combine text tiling and shot detection with various alternatives that can generate text given text or visual input: a random baseline that predicts a random speech sentence spoken inside the predicted boundaries, LLaMA-7B [93] (prompted to summarize the speech transcript spoken inside the predicted boundaries) and BLIP-2 [51] (prompted to describe the middle video frame of the predicted segment). Finally, we also train and evaluate two state-of-the-art end-to-end dense video captioning models on VidChapters-7M: PDVC [101] which consists of a visual-only DETR-style [11] architecture and Vid2Seq [114] which is a multi-modal sequence-to-sequence model pretrained on the C4 text corpus [74] and on narrated videos with ASR (_e.g._, YT-Temporal-1B [118]). For Vid2Seq, we also report zero-shot results after pretraining on narrated videos without finetuning on VidChapters-7M.
**Implementation details.** We use the text tiling implementation from the NLTK library [9] which tokenizes the text into pseudosentences of size 50. We use the shot detection software from the FFMPEG library [92] with a confidence threshold of 0.7. For BLIP-2, we use the 3.4B-parameter variant with FLAN-T5-XL [106] and CLIP ViT-L/14 [20, 72]. We reimplement Vid2Seq [114] (originally released in Jax) in PyTorch, use T5-Base pretrained on C4 [74] for initialization and pretrain Vid2Seq on HowTo100M [64]. More details are included in Appendix Section D.
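A minimal sketch of the TextTiling baseline is given below (the toy transcript is fabricated); shot boundaries are obtained analogously with FFmpeg's scene filter, e.g. select='gt(scene,0.7)'.

```python
import nltk
from nltk.tokenize import TextTilingTokenizer

nltk.download("stopwords", quiet=True)   # TextTiling relies on the English stopword list

# Toy transcript: two topics, one ASR sentence per line, separated by blank lines so that
# TextTiling can place topic boundaries between sentences.
camera = "today we look at the new camera body and its sensor resolution in detail"
battery = "now the battery life and the charging speed are what really matter here"
transcript = "\n\n".join([camera] * 40 + [battery] * 40)

tt = TextTilingTokenizer(w=50)           # pseudosentence size of 50 words, as in our setup
segments = tt.tokenize(transcript)
print(f"{len(segments)} topical segments found")
```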
**Results.** We report the results for video chapter generation using global metrics and localization-only metrics in Tables 3 and 4, respectively. We observe that models trained on VidChapters-7M outperform zero-shot baselines, demonstrating the effectiveness of training on VidChapters-7M. In particular, PDVC [101] has the best precision and Vid2Seq [114] achieves the best results in terms of overall generation and recall. We also find that Vid2Seq's speech-only mode outperforms its visual-only mode and that using both speech and visual inputs leads to the best performance. This demonstrates that video chapter generation is a multi-modal task. Finally, we observe that pretraining using ASR in narrated videos from HowTo100M [64] improves the video chapter generation performance of the Vid2Seq model. Specifically, pretraining on HowTo100M is more beneficial for vision-aware models than for the speech-only model.
\begin{table}
\begin{tabular}{l|c c c c|c c c c c c c} Method & Modalities & Pretraining Data & Finetuned & R@5s & R@0s & [email protected] & [email protected] & P@5s & [email protected] & [email protected] \\ \hline Text tiling [32] & Speech & 0 & ✗ & 9.4 & 5.8 & 23.6 & 8.9 & 12.6 & 7.9 & 26.0 & 8.8 \\ Shot detect [92] & Visual & 0 & ✗ & 31.2 & 27.4 & 24.9 & 12.5 & 33.2 & 29.7 & 18.0 & 8.7 \\ Vid2Seq [114] & Speech+Visual & C4 + HowTo100M & ✗ & 10.7 & 9.5 & 5.8 & 0.2 & 23.3 & 18.5 & 1.9 & 0.8 \\ \hline PDVC [101] & Visual & 0 & ✓ & 21.1 & 17.8 & 31.2 & 22.5 & **45.3** & **40.2** & **47.2** & **26.9** \\ Vid2Seq [114] & Speech & C4 & ✓ & **37.8** & **29.5** & 44.6 & 26.1 & 29.0 & 23.0 & 38.0 & 23.4 \\ Vid2Seq [114] & Speech & C4 + HowTo100M & ✓ & 36.7 & 28.9 & 46.5 & 27.2 & 29.5 & 23.3 & 40.4 & 24.8 \\ Vid2Seq [114] & Visual & C4 & ✓ & 35.3 & 26.4 & 23.6 & 8.7 & 17.9 & 13.6 & 17.2 & 7.1 \\ Vid2Seq [114] & Visual & C4 + HowTo100M & ✓ & 33.5 & 25.0 & 33.0 & 14.5 & 19.5 & 14.7 & 26.2 & 12.5 \\ Vid2Seq [114] & Speech+Visual & C4 & ✓ & 36.3 & 28.6 & 45.8 & 26.9 & 29.9 & 23.8 & 40.9 & 24.9 \\ Vid2Seq [114] & Speech+Visual & C4 + HowTo100M & ✓ & 36.4 & 28.5 & **48.2** & **28.5** & 30.3 & 24.0 & 43.1 & 26.4 \\ \end{tabular}
\end{table}
Table 4: **Video chapter generation (segmentation metrics) on VidChapters-7M test set.**
\begin{table}
\begin{tabular}{l|c c c|c c c c c c c} Method & Modalities & Pretraining Data & Finetuned & S & B1 & B2 & B3 & B4 & C & M & RL \\ \hline Text tiling [32] + Random & Speech & 0 & ✗ & 0.4 & 0.6 & 0.2 & 0.1 & 0.0 & 0.8 & 0.7 & 0.6 \\ Text tiling [32] + LLaMA [93] & Speech & Text mixture & ✗ & 0.2 & 0.4 & 0.1 & 0.1 & 0.0 & 0.5 & 0.3 & 0.4 \\ Shot detect [92] + BLIP-2 [51] & Visual & 129M image-texts & ✗ & 0.6 & 0.7 & 0.3 & 0.1 & 0.1 & 0.2 & 0.6 & 0.8 \\ Vid2Seq [114] & Speech+Visual & C4 + HowTo100M & ✗ & 0.1 & 0.1 & 0.0 & 0.0 & 0.0 & 0.1 & 0.1 & 0.1 \\ \hline PDVC [101] & Visual & 0 & ✓ & 6.8 & 9.4 & 3.7 & 14.0 & 9.5 & 35.9 & 9.4 & 11.4 \\ Vid2Seq [114] & Speech & C4 & ✓ & 10.2 & 9.5 & 6.7 & 4.0 & 2.7 & 48.8 & 8.5 & 11.0 \\ Vid2Seq [114] & Speech & C4 + HowTo100M & ✓ & 10.5 & 9.9 & 7.0 & 4.2 & 2.9 & 50.7 & 8.7 & 11.4 \\ Vid2Seq [114] & Visual & C4 & ✓ & 3.1 & 2.3 & 1.5 & 0.6 & 0.5 & 10.9 & 2.2 & 2.9 \\ Vid2Seq [114] & Visual & C4 + HowTo100M & ✓ & 5.5 & 4.5 & 2.8 & 2.0 & 2.9 & 21.4 & 4.1 & 5.5 \\ Vid2Seq [114] & Speech+Visual & C4 & ✓ & 10.6 & 9.9 & 7.0 & 4.2 & 2.8 & 51.3 & 8.8 & 11.6 \\ Vid2Seq [114] & Speech+Visual & C4 + HowTo100M & ✓ & **11.4** & **10.9** & **7.7** & **4.6** & **3.1** & **55.7** & **9.5** & **12.6** \\ \end{tabular}
\end{table}
Table 3: **Video chapter generation (global metrics) on VidChapters-7M test set.**
Here, finetuned refers to finetuning on the VidChapters-7M train set, and speech refers to transcribed speech (ASR).
### Video chapter generation given ground-truth boundaries
In this Section, we study the task of generating chapter titles provided correct temporal boundaries of video chapters. This task is a simplification of the previously studied task where we assume perfect temporal segmentation. We adopt the same models and implementation details as previously introduced in Section 4.1.
**Results.** We report results for video chapter generation given ground-truth boundaries in Table 5. Similar to the full video chapter generation task, we observe that solving the task without training on VidChapters-7M is hard. Indeed, LLaMA [93] struggles to summarize the speech content into a chapter title and underperforms the random baseline. Furthermore, BLIP-2 [51] slightly improves over the random baseline. In addition, Vid2Seq [114] in zero-shot mode underperforms the random baseline due to the large domain gap between ASR and chapter titles (see Section 3.3). In comparison, the performance of models trained on VidChapters-7M is significantly higher. Moreover, Vid2Seq's speech-only mode outperforms its visual-only mode, and using both speech and visual inputs is beneficial, confirming the benefit of multi-modal reasoning for the task of generating chapter titles. Finally, pretraining on narrated videos from HowTo100M [64] improves the performance of the Vid2Seq model on VidChapters-7M.
### Video chapter grounding
In this Section, we study the task of video chapter grounding that requires a model to temporally localize a chapter start time (or start-end window) given an annotated chapter title (query). Hence, compared to the video chapter generation task, we here assume chapter titles to be given and focus on the temporal chapter localization only.
**Models.** We evaluate three zero-shot alternatives: a random baseline that randomly picks the timestamps of a speech sentence in the video, a BERT [19] baseline that picks the timestamps of the speech sentence that has the closest text embedding with the queried chapter title, and a CLIP [72] baseline picking the frames where the query-frame similarity score drops from the highest scoring frame by a certain threshold \(\epsilon\). We also train and evaluate on VidChapters-7M a state-of-the-art end-to-end video grounding model: Moment-DETR [45] which is designed for moment retrieval based on visual inputs. Furthermore, we report zero-shot performance of Moment-DETR obtained with the model checkpoint from Lei et al. [45] pretrained on 5.4K narrated videos with ASR from the QVHighlights dataset [45].
**Implementation details.** We use the [CLS] token sequence embedding for the BERT baseline and a threshold of \(\epsilon=0.05\) for the CLIP baseline. More details are provided in Appendix Section D.
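The sketch below spells out our reading of this CLIP grounding heuristic on a toy similarity curve; frame sampling and score computation details are omitted.

```python
import numpy as np

def clip_grounding_window(similarities, eps=0.05):
    """Our reading of the CLIP baseline: take the best-scoring frame and grow a contiguous
    window around it until the similarity drops more than `eps` below the peak."""
    sims = np.asarray(similarities, dtype=float)
    peak = int(sims.argmax())
    lo = hi = peak
    while lo > 0 and sims[lo - 1] >= sims[peak] - eps:
        lo -= 1
    while hi < len(sims) - 1 and sims[hi + 1] >= sims[peak] - eps:
        hi += 1
    return lo, hi          # start / end frame indices (at 1 FPS these are seconds)

sims = [0.21, 0.22, 0.30, 0.31, 0.29, 0.22, 0.20]   # query-frame similarity per second
print(clip_grounding_window(sims))                   # (2, 4)
```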
\begin{table}
\begin{tabular}{l|c c c|c c c c c c c} Method & Modalities & Pretraining Data & Finetuned & R@ 10s & R@5s & R@ 3s & R@ 0.3 & R@ 0.5 & R@ 0.7 & R@ 0.9 \\ \hline Random & Speech & \(\emptyset\) & ✗ & 3.1 & 1.8 & 1.2 & 0.6 & 0.7 & 0.3 & 0.1 & 0.0 \\ BERT [19] & Speech & BookCorpus + Wikipedia & ✗ & 9.0 & 6.8 & 5.4 & 2.9 & 0.6 & 0.3 & 0.1 & 0.0 \\ CLIP [72] & Visual & 400M image-texts & ✗ & 8.1 & 5.2 & 3.7 & 1.4 & 10.7 & 5.2 & 2.3 & 0.5 \\ Moment-DETR [45] & Visual & 5.4K narrated videos [45] & ✗ & 3.2 & 1.6 & 1.1 & 0.5 & 11.3 & 3.6 & 0.8 & 0.1 \\ \hline Moment-DETR [45] & Visual & \(\emptyset\) & ✓ & **21.8** & **15.5** & **12.4** & **8.3** & **37.4** & **27.3** & **17.6** & **6.4** \\ \end{tabular}
\end{table}
Table 6: **Video chapter grounding on VidChapters-7M test set.**
\begin{table}
\end{table}
Table 5: **Chapter title generation given ground-truth boundaries on VidChapters-7M test set.**
**Results.** We report results for the video chapter grounding task in Table 6. We first observe that the simple zero-shot baselines based on ASR can decently find start times, but struggle to predict start-end windows due to the large domain gap between ASR and video chapters (see Section 3.3). The CLIP [72] baseline slightly underperforms the BERT baseline [19] at retrieving start times, but is much better at finding start-end windows. Furthermore, the Moment-DETR model [45] trained on VidChapters-7M outperforms the zero-shot baselines for both localization of start times and start-end windows, which further demonstrates the effectiveness of training on VidChapters-7M. Finally, we note that Moment-DETR cannot handle speech inputs, but hope that our results showing the benefit of this modality on other tasks in VidChapters-7M will foster research in the localization of language queries in untrimmed videos using multi-modal inputs (vision and speech transcripts).
### Transfer learning on dense video captioning
In this Section, we investigate the pretraining of video-language models on our new VidChapters-7M. To this end, we adopt video chapter generation models trained on VidChapters-7M (see Section 4.1) to the tasks of dense video captioning with or without finetuning.
**Datasets.** We use two dense video captioning datasets. **YouCook2** [127] has 2K untrimmed videos of cooking procedures. On average, each video lasts 320s and is annotated with 7.7 temporally-localized sentences. **ViTT** [36] was created to better reflect the distribution of instructional videos in the wild compared to YouCook2, and consists of 8K untrimmed instructional videos. On average, each video lasts 250s and is annotated with 7.1 temporally-localized short tags. For both datasets, we extract speech transcripts and visual features as described in Section 3.2, and follow the standard splits for training, validation and testing. Note that we only use videos available on YouTube at the time of the work, resulting in 10 to 20% fewer videos than in the original datasets.
**Implementation details.** See Section 4.1 and Appendix Section D.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c c} \multirow{2}{*}{Method} & \multirow{2}{*}{Modalities} & \multirow{2}{*}{Pretraining Data} & \multicolumn{5}{c|}{YouCook2 (val)} & \multicolumn{5}{c}{ViTT (test)} \\ & & & S & C & M & R & P & S & C & M & R & P \\ \hline PVDC [101] & V & 0 & 4.4 & 22.7 & 4.7 & — & — & — & — & — & — \\ E2ESG [130] & T+V & C4 + WikiHow & — & 25.0 & 3.5 & 20.7 & 20.6 & — & 25.0 & 8.1 & 32.2 & 32.1 \\ Vid2Seq [114] & T+V & C4 +HTM & 8.3 & 4.3 & 9.5 & 27.1 & 27.0 & — & — & — & — \\ Vid2Seq [114] & T+V & C4 +YT-Temporal-1B & 7.9 & 47.1 & 9.3 & 27.9 & 27.8 & 13.5 & 43.5 & 8.5 & 42.6 & 46.2 \\ \hline PDVC\({}^{\dagger}\) & V & 0 & 4.8 & 28.8 & 5.8 & 22.6 & 33.1 & 9.4 & 40.6 & **16.5** & 19.2 & 37.4 \\ PVDC\({}^{\dagger}\) & V & VC (Chap.) & 5.9 & 34.7 & 7.5 & 28.8 & **36.4** & 10.1 & 41.5 & 16.1 & 21.3 & 37.2 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +HTM & 8.6 & 53.2 & 10.5 & 20.2 & 26.2 & 14.1 & 44.8 & 8.7 & 43.8 & 44.5 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +VC (ASR+Chap.) & 9.8 & 62.9 & 11.7 & 32.5 & 30.1 & **15.1** & **50.9** & 9.6 & 4.1 & 46.7 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +HTM + VC (ASR) & 8.4 & 50.1 & 10.3 & 29.7 & 26.3 & 14.3 & 45.6 & 8.8 & 43.7 & 44.9 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +HTM + 1% of VC (ASR+Chap.) & 8.2 & 52.7 & 10.4 & 29.3 & 27.6 & 13.5 & 41.6 & 8.2 & 44.7 & 42.1 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +HTM + 10% of VC (ASR+Chap.) & 9.9 & 63.9 & 12.1 & 32.4 & 31.4 & 14.5 & 47.4 & 9.2 & 45.3 & 45.9 \\ Vid2Seq\({}^{\dagger}\) & T+V & C4 +HTM + 1% of VC (ASR+Chap.) & **10.3** & **67.2** & **12.3** & **34.0** & 31.2 & 15.0 & 50.0 & 9.5 & **45.6** \\ \end{tabular}
\end{table}
Table 7: **Comparison with the state of the art on the YouCook2 and ViTT dense video captioning benchmarks. T: Transcribed speech, V: Visual, HTM: HowTo100M [64], VC: VidChapters-7M, Chap.: Chapters. \({}^{\dagger}\) denote results of our experiments.**
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c c} \multirow{2}{*}{Method} & \multirow{2}{*}{Modalities} & \multirow{2}{*}{Pretraining Data} & \multicolumn{5}{c|}{YouCook2 (val)} & \multicolumn{5}{c}{ViTT (test)} \\ & & & S & C & M & R & P & S & C & M & R & P \\ \hline Text tiling [32] + Random & T & 0 & 0.3 & 0.9 & 0.3 & 3.8 & 6.6 & 0.3 & 0.6 & 0.1 & 11.6 & 24.4 \\ Text tiling [32] + LLaMA [93] & T & Text mixture & 0.2 & 0.6 & 0.2 & 3.8 & 6.6 & 0.2 & 0.6 & 0.5 & 11.6 & 24.4 \\ Shot detect [92] + BLIP-2 [51] & V & 129M image-texts & 0.6 & 1.0 & 0.5 & 8.5 & 5.5 & 0.2 & 0.1 & 0.2 & 3.1 & 13.7 \\ \hline Vid2Seq [114] & V & C4 +VC (ASR) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.2 & 0.8 \\ Vid2Seq [114] & V & C4 + VC (Chap.) & 0.7 & 1.1 & 0.5 & 21.3 & 8.6 & 1.5 & 1.9 & 16.8 & 18.9 & 10.4 \\ Vid2Seq [114] & T+V & C4 +HTM & 0.0 & 0.1 & 0.0 & 0.5 & 0.6 & 0.0 & 0.0 & 0.0 & 0.5 & 1.0 \\ Vid2Seq [114] & T+V & C4 +VC (ASR) & 0.1 & 0.1 & 0.1 & 1.9 & 0.0 & 0.0 & 0.0 & 0.0 & 0.7 & 0.6 \\ Vid2Seq [114] & T+V & C4 +VC (Chap.) & 0.1 & 0.2 & 0.1 & 0.7 & 1.4 & 0.7 & 1.1 & 0.3 & 14.3 & 12.8 \\ Vid2Seq [114] & T+V & C4 +VC (ASR+Chap.) & 3.2 & 10.2 & 2.9 & 20.6 & 19.7 & **13.0** & **26.7** & **33.8** & **40.8** \\ Vid2Seq [114] & T+V & C4 +HTM + VC (ASR+Chap.) & 0.0 & 0.1 & 0.0 & 1.2 & 0.9 & 0.0 & 0.0 & 0.0 & 0.8 & 0.7 \\ Vid2Seq [114] & T+V & C4 +HTM + 1% of VC (ASR+Chap.) & 2.7 & 7.2 & 2.1 & 18.1 & 17.3 & 5.5 & 15.5 & 43.3 & 31.3 & 37.1 \\ Vid2Seq [114] & T+V & C4 +HTM + 10% of VC (ASR+Chap.) & 3.2 & 11.5 & 3.0 & 19.4 & 19.2 & 6.4 & 21.6 & 5.3 & 31.0 & 38.2 \\ Vid2Seq [114] & T+V & C4 +HTM + 1% of VC (ASR+Chap.) & **3.9** & **13.3** & **34.2** & **22.3** & **20.1** & 9.0 & 28.0 & 6.5 & 33.7 & 40.1 \\ \end{tabular}
\end{table}
Table 8: **Zero-shot dense video captioning on the YouCook2 and ViTT benchmarks. T: Transcribed speech, V: Visual, HTM: HowTo100M [64], VC: VidChapters-7M, Chap.: Chapters.**
**Results after finetuning.** In Table 7, we show that pretraining for video chapter generation on VidChapters-7M greatly improves the downstream dense video captioning performance compared to training from scratch or pretraining only with ASR data as done in previous work [114]. We also find that pretraining both on HowTo100M [64] and VidChapters-7M results in the best overall performance. In particular, the Vid2Seq model pretrained on both HowTo100M and VidChapters-7M largely improves the state of the art on both the YouCook2 and ViTT benchmarks. In detail, on the YouCook2 benchmark, in the setting with C4 + HowTo100M pretraining, we observe that a boost of about 4.9 points in CIDEr is obtained with our reimplementation of Vid2Seq, and that 14.0 additional points in CIDEr are obtained by pretraining on VidChapters-7M. Finally, we report the results of the Vid2Seq model after pretraining on different fractions of VidChapters-7M for a fixed number of iterations. We construct these subsets such that larger subsets include the smaller ones. These results suggest that the scale of the chapter dataset is an important factor in the downstream dense video captioning performance. We conclude that VidChapters-7M opens a promising avenue for multi-modal pretraining. We further show qualitative examples of dense video captioning in Appendix Section B.
**Zero-shot dense video captioning.** In Table 8, we report results obtained by directly applying video chapter generation models trained on VidChapters-7M for dense video captioning without finetuning for this task. As far as we know, our work is the first to explore this challenging zero-shot setting where no manual annotation of dense video captions is used for training. The Vid2Seq model trained only using ASR data underperforms the random baseline, due to the large domain difference between speech transcripts and dense captions [114]. In the visual-only setting, the variant trained on chapter annotations is better than the variant trained on ASR annotations. In the visual+speech settings, only using chapter annotations does not perform well, as training only on chapters (i.e., without speech) does not enable the model to learn how to use the input speech modality at inference. However, using both ASR and chapter annotations results in a largely better zero-shot dense video captioning performance and outperforms all baselines not trained on VidChapters-7M, demonstrating the complementary nature of the ASR and chapters annotations. Finally, we also observe the benefits of increasing the size of the pretraining dataset of chapters in this setting.
## 5 Conclusion, Limitations, and Societal Impacts
In this work, we presented VidChapters-7M, a large-scale dataset of user-chaptered videos. Furthermore, we evaluated a variety of baselines on the tasks of video chapter generation with and without ground-truth boundaries and video chapter grounding. Finally, we investigated the potential of VidChapters-7M for pretraining video-language models and demonstrated improved performance on the dense video captioning tasks. VidChapters-7M thus provides a new resource to the research community that can be used both as a benchmark for the video chapter generation tasks and as a powerful means for pretraining generic video-language models.
**Limitations.** As it is derived from YT-Temporal-180M [117], VidChapters-7M inherits the biases in the distribution of video categories reflected in this dataset.
**Societal Impacts.** The development of video chapter generation models might facilitate potentially harmful downstream applications, e.g., video surveillance. Moreover, models trained on VidChapters-7M might reflect biases present in videos from YouTube. It is important to keep this in mind when deploying, analysing and building upon these models.
## Acknowledgements
This work was granted access to the HPC resources of IDRIS under the allocation 2023-A0131011670 made by GENCI. The work was funded by Antoine Yang's Google PhD fellowship, the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), the Louis Vuitton ENS Chair on Artificial Intelligence, the European Regional Development Fund under project IMPACT (reg. no. CZ.02.1.01/0.0/0.0/15 003/0000468). We thank Jack Hessel and Remi Lacroix for helping with collecting the dataset, and Antoine Miech for interesting discussions. | 動画のセグメント化により、ユーザーは興味のある情報へのアクセスを迅速に行うことができます。この重要なテーマは、公開されたデータセットの欠如により、十分に研究されていません。この問題に対処するために、私たちはVidChapters-7Mという、合計700万のユーザーによる章を含む、817万のユーザー章動画のデータセットを提案します。VidChapters-7Mは、ユーザーの章をスクレイピングして、自動的にオンラインのビデオから作成されます。このデータセットの生成には、追加の manuales annotation が不要です。このデータセットに基づいて、以下の3つのタスクを導入します。まず、ビデオ章の生成タスクは、ビデオを時間的にセグメントし、各セグメントに対して章タイトルを生成することを指します。このタスクをより深く解析するため、以下の2つのバリアントを定義します。 ground-truth 境界線に基づいたビデオ章 |
2309.07927 | Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech
Recognition for Children VS. Adults | Recent advancements in Automatic Speech Recognition (ASR) systems,
exemplified by Whisper, have demonstrated the potential of these systems to
approach human-level performance given sufficient data. However, this progress
doesn't readily extend to ASR for children due to the limited availability of
suitable child-specific databases and the distinct characteristics of
children's speech. A recent study investigated leveraging the My Science Tutor
(MyST) children's speech corpus to enhance Whisper's performance in recognizing
children's speech. They were able to demonstrate some improvement on a limited
testset. This paper builds on these findings by enhancing the utility of the
MyST dataset through more efficient data preprocessing. We reduce the Word
Error Rate (WER) on the MyST testset from 13.93% to 9.11% with Whisper-Small and
from 13.23% to 8.61% with Whisper-Medium and show that this improvement can be
generalized to unseen datasets. We also highlight important challenges towards
improving children's ASR performance. The results showcase the viable and
efficient integration of Whisper for effective children's speech recognition. | Ahmed Adel Attia, Jing Liu, Wei Ai, Dorottya Demszky, Carol Espy-Wilson | 2023-09-12T06:58:18 | http://arxiv.org/abs/2309.07927v3 | Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children vs. Adults
###### Abstract
Recent advancements in Automatic Speech Recognition (ASR) systems, exemplified by Whisper, have demonstrated the potential of these systems to approach human-level performance given sufficient data. However, this progress doesn't readily extend to ASR for children due to the limited availability of suitable child-specific databases and the distinct characteristics of children's speech. A recent study investigated leveraging the My Science Tutor (MyST) children's speech corpus to enhance Whisper's performance in recognizing children's speech. They were able to demonstrate some improvement on a limited testset. This paper builds on these findings by enhancing the utility of the MyST dataset through more efficient data preprocessing. We reduce the Word Error Rate (WER) on the MyST testset from 13.93% to 9.11% with Whisper-Small and from 13.23% to 8.61% with Whisper-Medium and show that this improvement can be generalized to unseen datasets. We also highlight important challenges towards improving children's ASR performance. The results showcase the viable and efficient integration of Whisper for effective children's speech recognition.
Ahmed Adel Attia\({}^{1}\), Jing Liu\({}^{1}\), Wei Ai\({}^{1}\), Dorottya Demszky\({}^{2}\), Carol Espy-Wilson\({}^{1}\)\({}^{1}\)University of Maryland College Park, MD, USA
\({}^{2}\)Stanford University, CA, USA
Whisper, children ASR, My Science Tutor, MyST, CSLU kids, automatic speech recognition
## 1 Introduction
Automatic Speech Recognition (ASR) has witnessed a boom in recent years through utilizing huge amounts of transcribed speech scraped from the internet. Whisper [1] was able to approach human-level accuracy by utilizing 680K hours of speech data. XLS-R [2] pre-trains on 436K hours of untranscribed speech in a self-supervised manner and 65K hours of transcribed speech. These models were able to achieve state-of-the-art (SOTA) results by leveraging huge amounts of data. ASR models still underperform with low-resource languages and tasks. Recent works have attempted to explore how ASR model performance can be improved for low-resource languages [3, 4, 5, 6], but they haven't caught up with high-resource languages.
Children's ASR is considered a low-resource task, and previous works have demonstrated the gap between children's and adults' ASR even in English. This gap has mainly been attributed to inter-speaker variability due to varying developmental rates and intra-speaker variability due to underdeveloped pronunciation skills [7, 8, 9, 10, 11, 12]. Current ASR models trained on adult speech are not capable of learning these variabilities as they are mostly unseen in the training data. Moreover, children's speech databases are limited and difficult to collect and transcribe [13].
In this work, we explore how Whisper can be fine-tuned on children's speech. We chose Whisper because of its massive training data which makes it more likely to generalize to unseen and uncommon speech patterns. Additionally, Whisper has been shown to be noise-robust [14]. We take advantage of the My Science Tutor (MyST) speech corpus [15] which is the largest publicly available children's speech corpus, provided free to academics for research purposes.
A recent study [16] has attempted to adapt Whisper to the MyST corpus. They found that the quality of audio files as well as transcriptions in the MyST corpus varies, and were able to extract 65 hours of well-transcribed speech from the 197 hours of transcribed speech provided in MyST. We expand upon their work by outlining a more efficient data preprocessing scheme and extracting a total of 179.2 hours, which we show improves the performance and robustness of Whisper. Additionally, we maintain the train/test/development splits provided in the MyST corpus to ensure there's no overlap in speakers between data splits. We demonstrate tangible improvement on the MyST testset, reducing the Word Error Rate (WER) of the Small Whisper model from 13.93% to 9.11% and that of the Medium model from 13.23% to 8.61%. This also improves the WER on the spontaneous part of the CSLU Kids dataset from 32.00% to 27.16% with the Small model, and from 37.04% to 16.53% with the Medium model, without explicitly including this dataset in the training set.
We begin by giving a quick overview of Whisper in Section 2, followed by a description of the datasets used and our proposed preprocessing scheme in Section 3. We follow that by showcasing our experiments and training parameters in Section 4. Results and further discussion are in Section 5. We end with a conclusion outlining plans for future research in Section 6, and acknowledgments in Section 7.
## 2 Model Description
Whisper is a family of ASR models with varying sizes, namely, Tiny, Base, Small, Medium, and Large. Models from Tiny to Medium have an English-only variant and a multilingual variant. The training data for Whisper includes 438K hours of English-to-English transcription, 117K hours covering 96 languages not including English, and 125K hours of speech spoken in different languages, transcribed in English. To filter out low-quality transcription, the training set was passed through an initial model, and files with a high WER were flagged and manually inspected to remove automatically transcribed and mistranscribed files. This substantial amount of training data helped Whisper achieve near human-level transcription, especially in English, with their Large model achieving a WER of 2.82 on the Librispeech clean test set.
## 3 Dataset Description and Processing
We mainly focus on the MyST corpus in this study. However, we also discuss how well the results on MyST can be generalized beyond this corpus. For that purpose, we use the CSLU kids database [17]. Additionally, we study how finetuning affects the performance on adult speech by testing our models on the test-clean subset of Librispeech. In this section, we describe each corpus.
### My Science Tutor Dataset
The MyST corpus is the largest publicly available children's speech corpus. It consists of 393 hours of conversational children's speech, recorded from virtual tutoring sessions in physics, geography, biology, and other topics. The corpus spans 1,371 third, fourth, and fifth-grade students, although age and gender information for each student is not available. Around 197 hours of the dataset were transcribed, although the quality of transcriptions varies. To the best of our knowledge, the MyST corpus was not included in Whisper's training set. Upon manual inspection, we found that some transcriptions were assigned to entirely different files.
Provided Transcription: Um, the wires are like a pathway energy goes through it into the motor and makes it work.
Actual Transcription: Um, because it's metal, and metal I think has energy.
Other files appear to have been automatically transcribed with a lower-quality transcriber.
Provided Transcription: No, I don't hearing even a candle burns.
Actual Transcription: No, I don't hear anything when the candle burns.
Additionally, some files have poor audio quality, with the children speaking too close to the microphone, which resulted in a high level of distortion in the audio files.
To identify these files, we follow a similar technique to [1], passing the entire dataset through Whisper-Large and flagging files with WER larger than 50%. Additionally, one- and two-word files were removed altogether, because they lacked the context to distinguish between homophones like "to", "too", and "two". All files with no speech activity, i.e., files labeled as \(<\)DISCARD\(>\), \(<\)NO_SIGNAL\(>\), or \(<\)SILENCE\(>\), were also removed from the dataset. Table 1 shows the effect of different filtering steps on total dataset duration and WER. According to our results, around 5 hours of the training data is either mistranscribed or has low audio quality and is responsible for increasing the WER on the training data by about 3%. Similar results can be inferred for the test and development sets. Additionally, short files, which accounted for only 4 hours of the training data, increased the WER by more than 7%. We will publish the list of flagged files on GitHub and link to it in the camera-ready manuscript.
Files longer than 30 seconds in the training and development sets were also removed. That is because Whisper processes files in 30-second chunks, and any files longer than 30 seconds are truncated. However, it is not possible to accurately truncate the transcriptions without any timestamps present, so these files are unsuitable for loss calculation. Additionally, the majority of the files in the MyST corpus were too short, with the average file length in the training data being 8 seconds. That would mean that training batches are mostly padding, leading to inefficient training. To remedy this, files within a single recording session were concatenated to be close to but not longer than 30 seconds while maintaining the context of the conversation within the recording session.
Our filtering technique removes 17.8 hours from the entire dataset, which leaves us with 179.2 hours of well-transcribed speech in total. We maintain the train/development/test split provided in the MyST database to avoid any overlap in speakers between the splits. We ended up with 132.5 hours in the training data, 20.9 in the development data, and 25.8 in the test data.
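A minimal sketch of this filtering and repackaging pipeline is shown below. The helper names (`transcribe`, the utterance tuples) and the use of `jiwer` for the WER check are assumptions for illustration; only the thresholds and label strings come from the procedure described above.

```python
# Sketch of the MyST filtering pass described above (hypothetical helpers).
import jiwer

NO_SPEECH_LABELS = {"<DISCARD>", "<NO_SIGNAL>", "<SILENCE>"}

def keep_utterance(audio_path, reference, duration_s, transcribe):
    """Drop no-speech labels, 1-2 word files, >30 s files, and files with WER > 50%."""
    ref = reference.strip()
    if ref in NO_SPEECH_LABELS or len(ref.split()) < 3 or duration_s > 30.0:
        return False
    hyp = transcribe(audio_path)                      # Whisper-Large transcription
    return jiwer.wer(ref.lower(), hyp.lower()) <= 0.5

def pack_session(utterances, max_len_s=30.0):
    """Concatenate consecutive utterances of one recording session up to 30 s."""
    packed, current, current_len = [], [], 0.0
    for utt in utterances:                            # utt = (audio_path, reference, duration_s)
        if current and current_len + utt[2] > max_len_s:
            packed.append(current)
            current, current_len = [], 0.0
        current.append(utt)
        current_len += utt[2]
    if current:
        packed.append(current)
    return packed
```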
The text of the transcriptions was all upper case, which destabilized the training. Consequently, all the text was mapped to lowercase and further normalized using the WhisperNormalizer1 Python package, which mapped tokens like "you're" to a standard "you are", as well as mapping all digit numbers to be spelled out. This ensured that only actual mistranscriptions would be penalized.
| Filtration Method | Train | Test | Development |
| --- | --- | --- | --- |
| No filtration | 29.5 (145) | 26.2 (28.1) | 26.2 (25.5) |
| Removing files w. WER > 50% | 26.8 (140) | 22.3 (26.7) | 22.3 (25.5) |
| Removing files w. WER > 50% or w. less than 3 words | 19.2 (132.5) | 14.2 (25.6) | 12.8 (21) |
Table 1: WER of Whisper-Large-v1 transcriptions of all three data splits of the MyST corpus before and after different levels of filtration (Duration of splits in hours).
This also reduces the diversity in transcription quality, which was noted to harm performance, unlike diversity in audio quality [1].
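For reference, the snippet below applies the English text normalizer that ships with the openai-whisper package, which performs the same lowercasing and contraction expansion described above; whether it matches the exact package used by the authors is an assumption.

```python
# Normalize transcripts so that only genuine mistranscriptions are penalized.
from whisper.normalizers import EnglishTextNormalizer

normalizer = EnglishTextNormalizer()

print(normalizer("UM, THE WIRES ARE LIKE A PATHWAY".lower()))
print(normalizer("you're right, I don't hear anything".lower()))
# Contractions such as "you're" are expanded and casing is unified, so the
# reference and hypothesis are compared in one consistent transcription style.
```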
When contrasted with [16], their filtration method of removing all files longer than 20 seconds and shorter than 10 seconds yielded only 65 hours in total, which they partitioned into a 55-hour training set and a 10-hour test set with no development set. Their sets also suffered from overlapping speakers between the train and test sets. By sticking to the splits provided by MyST, our splits share no overlapping speakers and have almost 3x the amount of data. To ensure a fair comparison, the speech from all speakers in [16]'s test set was removed from our training set, leaving us with a 125.7-hour training set.
### CSLU Kids
The CSLU Kids speech corpus contains spontaneous and prompted speech from 1100 children between Kindergarten and Grade 10, with approximately 100 children per grade. In the scripted subset of the dataset, each child was prompted to read from a list of 319 scripts, which can be simple words, sentences, or digit strings. Each utterance of spontaneous speech begins with a recitation of the alphabet followed by one minute of unprompted speech. The spontaneous speech in the CSLU corpus is distinct from the MyST corpus in that it is unstructured. Instead of talking about a particular topic, children were only given an open prompt like "Tell me about your favorite movie." [17]. Below is a sample of such a transcription.
...usually just lay down on my bed, for now i don't like to i don't know, uh football okay first they are like standing on the ground and then they run and then they mm and if the girl pass the whole field you get a six points uh think it's twenty four i don't know think yeah they catch block and uh one uh the quarter back throws and the runners run uh it's blue uh and it has a big big big electric train set uh i have a workshop...
The majority of the recordings in the spontaneous section of the CSLU corpus were longer than 30 seconds, and are thus unsuitable for training. Instead, we use the scripted portion of the CSLU corpus to help the model adapt to the channel differences between MyST and CSLU recordings, but still consider the spontaneous section an out-of-sample testset.
The transcriptions were of a high enough quality and filtering was not necessary, but they were all normalized to ensure a standard style of transcription. Files in the scripted portion of the dataset were shuffled and split into train, development, and test sets with an 80/10/10 split. The training set was 35 hours long, and the development and test sets were both around 4.8 hours long. Short files were combined to be close to 30 seconds as we did with the MyST corpus.
### Librispeech: test-clean
The test-clean subset of the Librispeech corpus was used to test the ASR models' performance on adult speech. It contains about 5.4 hours of speech read from audiobooks from the LibriVox project. Since Librispeech was not used for training, we did not combine the files, and we also did not filter out any transcriptions, to allow for reproducible and contrastable results. All transcriptions were normalized.
## 4 Training Details and Hyperparameters
We followed the Huggingface Whisper finetuning tutorial 2. Our training scripts are available on GitHub 3, and we will link to the checkpoints on Huggingface in the camera-ready manuscript. Our evaluation script, which calculates the WER, was adapted from a code snippet by OpenAI4. All models were trained on an Nvidia A6000 50GB GPU.
Footnote 2: [https://huggingface.co/blog/fine-tune-whisper](https://huggingface.co/blog/fine-tune-whisper)
Footnote 3: [https://github.com/ahmedadelphia/whisperKids](https://github.com/ahmedadelphia/whisperKids)
Footnote 4: [https://github.com/openai/whisper/discussions/654](https://github.com/openai/whisper/discussions/654)
For the Small models, we used a learning rate of \(1\times 10^{-5}\), batch size of 64, and 1 gradient accumulation step. For the Medium models, we used a learning rate of \(1\times 10^{-5}\), batch size of 32, and 1 gradient accumulation step. All models were finetuned until convergence and the best checkpoints were used for evaluation.
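The sketch below shows how these hyperparameters map onto the Hugging Face `Seq2SeqTrainingArguments` used in the referenced tutorial; the output path is a placeholder, and the dataset, collator, and metric preparation steps are assumed to be done elsewhere.

```python
# Whisper fine-tuning setup mirroring the reported hyperparameters (Small variant;
# the Medium models use per_device_train_batch_size=32).
from transformers import (Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration, WhisperProcessor)

processor = WhisperProcessor.from_pretrained("openai/whisper-small.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en")

args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-myst",          # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=1,
    predict_with_generate=True,
    fp16=True,
)

def build_trainer(train_dataset, dev_dataset, data_collator, compute_metrics):
    """Wire up the trainer once the MyST/CSLU splits and padding collator are prepared."""
    return Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
        data_collator=data_collator,
        compute_metrics=compute_metrics,
        tokenizer=processor.feature_extractor,
    )
```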
## 5 Results and Discussion
### Whisper Zero-shot Models
Table 3 shows the WER for different Whisper models without any finetuning. Looking at these results, the gap between children and adult speech becomes immediately clear. The WER for the scripted part of CSLU Kids is between 6 and 10 times that of Librispeech, and the WER for MyST is between 3 and 5 times. In general, English models perform better than multilingual models, with the exception of the Medium model. That could be because the Medium model is big enough to benefit from seeing more data in different languages. The bigger the model, the better the performance, with the exception of Large-V1 being slightly worse than Medium. In fact, the performance seems to saturate beyond Medium and the difference in performance between Medium and Large-V2 is negligible.
We note that the zero-shot WER reported here is smaller than that reported in [16]. We attribute this to the fact that they used a different normalizer than the one Whisper was trained with, which we validated by inspecting their datasets, which are publicly accessible on Huggingface. Based on these results, we finetune the Small and Medium models, both the English and multilingual variants.
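Because WER is sensitive to the choice of normalizer, comparisons are only meaningful when references and hypotheses pass through the normalizer Whisper was trained with. A minimal evaluation sketch (the use of `jiwer` and the list inputs are assumptions):

```python
# WER computed after applying Whisper's own English normalizer to both sides.
import jiwer
from whisper.normalizers import EnglishTextNormalizer

def normalized_wer(references, hypotheses):
    norm = EnglishTextNormalizer()
    refs = [norm(r) for r in references]
    hyps = [norm(h) for h in hypotheses]
    return 100.0 * jiwer.wer(refs, hyps)
```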
| Dataset | Training Duration | Development Duration | Testing Duration | Filtered | Age Group |
| --- | --- | --- | --- | --- | --- |
| MyST | 125 | 20.9 | 25.8 | ✓ | 8-11 Years |
| CSLU Kids - Scripted | 35 | 4.8 | 4.8 | X | 6-11 Years |
| Librispeech - test-clean | 0 | 0 | 5.4 | X | Adult |
Table 2: Summary of the Datasets Used. Durations in hours.
### Finetuned Whisper Models
In this section, we showcase the performance of our finetuned models and contrast them with the models from [16], which are publicly available on Huggingface. We report the best-performing variants here. We tested all models on four testsets, drawn from the corpora listed in Table 2.
Looking at the results in Table 4, it is clear that Whisper can and does improve its performance on the MyST dataset as well as CSLU, proving that transformer ASR models have the capacity to improve their performance on children's speech. We establish a strong SOTA WER of 8.61% on the MyST testset. To the best of our knowledge, our best performance of 1.97% on the CSLU scripted dataset beats the current SOTA of 5.52% [19]. We also show improvement on unseen datasets, since our models trained on just MyST, or on a combination of MyST and CSLU scripted data, show improvement on CSLU spontaneous speech without any training on speech from this dataset. Our best-performing model on the CSLU spontaneous dataset scores 16.53% WER, which is about half the WER of zero-shot Whisper. Additionally, our models "forget" less about adult speech than the baseline, seeing a degradation of only about 1%.
Medium models outperformed Small models, and generalized better to unseen datasets. The English-only variant of the Small model showed significant improvement over the multilingual variant in seen and unseen datasets. The Medium multilingual variant performed slightly better on the MyST dataset when finetuned exclusively on it, but the English-only variant generalized better to unseen data. Multilingual models in both sizes had higher WER for Librispeech.
Looking at the results for the scripted portion of the CSLU corpus, it is clear that the lack of context in these scripts harms the performance of the models that were not trained on speech from this dataset. However, the performance improved significantly when speech from this dataset was included in the training data, mainly because of the lack of variability in the scripts, unlike the more diverse MyST or CSLU spontaneous datasets. We also attribute the gap in performance between the MyST and CSLU spontaneous datasets to the fact that speech in the MyST corpus is more structured than in the CSLU spontaneous dataset. This shows that one of the reasons behind the gap in performance between adult and children's ASR is that the decoder in Whisper, which acts as an audio-conditional language model, is not well adapted to the variability found in children's speech, where children can suddenly change topic several times in a short period.
## 6 Conclusions and Future Work
In this paper, we outlined how Whisper, a SOTA ASR system, can be finetuned on children's speech using MyST, the largest publicly available conversational children's speech corpus. We showcased a way to filter out mistranscribed files from the corpus and established a strong baseline for children's speech recognition. Our finetuning reduced the WER by 4 to 5% and narrowed the gap between adult and children's speech. We also outlined some of the challenges that face children's ASR, namely that audio-conditional language models are not well adapted to the variability in children's speech.
In the future, we will explore the noise robustness of Whisper. Specifically, we will look at babble noise and other typical classroom non-speech sounds, how they can affect performance, and how to improve such robustness in children's ASR. We will also explore whether these models are biased towards a certain gender, racial group, or age group.
The authors of [20] developed grade-specific ASR models and proposed grouping different age groups separately, instead of under the umbrella term "children's speech". Their suggested grouping was kindergarten; 1st grade; 2nd and 3rd grade; and 4th grade and above, and they noted that it is possible to achieve adult-like performance with the latter group. We aim to expand upon their work in the future, exploring whether their results can be replicated with large transformer ASR models and whether such bias against younger children can be mitigated.
## 7 Acknowledgments
The authors of this paper thank Wayne Ward for sharing his experience with Whisper, MyST and other children databases.
| Model | Training Data | MyST | CSLU Kids Scripted | CSLU Kids Spontaneous | Librispeech test-clean |
| --- | --- | --- | --- | --- | --- |
| **Small** | | | | | |
| ML - Zeroshot | - | 14.06 | 25.15 | 36.36 | 3.39 |
| EN - Zeroshot | - | 13.93 | 21.31 | 32.00 | **3.05** |
| EN - [16] | MyST55H | 13.23 | 31.26 | 28.63 | 5.40 |
| ML | MyST | 11.80 | 55.51 | 28.53 | 6.23 |
| ML | MyST + CSLU | 12.11 | 2.74 | 32.72 | 7.97 |
| EN | MyST | **9.11** | 33.85 | 28.47 | **4.18** |
| EN | MyST + CSLU | 9.21 | **2.59** | **27.16** | 4.74 |
| **Medium** | | | | | |
| ML - Zeroshot | - | 13.23 | 18.57 | 31.85 | 3.02 |
| EN - Zeroshot | - | 12.90 | 18.62 | 37.04 | **2.76** |
| EN - [16] | MyST55H | 14.40 | 28.31 | 26.76 | 8.66 |
| ML | MyST | **8.61** | 30.10 | 24.26 | 5.32 |
| ML | MyST + CSLU | 8.99 | **1.97** | 20.28 | 4.28 |
| EN | MyST | 8.91 | 47.94 | 25.56 | 3.95 |
| EN | MyST + CSLU | 8.85 | 2.38 | **16.53** | **3.52** |
Table 4: WER on different test sets for different Whisper Models. EN stands for English-only model and ML stands for multilingual model.
| Model | MyST | CSLU Kids Scripted | CSLU Kids Spontaneous | Librispeech test-clean |
| --- | --- | --- | --- | --- |
| Tiny | 21.16 | 74.98 | 57.01 | 7.49 |
| Tiny.en | 18.34 | 61.04 | 45.29 | 5.59 |
| Base | 18.54 | 40.20 | 43.71 | 4.98 |
| Base.en | 15.57 | 33.18 | 38.57 | 4.15 |
| Small | 14.06 | 25.15 | 36.36 | 3.39 |
| Small.en | 13.93 | 21.31 | 32.00 | 3.05 |
| Medium | 12.90 | 18.62 | 37.04 | 2.76 |
| Medium.en | 13.23 | 18.57 | 31.85 | 3.02 |
| Large-V1 | 14.15 | 21.50 | 45.18 | 2.98 |
| Large-V2 | 12.80 | 17.22 | 29.39 | 2.82 |
Table 3: Zero-shot WER on different test sets for different Whisper Models Without Finetuning. | Recent advances in Automatic Speech Recognition (ASR) systems, exemplified by Whisper, have shown that these systems can approach human-level performance given sufficient data. However, this progress has not extended to ASR for children, owing to the limited availability of suitable child-specific databases and the distinctive characteristics of children's speech. A recent study leveraged the My Science Tutor (MyST) children's speech corpus to improve Whisper's recognition of children's speech, demonstrating some improvement on a limited testset. This paper strengthens Whisper's performance by making more efficient use of the MyST dataset through improved data preprocessing, reducing the Word Error Rate (WER) on the MyST testset from 13.93% to 9.11% with Whisper-Small and from 13.23% to 8.61% with Whisper-Medium.
2309.10283 | FRAMU: Attention-based Machine Unlearning using Federated Reinforcement
Learning | Machine Unlearning is an emerging field that addresses data privacy issues by
enabling the removal of private or irrelevant data from the Machine Learning
process. Challenges related to privacy and model efficiency arise from the use
of outdated, private, and irrelevant data. These issues compromise both the
accuracy and the computational efficiency of models in both Machine Learning
and Unlearning. To mitigate these challenges, we introduce a novel framework,
Attention-based Machine Unlearning using Federated Reinforcement Learning
(FRAMU). This framework incorporates adaptive learning mechanisms, privacy
preservation techniques, and optimization strategies, making it a well-rounded
solution for handling various data sources, either single-modality or
multi-modality, while maintaining accuracy and privacy. FRAMU's strength lies
in its adaptability to fluctuating data landscapes, its ability to unlearn
outdated, private, or irrelevant data, and its support for continual model
evolution without compromising privacy. Our experiments, conducted on both
single-modality and multi-modality datasets, revealed that FRAMU significantly
outperformed baseline models. Additional assessments of convergence behavior
and optimization strategies further validate the framework's utility in
federated learning applications. Overall, FRAMU advances Machine Unlearning by
offering a robust, privacy-preserving solution that optimizes model performance
while also addressing key challenges in dynamic data environments. | Thanveer Shaik, Xiaohui Tao, Lin Li, Haoran Xie, Taotao Cai, Xiaofeng Zhu, Qing Li | 2023-09-19T03:13:17 | http://arxiv.org/abs/2309.10283v3 | # FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning
###### Abstract
Machine Unlearning is an emerging field that addresses data privacy issues by enabling the removal of private or irrelevant data from the Machine Learning process. Challenges related to privacy and model efficiency arise from the use of outdated, private, and irrelevant data. These issues compromise both the accuracy and the computational efficiency of models in both Machine Learning and Unlearning. To mitigate these challenges, we introduce a novel framework, Attention-based Machine Unlearning using Federated Reinforcement Learning (FRAMU). This framework incorporates adaptive learning mechanisms, privacy preservation techniques, and optimization strategies, making it a well-rounded solution for handling various data sources, either single-modality or multi-modality, while maintaining accuracy and privacy. FRAMU's strength lies in its adaptability to fluctuating data landscapes, its ability to unlearn outdated, private, or irrelevant data, and its support for continual model evolution without compromising privacy. Our experiments, conducted on both single-modality and multi-modality datasets, revealed that FRAMU significantly outperformed baseline models. Additional assessments of convergence behavior and optimization strategies further validate the framework's utility in federated learning applications. Overall, FRAMU advances Machine Unlearning by offering a robust, privacy-preserving solution that optimizes model performance while also addressing key challenges in dynamic data environments.
Machine Unlearning, Privacy, Reinforcement Learning, Federated Learning, Attention Mechanism.
## I Introduction
The widespread availability of decentralized and heterogeneous data sources has created a demand for Machine Learning models that can effectively leverage this data while preserving privacy and ensuring accuracy [1]. Traditional approaches struggle to handle the continual influx of new data streams, and the accumulation of outdated or irrelevant information hinders their adaptability in dynamic data environments [2, 3]. Moreover, the presence of sensitive or private data introduces concerns regarding data breaches and unauthorized access, necessitating the development of privacy-preserving techniques [4]. The concept of the "right to be forgotten" allows individuals to have their personal information removed from online platforms, although there's no universal agreement on its definition or its status as a human right [5]. Despite this, countries like Argentina, the Philippines, and large parts of the EU are working on regulations 1. Therefore, there is a pressing need to advance the field of Machine Unlearning to ensure both adaptability and privacy in Machine Learning applications.
Footnote 1: [https://link.library.eui.eu/portal/The-Right-To-Be-Forgotten-A-Comparative-Study/tw0VHCyGcDc/](https://link.library.eui.eu/portal/The-Right-To-Be-Forgotten-A-Comparative-Study/tw0VHCyGcDc/)
**Example 1**.: _In a landmark 2014 decision that underscored the pressing need for Machine Unlearning, a Spanish court ruled in favor of an individual who sought the removal of specific, outdated Google search results related to a long-settled debt [6]. This verdict not only led to Google taking down the search results but also influenced broader European Union policies on the subject, emphasizing the urgent need for mechanisms that can efficiently erase outdated or private information from Machine Learning models without sacrificing accuracy. This critical requirement for Machine Unlearning is further highlighted by high-profile c
Fig. 1: This graphical abstract depicts the evolution of the FRAMU framework, a novel approach integrating federated learning and reinforcement learning to enable efficient, privacy-preserving data analysis. Designed to adaptively update Machine Learning models across decentralized networks, the framework places special emphasis on the unlearning process to remove private, outdated, or irrelevant data. The graphic illustrates the interaction between local agents and a central server, employing reinforcement learning and attention mechanisms for both learning and unlearning.
Gunn, the famed writer and director, who was dismissed by Disney in 2018 when old, inappropriate tweets resurfaced [7]. Although social media platforms like Facebook offer features like "Off-Facebook Activity" to disconnect user data from third-party services, this does not guarantee the complete erasure of that data from the internet 2. Together, these instances accentuate the growing imperative for the development of robust Machine Unlearning technologies, especially in an era where data privacy regulations are continuously evolving and the "right to be forgotten" is increasingly recognized as essential._
Footnote 2: [https://www.facebook.com/help/2207256696182627](https://www.facebook.com/help/2207256696182627)
_Challenges._ In today's digitally connected environment, data is distributed in various forms and from different sources, such as sensors, text documents, images, and time series data. For unlearning outdated or private data, Machine Unlearning presents unique challenges depending on whether there is a single type of data (known as single-modality) or multiple types of data (referred to as multimodality) [8]. With single-modality data, the issue primarily lies in the build-up of outdated or irrelevant information, which can negatively affect the model's effectiveness and precision [9, 10]. On the other hand, multimodality situations are even more complicated. Here, each type of data can have different characteristics and varying contributions to the overall model's performance [11, 12]. As discussed in Example 1, the need to unlearn outdated or private data is paramount; it ensures individuals have the "right to be forgotten" regarding their information on publicly available platforms. However, unlearning needs to happen over both single-modality and multimodality data to be holistic.
Distributed learning systems, particularly federated learning, have made significant strides forward in enabling Machine Learning models to train on decentralized data, offering the dual advantage of reduced communication costs and enhanced privacy [13, 14]. Notable efforts have been made to incorporate Differential Privacy (DP) into these systems [15], ensuring robust privacy safeguards through techniques like DP-SGD and DP-FedAvg [16, 17]. However, these existing frameworks face limitations when confronted with the dynamic nature of data distribution, an intrinsic challenge in distributed learning [18]. Although some efforts have been made in Machine Unlearning to address data irrelevancy over time, such as the Sharded, Isolated, Sliced, and Aggregated (SISA) training method, these solutions often operate in isolation from privacy-preserving mechanisms [19, 20]. This bifurcation leaves a crucial research gap: the absence of a unified approach that addresses both privacy concerns and the adaptability requirements in the face of ever-changing data landscapes. There is a need to bridge this gap by providing an integrated solution for robust privacy measures and efficient selective unlearning, thereby enabling Machine Learning models to be both secure and adaptable in dynamic, distributed environments.
To address these challenges, we propose an Attention-based Machine Unlearning using Federated Reinforcement Learning (FRAMU). By integrating federated learning, adaptive learning mechanisms, and privacy preservation techniques, FRAMU aims to leverage the diverse and dynamic nature of data in both single-modality and multimodality scenarios, while upholding privacy regulations and optimizing the learning process. An attention mechanism is incorporated into FRAMU to ensure responsible and secure handling of sensitive information across modalities. FRAMU leverages reinforcement learning and adaptive learning mechanisms to enable models to dynamically adapt to changing data distributions and individual participant characteristics in both single-modality and multimodality scenarios. This adaptability facilitates ongoing model evolution and improvement in a privacy-preserving manner, accommodating the dynamic nature of the data present in federated learning scenarios. In addition to addressing the challenges associated with unlearning outdated, private, and irrelevant data in both single-modality and multimodality scenarios (see Fig. 1), FRAMU offers valuable insights into the convergence behavior and optimization of the federated learning process.
_Contributions._ The major contributions of our work are as follows:
* We propose an adaptive unlearning algorithm using an attention mechanism to adapt to changing data distributions and participant characteristics in single-modality and multimodality scenarios.
* We develop a novel design to personalize the unlearning process using the FedAvg mechanism [21] and unlearn the outdated, private, and irrelevant data.
* We propose an efficient unlearning algorithm that demonstrates fast convergence and achieves optimal solutions within a small number of communication rounds.
* We conduct extensive experiments to demonstrate the efficiency and effectiveness of the proposed approach using real-world datasets.
_Organization._ In Section II, we review related works. Section III outlines the problem addressed in this study. We present the proposed framework FRAMU in Section IV. The applications of FRAMU in single-modality and multimodality are discussed in Section V. In Section VI, we present the experimental setup and the evaluation results of the proposed framework, along with convergence and optimization analysis. Section VII delves into the implications of the proposed framework. Finally, in Section VIII, we conclude the paper.
## II Related Works
The importance of data privacy in distributed learning systems has garnered significant attention, especially when handling sensitive types of data like medical or behavioral information [22]. Differential Privacy (DP), a mathematically rigorous framework for ensuring individual privacy, has been widely adopted for this purpose [23, 24]. Efforts to integrate DP within distributed learning environments, particularly in federated learning, have been increasing [13, 14]. Abadi et al. [16] developed a seminal approach called Deep Learning with Differential Privacy (DP-SGD), which adapts the Stochastic Gradient Descent (SGD) algorithm to meet DP standards by clipping gradients and injecting noise, thereby offering stringent privacy safeguards during deep neural network
(DNN) training. Building on this, McMahan et al. [17] further tailored DP mechanisms for federated learning through an extension called DP-FedAvg. While these methods effectively address privacy concerns, they often fall short in dealing with dynamic data distributions, a prevalent issue in distributed learning [18]. Specifically, data sets can evolve over time, rendering some information outdated or irrelevant, and the persistence of such data in the learning process can compromise model efficacy. Although Machine Unlearning approaches like Sharded, Isolated, Sliced, and Aggregated (SISA) training [19] have emerged to tackle this issue by enabling efficient selective forgetting of data, these methods are not yet designed to work synergistically with privacy-preserving techniques like DP [20].
Federated learning has substantially revolutionized distributed learning, enabling the training of Machine Learning models on decentralized networks while preserving data privacy and minimizing communication costs [25]. Among the pioneering works in this area is the FedAvg algorithm by McMahan et al. [21], which relies on model parameter averaging across local models and a central server. However, FedAvg is not without its limitations, particularly when handling non-IID data distributions [26]. Solutions like FedProx by Li et al. [27] have sought to address this by introducing a proximal term for improved model convergence. While other researchers like Sahu et al. [28] and Konecny et al. [29] have made strides in adaptive learning rates and communication efficiency, the realm of federated learning still faces significant challenges in dynamic adaptability and efficient Machine Unlearning. While privacy has been partially addressed through Differential Privacy [30] and Secure Multiparty Computation [31], these techniques often compromise on model efficiency. Additionally, the applicability of federated learning in diverse sectors like healthcare and IoT emphasizes the unmet need for a model capable of dynamically adapting to varied data distributions, while preserving privacy and efficiency [32, 33].
Reinforcement Learning has garnered much attention for its ability to train agents to make optimal decisions through trial-and-error interactions with their environments [34, 35]. Several pivotal advancements have shaped the field, including the development of Deep Q-Networks (DQNs) [36]. DQNs combine traditional reinforcement learning techniques with DNNs, significantly enhancing the system's ability to process high-dimensional inputs such as images. Furthermore, experience replay mechanisms have been integrated into them to improve learning stability by storing and reusing past experiences [37]. Mnih et al. [38] significantly accelerated the reinforcement learning field by implementing DQNs that achieved human-level performance on a variety of complex tasks. However, there are evident gaps in addressing challenges posed by non-stationary or dynamic environment situations where the statistical properties of the environment change over time. Under such conditions, a reinforcement learning agent's ability to adapt quickly is paramount. Several approaches like meta-learning [39] and attention mechanisms [40, 41] have sought to remedy these issues to some extent. Meta-learning, for example, helps models quickly adapt to new tasks by training them on a diverse range of tasks. However, the technique does not offer a robust solution for unlearning or forgetting outdated or irrelevant information, which is crucial for maintaining performance in dynamic environments. In a similar vein, attention mechanisms help agents focus on important regions of the input space, but they also fail to address the need for efficient unlearning of obsolete or irrelevant data. This leaves us with a significant research gap: the lack of mechanisms for efficient unlearning and adaptability in reinforcement learning agents designed for non-stationary, dynamic environments.
A key challenge for federated learning when faced with dynamic data distributions and the accumulation of outdated or irrelevant information is its adaptability in evolving environments. Reinforcement learning has been instrumental in training agents for optimal decision-making in dynamic environments, yet it too grapples with the need to efficiently unlearn outdated or irrelevant data. These challenges underscore the importance of integrating attention mechanisms into the Machine Unlearning process. Unlike selective data deletion, attention mechanisms assign reduced weights to outdated, private, or irrelevant information. The dynamic adjustment of attention scores allows these models to prioritize relevant data while disregarding obsolete or extraneous elements. By bridging the worlds of federated learning and reinforcement learning with attention mechanisms, our study addresses the pressing need for an integrated solution that optimizes decision-making in distributed networks with changing data landscapes [42]. In addition, this approach must preserve data privacy and adaptively forget outdated, private, or irrelevant information.
## III Preliminaries & Problem Definition
In this section, we introduce the mathematical notations and terms that will be used throughout the paper, which are summarized in Tab. I. We consider a set of agents \(AG\), with each agent \(ag\) observing states \(S_{i}\) and taking actions from a set \(A\). The agents follow policies \(\pi_{i}(s,a)\) and receive rewards \(R_{i}(s,a)\) based on their actions. Local and global model parameters are denoted by \(\theta_{i}\) and \(\theta_{g}\) respectively. Attention scores, \(w_{ij}\) and \(w_{ik}\), play a pivotal role in our problem definition. The research explores the mechanisms for unlearning outdated or irrelevant data in Machine Learning models while maintaining accuracy and computational efficiency.
The problem is defined by two distinct settings: single-modality and multimodality. The single-modality setting is simpler and widely applicable in scenarios with uniform data types, such as sensor networks or content recommendation systems. However, it may lack the context provided by different types of data, potentially leading to less nuanced decisions. On the other hand, the multimodality setting is more complex but highly relevant in fields like healthcare, where a range of data types (e.g., medical imaging, patient history, etc.) can be used for more comprehensive understanding and decision-making. By exploring the problem in both these settings, we offer solutions that are both versatile and contextually rich.
### _Problem Definition - Single Modality_
**Problem Definition 1**.: _Let \(AG=\{ag_{1},ag_{2},\ldots,ag_{n}\}\) be a set of agents, where each agent \(ag\in AG\) represents an
entity like an IoT device, traffic point, wearable device, edge computing node, or content recommendation system. Each agent \(ag\) observes states \(S_{i}=\{s_{1},s_{2},\ldots,s_{m}\}\) and takes actions \(A=\{a_{1},a_{2},\ldots,a_{k}\}\) based on a policy \(\pi_{i}(s,a)\). Rewards \(R_{i}(s,a)\) evaluate the quality of actions taken in different states. Agents possess local models with parameters \(\theta_{i}\), while a central server maintains a global model with parameters \(\theta_{g}\)._
**Example 2**.: _In the single-modality setting, let \(AG=\{ag_{1},ag_{2},\ldots,ag_{n}\}\) be a set of agents. An agent \(ag\) can represent a real-world entity such as a traffic light in a city. These traffic lights observe various states \(S_{i}=\{s_{1},s_{2},\ldots,s_{m}\}\), such as the number and speed of passing cars, and the change of colors (actions \(A=\{a_{1},a_{2},\ldots,a_{k}\}\)) according to an algorithmic policy \(\pi_{i}(s,a)\). The system evaluates the effectiveness of the traffic light changes in reducing wait time or congestion (rewards \(R_{i}(s,a)\)). Each traffic light has its own local decision-making model characterized by parameters \(\theta_{i}\), and there is a global model for optimizing city-wide traffic flow with parameters \(\theta_{g}\)._
### _Problem Definition - Multimodality_
**Problem Definition 2**.: _In the multimodality setting, let \(M=\{1,2,\ldots,k\}\) represent the set of modalities, where \(k\) is the total number of modalities. Each modality \(k\in M\) is associated with a set of data vectors \(X_{k}=\{x_{k1},x_{k2},\ldots,x_{kn}\}\), and has its local model with parameters \(\theta_{k}\). Attention scores \(w_{ik}\) are assigned to individual data points \(x_{ik}\) within each modality to guide the learning and unlearning process._
**Example 3**.: _In the multimodality setting, consider a healthcare system as a collection of agents in set \(M=\{1,2,\ldots,k\}\), where \(k\) represents different types of medical data (modalities) such as medical imaging and patient history. For instance, medical imaging (modality \(M_{1}\)) would have a set of MRI scans represented as data vectors \(X_{1}=\{x_{11},x_{12},\ldots,x_{1n}\}\). Likewise, patient history (modality \(M_{2}\)) might involve a set of past diagnosis records that are represented as data vectors \(X_{2}=\{x_{21},x_{22},\ldots,x_{2n}\}\). Each modality has a specialized Machine Learning model with parameters \(\theta_{1}\) for medical imaging and \(\theta_{2}\) for patient history. These models use attention mechanisms to weigh the importance of each data point, represented by attention scores \(w_{1k}\) for MRI scans and \(w_{2k}\) for patient history records. These scores guide the decision-making process in diagnosis and treatment._
## IV FRAMU Framework
In an era marked by an ever-increasing influx of data, the need for adaptive Machine Learning models that can efficiently unlearn outdated, private, or irrelevant information is paramount. The methodology proposed in this paper addresses this necessity by introducing two key technical contributions. First, we propose an adaptive unlearning algorithm that utilizes attention mechanisms to tailor the learning and unlearning processes in a single-modality setting, and then extend the process to multimodality. This innovative approach allows the model to adapt to dynamic changes in data distributions, as well as variations in participant characteristics such as demographic information, behavioral patterns, and data contribution frequencies, among others. Second, we put forth a novel design that employs the FedAvg mechanism [21] to personalize the unlearning process. This design ensures that the model is able to discard data that has become irrelevant, outdated, or potentially invasive from a privacy perspective, thus preserving the integrity of the learning model while adapting to new or changing data. The following sections will elaborate on these contributions, providing a detailed discussion of the proposed framework as depicted in Fig. 2.
The FRAMU framework adopts a federated learning architecture comprising Local Agents and a Central Server, each with distinct roles in model training, unlearning, and adaptation. It employs a reinforcement learning paradigm where each agent iteratively learns from its environment.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Symbol** & **Description** \\ \hline \(AG\) & Set of agents in the model \\ \hline \(ag\) & An individual agent in the set \(AG\) \\ \hline \(S_{i}\) & States observed by an agent \(ag\) \\ \hline \(A\) & Set of possible actions \\ \hline \(\pi_{i}(s,a)\) & Policy followed by the agent \\ \hline \(R_{i}(s,a)\) & Rewards for actions in different states \\ \hline \(\theta_{i}\) & Parameters of local models \\ \hline \(\theta_{g}\) & Parameters of global model \\ \hline \(w_{ij}\) & Attention score for a data point \(j\) \\ \hline \(M\) & Set of modalities in multimodality setting \\ \hline \(X_{k}\) & Data vectors for modality \(k\) \\ \hline \(\theta_{k}\) & Parameters for modality \(k\) \\ \hline \(w_{ik}\) & Attention scores within a modality \(k\) \\ \hline \(t\) & Time step \\ \hline \(s_{t}\) & State at time step \(t\) \\ \hline \(a_{t}\) & Action at time step \(t\) \\ \hline \(r_{t}\) & Reward at time step \(t\) \\ \hline \(R_{t}\) & Cumulative reward \\ \hline \(\pi(a_{t}|s_{t})\) & Policy function \\ \hline \(Q(s_{t},a_{t})\) & \(Q\)-function \\ \hline \(\gamma\) & Discount factor \\ \hline \(\alpha_{i}\) & Attention score for feature \(i\) \\ \hline \(\Delta\theta_{i}\) & Update sent by agent \(a_{i}\) \\ \hline \(f\) & Function for calculating attention scores \\ \hline \(w_{ij}\) & Global attention score for update from agent \(a_{i}\) \\ \hline \(K\) & Number of local agents \\ \hline \(\alpha_{\text{avg}}\) & Average attention score \\ \hline \(\delta\) & Predetermined threshold for attention score \\ \hline \(ag\in AG\) & A specific agent within the set of all agents \(AG\) \\ \hline \(m\) & Number of modalities \\ \hline \(x_{1},x_{2},...,x_{m}\) & Data vectors for each modality \\ \hline \(v_{i}\) & Feature vector for modality \(i\) \\ \hline \(\bar{w}_{j}\) & Averaged attention score across modalities for data point \(j\) \\ \hline \(\lambda\) & Mixing factor \\ \hline \(T\) & The total number of training rounds \\ \hline \(\alpha\) & Learning rate for \(Q\)-value function updates \\ \hline \(\eta\) & Scaling factor for attention score updates \\ \hline \(\beta\) & Mixing factor for combining global and local model parameters \\ \hline \(\varepsilon\) & Convergence threshold for global model parameters \\ \hline \(w_{k}\) & Local model parameters for agent \(k\) \\ \hline \(W\) & Global model parameters \\ \hline \(A_{i}\) & Attention score for data point \(i\) \\ \hline \(A_{ik}\) & Attention score for data point \(i\) within agent \(k\) \\ \hline \(N\) & Total number of data points across all agents \\ \hline \(n_{k}\) & Number of data points in agent \(k\) \\ \hline \end{tabular}
\end{table} TABLE I: Summary of Notations and Descriptions
This integration of federated learning and reinforcement learning is termed federated reinforcement learning. However, what sets FRAMU apart is the integration of attention mechanisms to weigh the relevance of each data point in learning and unlearning. The attention scores are then aggregated and processed at the Central Server to refine the global model.
* **Local Agents**: Responsible for collecting real-time data and performing local model updates. They observe states, take actions, and calculate rewards to update their Q-values and attention scores.
* **Central Server**: Aggregates local models and attention scores, filters out irrelevant data points, and updates the global model.
* **Attention Mechanism**: Dynamically calculates attention scores for each data point to inform the unlearning process.
* **FedAvg Mechanism**: Utilized for global model updates, ensuring that the global model represents a consensus across all agents.
Algorithm 1 outlines the implementation of the FRAMU framework. It initializes local and global model parameters and attention scores (lines 1-3). It then iterates through local agents to observe states, select actions, and update Q-values and attention scores (lines 4-11). Local updates are sent to the Central Server (line 12), where averaged attention scores are used to diminish irrelevant data points in the global model (lines 13-15). Both local and global models are updated and shared (lines 16-17), followed by fine-tuning local models based on the global model's performance and a convergence check (lines 18-22). The algorithm aims for adaptive decision-making in distributed networks.
```
Input: a set of Local Agents, a Central Server, \(T\), \(\theta\), \(\alpha\), \(\eta\), \(\gamma\), \(\beta\), \(\varepsilon\)
Output: \(W\): trained global model parameters for federated reinforcement learning
1  Initialize local model parameters \(w_{k}\) for each agent \(k\);
2  Initialize global model parameters \(W\) at the central server;
3  Initialize attention scores \(A_{ik}\) for each data point \(i\) in \(k\);
4  for \(t = 1\) to \(T\) do
5      foreach local agent \(k\) do
6          Observe current states \(s_{ij}\) for each modality \(j\);
7          Take action \(a_{t}\) based on policy derived from \(Q(s,a;w_{k})\);
8          Observe reward \(r_{t}\) and next states \(s^{\prime}_{ij}\) for each modality \(j\);
9          Compute TD error \(\delta = r_{t} + \gamma\max_{a}Q(s^{\prime}_{ij},a;w_{k}) - Q(s_{ij},a_{t};w_{k})\);
10         Update \(Q(s_{ij},a_{t};w_{k}) \leftarrow Q(s_{ij},a_{t};w_{k}) + \alpha\delta\);
11         Update attention scores \(A_{ikj} \leftarrow A_{ikj} + \eta|\delta|\);
12         Send local model parameters \(w_{k}\) and attention scores \(A_{ikj}\) to the Central Server;
13     foreach data point \(i\) do
14         if \(\frac{1}{K}\sum_{k}\frac{1}{m}\sum_{j}A_{ikj} < \theta\) then
15             Reduce influence of data point \(i\) in the global model;
16     Aggregate local model parameters to update global parameters: \(W \leftarrow \sum_{k}\left(\frac{n_{k}}{N}\right)w_{k}\);
17     Send updated global model parameters \(W\) to local agents;
18     foreach local agent \(k\) do
19         Fine-tune local model with global model:
20         \(w^{\prime}_{k} \leftarrow \beta W + (1-\beta)w_{k}\);
21     if \(|P(W_{t+1}) - P(W_{t})| < \varepsilon\) then
22         break;
23
24 return \(W\)
```
**Algorithm 1** FRAMU Framework
## V Applications of FRAMU
This section explores the practical applications of the FRAMU framework across different settings, single-modality and multimodality, and its continuous adaptation and learning.
### _FRAMU with Single Modality_
Central to FRAMU is an attention layer that functions as a specialized approximator, augmenting the learning capability of individual agents.
Fig. 2: An overview of the proposed FRAMU framework, illustrating its end-to-end adaptive algorithm that incorporates an attention mechanism. The figure is divided into multiple components, each corresponding to a specific phase in the federated learning process. Starting from the left, the diagram begins with data collection from diverse modalities. The framework applies an adaptive learning algorithm that not only updates the global model, but also incorporates an efficient unlearning mechanism for discarding outdated, private, or irrelevant data.
This attention layer distinguishes itself by assigning attention scores to individual data points during the function approximation process. These scores serve as indicators of each data point's relevance to the agent's local learning. The agent updates these scores as it interacts with its environment and receives either rewards or penalties, thereby continually refining its model. Specifically, an agent operates in discrete time steps, observing the current state \(s_{t}\), taking an action \(a_{t}\), and receiving a reward \(r_{t}\) at each time step \(t\). The ultimate goal is to determine an optimal policy \(\pi(a_{t}|s_{t})\) that maximizes the accumulated reward \(R_{t}\). The \(Q\)-function, which quantifies expected accumulated rewards with a discount factor \(\gamma\), is given by Equation 1.
\[Q(s_{t},a_{t})=\mathbb{E}[R_{t}\mid s_{t},a_{t}]=r_{t}+\gamma\mathbb{E}[Q(s_{t +1},a_{t+1})\mid s_{t},a_{t}] \tag{1}\]
The attention layer further characterizes each state \(s_{t}\) by its features \([x_{1},x_{2},...,x_{n}]\), and assigns attention scores \(\alpha_{i}\) as per:
\[\alpha_{i}=\text{Attention}(x_{i},\text{context}) \tag{2}\]
Here, the context may include additional data such as previous states or actions. The _Q_-function is then approximated using a weighted sum of these features:
\[Q(s_{t},a_{t})\approx\sum(\alpha_{i}\cdot x_{i}) \tag{3}\]
After completing their respective learning cycles, agents forward their model updates \(\theta\) and attention scores \(\alpha\) to the Central Server as a tuple \((\theta,\alpha)\).
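A minimal sketch of such an attention layer (Equations 2-3), in which per-feature attention scores weight each state feature's contribution to the Q-value estimate, is given below. The layer sizes and the softmax over feature scores are illustrative choices, not prescribed by the framework.

```python
import torch
import torch.nn as nn

class AttentionQNetwork(nn.Module):
    """Approximates Q(s, a) as a sum of attention-weighted state features (Eqs. 2-3)."""

    def __init__(self, n_features: int, n_actions: int, ctx_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(n_features + ctx_dim, n_features)   # attention over features
        self.q_head = nn.Linear(n_features, n_actions)

    def forward(self, state: torch.Tensor, context: torch.Tensor):
        alpha = torch.softmax(self.score(torch.cat([state, context], dim=-1)), dim=-1)
        weighted = alpha * state                                    # alpha_i * x_i
        return self.q_head(weighted), alpha                         # Q-values and attention scores
```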
#### V-A1 Local and Global Attention Score Estimation
FRAMU estimates attention scores both locally and globally. On the local front, each agent employs its attention mechanism to compute scores for individual data points based on their relevance to the task at hand. For an agent \(a_{i}\) with local model parameters \(\theta_{i}\), the attention score \(w_{ij}\) for data point \(j\) is given by:
\[w_{ij}=f(s_{j},\theta_{i}) \tag{4}\]
At the global level, these scores assist the Central Server in prioritizing updates or pinpointing data points for global unlearning. For global parameters \(\theta_{g}\), the global attention score derived from the updates of agent \(a_{i}\) is:
\[w_{gi}=f(\Delta\theta_{i},\theta_{g}) \tag{5}\]
In this equation, \(\Delta\theta_{i}\) is the model update from agent \(a_{i}\), and the function \(f\) calculates attention scores while taking into account the aggregated local scores and other global contextual cues.
#### V-A2 Global Model Refinement and Unlearning
Model updates from local agents are aggregated at the Central Server using FedAvg [43]. The attention scores are instrumental in the global unlearning process, with the average attention score calculated as:
\[\alpha_{\text{avg}}=\frac{1}{K}\sum\alpha_{k} \tag{6}\]
When \(\alpha_{\text{avg}}\) falls below a predetermined threshold \(\delta\), the server adjusts the contribution of the respective feature in the global model as given by Equation 7:
\[\theta_{\text{global}}:=g(\theta_{\text{global}},\alpha_{\text{avg}}) \tag{7}\]
Once refined, this global model is sent back to the local agents. The enhanced model shows improved adaptability and robustness to changes in data distributions due to the integration of aggregation and unlearning mechanisms. Consequently, the local agents are better positioned to excel within their particular operational environments. These revised global model parameters, denoted as \(\theta_{\text{global}}\), are then dispatched from the Central Server to the local agents, where \(\theta_{k}=\theta_{\text{global}}\).
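One simple realization of this aggregation and unlearning step is sketched below; the parameter averaging follows FedAvg, while damping low-attention features is only one possible choice for the function \(g\) in Equation 7, which the framework leaves unspecified.

```python
import numpy as np

def aggregate_and_unlearn(local_params, local_scores, delta=0.1):
    """FedAvg over local parameters, then damp features whose mean attention < delta.

    local_params: list of K arrays, one parameter vector per agent.
    local_scores: list of K arrays of per-feature attention scores alpha_k.
    """
    theta_global = np.mean(local_params, axis=0)        # FedAvg aggregation
    alpha_avg = np.mean(local_scores, axis=0)           # Eq. 6: average attention per feature
    stale = alpha_avg < delta
    theta_global[stale] *= alpha_avg[stale] / delta     # one possible g(theta_global, alpha_avg)
    return theta_global, alpha_avg
```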
### _FRAMU with Multimodality_
The multimodal FRAMU Framework extends its capabilities to seamlessly incorporate various data types, including images, text, audio, and sensor readings. This integration not only enriches decision-making but also optimizes the performance of local agents. By fine-tuning their models to multiple data types, agents are better equipped to operate in complex environments.
#### V-B1 Modality-Specific Attention Mechanisms
To effectively manage data from diverse sources, the framework employs specialized attention mechanisms for each modality. These mechanisms generate unique attention scores for data points within a given modality, aiding in both learning and unlearning processes. By doing so, the framework allows local agents to focus on the most relevant and informative aspects of each modality.
The attention scores for a specific modality \(j\) for an agent ag \(\in\) AG can be mathematically represented as:
\[w_{ij}=f_{j}(s_{ij},\theta_{i}), \tag{8}\]
Here, \(s_{ij}\) signifies a data point from modality \(j\) related to agent ag \(\in\) AG, while \(\theta_{i}\) represents that agent's local model parameters. The function \(f_{j}\) considers modality-specific attributes and context to compute these attention scores.
For a feature vector \(v_{i}\) derived from modality \(j\) within agent ag \(\in\) AG, feature-level fusion can be represented as:
\[v_{i}=[x_{i1},x_{i2},\dots,x_{im}] \tag{9}\]
#### V-B2 Unlearning and Adaptation across Modalities
In a multimodal setup, attention scores from all modalities collectively inform the unlearning process. If a data point consistently receives low attention scores across different modalities, it indicates that the point is either irrelevant or outdated. The Central Server uses this multimodal insight to refine the global model.
The average attention score across all modalities for a specific data point is:
\[\bar{w}_{j}=\frac{1}{m}\sum_{i=1}^{m}w_{ij} \tag{10}\]
If \(\bar{w}_{j}\) falls below a predefined threshold, the Central Server de-emphasizes or removes that data point from the global model, ensuring that only current and relevant data contribute to decision-making.
During the adaptation phase, local agents utilize the updated global model to enhance their local models. The interplay between global and local parameters is regulated by a mixing factor, which allows local agents to leverage shared insights while preserving modality-specific skills. This relationship can be denoted by:
\[\theta_{i}^{\text{new}}=\lambda\theta_{\text{global}}+(1-\lambda)\theta_{i}^{ \text{old}} \tag{11}\]
Here, \(\theta_{i}^{\text{new}}\) represents the updated local model parameters, \(\theta_{i}^{\text{old}}\) is the previous local parameters, and \(\lambda\) serves as the mixing factor. Through this, the multimodal FRAMU framework maintains an up-to-date and relevant global model, while enabling local agents to make better decisions across a range of data types.
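A sketch of the cross-modal unlearning test (Equation 10) and the local adaptation step (Equation 11) follows; the threshold value and the dictionary layout of the scores are illustrative assumptions.

```python
import numpy as np

def cross_modal_stale_points(scores_by_modality, threshold=0.1):
    """Flag data points whose attention score, averaged over the m modalities, is low (Eq. 10)."""
    stacked = np.stack(list(scores_by_modality.values()))   # shape (m, n_points)
    w_bar = stacked.mean(axis=0)
    return np.where(w_bar < threshold)[0]

def adapt_local_model(theta_local, theta_global, lam=0.5):
    """Blend the refined global model into a local model (Eq. 11)."""
    return lam * theta_global + (1.0 - lam) * theta_local
```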
### _Continuous Adaptation and Learning in the FRAMU Framework_
Continuous adaptation and learning are critical in the FRAMU framework, enabling it to thrive in dynamic and changing environments. These processes create an iterative exchange of knowledge between local agents and a Central Server, which leads to consistent model refinement on both local and global scales.
#### V-C1 Local-Level Adaptation
Local agents need the ability to adapt in real time to changes in their operational environments. Within reinforcement learning paradigms, agents continually update their policies in response to actions taken and rewards observed. Furthermore, attention scores allocated to data points or features can vary dynamically based on new data or shifts in relevance. This adaptability ensures that the models of individual local agents remain current. Let \(s_{t}\) denote the state of the environment at time \(t\), and \(a_{t}\) represent the action taken by the agent. After receiving a reward \(r_{t}\) and transitioning to a new state \(s_{t+1}\), the agent aims to maximize the expected cumulative reward. The Q-value function \(Q(s,a)\) serves as a proxy for this cumulative reward, and it is updated using temporal-difference learning algorithms as follows:
\[Q(s_{t},a_{t})\gets Q(s_{t},a_{t})+\alpha\left[r_{t}+\gamma\max_{a}Q(s_{t+ 1},a)-Q(s_{t},a_{t})\right] \tag{12}\]
Here, \(\alpha\) is the learning rate, and \(\gamma\) is the discount factor.
Attention scores, denoted by \(A_{i}\) for data point \(i\), are updated based on the temporal-difference error \(\delta\):
\[A_{i}\gets A_{i}+\eta|\delta|, \tag{13}\]
where \(\eta\) is a scaling factor, and \(\delta=r_{t}+\gamma\max_{a}Q(s_{t+1},a)-Q(s_{t},a_{t})\).
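The two local update rules can be written compactly as follows; the tabular Q-representation, the state and action encodings, and the hyperparameter values are illustrative assumptions (the actual experiments rely on reinforcement-learning libraries, as noted in the experimental setup).

```python
import numpy as np

def td_step(Q, s, a, r, s_next, A, i, alpha=0.1, gamma=0.95, eta=0.01):
    """One tabular Q-learning step (Eq. 12) together with the attention-score
    update of Eq. (13) for the experience indexed by i."""
    delta = r + gamma * np.max(Q[s_next]) - Q[s, a]   # temporal-difference error
    Q[s, a] += alpha * delta                          # Eq. (12)
    A[i] += eta * abs(delta)                          # Eq. (13)
    return Q, A

# Toy usage: 4 states, 2 actions, attention scores for 10 stored experiences.
Q = np.zeros((4, 2))
A = np.zeros(10)
Q, A = td_step(Q, s=0, a=1, r=1.0, s_next=2, A=A, i=3)
```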
#### V-C2 Global Model Aggregation and Adaptation
As local agents continuously update their models, these adaptations are communicated to the Central Server. It aggregates this information to refine the global model while also tracking the attention scores from local agents. If these scores reveal diminishing importance for certain data points, the server may initiate global unlearning. This ensures the global model remains current and avoids obsolescence. Local agents send their updated model parameters, \(w_{k}\) for agent \(k\), and attention scores \(A_{ik}\) to the Central Server. The server aggregates these to update the global model parameters \(W\) as follows:
\[W\leftarrow\frac{1}{K}\sum_{k}w_{k}, \tag{14}\]
where \(K\) represents the total number of local agents.
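The aggregation step of Eq. (14) reduces to an average over the agents' parameter vectors; the sketch below assumes each agent's parameters are flattened into a NumPy array of identical shape.

```python
import numpy as np

def aggregate(local_params):
    """Eq. (14): the Central Server averages the K local parameter vectors."""
    return np.mean(np.stack(local_params), axis=0)

W = aggregate([np.array([1.0, 2.0]), np.array([3.0, 4.0])])  # -> [2. 3.]
```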
#### V-C3 Feedback Mechanisms
After the global model is updated, it is disseminated back to local agents through a feedback loop. This cyclic interaction allows local agents to either initialize or further refine their models based on the global one. This is particularly beneficial when local agents confront new or unfamiliar data points that other agents have encountered. Through this mechanism, the global model acts as a repository of shared knowledge, enhancing the decision-making capabilities of all local agents. The global model parameters \(W\) are sent to local agents, who then adjust their local models using a mixing factor \(\beta\) as follows:
\[w_{k}^{\prime}\leftarrow\beta W+(1-\beta)w_{k}, \tag{15}\]
where \(\beta\) ranges from 0 to 1 and regulates the influence of the global model on local models.
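The corresponding agent-side feedback step of Eq. (15) can be sketched as follows; the value of \(\beta\) shown is an arbitrary placeholder. The function works elementwise on NumPy arrays or plain floats.

```python
def feedback_update(w_global, w_local, beta=0.3):
    """Eq. (15): blend the broadcast global parameters into the local model;
    beta in [0, 1] controls the influence of the global model."""
    return beta * w_global + (1.0 - beta) * w_local
```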
## VI Experimental Setup and Results Analysis
In order to evaluate the performance of FRAMU, we conducted extensive experiments using real-world datasets to not only validate the efficiency and effectiveness of our proposed approach but also provide an empirical basis for the utility of FRAMU in real-world applications. The experimental
Fig. 3: Experimental Setup: This diagram showcases the architecture of the FRAMU framework, detailing the interaction between local and global models within a federated learning environment. The setup incorporates attention-based mechanisms for Machine Unlearning.
setup covers the following components: datasets, baseline models, evaluation metrics, and FRAMU configurations, as shown in Fig. 3. In our experiments evaluating FRAMU's performance, several key thresholds were set to guide the unlearning process. Specifically, we established parameters such as outdated_threshold and irrelevant_threshold, which were fine-tuned based on domain expertise and sensitivity analysis. The outdated_threshold parameter quantifies how much time should elapse before data is considered 'outdated' and becomes eligible for unlearning. The irrelevant_threshold, on the other hand, defines a criterion, usually a measure of statistical insignificance or information gain, to determine when data should be categorized as 'irrelevant' and removed from the model. Sensitivity analyses were conducted to assess the impact of these thresholds on the model's accuracy, ensuring they were set at levels that optimize performance without compromising data integrity. Additionally, a privacy_epsilon parameter was introduced to manage the trade-off between data utility and privacy preservation. This parameter was specifically designed to keep the model in line with privacy regulations such as GDPR. By setting a low privacy_epsilon, we aimed for high levels of privacy, while a higher value allowed for more flexibility in data usage at the cost of reduced privacy. Through these specifically configured thresholds, we were able to objectively measure and validate the effectiveness and efficiency of FRAMU in various real-world applications.
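The snippet below shows one possible way to organize the thresholds described above. The parameter names are taken from the text, but the numeric values and the helper function are hypothetical placeholders, not the settings used in the experiments.

```python
# Illustrative configuration of the thresholds named above; values are placeholders.
config = {
    "outdated_threshold": 30 * 24 * 3600,   # age in seconds before data counts as outdated
    "irrelevant_threshold": 0.05,           # minimum relevance / information-gain score
    "privacy_epsilon": 1.0,                 # lower values -> stronger privacy guarantees
}

def should_unlearn(age_seconds, relevance_score, cfg=config):
    """A data point becomes a candidate for unlearning if it is outdated or irrelevant."""
    return (age_seconds > cfg["outdated_threshold"]
            or relevance_score < cfg["irrelevant_threshold"])
```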
### _Datasets_
In this study, publicly available datasets that encompass various modalities and address specific challenges related to outdated, private, and irrelevant data are adopted. Tab. II provides detailed information about each dataset, including the data modality, number of instances, attributes, target variables, and specific characteristics pertinent to our study. In order to evaluate FRAMU, we conducted a comprehensive comparison of its performance against several contemporary baseline models.
### _Baseline Models_
In evaluating the performance and robustness of the FRAMU framework, several baseline models have been selected for comparison. These models were chosen for their relevance to the challenges addressed by FRAMU, including adaptive learning, Machine Unlearning, and privacy preservation in both single-modality and multimodality settings.
* **FedLU [50]**: This model is a federated learning approach that incorporates knowledge graph embedding and mutual knowledge distillation.
* **Zero-shot MU [51]**: This is another baseline model designed specifically for Machine Unlearning. It employs an innovative approach of using error-minimizing-maximizing noise and gated knowledge transfer to optimize the unlearning process.
* **SISA Training [19]**: This framework strategically limits data points for optimized unlearning. It is designed to restrict the data points involved in training to create a more streamlined and efficient unlearning process.
* **MMoE [52]**: This Multi-gate Mixture-of-Experts (MMoE) model is optimized for multimodal data using ensemble learning. The model employs a mixture of expert networks, each specialized in handling different types of data modalities, and combines their outputs.
* **CleanCLIP [53]**: This is a fine-tuning framework designed to weaken spurious associations resulting from backdoor attacks. It counteracts spurious correlations in data, making it resilient to backdoor attacks through the use of fine-tuning techniques.
* **Privacy-Enhanced Emotion Recognition (PEER) [54]**: This model employs adversarial learning for privacy-preserving emotion recognition. The framework uses adversarial learning to ensure that the emotional state recognition does not compromise the privacy of the individuals involved.
### _Evaluation Metrics_
The FRAMU framework is evaluated using several important metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE), Reconstruction Error (RE), and Activation Distance (AD). A lower MSE or MAE score shows that the unlearning process is closely aligned with what was expected, indicating a high quality of unlearning. The RE measures how well the model can rebuild data that it has unlearned, with a lower score being better. AD measures the average distance between the predictions of the model before and after unlearning, using what is known as L2-distance, on a specific set of forgotten data. These metrics together give a well-rounded evaluation of how well the unlearning process is working.
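For reference, the four metrics can be implemented as below, under the assumption that RE is computed as a mean squared reconstruction error and AD as the mean L2-distance between pre- and post-unlearning predictions on the forgotten set; normalization details beyond what the text states are assumptions.

```python
import numpy as np

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def reconstruction_error(x, x_rebuilt):
    """RE: how well the model can rebuild data it has unlearned (lower is better)."""
    return float(np.mean((x - x_rebuilt) ** 2))

def activation_distance(pred_before, pred_after):
    """AD: mean L2-distance between predictions before and after unlearning."""
    return float(np.mean(np.linalg.norm(pred_before - pred_after, axis=-1)))
```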
All the experiments were run using the Python programming language (version 3.7.6) together with the TensorFlow, Keras, OpenAI Gym, and stable_baselines3 packages.
### _FRAMU Unlearning Results in Single Modality Context_
To evaluate the efficacy of FRAMU in unlearning outdated, private, and irrelevant data, we analyze the results obtained
from the experiments. The performance of FRAMU is compared against the performances of the baseline models: FedLU, Zero-shot MU, and SISA training. Notably, the METR-LA dataset [45] is excluded from the privacy data experiments due to the absence of private data. For a comprehensive comparison, the performance metrics of FRAMU in unlearning outdated, private, and irrelevant data are presented alongside the results of the baseline models in Tab. III. The associated p-values serve as indicators of the statistical significance of FRAMU's performance improvements.
#### VI-D1 Outdated Data
Unlearning outdated data is crucial in maintaining the accuracy and relevancy of trained models. Outdated data may introduce noise, biases, or patterns that no longer hold true in the current context. By selectively unlearning outdated data, FRAMU aims to adapt the model to the most up-to-date data distribution. When unlearning outdated data, FRAMU consistently achieves lower MSE and MAE compared to the baseline models across all datasets. This improvement is attributed to FRAMU's ability to adapt the model to the current data distribution by selectively unlearning outdated data, thereby ensuring that the model is trained on the most relevant and up-to-date information. The low p-values associated with each comparison as shown in Tab. III highlight the statistical significance of FRAMU's superiority in unlearning outdated data, clearly demonstrating that FRAMU significantly outperforms other models in this regard.
#### VI-D2 Private Data
Protecting the privacy of sensitive information is of utmost importance in many real-world applications. Unintentional retention of private data in the model can lead to privacy breaches and legal concerns. FRAMU incorporates privacy-preserving techniques during the unlearning process to ensure that sensitive information from private data is not retained. The METR-LA dataset was not considered for evaluating private data unlearning, as it doesn't contain privacy-sensitive data. In the case of private data, FRAMU consistently demonstrates superior performance in terms of both MSE and MAE. For example, in the AMPds2 dataset, FRAMU achieved an MSE of 0.038 and an MAE of 4.670, outperforming the baseline models. This effectiveness can be traced back to the federated reinforcement learning approach adopted by FRAMU, enabling collaborative learning across multiple parties while respecting data privacy constraints. The statistical significance of this performance improvement is further supported by the associated p-values, which unequivocally confirm the substantial and meaningful nature of FRAMU's enhancements in unlearning private data.
#### VI-D3 Irrelevant Data
Unlearning irrelevant data is essential in reducing noise and interference caused by data points that do not contribute to the underlying data distribution. Irrelevant data can introduce unnecessary patterns or outliers that negatively affect the model's understanding and prediction accuracy. By unlearning irrelevant data, FRAMU focuses on the most informative and relevant data instances, resulting in improved model performance. FRAMU exhibits remarkable performance in unlearning irrelevant data, consistently achieving the lowest MSE and MAE values compared to the baseline models. In the AMPds2 dataset, FRAMU achieved an MSE of 0.033 and an MAE of 5.600, outperforming the other models. The results are backed by low p-values, indicating its statistical significance over the baseline models, and underscore FRAMU's substantial advantage in unlearning irrelevant data.
A visual comparison of the differences in MSE and MAE between original and unlearned data for different datasets and baseline models is shown in Fig. 4. FRAMU consistently shows the highest differences, suggesting that it may be the
Fig. 4: Comparative Analysis of MSE and MAE Differences between Original and Unlearned Single Modality Data
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Unlearning**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c|}{**FedLU [50]**} & \multicolumn{3}{c|}{**Zero-shot [51]**} & \multicolumn{3}{c|}{**SISA [19]**} & \multicolumn{3}{c}{**FRAMU (Ours)**} \\ \cline{6-13} & & & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** \\ \hline \multirow{4}{*}{**Outdated**} & \multirow{4}{*}{**Original**} & AMFds2 & 0.063 & 6.740 & 0.024 & 0.061 & 6.890 & 0.031 & 0.059 & 6.760 & 0.041 & **0.046** & **5.570** \\ \cline{3-13} & & METR-LA & 0.079 & 7.140 & 0.016 & 0.082 & 7.210 & 0.038 & 0.078 & 7.950 & 0.029 & **0.065** & **5.930** \\ \cline{3-13} & & MIMIC-III & 0.099 & 12.800 & 0.031 & 0.102 & 12.930 & 0.045 & 0.097 & 12.680 & 0.032 & **0.083** & **10.650** \\ \cline{3-13} & \multirow{2}{*}{**Unlearned**} & AMFds2 & 0.060 & 6.850 & 0.015 & 0.035 & 0.860 & 0.029 & 0.056 & 6.690 & 0.036 & **0.038** & **4.670** \\ \cline{3-13} & & MIFR-LA & 0.075 & 7.020 & 0.029 & 0.077 & 7.100 & 0.025 & 0.072 & 0.960 & 0.032 & **0.052** & **4.910** \\ \cline{3-13} & & MIMIC-III & 0.095 & 12.650 & 0.023 & 0.098 & 12.820 & 0.041 & 0.094 & 12.820 & 0.017 & **0.069** & **5.900** \\ \hline \multirow{4}{*}{**Private**} & \multirow{4}{*}{**Original**} & AMFds2 & 0.052 & 6.780 & 0.014 & 0.054 & 6.930 & 0.037 & 0.053 & 6.810 & 0.041 & **0.041** & **0.540** \\ \cline{3-13} & & MIMIC-III & 0.078 & 12.870 & 0.035 & 0.080 & 15.010 & 0.043 & 0.079 & 12.760 & 0.045 & **0.064** & **10.600** \\ \cline{3-13} & \multirow{2}{*}{**Unlearned**} & AMFds2 & 0.049 & 6.670 & 0.011 & 0.052 & 6.910 & 0.035 & 0.051 & 6.740 & 0.015 & **0.033** & **4.590** \\ \cline{3-13} & & MIMIC-III & 0.075 & 12.720 & 0.031 & 0.077 & 12.900 & 0.038 & 0.076 & 12.650 & 0.016 & **0.053** & **8.560** \\ \hline \multirow{4}{*}{**Irrelevant**} & \multirow{4}{*}{**Original**} & AMFds2 & 0.047 & 6.700 & 0.035 & 0.050 & 6.850 & 0.044 & 0.048 & 6.730 & 0.031 & **0.037** & **5.440** \\ \cline{3-13} & & METR-LA & 0.054 & 7.100 & 0.027 & 0.056 & 7.170 & 0.041 & 0.055 & 7.050 & 0.025 & **0.043** & **5.830** \\ \cline{1-1} \cline{3-13} & & MIMIC-III & 0.070 & 12.730 & 0.038 & 0.072 & 12.870 & 0.031 & 0.071 & 12.620 & 0.039 & **0.057** & **10.410** \\ \cline{1-1} \cline{3-13} & & AMFds2 & 0.045 & 6.590 & 0.011 & 0.047 & 6.830 & 0.036 & 0.046 & 0.660 & 0.029 & **0.030** & **4.510** \\ \cline{1-1} \cline{3-13} & & METR-LA & 0.052 & 6.980 & 0.014 & 0.054 & 7.070 & 0.019 & 0.053 & 6.930 & 0.022 & **0.035** & **4.750** \\ \cline{1-1} \cline{3-13} & & MIMIC-III & 0.068 & 12.880 & 0.029 & 0.070 & 12.760 & 0.024 & 0.069 & 12.510 & 0.027 & **0.047** & **8.690** \\ \hline \end{tabular}
\end{table} TABLE III: FRAMU - Evaluation Results in Single Modality Context
most affected by the unlearning process. By comparison, the other models showed varying patterns of differences across the datasets.
FRAMU demonstrates superior performance relative to its counterparts across all datasets in RE and AD metrics, as shown in Fig. 5(a). To elucidate, in the AMPds2 dataset, FRAMU registered an RE and AD of 0.024 and 0.57, respectively, compared to FedLU's figures of 0.03 and 0.66. Similarly, within the METR-LA dataset, FRAMU attained average values of 0.033 and 0.59 for RE and AD, as opposed to FedLU's 0.038 and 0.7. In the case of the MIMIC-III dataset, FRAMU again excelled, recording 0.044 and 1.16 against FedLU's 0.049 and 1.263 for RE and AD, respectively.
### _FRAMU Unlearning Results in Multimodality Context_
In the multimodality experiment, the FRAMU framework handled multiple modalities of data, including images, text, and sensor data. The purpose of this experiment was to assess the performance of FRAMU in the context of unlearning outdated, private, and irrelevant data within a multimodal learning setting. To conduct the experiment, we utilized well-known benchmark datasets: MIMIC-CXR [48], NYPD Complaint Data [47], and SHED [49]. The evaluation primarily focused on measuring the reduction in error and performance improvement achieved by FRAMU compared to baseline models when unlearning outdated, private, and irrelevant data. The p-values associated with these comparisons are pivotal in highlighting the statistical significance of FRAMU's advancements.
#### VI-E1 Outdated Data
FRAMU achieved lower MSE, MAE, RE, and AD values compared to the baseline models across all datasets. For example, in the NYPD Complaint Data [47] dataset, FRAMU achieved an MSE of 0.047 and an MAE of 5.037, outperforming MMoE, CleanCLIP, and Privacy-Enhanced Emotion Recognition. Similar trends can be observed in the MIMIC-CXR [48] and SHED [49] datasets, where FRAMU consistently achieved better performance. FRAMU excelled in capturing temporal changes and patterns within multimodal data. By unlearning outdated information and emphasizing the most recent and relevant features, FRAMU effectively reduced the impact of outdated patterns on predictive performance. This allowed FRAMU to outperform the baseline models, which do not have mechanisms specifically designed for handling outdated data. This achievement is supported by the associated p-values, underlining the statistical significance of FRAMU's performance improvements. It affirms FRAMU's substantial advantage in unlearning outdated data over the baseline models.
#### VI-E2 Private Data
FRAMU continued to outperform the baseline models in terms of MSE and MAE values. In the NYPD Complaint Data [47] dataset, FRAMU achieved an MSE of 0.043 and an MAE of 5.067, outperforming the other models. This trend is also observed in the MIMIC-CXR [48] and SHED [49] datasets, where FRAMU consistently achieved lower values. FRAMU's attention-based Machine Unlearning framework played a crucial role in preserving data privacy. By selectively attending to shared features across modalities while ignoring private information, FRAMU achieved a balance between privacy protection and predictive accuracy. This enabled FRAMU to achieve superior performance compared to the baseline models, which may struggle to preserve privacy while maintaining predictive power. The p-values underscore that
Fig. 5: Comparative analysis of FRAMU’s performance against baseline models in RE and AD metrics. The plots show how different methods fare in unlearning outdated, private, and irrelevant data.
Fig. 6: Comparative Analysis of MSE and MAE Differences between Original and Unlearned multimodality Data
FRAMU significantly outperforms other models in unlearning private data.
#### VI-E3 Irrelevant Data
FRAMU again demonstrated superior performance. In the NYPD Complaint Data [47] dataset, FRAMU achieved an MSE of 0.038 and an MAE of 5.012, surpassing the baseline models. Similar trends can be observed in the MIMIC-CXR [48] and SHED [49] datasets, where FRAMU consistently achieved lower values. FRAMU's attention mechanism allowed it to focus on the most relevant features and modalities for prediction while disregarding irrelevant or noisy information. This ability to selectively attend to informative features improved the overall predictive accuracy of FRAMU, leading to its statistically significant performance gains over the baseline models. The baseline models, lacking attention mechanisms, are less effective in filtering out irrelevant information, which may negatively impact their predictive performance. These p-values reinforce the fact that FRAMU significantly excels in unlearning irrelevant data.
The differences in MSE and MAE between original and unlearned data for different datasets and baseline models are presented in Fig. 6. FRAMU consistently showed the highest differences in both MSE and MAE, suggesting that it may be the most affected by the unlearning process. This could be interpreted as FRAMU being more responsive to unlearning, which is possibly aligned with how it is designed to handle outdated, private, and irrelevant data. The other models showed relatively similar patterns of differences, with slight variations across the datasets.
We evaluated the FRAMU performance in terms of RE and AD metrics, and compared with the baseline models as shown in Fig. 5b. Across three distinct unlearning scenarios and datasets (NYPD, MIMIC-CXR, and SHED), FRAMU consistently outperformed its competitors. It achieved lower average RE and AD scores, indicating higher efficiency and applicability in Machine Unlearning tasks. This robust performance across diverse conditions establishes FRAMU as a leading option in this emerging research field.
### _Convergence Analysis_
In this study, we proposed an efficient unlearning algorithm within FRAMU that exhibits fast convergence. The algorithm achieved optimal solutions within a limited number of communication rounds, thereby substantiating FRAMU's efficiency and scalability. The convergence analysis of FRAMU, shown in Fig. 7, evaluates its performance over multiple communication rounds using MSE and MAE metrics across three types of data: outdated, private, and irrelevant. The analysis reveals a consistent decline in both MSE and MAE values for all data categories as the number of communication rounds increases, confirming FRAMU's ability to refine its models and improve accuracy over time. Specifically, MSE values for outdated, private, and irrelevant data declined from initial to final values of 0.053 to 0.039, 0.044 to 0.030, and 0.039 to 0.025, respectively. Similarly, MAE values improved, with outdated, private, and irrelevant data showing reductions from 7.201 to 4.845, 7.17 to 4.409, and 6.75 to 4.210, respectively.
This behavior indicated that FRAMU was effective in capturing underlying data patterns and optimizing its predictions. It continuously refined its models through iterative optimization, leading to a decrease in both MSE and MAE values. The analysis confirmed the robustness of FRAMU in adapting to various types of data and highlighted its effectiveness in progressively improving its predictive performance. Overall,
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Unlearning**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c|}{**MMoE [52]**} & \multicolumn{3}{c|}{**CleanCLIP [53]**} & \multicolumn{3}{c|}{**PEER [54]**} & \multicolumn{3}{c}{**FRAMU (ours)**} \\ \cline{6-13} & & & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** & **p-value** & **MSE** & **MAE** \\ \hline \multirow{4}{*}{**Outdated**} & \multirow{2}{*}{**Original**} & NYPD & 0.064 & 7.28 & 0.024 & 0.062 & 6.95 & 0.031 & 0.06 & 6.4 & 0.041 & **0.055** & **5.77** \\ \cline{3-13} & & MIMIC-CXR & 0.075 & 8.71 & 0.016 & 0.079 & 8.31 & 0.038 & 0.074 & 7.67 & 0.029 & **0.071** & **6.9** \\ \cline{2-13} & & SHED & 0.095 & 11.27 & 0.031 & 0.098 & 10.76 & 0.045 & 0.093 & 9.92 & 0.032 & **0.089** & **8.93** \\ \cline{2-13} & \multirow{2}{*}{**Data**} & NYPD & 0.061 & 7.13 & 0.015 & 0.059 & 6.78 & 0.029 & 0.058 & 5.71 & 0.036 & **0.042** & **4.54** \\ \cline{2-13} & & MIMIC-CXR & 0.071 & 8.55 & 0.029 & 0.075 & 8.12 & 0.025 & 0.077 & 6.84 & 0.032 & 0.052 & **5.45** \\ \cline{2-13} & & SHED & 0.091 & 11.1 & 0.023 & 0.094 & 10.54 & 0.041 & 0.09 & 9.76 & 0.017 & **0.067** & **7.07** \\ \hline \multirow{4}{*}{**Private**} & \multirow{2}{*}{**Original**} & NYPD & 0.053 & 7.33 & 0.014 & 0.055 & 7 & 0.037 & 0.054 & 6.45 & 0.041 & **0.051** & **5.81** \\ \cline{2-13} & & MIMIC-CXR & 0.063 & 8.76 & 0.035 & 0.065 & 8.36 & 0.043 & 0.064 & 7.71 & 0.045 & **0.062** & **6.94** \\ \cline{2-13} & & SHED & 0.078 & 11.34 & 0.035 & 0.08 & 10.82 & 0.044 & 0.079 & 9.98 & 0.031 & **0.077** & **8.98** \\ \cline{2-13} & & NYPD & 0.051 & 7.17 & 0.011 & 0.053 & 6.82 & 0.035 & 0.052 & 6.31 & 0.015 & **0.039** & **4.57** \\ \cline{2-13} & & MIMIC-CXR & 0.06 & 8.6 & 0.031 & 0.062 & 8.17 & 0.038 & 0.061 & 7.56 & 0.016 & **0.046** & **5.45** \\ \cline{2-13} & & SHED & 0.075 & 11.17 & 0.011 & 0.077 & 10.61 & 0.036 & 0.076 & 9.81 & 0.029 & **0.058** & **7.11** \\ \hline \multirow{4}{*}{**Irrelevant**} & \multirow{2}{*}{**Original**} & NYPD & 0.047 & 7.25 & 0.027 & 0.05 & 6.92 & 0.041 & 0.048 & 6.38 & 0.025 & **0.046** & **5.74** \\ \cline{2-13} & & MIMIC-CXR & 0.054 & 8.66 & 0.038 & 0.056 & 8.27 & 0.031 & 0.055 & 7.63 & 0.039 & **0.053** & **6.87** \\ \cline{2-13} & & SHED & 0.07 & 11.21 & 0.045 & 0.072 & 10.7 & 0.032 & 0.071 & 9.87 & 0.042 & **0.069** & **8.38** \\ \cline{2-13} & & NYPD & 0.045 & 7.1 & 0.014 & 0.047 & 6.74 & 0.019 & 0.046 & 6.24 & 0.022 & **0.034** & **4.52** \\ \cline{2-13} & & MIMIC-CXR & 0.052 & 8.5 & 0.029 & 0.054 & 8.08 & 0.024 & 0.053 & 7.48 & 0.027 & **0.044** & **5.42** \\ \cline{2-13} & & SHED & 0.068 & 11.04 & 0.025 & 0.07 & 10.49 & 0.022 & 0.069 & 9.71 & 0.021 & **0.052** & **7.84** \\ \hline \end{tabular}
\end{table} TABLE IV: FRAMU - Evaluation Results in Multimodality Context
Fig. 7: Convergence Analysis
FRAMU's strong convergence characteristics across different data categories have demonstrated its versatility and capability in minimizing errors, making it a robust choice for various federated learning applications.
### _Optimization_
The performance of the FRAMU framework is evaluated through MSE and MAE metrics across various communication rounds and thresholds, as presented in Fig. 8 and Fig. 9. Fig. 8 investigates FRAMU's efficiency with outdated data across time durations ranging from 24 hours to a year. Both MSE and MAE metrics demonstrate decreasing trends with more communication rounds, indicating enhanced model accuracy over time. The algorithm is less effective in capturing short-term patterns, as evidenced by the higher MSE and MAE values for the 24-hour duration.
Fig. 9 shifts the focus to FRAMU's performance on private data, revealing that the algorithm not only maintains but even improves its accuracy compared to outdated data scenarios. Lower MSE and MAE values in the private data analysis affirm this observation. Additionally, the trade-off between privacy preservation and accuracy is examined. Although increasing privacy guarantees (lower \(\epsilon\) values) generally leads to higher MSE and MAE, FRAMU still manages to maintain reasonable accuracy levels. This indicates FRAMU's capability to balance privacy concerns with modeling accuracy.
## VII Research Implications
The FRAMU framework introduced in this study holds substantial academic implications for both single-modality and multimodality federated learning scenarios. It addresses key aspects of federated learning, such as privacy preservation, adaptability to changing data distributions, unlearning mechanisms for model evolution, attention mechanisms for model aggregation, and optimization strategies for efficient resource utilization and system scalability.
One salient implication of FRAMU is its approach to privacy preservation. In an era where data privacy is a paramount concern, this framework implements mechanisms to prevent the model from over-relying on sensitive or private demographic data. This focus on privacy does not compromise accuracy; as shown in our empirical evaluations, the framework skillfully balances the often conflicting objectives of data privacy and model performance. This achievement marks a milestone in federated learning and sets the stage for future research on privacy-preserving algorithms.
Adaptability is another notable strength of the FRAMU framework. Federated learning inherently deals with non-IID (non-Independently and Identically Distributed) data across diverse participants and evolving patterns. FRAMU tackles these challenges by employing adaptive models that adjust to shifting data distributions. This adaptability makes the framework especially valuable for real-world applications that are characterized by data heterogeneity and dynamism.
The framework's unlearning mechanisms also have important implications for the evolution of Machine Learning models. The ability to identify and discard outdated or irrelevant data is not merely a feature but a necessity for the real-world deployment of federated learning models. This allows the system to focus computational resources on the most relevant and up-to-date data, thereby maintaining or even improving model accuracy and relevance over time.
FRAMU's incorporation of attention mechanisms has pivotal academic repercussions for intelligent model aggregation in federated learning systems. These mechanisms enable FRAMU to filter out noise and prioritize the most informative features during learning and aggregation. This nuanced approach offers a potential roadmap for the development of more efficient and effective federated learning systems.
Finally, FRAMU's optimization strategies make a significant academic contribution, particularly in how they minimize the number of communication rounds needed for model convergence. This enhancement benefits both the efficiency and scalability of federated learning systems. Empirical validation through convergence analyses confirms that the framework not only reduces communication overheads but also achieves an optimal solution in fewer rounds.
## VIII Conclusion
The FRAMU framework is a significant advancement in both single-modality and multimodality Machine Unlearning. By incorporating privacy preservation, adaptability to changing data distributions, unlearning of outdated or irrelevant data, attention mechanisms for model aggregation, and optimization strategies, FRAMU addresses critical challenges and enhances performance, privacy, efficiency, and scalability in federated learning. The evaluation results demonstrate
Fig. 8: Optimization Analysis - Outdated Data
Fig. 9: Optimization Analysis - Private Data
FRAMU's effectiveness in improving model accuracy, protecting sensitive data, adapting to dynamic environments, and optimizing the federated learning process. Statistical analysis shows FRAMU outperforms baseline models in MSE and MAE values during the unlearning of outdated, private, and irrelevant data across diverse datasets.
However, FRAMU has limitations in the retraining process, computational complexity, scalability to larger setups, and hyperparameter choices. Further research is needed to overcome these challenges and refine FRAMU for real-world applications. Future research should focus on optimizing retraining with techniques like transfer learning and parallel computing. Enhancing scalability through innovative communication and aggregation methods will broaden FRAMU's adoption. Additionally, improving adaptability and fairness in the presence of diverse data distributions will enhance its applicability. Addressing these limitations and pursuing research directions can revolutionize federated learning, fostering robust, privacy-preserving, and efficient AI systems across domains. The continuous advancements in federated learning, supported by FRAMU, will lead to a new era of data privacy and performance optimization.
Machine Unlearning is an emerging field that addresses data privacy concerns by making it possible to remove private or irrelevant data from the Machine Learning process. The use of outdated, private, and irrelevant data raises challenges for privacy and model efficiency, compromising both the accuracy and the computational efficiency of models in learning and unlearning alike. To address these challenges, we introduce Federated Reinforcement Learning with Attention-based Machine Unlearning (FRAMU), a novel framework for attention-based Machine Unlearning. The framework combines adaptive learning mechanisms, privacy-preserving techniques, and optimization strategies, making it a comprehensive solution that handles various single-modality and multimodality data sources while maintaining accuracy and privacy.
2305.19614 | Search for Multiple Adjacent Marked Vertices on the Hypercube by a
Quantum Walk with Partial Phase Inversion | There is a strong interest in quantum search algorithms, particularly in
problems with multiple adjacent solutions. In the hypercube, part of the energy
of the quantum system is retained in states adjacent to the target states,
decreasing the chances of the target states being observed. This paper applies
the Multiself-loop Lackadaisical Quantum Walk with Partial Phase Inversion to
search for multiple adjacent marked vertices on the hypercube. Aspects like the
type of marked vertices are considered in addition to using multiple self-loops
and weight compositions. Two scenarios are analyzed. Firstly, the relative
position of non-adjacent marked vertices together with adjacent marked
vertices. Secondly, only adjacent marked vertices are analyzed. Here, we show
experimentally that, with partial phase inversion, a quantum walk can amplify
the probability amplitudes of the target states, reaching success probabilities
of values close to $1$. We also show that the relative position of non-adjacent
marked vertices does not significantly influence the search results. Our
results demonstrate that the partial phase inversion of target states is a
promising alternative to search adjacent solutions with quantum walks, which is
a key capacity for real search applications. | Luciano S. de Souza, Jonathan H. A. de Carvalho, Henrique C. T. Santos, Tiago A. E. Ferreira | 2023-05-31T07:30:04 | http://arxiv.org/abs/2305.19614v2 | Search for Multiple Adjacent Marked Vertices on the Hypercube by a Quantum Walk with Partial Phase Inversion
###### Abstract
There is a strong interest in quantum search algorithms, particularly in problems with multiple adjacent solutions. In the hypercube, part of the energy of the quantum system is retained in states adjacent to the target states, decreasing the chances of the target states being observed. This paper applies the Multiself-loop Lackadaisical Quantum Walk with Partial Phase Inversion to search for multiple adjacent marked vertices on the hypercube. Aspects like the type of marked vertices are considered in addition to using multiple self-loops and weight compositions. Two scenarios are analyzed. Firstly, the relative position of non-adjacent marked vertices together with adjacent marked vertices. Secondly, only adjacent marked vertices are analyzed. Here, we show experimentally that, with partial phase inversion, a quantum walk can amplify the probability amplitudes of the target states, reaching success probabilities of values close to \(1\). We also show that the relative position of non-adjacent marked vertices does not significantly influence the search results. Our results demonstrate that the partial phase inversion of target states is a promising alternative to search adjacent solutions with quantum walks, which is a key capacity for real search applications.
Quantum Computing Quantum Walks Quantum Search Algorithm Lackadaisical Quantum Walk Multiple Self-loops Partial Phase Inversion Adjacent Marked Vertices
## 1 Introduction
Many advances have been achieved since the publication of the article by Aharonov et al. (1993), which is considered the first work on quantum walks. One of the first quantum search algorithms based on quantum walks was designed by Shenvi et al. (2003), which established quantum walks as one of the most promising resources and an intuitive framework for building new quantum algorithms. Many other works on quantum walks have been developed since then [Ambainis et al., 2004, Potocek et al., 2009, Hein and Tanner, 2009, Ambainis et al., 2012].
Among the many works proposed on quantum walks, Wong (2015) developed a quantum search algorithm called the lackadaisical quantum walk - LQW, an analog of the classical lazy random walk in which the quantum walker has a chance to stay at the current vertex, obtained by introducing \(m\) self-loops of integer weight \(l\) at each vertex of the complete graph. This proposal was altered by Wong (2017), where the \(m\) self-loops were reduced to one self-loop of non-integer weight. In turn, Souza et al. (2023) proposed a new quantum search algorithm based on the LQW, called the Multiself-loop Lackadaisical Quantum Walk with Partial Phase Inversion - MSLQW - PPI, which uses \(m\) self-loops at each vertex of the hypercube with total weight \(l=l^{\prime}\cdot m\), where \(l^{\prime}\in\mathbb{R}\) and \(m\in\mathbb{Z}\).
However, other studies indicate that the type of marked vertices influences the results of quantum search algorithms, particularly adjacent marked vertices. According to Potocek et al. (2009), the final state of the algorithm designed by Shenvi et al. (2003) is mainly composed of the marked state, but part of the probability amplitude is also retained in adjacent states. Another behavior of quantum walks on the hypercube related to adjacent marked vertices is the formation of stationary states (Nahimovs et al., 2019). Souza et al. (2021) experimentally showed that adjacent marked vertices interfere with the search results. Although they proposed a new ideal weight value \(l=(d/N)\cdot k\), when there are adjacent marked vertices in the set of solutions, a decrease in the maximum probability of success occurs.
Therefore, the objective of this work is to apply MSLQW - PPI to search for multiple marked vertices on the hypercube in two scenarios. The first scenario analyzes the search for multiple adjacent and non-adjacent marked vertices, to verify whether the relative position of the non-adjacent vertices interferes with the search results. The second scenario examines the search for multiple adjacent marked vertices. The coefficient of variation was also used to evaluate the dispersion around the average maximum probability according to the relative position of the non-adjacent marked vertices. The results indicate insignificant variation around the mean maximum success probability. These results are significant because they show that the partial phase inversion of the target state based on multiple self-loops provides a new perspective for the development of new quantum search algorithms.
This paper is organized as follows. Section 2 presents some concepts about the Multiself-loop Lackadaisical Quantum Walk with Partial Phase Inversion on the hypercube. Section 3 defines the experiments. Section 4 presents the results and discussion. Finally, Section 5 presents the conclusions.
## 2 Mslqw - PPI on the hypercube
The Multiself-loop Lackadaisical Quantum Walk with Partial Phase Inversion was proposed by Souza et al. (2023). This quantum algorithm is obtained by adding \(m\) self-loops of weight \(l^{\prime}\) at each vertex of the hypercube, and a partial phase inversion of the target state is applied. The Hilbert space associated with the MSLQW - PPI on the hypercube is \(\mathcal{H}=\mathcal{H}^{n+m}\otimes\mathcal{H}^{2^{n}}\), where \(\mathcal{H}^{n+m}\) is the Hilbert space associated with the quantum coin and \(\mathcal{H}^{2^{n}}\) is the Hilbert space associated with the vertices. The component \(e_{i}\) is a binary string of \(n\) bits with \(1\) in the \(i\)-th position, and \(\circ_{j}\) is a self-loop. To account for the weighted self-loops, a modification is made to the Grover coin as follows,
\[C=2\ket{s^{C}}\bra{s^{C}}-I_{(n+m)} \tag{1}\]
where
\[\ket{s^{C}}=\frac{1}{\sqrt{n+l}}\left(\sqrt{l^{\prime}}\sum_{j=0}^{m-1}\ket{\circ_{j}}+\sum_{i=0}^{n-1}\ket{i}\right) \tag{2}\]
and \(l=l^{\prime}\cdot m\). The MSLQW - PPI system on the hypercube is started according to Equation 3.
\[\ket{\Psi(0)} =\frac{\sqrt{l^{\prime}}}{\sqrt{N}\times\sqrt{n+l}}\sum_{\vec{x} }\sum_{j=0}^{m-1}\ket{\vec{x},\circ_{j}} \tag{3}\] \[+\frac{1}{\sqrt{N}\times\sqrt{n+l}}\sum_{\vec{x}}\sum_{i=0}^{n-1} \ket{\vec{x},i}\]
Consider the quantum walk with search. A query to the "Grover oracle" is included in each step of the quantum walk, \(U^{\prime}=U\cdot(I_{n}\otimes Q)\), as follows,
\[Q=I_{(n+m)+N}-2\left|\epsilon,\omega\right\rangle\left\langle\omega,\epsilon \right|-2\left|\odot_{\tau},\omega\right\rangle\left\langle\omega,\odot_{\tau}\right| \tag{4}\]
where \(\left|\omega\right\rangle\) represents the marked vertex, \(\epsilon\) represents an edge that is not a self-loop, and \(\odot_{\tau}\) is the self-loop that will have its phase inverted. The proposed modification of Grover's oracle described in Equation 4 makes it possible to identify the components of the target state that have their phase inverted.
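A minimal NumPy sketch of the coin of Eqs. (1)-(2) and of the sign pattern that the oracle of Eq. (4) applies to the coin components at a marked vertex is given below. The ordering of the coin basis (the \(n\) hypercube directions first, then the \(m\) self-loops) and the particular weight value used in the sanity check are implementation choices for illustration, not prescriptions from the text; the oracle is read, as described above, as inverting all non-self-loop components and only the self-loop \(\odot_{\tau}\) at a marked vertex.

```python
import numpy as np

def coin_operator(n, m, l_prime):
    """Grover coin of Eq. (1) on the (n+m)-dimensional coin space.
    Basis ordering (an implementation choice): the first n states are the
    hypercube directions, the last m states are the weighted self-loops."""
    l = l_prime * m
    s = np.empty(n + m)
    s[:n] = 1.0 / np.sqrt(n + l)                 # edge directions, Eq. (2)
    s[n:] = np.sqrt(l_prime) / np.sqrt(n + l)    # weighted self-loops, Eq. (2)
    return 2.0 * np.outer(s, s) - np.eye(n + m)

def oracle_signs(n, m, tau=0):
    """Sign pattern applied to the coin components at a marked vertex: every
    edge direction and only the self-loop tau have their phase inverted
    (partial phase inversion); the remaining m-1 self-loops are untouched."""
    signs = np.ones(n + m)
    signs[:n] = -1.0
    signs[n + tau] = -1.0
    return signs

# Sanity check: the coin is a reflection (C^2 = I), so |s^C> is normalized.
n, m = 12, 3
l_prime = (n**2 / 2**n) / m        # weight l = n^2/N split over m self-loops
C = coin_operator(n, m, l_prime)
assert np.allclose(C @ C, np.eye(n + m))
print(oracle_signs(n, m))
```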
## 3 Experiment setup
The simulations performed in this work are divided into the following two scenarios. In the first scenario, we consider both adjacent and non-adjacent marked vertices. In the second scenario, we consider only the adjacent marked vertices. As in Souza et al. (2023), we flipped the phase of just a single self-loop, _i.e._, \(\odot_{\tau=0}\).
### Definition of marked vertex samples
According to the definition of the hypercube, two vertices are adjacent if the Hamming distance between them is \(1\). Non-adjacent marked vertices have a Hamming distance of at least \(2\) from every other marked vertex. We define the sets of marked vertices divided into groups of samples \(M_{k,j}\), with \(k\) marked vertices per sample and \(j\) samples per group.
The adjacent and non-adjacent vertices form the first set. This set is divided into twelve groups of one hundred samples. For each sample of \(k\) adjacent vertices, other \(k-1\) non-adjacent vertices are marked, and thirty MSLQW - PPI are performed. Therefore, thirty-six hundred simulations are performed. For every hundred simulations, we preserve the same \(k\) adjacent marked vertices and vary the locations of the \(k-1\) non-adjacent marked vertices. For example, if \(k=3\), we have two adjacent marked vertices and one non-adjacent marked vertex, \(M_{3,100}=[\{0,1,1128\}_{1},\{0,1,2950\}_{2},\ldots,\{0,1,1470\}_{100}]\).
The adjacent vertices form the second set. This set is divided into twelve sample groups containing between \(2\) and \(13\) marked vertices. To search for adjacent vertices, twelve simulations are performed. Initially, we have two marked vertices, and at each new simulation, a new vertex is marked and added to the new group as follows: \(M_{2,1}=\{0,1\},\ldots,M_{13,1}=\{0,1,2,4,\ldots,1024,2048\}\), until all adjacent vertices are marked.
The samples have \(k\) distinct vertices, _i.e._, they are drawn without replacement. The simulations performed for the first scenario were necessary to obtain the average behavior based on the relative position of the non-adjacent marked vertices and to verify their influence on the results. The stop condition for a simulation occurs after each of the thirty walks obtains the maximum value of the probability amplitude. In each quantum walk, a number \(m\) of self-loops per vertex was defined, which varies between \(1\) and \(30\). The weight \(l\) is distributed by dividing its value into \(m\) equal parts.
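The sketch below generates one sample in the spirit of the \(k=3\) example above, using the Hamming-distance criteria just described: the adjacent pair \(\{0,1\}\) plus one vertex at Hamming distance at least \(2\) from both. The fixed pair and the random seed are assumptions made for illustration.

```python
import random

def hamming(u, v):
    """Hamming distance between two hypercube vertices given as integers."""
    return bin(u ^ v).count("1")

def sample_k3(n=12, seed=0):
    """One illustrative sample: the adjacent pair {0, 1} plus one vertex at
    Hamming distance >= 2 from both already-marked vertices."""
    rng = random.Random(seed)
    marked = [0, 1]                      # Hamming distance 1 => adjacent pair
    while True:
        v = rng.randrange(2 ** n)
        if all(hamming(v, u) >= 2 for u in marked):
            return marked + [v]

print(sample_k3())
```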
### Hardware and software setup for the simulations
The simulations were performed using the Parallel Experiment for Sequential Code - PESC (Henrique et al., 2023), which distributes computational simulations over a network. The programming language used to write the algorithms was Python 3.7. All machines used in the simulations run the Ubuntu 18.04.6 LTS (Bionic Beaver) operating system.
## 4 Results and discussion
As previously defined, the experiments are divided into two scenarios according to the type of marked vertices. In the first scenario, we have adjacent and non-adjacent marked vertices. As we analyzed the relative position of the non-adjacent marked vertices, thirty-six thousand simulations were performed, divided into twelve groups of one hundred samples of k vertices, and thirty MSLQW - PPI quantum walks were performed for each sample. Then, the variability of the results was also analyzed and is represented in Fig. 2. In the second scenario, we only have adjacent marked vertices. Each vertex has a number of adjacent vertices equal to the degree of the hypercube. Therefore, twelve simulations were performed, and thirty MSLQW - PPI walks were performed in each simulation. The results are represented in Figures 1 and 3, respectively. They present the maximum probability of success according to the number of self-loops and marked vertices.
### Analyzing the search with adjacent and non-adjacent marked vertices
Fig. 1a shows the probability of success for the weight \(l=n/N\). Rhodes and Wong (2020) proposed this weight value to search for a single vertex, while Souza et al. (2021) used it to search for multiple vertices. However, the results showed
that this weight value is not ideal. Souza et al. (2023) used this weight value and applied MSLQW - PPI to search for multiple non-adjacent marked vertices, but there was no increase in the maximum probability of success. In this article, the maximum average probability obtained was \(p=0.999\) with three marked vertices (two adjacent vertices and one non-adjacent vertex) and a single self-loop, which is a result close to that of \(p=0.997\) achieved by Souza et al. (2021). In both cases, the maximum probability of success decreases as the number of marked vertices increases.
Fig. 1(b) shows the success probability using the weight \(l=(n/N)\cdot k\). In cases with only non-adjacent marked vertices, only a single self-loop is needed for this weight value (Souza et al., 2021, 2023). However, when there are adjacent marked vertices, it is necessary to increase the number of self-loops and use partial phase inversion to obtain success probabilities close to \(1\), as we can see in Table 1. In some cases, we can observe an improvement in the probability of success compared to the results obtained using only one self-loop.
Comparing columns A and C of Table 1, we can see that using partial phase inversion from a certain quantity of self-loops, it is possible to obtain more significant probabilities than those achieved with the use of a single self-loop.
Figure 1: The probability of success of the MSLQW – PPI to search for adjacent and non-adjacent marked vertices with \(n=12\) and \(N=4096\) vertices. (a) weight value \(l=n/N\). (b) weight value \(l=(n/N)\cdot k\). (c) weight value \(l=n^{2}/N\). (d) weight value \(l=(n^{2}/N)\cdot k\).
Now, comparing columns B and C with at least two self-loops, it is possible to improve the maximum probability of success. However, for this self-loop weight value, as the number of marked vertices increased, only a single self-loop is needed to achieve probabilities of approximately \(p\approx 0.98\).
Fig. 1(c) shows the probability of success using the weight value \(l=n^{2}/N\). This weight was proposed by Souza et al. (2023) and is composed of the weight value presented by Rhodes and Wong (2020) to search for one marked vertex, with an exponent added to the element in the numerator that represents the degree of the vertex. Compared with the results found by Souza et al. (2021) and with the results shown in Fig. 1(a), there was a significant improvement in the maximum probability of success for \(k>3\) marked vertices. In this scenario, the success probabilities depend on the inversely proportional relationship between the number \(k\) of marked vertices and the number \(m\) of self-loops. This means that as the number of marked vertices increases, the number of self-loops decreases, and vice versa; however, the maximum probability of success remains above \(p=0.97\).
Another analysis of the results for the weight \(l=n^{2}/N\) was performed. Two scenarios are compared, and some results are shown in Table 2. The results presented in column A refer to the scenario with only non-adjacent marked vertices, obtained by Souza et al. (2023). In the scenario presented in column B, where we have both types of marked vertices, for each \(k\) adjacent vertices we have \(k-1\) non-adjacent vertices. Although there are adjacent marked vertices in the sample, the use of multiple self-loops guarantees, in some cases, maximum success probabilities close to \(1\).
Note that, in the case where there are only non-adjacent marked vertices, as the number of marked vertices increases, the number of self-loops decreases. However, when there are marked adjacent vertices, a more significant number of self-loops is needed to maintain the success probability close to the maximum. Comparisons made between different scenarios and the same weights show that the type of marked vertex influences the search result. However, although there are adjacent marked vertices in the sample, partial state inversion guarantees, in some cases, maximum success probabilities close to \(1\).
Now, let us analyze the case where the type of marked vertices is the same, but the weights are different. Considering the behavior of the probability of success in Figures 1(c) and 1(d), we can see that not only the type of marked vertices influences the probability of success, but also the weight value. Note that the difference in weight composition, in this case, is the number of marked vertices. We can see that, after increasing the number of self-loops, the probability of success is significantly improved overall. We can better see these results in Table 3, which shows the maximum probabilities of success and the number of self-loops according to the number of marked vertices and weight value.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{A} & \multicolumn{2}{c}{B} & \multicolumn{2}{c}{C} \\ \cline{2-7} k & p & m & p & m & p & m \\ \hline
3 & 0.794 & 8 & 0.999 & 3 & 0.754 & 1 \\
5 & 0.911 & 4 & 0.996 & 2 & 0.863 & 1 \\
7 & 0.931 & 3 & 0.993 & 2 & 0.921 & 1 \\
9 & 0.981 & 2 & 0.981 & 2 & 0.948 & 1 \\
11 & 0.970 & 2 & 0.970 & 2 & 0.964 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Cases for searching adjacent and non-adjacent marked vertices where more than one self-loop is required to obtain a maximum probability close to \(1\) using the weight \(l=(n/N)\cdot k\) proposed by Souza et al. (2021).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{A} & \multicolumn{2}{c}{B} \\ \cline{2-5} k & p & m & p & m \\ \hline
3 & 0.999 & 4 & 0.999 & 12 \\
5 & 0.990 & 2 & 0.997 & 5 \\
7 & 0.992 & 2 & 0.996 & 3 \\
9 & 0.978 & 1 & 0.996 & 2 \\
11 & 0.996 & 1 & 0.983 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between the probability of success and number of self-loops for two different scenarios for weight value \(l=n^{2}/N\). Column A represents the results found by Souza et al. (2023) to search for non-adjacent marked vertices, and column B to search for adjacent and non-adjacent marked vertices.
Comparing the results described in Table 3(a) and Table 3(b), it is essential to realize that the weight composition is very relevant. Although the type of marked vertices can influence the probability of success, with an ideal weight value, an ideal number of self-loops, and partial phase inversion, it is possible to improve the results. The exception was \(k=3\), where there was a reduction in the probability of success, which nevertheless remained close to \(1\). The other bold lines show the cases with the most expressive improvements in the probability of success. In general, there was a significant increase in the number of self-loops.
As in Souza et al. (2023), we analyzed whether the relative position of the non-adjacent marked vertices influences the maximum probability of success. We also used the coefficient of variation to analyze the dispersion level of the results. The relative position of the non-adjacent marked vertices did not show a significant influence, considering a numerical precision of four digits. Fig. 2 shows the coefficient of variation for the results presented in Fig. 1. Variations around the mean value are small, and the behavior exhibited is stable. The maximum success probabilities close to \(1\) coincide with these small variations. Considering the weight value of the self-loop, in general, the weight \(l=(n^{2}/N)\cdot k\) showed minor variability.
### Analyzing the search with adjacent marked vertices
The simulations performed on the samples of the previous scenario were necessary to obtain the average behavior based on the relative position of the non-adjacent marked vertices. In this scenario, let us analyze only adjacent vertices. According to Nahimovs et al. (2019), a stationary state occurs when two adjacent marked vertices exist. In this case, the maximum probability of success obtained in our simulations was approximately \(p=0.02\) for all weights. Now, consider \(k\geqslant 3\). Fig. 3(a) shows the probability of success for the weight \(l=n/N\). Compared with the results presented by Souza et al. (2021, 2023) for searching multiple marked vertices with a single self-loop, we had an improvement in the success probability for \(k=3\) marked vertices, which evolved from \(p=0.745\) to \(p=0.999\) with \(m=3\) self-loops.
Fig. 3(b) shows the probability of success for the weight \(l=(n/N)\cdot k\). Compared to the results obtained by Souza et al. (2021) for searching multiple adjacent marked vertices using a single self-loop, there was an improvement in the probability of success. As we can see in Table 4, two results are significant: for \(k=3\) marked vertices, the use of \(9\) self-loops allowed the probability to increase from \(p=0.386\) to \(p=0.999\); for \(k=4\) marked vertices, \(4\) self-loops allowed the probability to increase from \(p=0.639\) to \(p=0.996\).
Again, let us analyze two different scenarios for the same weight values. Fig. 3(c) shows the success probabilities for searching only adjacent marked vertices, while Fig. 1(c) shows the success probabilities for searching adjacent and non-adjacent marked vertices, both using the weight \(l=n^{2}/N\). Note that the behaviors are very similar. However, in the scenario where there are only adjacent marked vertices, which is the case in Fig. 3(c), a more significant number of self-loops is necessary when the density of adjacent marked vertices is small. Table 5 compares the success probabilities and the number of self-loops for the cases where the number of marked vertices is the same. Again, note that the results are similar except for \(k=3\), where there was a significant increase in the number of self-loops.
\begin{table}
\end{table}
Table 3: Ideal number of self-loops and maximum probability of success for searching adjacent and non-adjacent marked vertices. (2(a)) weight value \(l=n^{2}/N\). (2(b)) weight value \(l=(n^{2}/N)\cdot k\).
\begin{table}
\end{table}
Table 4: Comparison between the success probabilities and the number of self-loops to search for adjacent marked vertices with the weight \(l=(n/N)\cdot k\). (4a) shows the results obtained by Souza et al. (2021) using a single self-loop. (4b) shows the results in this work using multiple self-loops.
Figure 2: The coefficient of variation of the MSLQW – PPI to search for adjacent and non-adjacent marked vertices. The results are represented in percentage terms. (a), (b), (c), and (d) represent the coefficient of variation of the results presented in Figures 1(a), 1(b), 1(c), and 1(d) for the weight values \(l=n/N\), \(l=(n/N)\cdot k\), \(l=n^{2}/N\), and \(l=(n^{2}/N)\cdot k\), respectively.
Now, consider Fig. 3(d). It shows the success probabilities for searching adjacent marked vertices using the self-loop weight \(l=(n^{2}/N)\cdot k\). Compared with the results of the scenario presented in Fig. 1(d), we notice a very similar behavior where, for a small density of marked vertices, a greater number of self-loops is necessary. However, when this density of marked vertices increases, the number of self-loops decreases to the point of approaching the results presented by Souza et al. (2023) for the same weight value \(l=(n^{2}/N)\cdot k\). Table 6 shows the number of self-loops needed to obtain the maximum probabilities of success. Comparing with the results presented in Table 3(b) for the same numbers of marked vertices, it is possible to see that a greater number of self-loops is required to achieve success probabilities close to \(1\) when there are only adjacent marked vertices. However, for \(k=3\), \(m=30\) was insufficient.
Figure 3: The probability of success of the MSLQW – PPI to search for adjacent marked vertices with \(n=12\) and \(N=4096\) vertices. (a) weight value \(l=n/N\). (b) weight value \(l=(n/N)\cdot k\). (c) weight value \(l=n^{2}/N\). (d) weight value \(l=(n^{2}/N)\cdot k\).
## 5 Conclusions
In this work, we analyzed the application of MSLQW - PPI in two scenarios based on the type of marked vertices: adjacent and non-adjacent. In the first scenario, the two kinds of marked vertices were searched. Here, we analyzed the relative position of the non-adjacent marked vertices to verify their influence on the results in the presence of adjacent vertices. For this, the coefficient of variation was also used to verify the dispersion of the results around the mean value of the maximum success probability, as in Souza et al. (2023). In the second scenario, we analyzed only the adjacent marked vertices. The dependence on the self-loop weight value is inherent to the lackadaisical quantum walk. Therefore, the composition of the weights is essential. Thus, all analyses were made considering the four weight values. However, when applied to MSLQW - PPI, a strategy of weight distribution is equally necessary because of the multiple self-loops. The weight distribution strategy in this work was the same used by Souza et al. (2023), _i.e._, each self-loop receives the weight \(l^{\prime}=l/m\).
In the first scenario, the search for adjacent and non-adjacent vertices, the results show that the relative position of the non-adjacent marked vertices does not have a significant influence, considering a numerical precision of four digits. The results obtained by Souza et al. (2021, 2023) for the search for non-adjacent marked vertices indicate that the weight values \(l=(n/N)\cdot k\) and \(l=(n^{2}/N)\cdot k\) influenced the maximum success probabilities. Moreover, when we analyzed the coefficient of variation, we saw that minor variability coincides with the maximum probability of success. Fig. 2 shows that the number of marked vertices influences the results, causing greater variability. In the second scenario, the search for adjacent marked vertices presents results very similar to those of the search for adjacent and non-adjacent marked vertices, except in some cases. For example, for the weight \(l=(n^{2}/N)\cdot k\), the number of self-loops increases considerably.
In summary, we conclude that for MSLQW - PPI there is a dependence between the weight value, the vertex type, the number of marked vertices, and the number of self-loops needed to obtain success probabilities close to \(1\). In the search for adjacent and non-adjacent marked vertices, the weight that presented the best results was \(l=(n^{2}/N)\cdot k\). In the search for adjacent marked vertices, three of the four weights presented the best results: \(l=(n/N)\cdot k\), \(l=n^{2}/N\), and \(l=(n^{2}/N)\cdot k\). The results presented in Fig. 3 show that from a certain \(k\), the number of self-loops converges to a certain quantity. In future works, we intend to apply this methodology to evaluate the MSLQW - PPI in other \(d\)-regular structures with samples that contain adjacent marked vertices. We also intend to verify the convergence of the number of multiple self-loops to a specific \(m\) from a certain \(k\) for the weight value \(l=(n^{2}/N)\cdot k\).

| \(k\) | \(p\) | \(m\) | \(k\) | \(p\) | \(m\) |
| --- | --- | --- | --- | --- | --- |
| 3 | **0.681** | **30** | 9 | 0.998 | 20 |
| 4 | **0.943** | **30** | 10 | 0.996 | 19 |
| 5 | 0.995 | 30 | 11 | 0.997 | 18 |
| 6 | 0.999 | 28 | 12 | 0.997 | 18 |
| 7 | 0.998 | 24 | 13 | 0.997 | 17 |
| 8 | 0.998 | 21 | | | |
Table 6: Maximum success probability and number of self-loops to search for adjacent marked vertices with the weight \(l=(n^{2}/N)\cdot k\).

| \(k\) | \(p\) (Fig. 1c) | \(m\) (Fig. 1c) | \(p\) (Fig. 3c) | \(m\) (Fig. 3c) |
| --- | --- | --- | --- | --- |
| 3 | 0.999 | 12 | 0.991 | 30 |
| 5 | 0.997 | 5 | 0.998 | 7 |
| 7 | 0.996 | 3 | 0.995 | 3 |
| 9 | 0.996 | 2 | 0.995 | 2 |
| 11 | 0.983 | 2 | 0.986 | 2 |
Table 5: Comparison between the probability of success and the number of self-loops for the two scenarios. The columns labeled Fig. 1c give the results of searching for adjacent and non-adjacent vertices, and the columns labeled Fig. 3c those of searching for adjacent vertices. | There is strong interest in quantum search algorithms, especially for problems with multiple adjacent solutions. In the hypercube, part of the energy of the quantum system is retained in states adjacent to the target states, which can reduce the probability of observing the target states. In this paper, we search for multiple adjacent marked vertices on the hypercube using a multi-self-loop lackadaisical quantum walk with partial phase inversion. Aspects such as the type of marked vertices are also considered, together with multiple self-loops and weight compositions. Two scenarios were analyzed: first, the relative position of adjacent and non-adjacent marked vertices; second, only the adjacent marked vertices. Here, using partial phase inversion, the quantum walk amplifies the amplitudes of the target states, raising the success probability to values close to $1$ |
2309.00090 | Benford's Law under Zeckendorf expansion | In the literature, Benford's Law is considered for base-b expansions where
b>1 is an integer. In this paper, we investigate the distribution of leading
"digits" of a sequence of positive integers under other expansions such as
Zeckendorf expansion, and declare what Benford's Law should be under
generalized Zeckendorf expansion. | Sungkon Chang, Steven J. Miller | 2023-08-31T19:16:07 | http://arxiv.org/abs/2309.00090v1 | # Benford's Law under Zeckendorf expansion
###### Abstract
In the literature, Benford's Law is considered for base-\(b\) expansions where \(b>1\) is an integer. In this paper, we investigate the distribution of leading "digits" of a sequence of positive integers under other expansions such as Zeckendorf expansion, and declare what Benford's Law should be under generalized Zeckendorf expansion.
## 1 Introduction
Introduced in [2, 18] is a probability distribution of the leading decimal digits of a sequence of positive integers, known as _Benford's Law_, and the exponential sequences such as \(\{3^{n}\}\) are standard examples of sequences that satisfy Benford's Law. Given \(d\in\{1,2,3,\ldots,9\}\), the probability of having the leading digit \(d\) in the decimal expansion of \(3^{n}\) is \(\log_{10}\frac{d+1}{d}\), and this distribution is Benford's Law. In fact, given a block \(B\) of digits of any length, the probability of having the leading block \(B\) in the decimal expansion of \(3^{n}\) is given by a similar logarithmic formula as well, and this is known as _strong Benford's Law;_ see Example 1.9. It is indeed a special property that a sequence has convergent proportions for each leading digit. For example, the proportion of odd integers \(2n-1\leq M\) with leading digit \(d\) oscillates, and does not converge as \(M\to\infty\); see Section 4.10.
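As a quick numerical illustration (ours, not part of the original argument), the following Python sketch estimates the leading-digit frequencies of \(3^{n}\) from the fractional part of \(n\log_{10}3\) and compares them with \(\log_{10}\frac{d+1}{d}\); the cutoff \(N\) and the variable names are our own choices.

```python
from math import log10

# Leading decimal digit of 3^n, read off from the fractional part of n*log10(3),
# compared with the Benford probability log10((d+1)/d).
N = 100_000
log3 = log10(3)
counts = [0] * 10
for n in range(1, N + 1):
    frac = (n * log3) % 1.0
    counts[int(10 ** frac)] += 1      # leading digit of 3^n lies in {1, ..., 9}

for d in range(1, 10):
    print(d, counts[d] / N, log10((d + 1) / d))
```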
In the literature, Benford's Law is considered for base-\(b\) expansions where \(b>1\) is an integer. For example, the probabilities of the binary expansions of integer powers of \(3\) having the leading binary digits \(100_{2}\) and \(101_{2}\) are \(\log_{2}\frac{2^{2}+1}{2^{2}}\) and \(\log_{2}\frac{2^{2}+2}{2^{2}+1}\), respectively; for later reference, we may rewrite the values as follows:
\[\log_{2}\frac{1+2^{-2}}{1}\approx 0.322,\quad\log_{2}\frac{1+2^{-1}}{1+2^{-2}} \approx 0.264. \tag{1}\]
In this paper, we shall consider the distribution of leading "digits" of a sequence of positive integers under other expansions such as Zeckendorf expansion [19]. For example, let \(\{F_{n}\}_{n=1}^{\infty}\) for \(n\geq 1\) be the shifted Fibonacci sequence, i.e., \(F_{n+2}=F_{n+1}+F_{n}\) for all \(n\in\mathbb{N}\) and \(F_{1}=1\) and \(F_{2}=2\), and consider two Zeckendorf expansions: \(3^{5}=F_{12}+F_{5}+F_{2}\) and \(3^{8}=F_{18}+F_{16}+F_{14}+F_{11}+F_{7}+F_{5}\). Similar to the way the binary expansions are denoted, we may write
\[3^{5}=100000010010_{F},\quad 3^{8}=101010010001010000_{F}\]
where \(1\)'s are inserted at the \(k\)th place from the right if \(F_{k}\) is used in the expansions.
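The digit strings above can be reproduced by the usual greedy algorithm; the following short Python sketch (an illustration we add, with our own function name) prints the Zeckendorf expansions of \(3^{5}\) and \(3^{8}\).

```python
def zeckendorf_digits(n):
    """Greedy Zeckendorf expansion of n >= 1 with respect to F_1=1, F_2=2, F_3=3, ...
    Returns the digits (a_1, ..., a_M) read from the largest Fibonacci term downward."""
    F = [1, 2]
    while F[-1] <= n:
        F.append(F[-1] + F[-2])
    F.pop()                      # now F[-1] is the largest term <= n
    digits = []
    for f in reversed(F):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits

print("".join(map(str, zeckendorf_digits(3 ** 5))))   # 100000010010
print("".join(map(str, zeckendorf_digits(3 ** 8))))   # 101010010001010000
```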
**Definition 1.1**.: Let \(A=\{0,1\}\). Given \(\{s,n\}\subset\mathbb{N}\), let \(n=\sum_{k=1}^{M}a_{k}F_{M-k+1}\) be the Zeckendorf expansion of \(n\) (where \(a_{1}=1\)). We define \(\mathrm{LB}_{s}(n):=(a_{1},\ldots,a_{s})\in A^{s}\) if \(M\geq s\); otherwise, \(\mathrm{LB}_{s}(n)\) is undefined. The tuple \(\mathrm{LB}_{s}(n)\) is called _the leading block of \(n\) with length \(s\) under Zeckendorf expansion_.
For example, \(\mathrm{LB}_{3}(3^{5})=(1,0,0)\), \(\mathrm{LB}_{3}(3^{8})=(1,0,1)\), and \(\mathrm{LB}_{6}(3^{8})=(1,0,1,0,1,0)\). Since \(\mathrm{LB}_{2}(n)=(1,0)\) for all integers \(n\geq 2\), it is only meaningful to consider the first three or more Zeckendorf digits. We prove Theorem 1.3 in this note.
**Definition 1.2**.: Given a conditional statement \(P(n)\) where \(n\in\mathbb{N}\), and a subset \(A\) of \(\mathbb{N}\), let us define
\[\mathrm{Prob}\left\{\,n\in A:P(n)\text{ is true}\,\right\}:=\lim_{n\to \infty}\frac{\#\{k\in A:P(k)\text{ is true},\ k\leq n\}}{\#\{k\in A:k\leq n\}}.\]
For example, if \(A=\{n\in\mathbb{N}:n\equiv 2\mod 3\}\), then \(\mathrm{Prob}\left\{\,n\in A:n\equiv 1\mod 5\,\right\}=\frac{1}{5}\). If \(A\) is finite, the limit always exists.
Let \(\phi\) be the Golden ratio. The following is an analogue of Benford's Law under binary expansion demonstrated in (1).
**Theorem 1.3**.: _Let \(a>1\) be an integer._
\[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{3}(a^{n})=(1,0, 0)\,\right\} =\,\log_{\phi}(1+\phi^{-2})\approx.672,\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{3}(a^{n})=(1,0, 1)\,\right\} =\,\log_{\phi}\frac{\phi}{1+\phi^{-2}}\approx.328.\]
In particular, they exist! Although the probabilities are different from the binary cases, the structure of the log expressions in Theorem 1.3 is quite similar to that of the binary expansions in (1), i.e., the denominators of the quotients express the leading digits in power expansions with respect to their bases. The exponential sequences \((a^{n})_{n=1}^{\infty}\) where \(a>1\) is an integer are standard sequences that satisfy Benford's Law under base-\(b\) expansion. Motivated from these standard examples, we define Benford's Law under Zeckendorf expansion to be the above distribution of the leading blocks \((1,0,0)\) and \((1,0,1)\) under Zeckendorf expansion; see Definition 3.6.
The exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) are standard sequences for so-called _strong Benford's Law under base-\(b\) expansion_ as well; see Example 1.9. We introduce below the probability of the leading Zeckendorf digits of \(a^{n}\) with arbitrary length, which is a generalization of Theorem 1.3; this result is rewritten in Theorem 3.8 with more compact notation.
**Definition 1.4**.: Let \(A=\{0,1\}\), and let \(s\geq 2\) be an integer. Let \(\mathbf{b}=(b_{1},b_{2},\ldots,b_{s})\in A^{s}\) such that \(b_{1}=1\) and \(b_{k}b_{k+1}=0\) for all \(1\leq k\leq s-1\). We define \(\widetilde{\mathbf{b}}\) to be a tuple \((\widetilde{b}_{1},\ldots,\widetilde{b}_{s})\in A^{s}\) as follows. If \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}<F_{s+1}\), then \(\widetilde{b}_{k}\) for \(1\leq k\leq s\) are defined to be integers in \(A\) such that \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}=\sum_{k=1}^{s}\widetilde{b}_{k}F_{s-k+1}\) and \(\widetilde{b}_{k}\widetilde{b}_{k+1}=0\) for all \(1\leq k\leq s-1\). If \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}=F_{s+1}\), then \(\widetilde{b}_{1}:=\widetilde{b}_{2}:=1\), and \(\widetilde{b}_{k}:=0\) for all \(3\leq k\leq s\).
For the case of \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}<F_{s+1}\), the existence of the tuple \(\widetilde{\mathbf{b}}\) is guaranteed by Zeckendorf's Theorem.
**Theorem 1.5**.: _Let \(a>1\) and \(s\geq 2\) be integers. Let \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\) be tuples defined in Definition 1.4. Then,_
\[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(a^{n})=\mathbf{b}\,\right\} =\,\log_{\phi}\frac{\sum_{k=1}^{s}\widetilde{b}_{k}\phi^{-(k-1)}}{\sum_{k=1}^{ s}b_{k}\phi^{-(k-1)}}.\]
For example,
\[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(a^{n})=(1,0,0,0,1,0)\right\} =\,\log_{\phi}\frac{1+\phi^{-3}}{1+\phi^{-4}}\approx 0.157\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(a^{n})=(1,0,1,0,1,0)\right\} =\,\log_{\phi}\frac{1+\phi^{-1}}{1+\phi^{-2}+\phi^{-4}}\] \[=\,\log_{\phi}\frac{\phi}{1+\phi^{-2}+\phi^{-4}}\approx 0.119.\]
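These two values can be checked empirically. The sketch below (our own illustration; the cutoff \(N\) and the helper names are not from the paper) computes the first six Zeckendorf digits of \(3^{n}\) for \(n\leq N\) by the greedy algorithm and compares the observed frequencies of the two blocks with the logarithms above.

```python
from bisect import bisect_right
from math import log

phi = (1 + 5 ** 0.5) / 2
N = 2000

# Shifted Fibonacci sequence F_1=1, F_2=2, ..., precomputed past 3^N.
limit = 3 ** N
F = [1, 2]
while F[-1] <= limit:
    F.append(F[-1] + F[-2])

def leading_block(n, s):
    """First s Zeckendorf digits of n (None if n has fewer than s digits)."""
    i = bisect_right(F, n) - 1           # largest index with F[i] <= n
    digits = []
    while len(digits) < s and i >= 0:
        if F[i] <= n:
            digits.append(1)
            n -= F[i]
        else:
            digits.append(0)
        i -= 1
    return tuple(digits) if len(digits) == s else None

def log_phi(t):
    return log(t) / log(phi)

counts = {(1, 0, 0, 0, 1, 0): 0, (1, 0, 1, 0, 1, 0): 0}
x = 1
for n in range(1, N + 1):
    x *= 3
    b = leading_block(x, 6)
    if b in counts:
        counts[b] += 1

print(counts[(1, 0, 0, 0, 1, 0)] / N, log_phi((1 + phi ** -3) / (1 + phi ** -4)))
print(counts[(1, 0, 1, 0, 1, 0)] / N, log_phi(phi / (1 + phi ** -2 + phi ** -4)))
```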
As in Benford's Law under Zeckendorf expansion, we define the probability distributions described in Theorem 3.8 to be _strong Benford's Law under Zeckendorf expansion_; see Definition 3.9.
Exponential sequences are standard examples for Benford's Laws, but some exponential sequences do not satisfy Benford's Law under some base-\(b\) expansion. Let us demonstrate examples under Zeckendorf expansion. Let \(\{G_{n}\}_{n=1}^{\infty}\) be the sequence given by \(G_{k}=F_{2k}+F_{k}\) for \(k\in\mathbb{N}\). Then, given an integer \(s>1\), the \(s\) leading Zeckendorf digits of \(G_{k}\) are \(100\cdots 00_{F}\) as \(k\to\infty\) since the gap \(2k-k=k\) between the indices of \(F_{2k}\) and \(F_{k}\) approaches \(\infty\). Thus, \(\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(G_{n})=(1,0,0,\ldots,0)\right\}=1\) for all \(s\in\mathbb{N}\), and the probabilities of other digits of length \(s\) are all (asymptotically) \(0\). Similar probability distributions occur for the Lucas sequence \(\{K_{n}\}_{n=1}^{\infty}\) given by \(K_{k+2}=K_{k+1}+K_{k}\) for \(k\in\mathbb{N}\) and \((K_{1},K_{2})=(2,1)\). Given \(s\in\mathbb{N}\), the probabilities of having leading Zeckendorf digits of length \(s\) are entirely concentrated on one particular string of digits. For example, \(\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{10}(K_{n})=(1,0,0,0,1,0,0,0,1,0)\right\}=1\), and the probabilities of having other digits of length \(10\) are all (asymptotically) \(0\); see Example 5.10 for full answers.
Generalized Zeckendorf expansions are introduced in [10, 17]. In Section 6, we prove Theorem 6.9 on the probability of the leading digits of \(a^{n}\) with arbitrary length under generalized Zeckendorf expansion, and define these probability distributions to be strong Benford's Law under generalized Zeckendorf expansion; see Definition 6.10. As in the concept of _absolute normal numbers_[12], we introduce in Definition 6.15 the notion of _absolute Benford's Law_, which is the property of satisfying strong Benford's Law under all generalized Zeckendorf expansions. For example, the sequence given by \(K_{n}=\left\lfloor\frac{\phi}{\sqrt{5}}(\frac{89}{55})^{n}\right\rfloor\) for \(n\in\mathbb{N}\) satisfies strong Benford's Law under all generalized Zeckendorf expansions; see Example 6.18. Its first fifteen values are listed below:
\[(1,1,3,4,8,12,21,34,55,89,144,233,377,610,988).\]
They are nearly equal to the Fibonacci terms as \(\frac{89}{55}\) is the \(10\)th convergent of the continued fraction of \(\phi\). The differences amplify as we look at higher terms, and even under Zeckendorf expansion, this sequence satisfies strong Benford's Law.
It is also natural to consider sequences that have different distributions, and in this note we investigate other distributions of leading digits under generalized Zeckendorf expansions as well. In the following paragraphs, we shall explain this approach using base-\(10\) expansion. The results for other expansions are introduced in Section 5 and 6.
Strong Benford's Law for the sequence \(\{3^{n}\}_{n=1}^{\infty}\) under decimal expansion follows from the equidistribution of the fractional part of \(\log_{10}(3^{n})\) on the interval \((0,1)\). We realized that the function \(\log_{10}(x)\) is merely a tool for calculating the leading digits, and that other distributions of leading digits naturally emerge as we modified the function \(\log_{10}(x)\).
We noticed that the frequency of leading digits converges when a continuation of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\) has convergent behavior over the intervals \([n,n+1]\), and we phrase it more precisely below.
**Definition 1.6**.: Let \(\{H_{n}\}_{n=1}^{\infty}\) be an increasing sequence of positive integers. A continuous function \(h:[1,\infty)\to\mathbb{R}\) is called a _uniform continuation of \(\{H_{n}\}_{n=1}^{\infty}\)_ if \(h(n)=H_{n}\) for all \(n\in\mathbb{N}\), and the following sequence of functions \(h_{n}:[0,1]\to[0,1]\) uniformly converges to an increasing (continuous) function:
\[h_{n}(p)=\frac{h(n+p)-h(n)}{h(n+1)-h(n)}.\]
If \(h\) is a uniform continuation of \(\{H_{n}\}_{n=1}^{\infty}\), let \(h_{\infty}:[0,1]\to[0,1]\) denote the increasing continuous function given by \(h_{\infty}(p)=\lim_{n\to\infty}h_{n}(p)\).
Theorem 1.8 below is a version specialized for decimal expansion. The proof of this theorem is similar to, and much simpler than the proof of Theorem 5.6 for Zeckendorf expansion, and we leave it to the reader.
**Definition 1.7**.: If \(\alpha\in\mathbb{R}\), we denote the fractional part of \(\alpha\) by \(\operatorname{frc}(\alpha)\). Given a sequence \(\{K_{n}\}_{n=1}^{\infty}\) of real numbers, we say, \(\operatorname{frc}(K_{n})\)_is equidistributed_ if \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{frc}(K_{n})\leq \beta\,\big{\}}=\beta\) for all \(\beta\in[0,1]\).
For example, consider the sequence \(\{\operatorname{frc}(n\pi)\}_{n=1}^{\infty}\) where \(\pi\approx 3.14\) is the irrational number. Then, by Weyl's Equidistribution Theorem, \(\operatorname{frc}(n\pi)\) is equidistributed on the interval \([0,1]\). The sequence \((\sin^{2}(n))_{n=1}^{\infty}\) is an example of sequences that have \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\sin^{2}(n)\leq\beta\,\big{\}}\) defined for each \(\beta\in[0,1]\), and the probability is \(\frac{1}{\pi}\cos^{-1}(1-2\beta)\). Thus, it is not equidistributed on \([0,1]\).
**Theorem 1.8**.: _Let \(h:[1,\infty)\to\mathbb{R}\) be a uniform continuation of the sequence \(\{10^{k-1}\}_{n=1}^{\infty}\). Then, there is a sequence \(\{K_{n}\}_{n=1}^{\infty}\) of positive integers approaching \(\infty\) (see Theorem 6.19 for the description of \(K_{n}\)) such that \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._
_Let \(\{K_{n}\}_{n=1}^{\infty}\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed. Let \(d\) be a positive integer of \(s\) decimal digits. Then, the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is equal to_
\[{h_{\infty}}^{-1}\left(\frac{(d+1)-10^{s-1}}{9\cdot 10^{s-1}}\right)-{h_{ \infty}}^{-1}\left(\frac{d-10^{s-1}}{9\cdot 10^{s-1}}\right).\]
**Example 1.9**.: Let \(h:[1,\infty)\to\mathbb{R}\) be the function given by \(h(x)=10^{x-1}\). Then, \(h\) is a uniform continuation of the sequence \(\{10^{n-1}\}\), and \(h_{\infty}(p)=\frac{1}{9}(10^{p}-1)\). By Theorem 6.19, the
sequence \(\{K_{n}\}_{n=1}^{\infty}\) with the equidistribution property is given by \(K_{n}=\lfloor 10^{n+\operatorname{frc}(n\pi)}\rfloor\), but there are simpler sequences such as \(\{3^{n}\}_{n=1}^{\infty}\) that have the property.
By Theorem 1.8, the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is equal to
\[\log_{10}\frac{d+1}{10^{s-1}}-\log_{10}\frac{d}{10^{s-1}}\,=\,\log_{10}\left(1 +\frac{1}{d}\right)\]
where \(d\in\mathbb{N}\) has \(s\) decimal digits. This distribution is known as strong Benford's Law under base-10 expansion, and we may say that strong Benford's Law under base-10 expansion arises from the logarithmic continuation of \(\{10^{n-1}\}_{n=1}^{\infty}\). For this reason, we call \(h(x)\,a\)_Benford continuation of the base-10 sequence_.
**Example 1.10**.: Let \(h:[1,\infty)\to\mathbb{R}\) be the function whose graph is the union of the line segments from \((n,10^{n-1})\) to \((n+1,10^{n})\) for all \(n\in\mathbb{N}\). Let \(\{K_{n}\}_{n=1}^{\infty}\) be the sequence given by \(K_{n}=\left\lfloor 10^{n+\log_{10}(9\operatorname{frc}(n\pi)+1)}\right\rfloor\) as described in Theorem 6.19. Then, the fractional part \(\operatorname{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed. The limit function \(h_{\infty}\) defined in Theorem 1.8 is given by \(h_{\infty}(p)=p\) for \(p\in[0,1]\), and given a decimal expansion \(d\) of length \(s\), the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is (uniformly) equal to \(1/(9\cdot 10^{s-1})\) by Theorem 1.8.
The first ten values of \(K_{n}\) are
\[(22,354,4823,60973,737166,8646003,99203371,219467105,\,3469004940,47433388230).\]
For example, if we look at many more terms of \(K\), then the first two digits \(22\) of \(K_{1}\) will occur as leading digits with probability \(1/90\approx 0.011\), and the probability for the digits \(99\) is also \(1/90\). As in constructing a normal number, it's tricky to construct a sequence of positive integers with this property, and to prove that it has the property. Let us note here that the \(s\) leading decimal digits of the sequence \(\{n\}_{n=1}^{\infty}\) have frequencies close to \(1/(9\cdot 10^{s-1})\), but these frequencies oscillate and do not converge as more terms are considered; see Theorem 4.10 for a version under Zeckendorf expansion. In Example 5.4, we demonstrate the "line-segment" continuation of the Fibonacci sequence.
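The listed terms can be reproduced directly from the formula; the following small Python check (ours, using the double-precision value of \(\pi\)) prints the same ten integers.

```python
from math import floor, pi

def K(n):
    # K_n = floor(10**(n + log10(9*frc(n*pi) + 1))) = floor(10**n * (9*frc(n*pi) + 1))
    frc = (n * pi) % 1.0
    return floor(10 ** n * (9 * frc + 1))

print([K(n) for n in range(1, 11)])
# [22, 354, 4823, 60973, 737166, 8646003, 99203371, 219467105, 3469004940, 47433388230]
```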
In Example 5.7, we use a more refined "line segment continuation", and demonstrate a uniform continuation that generates the distribution of leading blocks that satisfies strong Benford's Law up to the 4th digits, but does not satisfy the law for the leading blocks of length \(>4\).
Theorem 1.8 suggests that given a uniform continuation \(h\) of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\), we associate certain distributions of leading digits, coming from the equidistribution property. It's natural to consider the converse that given a sequence \(\{K_{n}\}_{n=1}^{\infty}\) with "continuous distribution of leading digits" of arbitrary length, we associate a certain uniform continuation of \(\{10^{n-1}\}_{n=1}^{\infty}\). Theorem 1.11 below is a version for base-10 expansion. In Section 5, we introduce results on this topic for the Fibonacci sequence \(\{F_{n}\}_{n=1}^{\infty}\). The proof of Theorem 1.11 is similar to, and simpler than Theorem 5.18 for the Fibonacci expansion, and leave it to the reader.
**Theorem 1.11**.: _Let \(\{K_{n}\}_{n=1}^{\infty}\) be a sequence of positive integers approaching \(\infty\). Let \(h_{K}^{*}:[0,1]\to[0,1]\) be the function given by \(h_{K}^{*}(0)=0\), \(h_{K}^{*}(1)=1\), and_
\[h_{K}^{*}(\tfrac{1}{9}(\beta-1))\,=\,\lim_{s\to\infty}\operatorname{Prob}\big{\{} \,n\in\mathbb{N}:\text{The s leading decimal digits of $K_{n}$ is $\leq\left\lfloor 10^{s-1}\beta\right\rfloor$}\,\big{\}} \tag{2}\]
_where \(\beta\) varies over the real numbers in the interval \([1,10)\) and we assume that the RHS of (2) is defined for all \(\beta\in[1,10)\). If \(h_{K}^{*}\) is an increasing continuous function, then there is a uniform continuation \(h\) of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\) such that \({h_{\infty}}^{-1}=h_{K}^{*}\), and \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._
The remainder of this paper is organized as follows. In Section 2, the notations for sequences and coefficient functions are introduced. In Section 3, the distribution of leading blocks of exponential sequences under Zeckendorf expansion is introduced, and Benford's Law and strong Benford's Law under Zeckendorf expansion are declared. Introduced in Section 4 are the method of calculating the distribution results introduced in Section 3, and also the distribution results for monomial sequences \(\{n^{a}\}_{n=1}^{\infty}\). In Section 5, we introduce a general approach to the distributions of leading blocks under Zeckendorf expansion that are different from that of Benford's Law. The approach establishes the correspondence between the continuations of the Fibonacci sequences and the distributions of leading blocks under Zeckendorf expansion. In Section 6, we introduce definitions and results that generalize the contents of Sections 3, 4, and 5 for generalized Zeckendorf expansions. The absolute Benford's Law mentioned earlier in this section is properly introduced in Section 6 as well. In Section 7, the Benford behavior introduced in Theorem 1.14 is generalized for the setting of two generalized Zeckendorf expansions.
## 2 Notation and definitions
**Notation 2.1**.: Let \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\), and let \(\Omega_{n}:=\{k\in\mathbb{N}:k\leq n\}\). For simpler notation, let us use a capital letter for a sequence of numbers, and use the infinite tuple notation for listing its values, e.g., \(Q=(2,4,6,8,\ldots)\). We use the usual subscript notation for individual values, e.g., \(Q_{3}=6\).
**Definition 2.2**.: Tuples \((c_{1},c_{2},\ldots,c_{t})\in\mathbb{N}_{0}^{t}\) where \(t\in\mathbb{N}\) are called _coefficient functions of length_\(t\) if \(c_{1}>0\). If \(\epsilon\) is a coefficient function of length \(t\), we denote the \(k\)th entry by \(\epsilon(k)\) (if \(k\leq t\)), and its length \(t\) by \(\operatorname{len}(\epsilon)\). For a coefficient function \(\epsilon\), let \(\epsilon*Q\) denote \(\sum_{k=1}^{t}\epsilon(k)Q_{t-k+1}\) where \(t=\operatorname{len}(\epsilon)\), and let \(\epsilon\cdot Q\) denote \(\sum_{k=1}^{t}\epsilon(k)Q_{k}\).
If \(\epsilon=(4,1,6,2)\) and \(Q\) is a sequence, then \(\epsilon*Q=4Q_{4}+Q_{3}+6Q_{2}+2Q_{1}\), and \(\epsilon\cdot Q=4Q_{1}+Q_{2}+6Q_{3}+2Q_{4}\).
## 3 Benford's Law for Zeckendorf expansions
Let \(a\) and \(b\) be two integers \(>1\) such that \(\gcd(a,b)=1\). The sequence \(K\) given by \(K_{n}=a^{n}\) is a standard example of sequences that satisfy Benford's Law under base-\(b\) expansion. We shall declare the behavior of the leading digits of the Zeckendorf expansion of \(a^{n}\) to be Benford's Law under Zeckendorf expansion.
Let us begin with formulating Zeckendorf's Theorem in terms of coefficient functions.
**Definition 3.1**.: Let \(\mathscr{F}\) be the set of coefficient functions \(\epsilon\) such that \(\epsilon(k)\leq 1\) for all \(k\leq\operatorname{len}(\epsilon)\), and \(\epsilon(k)\epsilon(k+1)=0\) all \(k\leq\operatorname{len}(\epsilon)-1\). Let \(F\) be the shifted Fibonacci sequence such that \(F_{n+2}=F_{n+1}+F_{n}\) for all \(n\in\mathbb{N}\) and \((F_{1},F_{2})=(1,2)\). Let \(\phi\) be the golden ratio, let \(\omega:=\phi^{-1}\), and let \(\widehat{F}=(1,\omega,\omega^{2},\ldots)\) be the sequence given by \(\widehat{F}_{n}=\omega^{n-1}\).
Recall the product notation from Definition 2.2.
**Theorem 3.2** ([19], Zeckendorf's Theorem).: _For each positive integer \(n\), there is a unique coefficient function \(\epsilon\in\mathscr{F}\) such that \(n=\epsilon*F\)._
Recall the example \(3^{5}=F_{12}+F_{5}+F_{2}\). If \(\epsilon=(1,0,0,0,0,0,0,1,0,0,1,0)\), then \(\epsilon\in\mathscr{F}\) and \(3^{5}=\epsilon*F\).
**Definition 3.3**.: The expression \(n=\epsilon*F\) where \(n\in\mathbb{N}\) and \(\epsilon\in\mathscr{F}\) is called _the \(\mathscr{F}\)-expansion of \(n\)_ or _the Zeckendorf expansion of \(n\)_.
### Benford's Law
If \(\epsilon\in\mathscr{F}\) and \(\operatorname{len}(\epsilon)\geq 2\), then \((\epsilon(1),\epsilon(2))=(1,0)\) is always the case, and hence, the probability of having \((\epsilon(1),\epsilon(2))=(1,0)\) is \(1\). For the purpose of demonstration, we consider the first three entries of \(\epsilon\).
To denote arbitrarily many _leading blocks of coefficient functions_, which are defined in Definition 3.4 below, we shall use the boldface font and subscripts, e.g., \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), and in particular, \(\mathbf{b}_{k}\) for \(k=1,2\) are not numbers, but tuples. These should not be confused with the entries of a sequence \(Q\), e.g., \(Q_{1}\) and \(Q_{2}\), which are numbers; we use the regular font for sequences.
**Definition 3.4**.: A coefficient function of length \(s\) is also called _a leading block of length \(s\)_ in the context of investigating the frequency of leading blocks, and it is denoted with boldface fonts, e.g. \(\mathbf{b}=(1,0,0,1)\in\mathscr{F}\), \(\mathbf{b}(3)=0\), and \(\mathbf{b}(4)=1\). Let \(\mathscr{F}_{3}:=\{\mathbf{b}_{1},\mathbf{b}_{2}\}\) where \(\mathbf{b}_{1}=(1,0,0)\), \(\mathbf{b}_{2}=(1,0,1)\) are leading blocks of length \(3\), and the set is called _the set of leading blocks of length \(3\) under \(\mathscr{F}\)-expansion_. If \(\mathbf{b}\in\mathscr{F}_{3}\) and \(\mathbf{b}=\mathbf{b}_{1}\), then define \(\widetilde{\mathbf{b}}:=\mathbf{b}_{2}\), and if \(\mathbf{b}\in\mathscr{F}_{3}\) and \(\mathbf{b}=\mathbf{b}_{2}\), then define \(\widetilde{\mathbf{b}}:=(1,1,0)\).
The block \(\widetilde{\mathbf{b}}=(1,1,0)\) is not a member of \(\mathscr{F}\), and hence, does not occur as the leading block of an \(\mathscr{F}\)-expansion, but it's convenient to use for Theorem 3.5, where we rely on the equality \(\widetilde{\mathbf{b}}\cdot(1,\omega^{1},\omega^{2})=\phi\); see Definitions 2.2 and 3.1. The block \(\widetilde{\mathbf{b}}\) makes the statements of Definition 3.6 below more aesthetic, and the principle of defining an exclusive block such as \((1,1,0)\) for other generalized Zeckendorf expansions will be explained in Definition 3.7 and Section 6.
The following is a special version of Corollary 4.7, and it is Theorem 1.3 written in terms of the dot product and blocks. Recall the notation \(\operatorname{LB}_{s}\) from Definition 1.1, the set \(\mathscr{F}_{3}\) from Definition 3.4, the sequence \(\widehat{F}\) from Definition 3.1, and the dot product from Definition 2.2.
**Theorem 3.5**.: _Let \(K\) be a sequence given by \(K_{n}=a^{n}\) where \(a>1\) is an integer. Then, given \(\mathbf{b}\in\mathscr{F}_{3}\),_
\[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})= \mathbf{b}\,\right\}\;=\;\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\]
Motivated from the distribution of these standard sequences, we introduce the following definition.
**Definition 3.6**.: A sequence \(K\) of positive integers is said to _satisfy \(\mathscr{F}\)-Benford's Law_ or _satisfy Benford's Law under \(\mathscr{F}\)-expansion_ if given \(\mathbf{b}\in\mathscr{F}_{3}\),
\[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,\colon\operatorname{LB}_{3}(K_{n })=\mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot \widehat{F}}{\mathbf{b}\cdot\widehat{F}}.\]
Let us demonstrate how the structure of the formulas in Definition 3.6 compares with the one for base-10 expansion. Consider the two leading blocks \(\mathbf{c}_{1}=(2,1,2)\) and \(\mathbf{c}_{2}=(2,1,3)\) for base-10 expansion. Let \(b=10\). Then, strong Benford's Law for decimal expansion requires the probability of having the leading block \(\mathbf{c}_{1}\) to be \(\log_{10}\frac{213}{212}\), which is equal to
\[\log_{b}\frac{\mathbf{c}_{2}\cdot(1,b^{-1},b^{-2})}{\mathbf{c}_{1}\cdot(1,b^ {-1},b^{-2})}\,=\,\log_{b}\frac{b^{2}\mathbf{c}_{2}\cdot(1,b^{-1},b^{-2})}{b^ {2}\mathbf{c}_{1}\cdot(1,b^{-1},b^{-2})}\,=\,\log_{b}\frac{\mathbf{c}_{2}\cdot (b^{2},b,1)}{\mathbf{c}_{1}\cdot(b^{2},b,1)}\,=\,\log_{10}\frac{213}{212}.\]
The first expression in terms of the negative powers of \(b\) is analogous to the ones in Definition 3.6.
### Strong Benford's Law
Under base-\(b\) expansion, a sequence \(K\) is said to satisfy strong Benford's Law if the probability of the first \(M\) leading digits of \(K_{n}\) satisfies a certain logarithmic distribution, and exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) where \(a>1\) is an integer are standard examples that satisfy strong Benford's Law under base-\(b\) expansion. In Corollary 4.7, we calculate the distribution of leading blocks of arbitrary length of the Zeckendorf expansions of exponential sequence \(\{a^{n}\}_{n=1}^{\infty}\). We declare this distribution to be _strong Benford's Law under Zeckendorf expansion_. We state the formal definition below.
Recall the convolution \(*\) from Definition 2.2.
**Definition 3.7**.: Given an integer \(s\geq 2\), let \(\mathscr{F}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(s\) occurring in the \(\mathscr{F}\)-expansions of the positive integers such that \(1+\mathbf{b}_{k}*F=\mathbf{b}_{k+1}*F\) for all \(k\leq\ell-1\). The leading block \(\mathbf{b}_{\ell}\) is called _the largest leading block of length \(s\) under \(\mathscr{F}\)-expansion_.
If \(s\) is even, then let \(\mathbf{b}_{\ell+1}:=(1,0,1,0,\ldots,1,0,1,1)\), and if \(s\) is odd, then it is \(\mathbf{b}_{\ell+1}:=(1,0,1,0,\ldots,1,1,0)\). If \(\mathbf{b}=\mathbf{b}_{k}\in\mathscr{F}_{s}\), then we denote \(\mathbf{b}_{k+1}\) by \(\widetilde{\mathbf{b}}\).
Notice that the existence of \(\widetilde{\mathbf{b}}\) defined above is guaranteed by Zeckendorf's Theorem. Let us demonstrate examples of \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\). Let \(\mathbf{b}=(1,0,0,0,1,0)\in\mathscr{F}_{6}\). Then, \(\widetilde{\mathbf{b}}=(1,0,0,1,0,0)\in\mathscr{F}_{6}\), and \(1+\mathbf{b}*F=\widetilde{\mathbf{b}}*F\). If we list the coefficient functions in \(\mathscr{F}_{6}\) with respect to the lexicographical order, then \(\widetilde{\mathbf{b}}\) is the immediate successor of \(\mathbf{b}\) if \(\mathbf{b}\neq(1,0,1,0,1,0)\).
For each case of \(s\) being even or odd, the largest leading block \(\mathbf{b}\) of length \(s\) satisfies \(1+\mathbf{b}*F=\widetilde{\mathbf{b}}*F\). If \(\mathbf{b}^{\prime}=(1,0,1,0,1,0)\), then \(\widetilde{\mathbf{b}}^{\prime}=(1,0,1,0,1,1)\), and below we shall demonstrate
that the equality \(\widetilde{\mathbf{b}}^{\prime}\cdot\widehat{F}=\sum_{k=0}^{2}\omega^{2k}+\omega^ {5}=\phi\) makes the sum of the probabilities in Theorem 3.8 and Definition 3.9 be 1.
Let us compare this setup with the case of base-10 expansion. Let \(\mathbf{c}=(4,5,6,7,8,9)\) be the leading block of length 6 for base-10 expansion, and let the sequence \(H\) given by \(H_{n}=10^{n-1}\) be the "base" sequence. Then, \(1+\mathbf{c}*H=\widetilde{\mathbf{c}}*H\) where \(\widetilde{\mathbf{c}}=(4,5,6,7,9,0)\). If we list all the coefficient functions of length 6, with respect to the lexicographical order, that are legal for base-10 expansion, then \(\widetilde{\mathbf{c}}\) is the immediate successor of \(\mathbf{c}\). If \(\mathbf{c}^{\prime}=(9,10,9,9,9,9)\), then we let \(\widetilde{\mathbf{c}}^{\prime}=(9,10,0,0,0,0)\), and \(\sum_{n=1}^{6}\widetilde{\mathbf{c}}^{\prime}(n)10^{n-1}=1+\mathbf{c}^{\prime }*H=10^{6}\). If strong Benford's Law under base-10 expansion is satisfied, the probability of having the leading block \(\mathbf{c}^{\prime}\) under base-10 expansion is
\[\log_{10}\frac{\widetilde{\mathbf{c}}^{\prime}*H}{\mathbf{c}^{\prime}*H}\,=\, \log_{10}\frac{\widetilde{\mathbf{c}}^{\prime}\cdot\widehat{H}}{\mathbf{c}^{ \prime}\cdot\widehat{H}}\,=\,1-\log_{10}\mathbf{c}^{\prime}\cdot\widehat{H}\]
where \(\widehat{H}\) is the sequence given by \(\widehat{H}_{n}=10^{-(n-1)}\).
Recall the sequence \(\widehat{F}\) from Definition 3.1.
**Theorem 3.8**.: _Let \(K\) be a sequence of positive integers given by \(K_{n}=ab^{n}(1+o(1))\) where a and \(b\) are positive real numbers such that \(\log_{\phi}b\) is irrational. Then, given \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\),_
\[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\]
Proof.: It follows immediately from Corollary 4.7.
Let us demonstrate below that the probabilities add up to 1 for \(s=6\), but the argument is sufficiently general to be extended for all cases of \(s\). Let \(\mathscr{F}_{6}=(\mathbf{b}_{1},\ldots,\mathbf{b}_{\ell})\) such that \(\mathbf{b}_{k+1}=\widetilde{\mathbf{b}}_{k}\) for all \(1\leq k\leq\ell\). Then, \(\mathbf{b}_{1}=(1,0,0,0,0,0)\) and \(\mathbf{b}_{\ell}=(1,0,1,0,1,0)\). Then, \(\mathbf{b}_{\ell+1}=(1,1,0,0,0,0)\), and
\[\sum_{k=1}^{\ell}\log_{\phi}\frac{\widetilde{\mathbf{b}}_{k}\cdot\widehat{F}} {\mathbf{b}_{k}\cdot\widehat{F}}\,=\,\sum_{k=1}^{\ell}\log_{\phi}(\mathbf{b}_ {k+1}\cdot\widehat{F})-\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})\,=\,\log_{ \phi}(\mathbf{b}_{\ell+1}\cdot\widehat{F})-\log_{\phi}1\,=\,1.\]
**Definition 3.9**.: Let \(K\) be a sequence of positive integers approaching \(\infty\). Then, \(K\) is said to _satisfy strong Benford's Law under \(\mathscr{F}\)-expansion_ if given \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\),
\[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{ F}}{\mathbf{b}\cdot\widehat{F}}.\]
**Example 3.10**.: Let \(K\) be a sequence satisfying strong Benford's Law under \(\mathscr{F}\)-expansion, e.g., \(\{2^{n}\}_{n=1}^{\infty}\); see Theorem 3.8. Let \(\mathbf{b}=(1,0,0,0,1,0)\), so \(\widetilde{\mathbf{b}}=(1,0,0,1,0,0)\). Then,
\[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{6}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{1+\omega^{3}}{1+\omega^{4}}\approx 0.157.\]
## 4 Calculations
Notice that \(\log_{b}(x)\) makes it convenient to calculate the distribution of the leading digits of exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) under base-\(b\) expansion where \(b>1\) is an integer. In this section, we introduce an analogue of \(\log_{b}(x)\) for Zeckendorf expansion in Section 4.1, and use it for various calculations.
As mentioned in the introduction, these functions are merely a tool for calculating the leading digits, and in Section 5, we consider other continuations, and demonstrate their connections to different distributions of leading digits.
### An analytic continuation of the Fibonacci sequence
Below we introduce an analytic continuation of the Fibonacci sequence.
**Definition 4.1**.: Let \(\alpha=\frac{\phi}{\sqrt{5}}\), and let \(\mathfrak{F}:\mathbb{R}\to\mathbb{R}\) be the function given by
\[\mathfrak{F}(x)=\alpha(\phi^{x}+\phi^{-x}\cos(\pi x)\phi^{-2}).\]
We call the function a _Benford continuation of the Fibonacci sequence_.
Notice that \(F_{n}=\frac{1}{\sqrt{5}}(\phi^{n+1}-(-1/\phi)^{n+1})=\frac{\phi}{\sqrt{5}}( \phi^{n}+(-1)^{n}\phi^{-(n+2)})\). Thus, \(\mathfrak{F}\) is a real analytic continuation of \(F_{n}\), so \(\mathfrak{F}(n)=F_{n}\) for all \(n\in\mathbb{N}\). It is an increasing function on \([1,\infty)\). Let \(\mathfrak{F}^{-1}\) denote the inverse function of \(\mathfrak{F}:[1,\infty)\to\mathbb{R}\). Comparing it with the case of base-10 expansion, we find that \(10^{x-1}\) is an analytic continuation of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\), and its inverse is \(1+\log_{10}(x)\), which is the main object for the equidistribution for Benford's Law under base-10 expansion. The equidistribution property described in Theorem 4.5 is associated with strong Benford's Law under \(\mathscr{F}\)-expansion, and the name of the function is due to this connection.
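As a sanity check (ours, not the paper's), the following Python snippet evaluates \(\mathfrak{F}\) at integer arguments and confirms that it interpolates the shifted Fibonacci sequence.

```python
from math import cos, pi, sqrt

phi = (1 + sqrt(5)) / 2
alpha = phi / sqrt(5)

def benford_continuation(x):
    # Definition 4.1: F(x) = alpha * (phi**x + phi**(-x) * cos(pi*x) * phi**(-2))
    return alpha * (phi ** x + phi ** (-x) * cos(pi * x) * phi ** (-2))

F = [1, 2]
while len(F) < 20:
    F.append(F[-1] + F[-2])

for n in range(1, 21):
    assert abs(benford_continuation(n) - F[n - 1]) < 1e-6
print("the continuation matches F_n for n = 1, ..., 20")
```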
**Lemma 4.2**.: _For real numbers \(x\geq 1\), we have \(\mathfrak{F}(x)=\alpha\phi^{x}+O(\phi^{-x})\), and_
\[\mathfrak{F}^{-1}(x)\;=\;\log_{\phi}(x)-\log_{\phi}(\alpha)+O(1/x^{2}).\]
Proof.: Let \(y=\alpha\phi^{x}+\alpha\phi^{-x}\cos(\pi x)\phi^{-2}\) and let \(w=\alpha\phi^{-x}\cos(\pi x)\phi^{-2}=O(\phi^{-x})\). Since \(y=\alpha\phi^{x}+o(1)\), we have \(w=O(1/y)\). Then, \(y=\alpha\phi^{x}+w\) implies
\[x \;=\;\log_{\phi}(y-w)-\log_{\phi}\alpha\;=\;\log_{\phi}(y)-\log _{\phi}\alpha+\log_{\phi}(1-w/y)\] \[\;=\;\log_{\phi}(y)-\log_{\phi}\alpha+O(|w|/y)\;=\;\log_{\phi}(y )-\log_{\phi}\alpha+O(1/y^{2}).\]
### Equidistribution
Recall the set \(\mathscr{F}_{s}\) of leading blocks from Definition 3.7. In this section, having a leading block \(\mathbf{b}\in\mathscr{F}_{s}\) is interpreted in terms of the fractional part of the values of \(\widetilde{\mathfrak{F}}^{-1}\).
**Definition 4.3**.: Given \(\epsilon\in\mathbb{N}_{0}^{t}\) and an integer \(s\leq t\), let \(\epsilon|s:=(\epsilon(1),\ldots,\epsilon(s))\).
Recall \(\widehat{F}\) from Definition 3.1 and the product notation from Definition 2.2.
**Lemma 4.4**.: _Let \(K\) be a sequence of positive real numbers approaching \(\infty\), and let \(s\) be an integer \(\geq 2\). Let \(\mathbf{b}\in\mathscr{F}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). Then, there are real numbers \(\gamma_{n}=o(1)\) and \(\widetilde{\gamma}_{n}=o(1)\) such that \(n\in A_{\mathbf{b}}\) if and only if_
\[\log_{\phi}\mathbf{b}\cdot\widehat{F}+\gamma_{n}\;\leq\;\operatorname{frc} \bigl{(}\widetilde{\mathfrak{F}}^{-1}(K_{n})\bigr{)}\;<\;\log_{\phi}\widetilde {\mathbf{b}}\cdot\widehat{F}+\widetilde{\gamma}_{n} \tag{3}\]
_where \(\widetilde{\gamma}_{n}=0\) if \(\mathbf{b}\) is the largest block of length \(s\)._
Proof.: Suppose that \(n\in\mathbb{N}\) is sufficiently large, so that \(\mathbf{b}^{\prime}:=\operatorname{LB}_{s}(K_{n})\) exists. By Zeckendorf's Theorem, there is \(\mu\in\mathscr{F}\) such that \(K_{n}=\mu*F\), so \(m:=\operatorname{len}(\mu)\geq s\), and \(\mathbf{b}^{\prime}=\mu|s\). There are \(\epsilon\in\mathscr{F}\) of length \(m\) and a coefficient function \(\check{\epsilon}\) of length \(m\) such that \(\epsilon|s=\mathbf{b}^{\prime}\), \(\check{\epsilon}|s=\widetilde{\mathbf{b}}^{\prime}\), \(\epsilon(k)=\check{\epsilon}(k)=0\) for all \(k>s\), so \(\epsilon*F\leq K_{n}<\check{\epsilon}*F\). Recall \(\alpha\) from Definition 4.1. Then,
\[\epsilon*F\;=\;\alpha\sum_{k=1}^{s}\epsilon(k)\phi^{m-k+1}+O(1)\;=\;\alpha\phi ^{m}(1+o(1))\sum_{k=1}^{s}\epsilon(k)\omega^{k-1}\;=\;\alpha\phi^{m}(1+o(1)) \,\mathbf{b}^{\prime}\cdot\widehat{F}.\]
By Lemma 4.2,
\[\widetilde{\mathfrak{F}}^{-1}(\epsilon*F)\;=\;m+\log_{\phi}(\mathbf{b}^{ \prime}\cdot\widehat{F})+\gamma_{n},\quad\gamma_{n}\;=\;o(1).\]
Similarly, we have \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)=m+\log_{\phi}(\widetilde {\mathbf{b}}^{\prime}\cdot\widehat{F})+\widetilde{\gamma}_{n}\) where \(\widetilde{\gamma}_{n}=o(1)\). If \(\mathbf{b}^{\prime}\) is the largest block of length \(s\), then \(\check{\epsilon}*F=F_{m+1}\), and hence, \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)=m+1\), which implies \(\widetilde{\gamma}_{n}=0\). In general, \(\check{\epsilon}*F\leq F_{m+1}\), so \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)\leq m+1\).
Thus, if \(n\in A_{\mathbf{b}}\), then \(\mathbf{b}^{\prime}=\mathbf{b}\), and
\[\epsilon*F\leq K_{n}\;<\;\check{\epsilon}*F\Rightarrow\widetilde{ \mathfrak{F}}^{-1}(\epsilon*F)\leq\widetilde{\mathfrak{F}}^{-1}(K_{n})\;<\; \widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)\] \[\qquad\Rightarrow\log_{\phi}\mathbf{b}\cdot\widehat{F}+\gamma_{n }\;\leq\;\operatorname{frc}\bigl{(}\widetilde{\mathfrak{F}}^{-1}(K_{n}) \bigr{)}\;<\;\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+\widetilde{ \gamma}_{n}.\]
There is no difficulty in reversing this argument, and we leave the proof of the converse to the reader.
**Theorem 4.5**.: _Let \(K\) be an increasing sequence of positive integers such that \(\operatorname{frc}\bigl{(}\widetilde{\mathfrak{F}}^{-1}(K_{n})\bigr{)}\) is equidistributed. Then, \(K\) satisfies strong Benford's Law under the \(\mathscr{F}\)-expansion._
Proof.: Notice that \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b} \,\right\}\) where \(s\geq 2\) is equal to the probability of \(n\) satisfying (3). Let \(t\in\mathbb{N}\). Then, there is an integer \(M_{t}\) such that \(\left|\gamma_{n}\right|\) and \(\left|\widetilde{\gamma}_{n}\right|\) are \(<1/t\) for all \(n\geq M_{t}\). Thus, by Lemma 4.4,
\[\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}+o(1)\]
\[\leq\,\operatorname{Prob}\left\{k\in\Omega_{n}:\log_{\phi}\mathbf{b}\cdot\widehat{F}-\frac{1}{t}\,\leq\,\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{k})\right)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+\frac{1}{t}\right\}+o(1)\]
\[\Rightarrow\,\limsup_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}\,\leq\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{\mathbf{b}\cdot\widehat{F}}+\frac{2}{t}.\]
\[\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}+o(1)\]
\[\geq\,\operatorname{Prob}\left\{\,k\in\Omega_{n}:\log_{\phi}\mathbf{b}\cdot\widehat{F}+\frac{1}{t}\leq\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{k})\right)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}-\frac{1}{t}\,\right\}+o(1)\]
\[\Rightarrow\,\liminf_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}\,\geq\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{\mathbf{b}\cdot\widehat{F}}-\frac{2}{t}.\]
Since \(\liminf\) and \(\limsup\) are independent of \(t\), we prove that \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\right\}=\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{ \mathbf{b}\cdot\widehat{F}}\).
The converse of Theorem 4.5 is true as well, i.e., if \(K\) satisfies strong Benford's Law under \(\mathcal{F}\)-expansion, then \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed. We shall prove it in Section 5.
The following lemma is useful, and it is probably known.
**Lemma 4.6**.: _Let \(h:\mathbb{N}\to\mathbb{R}\) be a function such that \(\operatorname{frc}(h(n))\) is equidistributed, and let \(E:\mathbb{N}\to\mathbb{R}\) be a function such that \(E(n)\to 0\) as \(n\to\infty\). Then, \(\operatorname{frc}(h(n)+E(n))\) is equidistributed._
**Corollary 4.7**.: _Let \(K\) be a sequence of positive integers given by \(K_{n}=ab^{n}(1+o(1))\) where \(a\) and \(b\) are positive real numbers such that \(\log_{\phi}b\) is irrational. Then, \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed, and hence, given \(\mathbf{b}\in\mathcal{F}_{s}\) where \(s\geq 2\),_
\[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\right\}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\]
Proof.: By Lemma 4.2,
\[\widehat{\mathfrak{F}}^{-1}(K_{n})\,=\,n\log_{\phi}b-\log_{\phi}(a/\alpha)+ \log_{\phi}(1+o(1))+o(1).\]
Since \(\log_{\phi}b\) is irrational, by Weyl's Equidistribution Theorem, \(\operatorname{frc}\left(n\log_{\phi}b\right)\) is equidistributed, and by the lemma, \(\operatorname{frc}\left(n\log_{\phi}b+o(1)\right)\) is equidistributed. Shifting it by a constant \(-\log_{\phi}(a/\alpha)\) does not change the equidistribution property, and this concludes the proof.
For example, if \(K\) is a sequence given by \(K_{n}=\sum_{k=1}^{N}a_{k}\,b_{k}^{\,n}\) where \(a_{k},b_{k}\in\mathbb{Z}\), \(a_{1}>0\), and \(b_{1}>|b_{k}|\) for all \(k\geq 2\), then \(K_{n}=a_{1}b_{1}^{n}(1+o(1))\), and \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed. Many increasing sequences \(K\) of positive integers given by a linear recurrence with constant positive integer coefficients satisfy \(K_{n}=ab^{n}(1+o(1))\) where \(\log_{\phi}(b)\) is irrational, and hence, \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed.
### The leading blocks of integer powers
Let \(a\) be a positive integer, and let \(K\) be the sequence given by \(K_{n}=n^{a}\). Then, \(K\) does not satisfy Benford's Law under the base-10 expansion, but it has a close relationship with Benford's Law [14]. In this section, we show that both statements are true under \(\mathscr{F}\)-expansion as well. Recall \(\Omega_{n}\) from Notation 2.1 and \(\mathscr{F}_{3}\) from Definition 3.4, and let \(\mathbf{b}_{1}:=(1,0,0)\in\mathscr{F}_{3}\). We also introduce the oscillating behavior of \(\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{3}(K_{k})= \mathbf{b}_{1}\,\right\}\) as \(n\to\infty\), and hence, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})= \mathbf{b}_{1}\,\right\}\) does not exist.
**Example 4.8**.: Let \(K\) be the sequence given by \(K_{n}=n\), and let \(t>0\) be a large integer. Given a sufficiently large positive random integer \(n<F_{t+1}\), let \(n=\mu*F\) be the \(\mathscr{F}\)-expansion, and \(M:=\operatorname{len}(\mu)\). Notice that \(\operatorname{LB}_{3}(n)=\mathbf{b}_{1}\) if and only if \(n=F_{M}+m\) where \(0\leq m<F_{M-2}\). Thus, there are \(F_{M-2}\) integers \(n\) in \([1,F_{t+1})\) such that \(F_{M}\leq n<F_{M+1}\) and \(\operatorname{LB}_{3}(n)=\mathbf{b}_{1}\). Thus,
\[\operatorname{Prob}\left\{\,n\in\Omega_{F_{t+1}}:\operatorname{LB}_{3}(n)= \mathbf{b}_{1}\,\right\}\,=\,\left(\frac{1}{F_{t+1}}\sum_{M=3}^{t}F_{M-2} \right)+o(1)=\left(\frac{1}{F_{t+1}}\sum_{M=3}^{t}\alpha\phi^{M-2}+o(1)\right) +o(1)\\ =\,\frac{1}{\alpha\phi^{t+1}+o(1)}\frac{\alpha\phi^{t-1}}{\phi-1} +o(1)\,=\,\frac{1}{\phi^{2}(\phi-1)}+o(1)\,=\,\phi-1+o(1)\]
as a function of \(t\). However, by Theorem 4.10, we have
\[\limsup_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}: \operatorname{LB}_{3}(k)=\mathbf{b}_{1}\,\right\}\,=\,\frac{\phi+1}{\phi+2} \approx.724,\] \[\liminf_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}: \operatorname{LB}_{3}(k)=\mathbf{b}_{1}\,\right\}\,=\,\phi-1\approx.618.\]
Thus, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(n)= \mathbf{b}_{1}\,\right\}\) does not exist.
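The oscillation is easy to observe numerically. In the sketch below (our own illustration), the condition \(\mathrm{LB}_{3}(n)=\mathbf{b}_{1}\) is tested through the characterization \(n=F_{M}+m\) with \(0\leq m<F_{M-2}\) used above, and the running proportion over \(n\leq 2\cdot 10^{6}\) indeed ranges between roughly \(0.618\) and \(0.724\).

```python
from bisect import bisect_right

F = [1, 2]
while F[-1] <= 2_000_000:
    F.append(F[-1] + F[-2])

def lb3_is_100(n):
    # LB_3(n) = (1,0,0) iff n = F_M + m with 0 <= m < F_{M-2}  (Example 4.8)
    i = bisect_right(F, n) - 1           # F[i] is the largest Fibonacci term <= n
    return i >= 2 and n - F[i] < F[i - 2]

count, lo, hi = 0, 1.0, 0.0
for n in range(1, 2_000_001):
    count += lb3_is_100(n)
    if n >= 1000:                         # ignore small-n transients
        p = count / n
        lo, hi = min(lo, p), max(hi, p)
print(lo, hi)                             # roughly 0.618 and 0.724
```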
Recall \(\mathfrak{F}\) from Definition 4.1, and its inverse \(\mathfrak{F}^{-1}\). We use the function \(\mathfrak{F}\) to more generally handle the distribution of the leading blocks of \(\{n^{a}\}_{n=1}^{\infty}\) with any length. Given a positive integer \(m\), let \(A_{m}=\{n\in\mathbb{N}:n<F_{m}^{1/a}\}\).
**Lemma 4.9**.: _If \(\beta\in[0,1]\), then_
\[\operatorname{Prob}\left\{\,n\in A_{m}:\operatorname{frc}\left(\mathfrak{F}^{- 1}(n^{a})\right)\leq\beta\,\right\}\,=\,\frac{\phi^{\beta/a}-1}{\phi^{1/a}-1}+ O(m\phi^{-m/a}).\]
Proof.: Let \(m\in\mathbb{N}\), and let \(n\in A^{\prime}_{m+1}:=A_{m+1}-A_{m}\), so that \(F_{m}\leq n^{a}<F_{m+1}\) and \(m\leq\mathfrak{F}^{-1}(n^{a})<m+1\). Thus, given a real number \(\beta\in[0,1]\),
\[\#\left\{\,n\in A^{\prime}_{m+1}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\right\}\,=\,\#\left\{\,n\in A^{\prime}_{m+1}:m\leq\mathfrak{F}^{-1}(n^{a})\leq m+\beta\,\right\}\]
\[=\,\#\left\{\,n\in A^{\prime}_{m+1}:n\leq\mathfrak{F}(m+\beta)^{1/a}\,\right\}\,=\,\mathfrak{F}(m+\beta)^{1/a}-\mathfrak{F}(m)^{1/a}+O(1)\,=\,\alpha^{1/a}\phi^{(m+\beta)/a}-\alpha^{1/a}\phi^{m/a}+O(1).\]
Summing these counts over the shells \(A^{\prime}_{j+1}\) for \(j\leq m\) yields
\[\#\left\{\,n\in A_{m+1}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\right\}\,=\,\alpha^{1/a}\phi^{(m+\beta)/a}\gamma-\alpha^{1/a}\phi^{m/a}\gamma+O(m),\quad\gamma=\frac{\phi^{1/a}}{\phi^{1/a}-1}.\]
This proves that
\[\mathrm{Prob}\left\{\,n\in A_{m+1}:\mathrm{frc}\left(\widehat{ \mathfrak{F}}^{-1}(n^{a})\right)\leq\beta\,\right\} \,=\,\frac{\alpha^{1/a}\phi^{(m+\beta)/a}\gamma-\alpha^{1/a}\phi^{m/ a}\gamma+O(m)}{F_{m+1}^{1/a}+O(1)}\] \[\,=\,\frac{\phi^{\beta/a}\gamma-\gamma+O(m\phi^{-m/a})}{\phi^{1/a} +O(\phi^{-m/a})}\,=\,\frac{\phi^{\beta/a}-1}{\phi^{1/a}-1}+O(m\phi^{-m/a}).\]
Recall from Lemma 4.4 that
\[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\, \right\}=\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{frc}\left(\widehat{ \mathfrak{F}}^{-1}(n^{a})\right)\leq\delta_{1}+o(1)\,\right\}\]
where \(\delta_{1}:=\log_{\phi}\frac{\widetilde{\mathbf{b}}_{1}\cdot\widehat{F}}{\mathbf{b}_{1}\cdot\widehat{F}}\). Thus, as \(m\to\infty\), by Lemma 4.9,
\[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\, \right\}\,\to\,\frac{\phi^{\delta_{1}/a}-1}{\phi^{1/a}-1}\,=\,\frac{(1+\omega ^{2})^{1/a}-1}{\phi^{1/a}-1}\]
where \(\omega=\phi^{-1}\). Let us show that
\[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\, \right\}\,\not\to\,\delta_{1}\]
as \(m\to\infty\). We claim that the ratio \(\frac{(1+\omega^{2})^{1/a}-1}{\phi^{1/a}-1}\) is not equal to \(\delta_{1}=\log_{\phi}(1+\omega^{2})\). Since \(a\in\mathbb{N}\), the ratio is an algebraic number over \(\mathbb{Q}\). However, by the Gelfond-Schneider Theorem, \(\log_{\phi}(1+\omega^{2})\) is a transcendental number. Thus, \(K\) does not satisfy Benford's Law under the \(\mathscr{F}\)-expansion.
However, as noted in [14] for base-\(b\) expansions, we have
\[\lim_{a\to\infty}\lim_{m\to\infty}\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{ LB}_{3}(n^{a})=\mathbf{b}_{1}\,\right\}\,=\,\lim_{a\to\infty}\frac{\phi^{ \delta_{1}/a}-1}{\phi^{1/a}-1}\,=\,\delta_{1}\,=\,\log_{\phi}(1+\omega^{2}).\]
Even though the leading blocks of \(K_{n}\) do not satisfy Benford's Law under \(\mathscr{F}\)-expansion, the limiting behavior of high power sequences for special values of \(n\) resembles Benford's Law.
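Numerically (a small check we add here), the limiting probability indeed climbs toward \(\delta_{1}\approx 0.672\) as the exponent \(a\) grows.

```python
from math import log

phi = (1 + 5 ** 0.5) / 2
omega = 1 / phi
delta1 = log(1 + omega ** 2) / log(phi)    # log_phi(1 + omega^2), about 0.672

for a in (1, 2, 5, 10, 100, 1000):
    print(a, (phi ** (delta1 / a) - 1) / (phi ** (1 / a) - 1))
print("delta_1 =", delta1)
```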
Recall \(\Omega_{n}\) from Notation 2.1. Let us use Lemma 4.9 to prove that \(\mathrm{Prob}\left\{\,k\in\Omega_{n}:\mathrm{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{k})\right)\leq\beta\,\right\}\) oscillates, and does not converge.
**Theorem 4.10**.: _Let \(\beta\) be a real number in \([0,1]\), and let \(\,r:=(\phi^{\beta/a}-1)/(\phi^{1/a}-1)\). Given an integer \(n>1\), let \(\widehat{\mathfrak{F}}^{-1}(n^{a})=m+p\) where \(\,p=\mathrm{frc}\left(\widehat{\mathfrak{F}}^{-1}(n^{a})\right)\) and \(\,m\in\mathbb{N}\). Then,_
\[P_{n}:=\mathrm{Prob}\left\{\,k\in\Omega_{n}:\mathrm{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{k})\right)\leq\beta\,\right\}\,=\,\,\begin{cases}\frac{r+\phi^{p/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})&\text{if }0\leq p\leq\beta\\ \frac{r+\phi^{\beta/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})&\text{if }\beta<p<1\end{cases}.\]
_In particular,_
\[\limsup P_{n}=r\phi^{1/a-\beta/a}=\beta+O(1/a),\quad and\quad\liminf P_{n}=r= \beta+O(1/a).\]
Proof.: Let \(m\) be a sufficiently large positive integer, and let \(n\in A_{m+1}-A_{m}\). Let \(n=\mathfrak{F}(m+p)^{1/a}\) for \(p\in[0,1)\). If \(p\leq\beta\), then, \(\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)=\operatorname{frc} \left(\mathfrak{F}^{-1}\mathfrak{F}(m+p)\right)=\operatorname{frc}(m+p)=p\leq\beta\), and if \(p>\beta\), then, \(\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)=p>\beta\). Thus,
\[\left\{n\in A_{m+1}-A_{m}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a}) \right)\leq\beta\right\}\;=\;\left\{n\in A_{m+1}-A_{m}:n\leq\mathfrak{F}(m+ \beta)^{1/a}\right\}.\]
If \(n\leq\mathfrak{F}(m+\beta)^{1/a}\), i.e., \(p\leq\beta\), then by Lemma 4.9
\[P_{n} =\;\frac{1}{n}\left(\operatorname{Prob}\left\{\;k\in A_{m}: \operatorname{frc}\left(\mathfrak{F}^{-1}(k^{a})\right)\leq\beta\;\right\}\; \#A_{m}+n-\mathfrak{F}(m)^{1/a}+O(1)\right)\] \[=\;\frac{r\mathfrak{F}(m)^{1/a}+O(m)+\mathfrak{F}(m+p)^{1/a}- \mathfrak{F}(m)^{1/a}}{\mathfrak{F}(m+p)^{1/a}+O(1)}\] \[=\;\frac{r+O(m\phi^{-m/a})+\phi^{p/a}-1}{\phi^{p/a}+O(\phi^{-m/a })}\;=\;\frac{r+\phi^{p/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})\]
If \(n>\mathfrak{F}(m+\beta)^{1/a}\), i.e., \(p>\beta\), then
\[P_{n}=\frac{r+\phi^{\beta/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})\;=\; \frac{r\phi^{1/a}}{\phi^{p/a}}+O(m\phi^{-m/a}).\] \[\text{Thus, }\limsup P_{n}=\frac{r+\phi^{\beta/a}-1}{\phi^{\beta/a }}=\frac{r\phi^{1/a}}{\phi^{\beta/a}},\quad\liminf P_{n}=\frac{r\phi^{1/a}}{ \phi^{1/a}}=r.\]
Thus, \(\operatorname{Prob}\left\{\;n\in\mathbb{N}:\operatorname{frc}\left( \mathfrak{F}^{-1}(K_{n})\right)\leq\beta\;\right\}\) does not converge, but \(\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)\) is almost equidistributed for large values of \(a\).
**Example 4.11**.: Let \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\) be the blocks defined in Example 3.10, and let \(K\) be the sequence given by \(K_{n}=n^{2}\). By Lemma 4.4, if \(D:=\{n\in\mathbb{N}:\operatorname{LB}_{6}(K_{n})=\mathbf{b}\}\), then for \(n\in D\),
\[\log_{\phi}(1+\omega^{4})+o(1)\;<\;\operatorname{frc}\left(\mathfrak{F}^{-1}(K _{n})\right)\;<\;\log_{\phi}(1+\omega^{3})+o(1)\]
where the upper and lower bounds are functions of \(n\in D\). Let \(\beta=\log_{\phi}(1+\omega^{4})\) and \(\widetilde{\beta}=\log_{\phi}(1+\omega^{3})\). Recall \(\Omega_{n}\) from Notation 2.1. Then,
\[\operatorname{Prob}\left\{\;k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\;\right\}=\] \[\operatorname{Prob}\left\{\;k\in\Omega_{n}:\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)<\widetilde{\beta}\;\right\}\;-\;\operatorname{Prob}\left\{\;k\in\Omega_{n}:\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)<\beta\;\right\}\;+\;o(1).\]
Let \(r=(\phi^{\beta/2}-1)/(\phi^{1/2}-1)\) and \(\widetilde{r}=(\phi^{\widetilde{\beta}/2}-1)/(\phi^{1/2}-1)\), and let \(n=\mathfrak{F}(m+p)^{1/a}\) where \(p=\operatorname{frc}\big{(}\mathfrak{F}^{-1}(n^{a})\big{)}\in[0,1)\). Then, by Theorem 4.10, we have
\[\operatorname{Prob}\big{\{}\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big{\}}\;=\;\begin{cases}\frac{\widetilde{r}+\phi^{p/2}-1}{\phi^{p/2}}-\frac{r+\phi^{p/2}-1}{\phi^{p/2}}+o(1)&\text{ if }p\leq\beta\,,\\ \frac{\widetilde{r}+\phi^{p/2}-1}{\phi^{p/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{p/2}}+o(1)&\text{ if }\beta<p\leq\widetilde{\beta}\,,\\ \frac{\widetilde{r}+\phi^{\widetilde{\beta}/2}-1}{\phi^{p/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{p/2}}+o(1)&\text{ if }p>\widetilde{\beta}\,.\end{cases}\]
\[\Rightarrow\limsup_{n}\operatorname{Prob}\big{\{}\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big{\}}\;=\;\frac{\widetilde{r}+\phi^{\widetilde{\beta}/2}-1}{\phi^{\widetilde{\beta}/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{\widetilde{\beta}/2}}\approx 0.1737\]
\[\liminf_{n}\operatorname{Prob}\big{\{}\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big{\}}\;=\;\frac{\widetilde{r}+\phi^{\beta/2}-1}{\phi^{\beta/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{\beta/2}}\approx 0.1419.\]
## 5 Other continuations
Reflecting upon Lemma 4.4 and Theorem 4.5, we realized that we could consider different continuations of the Fibonacci sequence \(F\), and ask which sequence satisfies the equidistribution property, and which distributions its leading blocks follow. Let us demonstrate the idea in Example 5.4. The claims in this example can be proved using Theorem 5.6. Recall the Benford continuation \(\mathfrak{F}\) from Definition 4.1.
**Definition 5.1**.: Given \(n\in\mathbb{N}\), let \(\mathfrak{F}_{n}:[0,1]\to[0,1]\) be the increasing function given by
\[\mathfrak{F}_{n}(p)\,:=\,\frac{\mathfrak{F}(n+p)-\mathfrak{F}(n)}{\mathfrak{F }(n+1)-\mathfrak{F}(n)}=\frac{\mathfrak{F}(n+p)-\mathfrak{F}(n)}{F_{n-1}}\;=\; \phi(\phi^{p}-1)+o(1)\]
where \(F_{0}:=1\). Let \(\mathfrak{F}_{\infty}:[0,1]\to[0,1]\) be the increasing function given by \(\mathfrak{F}_{\infty}(p)=\phi(\phi^{p}-1)\).
Recall uniform continuations of sequences from Definition 1.6.
**Lemma 5.2**.: _The function \(\mathfrak{F}\) is a uniform continuation of \(F\)._
Proof.: Notice that \(\mathfrak{F}_{n}(p)=\phi(\phi^{p}-1)+\gamma(n,p)\) where \(\big{|}\gamma(n,p)\big{|}<C/\phi^{n}\) where \(C\) is independent of \(p\) and \(n\). Thus, it uniformly converges to \(\phi(\phi^{p}-1)\).
**Lemma 5.3**.: _Let \(p\in[0,1]\) be a real number. Then, \(\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(p))=F_{n}+(F_{n+1}-F_{n})p\)._
Proof.: Let \(p^{\prime}=\mathfrak{F}_{n}{}^{-1}(p)\). Then, \(\mathfrak{F}_{n}(p^{\prime})=p\), and hence, \(\frac{\mathfrak{F}(n+p^{\prime})-\mathfrak{F}(n)}{F_{n+1}-F_{n}}=p\). The assertion follows from the last equality.
**Example 5.4**.: Let \(f:[1,\infty)\to\mathbb{R}\) be the increasing continuous function whose graph is the union of the line segments from \((n,F_{n})\) to \((n+1,F_{n+1})\) for \(n\in\mathbb{N}\). Then, \(f_{\infty}(p)=p\) for all \(p\in[0,1]\). Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(\operatorname{frc}(n\pi)))\right\rfloor\). Then, by Lemma 5.3,
\[f^{-1}\big{(}\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(\operatorname{frc}(n\pi))) \big{)}=n+\operatorname{frc}(n\pi)\;\Rightarrow\;\operatorname{frc}\big{(}f^{-1 }(K_{n})\big{)}=\operatorname{frc}(n\pi)+o(1),\]
which is equidistributed.
Recall \(\mathcal{F}_{s}\) from Definition 3.7 where \(s\geq 2\), and let \(\mathbf{b}\in\mathcal{F}_{s}\). Recall \(\widehat{F}\) from Definition 3.1 and the product notation from Definition 2.2. Then, by Theorem 5.6,
\[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(K_{n})=\mathbf{b}\,\right\} \;=\;\phi(\widetilde{\mathbf{b}}\cdot\widehat{F}-\mathbf{b}\cdot\widehat{F}) \;=\;\phi^{-s+2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast \overline{F})\]
where \(\overline{F}\) is the sequence given by \(\overline{F}_{n}=\phi^{n-1}\). If \(\mathbf{b}(s)=0\), then \(\omega^{s-2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{ F})=\omega^{s-2}\), and if \(\mathbf{b}(s)=1\), then \(\omega^{s-2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{ F})=\omega^{s-1}\). For example, if \(s=6\), then
\[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(K_{n})\;=\;(1,0,0,1,0,1)\right\}\;=\;\omega^{5}\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(K_{n})\;=\;(1,0,1,0,1,0)\right\}\;=\;\omega^{4}.\]

This is nearly a uniform distribution.
Let us show that the probabilities add up to \(1\). Notice that \(\#\mathcal{F}_{s}=F_{s-1}\), \(\#\{\mathbf{b}\in\mathcal{F}_{s}:\mathbf{b}(s)=0\}=F_{s-2}\), and \(\#\{\mathbf{b}\in\mathcal{F}_{s}:\mathbf{b}(s)=1\}=F_{s-3}\). Then, by Binet's Formula, the following sum is equal to \(1\):
\[\sum_{\mathbf{b}\in\mathcal{F}_{s}}\omega^{s-2}(\widetilde{\mathbf{b}}\ast \overline{F}-\mathbf{b}\ast\overline{F})\;=\;\frac{F_{s-2}}{\phi^{s-2}}+\frac{ F_{s-3}}{\phi^{s-1}}=1.\]
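For instance, with \(s=6\), so that \(F_{4}=5\) and \(F_{3}=3\) in the indexing used here, the identity reads \[\frac{F_{4}}{\phi^{4}}+\frac{F_{3}}{\phi^{5}}\;=\;5\omega^{4}+3\omega^{5}\;=\;\frac{5(7-3\sqrt{5})+3(5\sqrt{5}-11)}{2}\;=\;1.\]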
By Lemma 5.3, we have \(K_{n}=\left\lfloor F_{n}+(F_{n+1}-F_{n})\mathrm{frc}(n\pi)\right\rfloor\) for \(n\in\mathbb{N}\), and the following are the first ten values of \(K_{n}\):
\[(1,2,3,6,11,19,33,36,64,111).\]
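These values can be reproduced directly from the closed form above; the following is a minimal sketch in Python (helper names are ours).

```python
from math import pi

def frc(x):
    return x - int(x)           # fractional part for x >= 0

F = [1, 2]                      # Fibonacci numbers in the indexing used here: F_1 = 1, F_2 = 2, ...
while len(F) < 12:
    F.append(F[-1] + F[-2])

K = [int(F[n - 1] + (F[n] - F[n - 1]) * frc(n * pi)) for n in range(1, 11)]
print(K)    # [1, 2, 3, 6, 11, 19, 33, 36, 64, 111]
```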
Let us introduce and prove the main results on continuations.
**Lemma 5.5**.: _Let \(f\) be a uniform continuation of \(F\), and let \(K\) be a sequence of positive real numbers approaching \(\infty\). Then, \(\mathrm{frc}\left(f^{-1}(\left\lfloor K_{n}\right\rfloor)\right)=\mathrm{frc} \left(f^{-1}(K_{n})\right)+o(1)\)._
Proof.: Let \(n\in\mathbb{N}\). Then, \(F_{m}\leq\left\lfloor K_{n}\right\rfloor\leq K_{n}<F_{m+1}\) for \(m\in\mathbb{N}\) depending on \(n\). Let \(K_{n}=f(m+p)\) and \(\left\lfloor K_{n}\right\rfloor=f(m+p^{\prime})\) where \(p,p^{\prime}\in[0,1]\) are real numbers, which depend on \(n\). Then, \(F_{m}+f_{m}(p^{\prime})(F_{m+1}-F_{m})+O(1)=F_{m}+f_{m}(p)(F_{m+1}-F_{m})\), and hence, \(f_{m}(p^{\prime})+o(1)=f_{m}(p)\). Thus,
\[f^{-1}(K_{n})\;=\;m+p=m+{f_{m}}^{-1}\left(f_{m}(p^{\prime})+o(1)\right)\;=\;m +{f_{m}}^{-1}\left(f_{\infty}(p^{\prime})+o(1)\right).\]
By the uniform convergence,
\[=\;m+{f_{\infty}}^{-1}\left(f_{\infty}(p^{\prime})+o(1)\right)+o(1)\;=\;m+{f_{ \infty}}^{-1}\left(f_{\infty}(p^{\prime})\right)+o(1)\;=\;m+p^{\prime}+o(1).\]
Therefore, \(\mathrm{frc}\left(f^{-1}(K_{n})\right)=\mathrm{frc}\left(f^{-1}(\left\lfloor K _{n}\right\rfloor)\right)+o(1)\).
**Theorem 5.6**.: _Let \(f:[1,\infty)\rightarrow\mathbb{R}\) be a uniform continuation of \(F\). Then there is a sequence \(K\) of positive integers approaching \(\infty\), e.g., \(K_{n}=\left\lfloor\mathfrak{F}\left(n+\mathfrak{F}_{n}^{-1}\circ f_{n}(\mathrm{frc}(n\pi))\right)\right\rfloor\), such that \(\mathrm{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed._
_Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed. Let \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\). Then,_
\[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s} (K_{n})=\mathbf{b}\,\big{\}} =\,{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi} \widetilde{\mathbf{b}}\cdot\widehat{F})-{f_{\infty}}^{-1}\circ\mathfrak{F}_{ \infty}(\log_{\phi}\mathbf{b}\cdot\widehat{F})\] \[=\,{f_{\infty}}^{-1}\bigl{(}\phi(\widetilde{\mathbf{b}}\cdot \widehat{F}-1)\bigr{)}-{f_{\infty}}^{-1}\bigl{(}\phi(\mathbf{b}\cdot\widehat{F} -1)\bigr{)}\,. \tag{4}\]
Proof.: Let \(x\geq 1\) be a real number, and let \(F_{n}\leq x<F_{n+1}\) for \(n\in\mathbb{N}\). Since \(\mathfrak{F}\) and \(f\) are increasing continuations of \(F\), there are two unique real numbers \(p\) and \(p^{\prime}\) in \([0,1]\) such that \(x=\mathfrak{F}(n+p)=f(n+p^{\prime})\). We claim that
\[f^{-1}(x)=n+{f_{n}}^{-1}(\mathfrak{F}_{n}(p)), \tag{5}\]
and \(\mathfrak{F}^{-1}(x)=n+\mathfrak{F}_{n}^{-1}(f_{n}(p^{\prime}))\). To prove the claim, note
\[\mathfrak{F}(n+p)=f(n+p^{\prime}) \,\Rightarrow\,{F_{n}}+\mathfrak{F}_{n}(p)({F_{n+1}}-{F_{n}}) \,=\,{F_{n}}+{f_{n}}(p^{\prime})({F_{n+1}}-{F_{n}})\] \[\,\Rightarrow\,p^{\prime}={f_{n}}^{-1}(\mathfrak{F}_{n}(p)),\,p= \mathfrak{F}_{n}^{-1}(f_{n}(p^{\prime})).\]
Then \(f(n+p^{\prime})=x\) and \(\mathfrak{F}(n+p)=x\) imply the claim.
Let \(\overline{K}\) and \(K\) be the sequences given by \(\overline{K}_{n}=\mathfrak{F}\bigl{(}n+\mathfrak{F}_{n}^{-1}\circ f_{n}( \operatorname{frc}(n\pi))\bigr{)}\) and \(K_{n}=\left\lfloor\overline{K}_{n}\right\rfloor\). Given \(n\in\mathbb{N}\), let \(p_{n}=\mathfrak{F}_{n}^{-1}\circ f_{n}(\operatorname{frc}(n\pi))\). Then,
\[f^{-1}(\overline{K}_{n})\,=\,n+{f_{n}}^{-1}\bigl{(}\mathfrak{F}_{n}(p_{n}) \bigr{)}\,=\,n+\operatorname{frc}(n\pi)\,.\]
Thus, \(\operatorname{frc}\Bigl{(}f^{-1}(\overline{K}_{n})\Bigr{)}\) is equidistributed. If we further assume that \(f\) is a uniform continuation, then, by Lemmas 4.6 and 5.5, \(\operatorname{frc}\Bigl{(}f^{-1}(\left\lfloor\overline{K}_{n}\right\rfloor) \Bigr{)}=\operatorname{frc}\bigl{(}f^{-1}(K_{n})\bigr{)}\) is equidistributed as well.
Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed. Let \(\mathbf{b}\in\mathscr{F}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). Let \(n\in A_{\mathbf{b}}\), and \(F_{m}\leq K_{n}<F_{m+1}\) for \(m\in\mathbb{N}\) depending on \(n\). Let \(K_{n}=\mathfrak{F}(m+p)=f(m+p^{\prime})\) where \(p\) and \(p^{\prime}\) are real numbers in \([0,1]\) depending on \(n\).
Then, by Lemma 4.4,
\[\log_{\phi}\mathbf{b}\cdot\widehat{F}+o(1)\,\,<\,\operatorname{ frc}\bigl{(}\mathfrak{F}^{-1}(K_{n})\bigr{)}\,<\,\log_{\phi}\widetilde{\mathbf{b}} \cdot\widehat{F}+o(1)\] \[\Rightarrow \log_{\phi}\mathbf{b}\cdot\widehat{F}+o(1)\,<\,\operatorname{ frc}\bigl{(}m+\mathfrak{F}_{n}^{-1}(f_{n}(p^{\prime}))\bigr{)}\,<\,\log_{\phi} \widetilde{\mathbf{b}}\cdot\widehat{F}+o(1)\] \[\Rightarrow {f_{n}}^{-1}\circ\mathfrak{F}_{n}(\log_{\phi}\mathbf{b}\cdot \widehat{F}+o(1))\,<\,p^{\prime}\,<\,{f_{n}}^{-1}\circ\mathfrak{F}_{n}(\log_{ \phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+o(1))\] \[\Rightarrow {f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\mathbf{b} \cdot\widehat{F})+o(1)\,<\,{\operatorname{frc}\bigl{(}f^{-1}(K_{n})\bigr{)}} \,<\,{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\widetilde{ \mathbf{b}}\cdot\widehat{F})+o(1).\]
Since \(\operatorname{frc}\bigl{(}f^{-1}(K_{n})\bigr{)}\) is equidistributed, the above inequalities imply the assertion (4).
Let us demonstrate a continuation for which the distribution of leading blocks of length \(4\) coincides with that of strong Benford's Law, but for which the distributions of longer blocks do not.
**Example 5.7**.: Consider \(\mathscr{F}_{4}=\{\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}\}\), i.e.,
\[\mathbf{b}_{1}=(1,0,0,0),\ \mathbf{b}_{2}=(1,0,0,1),\ \mathbf{b}_{3}=(1,0,1,0).\]
Let \(p_{k}=\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})<1\) for \(k=1,2,3\), and let \(p_{0}=0\) and \(p_{4}=1\). For each \(n\in\mathbb{N}\), define \(f_{n}:[0,1]\to[0,1]\) to be the function whose graph is the union of line segments from \((p_{k},\mathfrak{F}_{\infty}(p_{k}))\) to \((p_{k+1},\mathfrak{F}_{\infty}(p_{k+1}))\) for \(k=0,1,2,3\). Notice that \(f_{n}\) is defined independently of \(n\), and that it defines a uniform continuation \(f:[1,\infty)\to[1,\infty)\) such that \(f_{\infty}=f_{n}\) for all \(n\in\mathbb{N}\) as follows: Given \(x\in[1,\infty)\), find \(n\in\mathbb{N}\) such that \(n\leq x<n+1\), and define \(f(x)=F_{n}+f_{n}(x-n)(F_{n+1}-F_{n})\).

Note that \(f_{\infty}(p_{k})=\mathfrak{F}_{\infty}(p_{k})\), i.e., \(f_{\infty}{}^{-1}(\mathfrak{F}_{\infty}(p_{k}))=p_{k}\) for \(k=0,1,2,3\). By Theorem 5.6, if \(\operatorname{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed, we have

\[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{4}(K_{n})=\mathbf{b}_{k}\,\right\}\ =\ p_{k+1}-p_{k}\ =\ \log_{\phi}\frac{\widetilde{\mathbf{b}}_{k}\cdot\widehat{F}}{\mathbf{b}_{k}\cdot\widehat{F}}\]

where \(\widetilde{\mathbf{b}}_{3}=(1,0,1,1)\) as defined in Definition 3.7. However, the leading blocks of length \(>4\) do not satisfy Benford's Law under \(\mathscr{F}\)-expansion.
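Numerically, \(p_{1}=\log_{\phi}1=0\), \(p_{2}=\log_{\phi}(1+\omega^{3})\approx 0.4404\), and \(p_{3}=\log_{\phi}(1+\omega^{2})\approx 0.6723\), so the three probabilities above are approximately \(0.4404\), \(0.2319\), and \(0.3277\), which add up to \(1\).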
The following is an example where \(f_{\infty}\) is analytic.
**Example 5.8**.: Let \(f:[1,\infty)\to\mathbb{R}\) be the function given by \(f(n+p)=F_{n}+(F_{n+1}-F_{n})p^{2}\) where \(n\in\mathbb{N}\) and \(p\in[0,1)\). Then, \(f_{\infty}(p)=p^{2}\).
Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(p^{2}))\right\rfloor\) where \(p=\operatorname{frc}(n\pi)\), and let \(\mathbf{b}\in\mathscr{F}_{s}\). Then, by Theorem 5.6,

\[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}=\sqrt{\phi(\widetilde{\mathbf{b}}\cdot\widehat{F}-1)}-\sqrt{\phi(\mathbf{b}\cdot\widehat{F}-1)}.\]
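For instance, for \(s=3\) the two leading blocks are \((1,0,0)\) and \((1,0,1)\), and the formula gives \(\sqrt{\phi\omega^{2}}-0=\sqrt{\omega}\approx 0.7862\) and \(\sqrt{\phi(\phi-1)}-\sqrt{\phi\omega^{2}}=1-\sqrt{\omega}\approx 0.2138\), which again add up to \(1\).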
### Converse
Let us consider the converse of Theorem 5.6, i.e., given a sequence \(K\) of positive integers approaching \(\infty\), let us construct a uniform continuation \(f\), if possible, such that \(\operatorname{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed. Recall the set \(\mathscr{F}_{s}\) from Definition 3.7.
**Definition 5.9**.: A sequence \(K\) of positive integers approaching \(\infty\) is said to have _strong leading block distribution under \(\mathscr{F}\)-expansion_ if \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}\) exists for each integer \(s\geq 2\) and each \(\mathbf{b}\in\mathscr{F}_{s}\).
**Example 5.10**.: Let \(K\) be the Lucas sequence, i.e., \(K=(2,1,3,4,\ldots)\) and \(K_{n+2}=K_{n+1}+K_{n}\). Recall that \(F_{n}=\frac{1}{10}(5+\sqrt{5})\phi^{n}(1+o(1))\) and \(K_{n}=\frac{1}{2}(\sqrt{5}-1)\phi^{n}(1+o(1))\), and let \(\alpha=\frac{1}{10}(5+\sqrt{5})\) and \(a=\frac{1}{2}(\sqrt{5}-1)\). Then, by Lemma 4.2,
\[\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)=-\log_{\phi}(a/\alpha)+o(1)\approx.328+o(1).\]

By Lemma 4.4, the leading block of \(K_{n}\) being \(\mathbf{b}_{1}=(1,0,0)\) is determined by whether \(0\leq\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)<\log_{\phi}(1+\omega^{2})\approx.67\). Thus, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})=\mathbf{b}_{1}\,\right\}=1\), and \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})=\mathbf{b}_{2}\,\right\}=0\).
In fact, the sequence \(K\) has strong leading block distribution. Recall \(\widehat{F}\) from Definition 3.1, and let us claim that \(\mathbf{b}\cdot\widehat{F}\neq\frac{\alpha}{a}=\frac{1}{10}(5+3\sqrt{5})\) for all \(s\in\mathbb{N}\) and \(\mathbf{b}\in\mathcal{F}_{s}\). Notice that
\[\frac{\alpha}{a}-1=\sum_{k=1}^{\infty}\omega^{4k}. \tag{6}\]
The equality (6) is called _the Zeckendorf expansion of a real number in \((0,1)\)_ since it is a power series expansion in \(\omega\) where no consecutive powers are used; a formal definition is given in Definition 5.11 below. By the uniqueness of Zeckendorf expansions of the real numbers in \((0,1)\), the above infinite sum in (6) is not equal to any finite sum \(\mathbf{b}\cdot\widehat{F}-1\) where \(\mathbf{b}\in\mathcal{F}_{s}\); see Theorem 5.13.
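For the record, (6) is simply a geometric series: \(\sum_{k=1}^{\infty}\omega^{4k}=\omega^{4}/(1-\omega^{4})=(7-3\sqrt{5})/(3\sqrt{5}-5)=(3\sqrt{5}-5)/10=\alpha/a-1\).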
Let \(s\) be an integer \(\geq 2\), and let \(\mathcal{F}_{s}=\{\mathbf{b}_{1},\ldots,\mathbf{b}_{\ell}\}\). Then, there is \(k\in\mathbb{N}\) such that \(\mathbf{b}_{k}\cdot\widehat{F}\;<\;\frac{\alpha}{a}\;<\;\mathbf{b}_{k+1}\cdot \widehat{F}.\) This implies that
\[\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})\;<\;\log_{\phi}(\tfrac{\alpha}{a} )\;<\;\log_{\phi}(\mathbf{b}_{k+1}\cdot\widehat{F}).\]
Since \(\operatorname{frc}\big(\mathfrak{F}^{-1}(K_{n})\big)=\log_{\phi}(\alpha/a)+o(1)\) for all \(n\in\mathbb{N}\), by Lemma 4.4, we have \(\operatorname{Prob}\big\{\,n\in\mathbb{N}\colon\operatorname{LB}_{s}(K_{n})=\mathbf{b}_{k}\,\big\}=1\). For example, consider the case of \(s=9\), and notice that \(\omega^{4}+\omega^{8}<\frac{\alpha}{a}-1<\omega^{4}+\omega^{7}\) by (6). Then, we have \(\mathbf{b}\cdot\widehat{F}<\frac{\alpha}{a}<\widetilde{\mathbf{b}}\cdot\widehat{F}\) where
\[\mathbf{b}=(1,0,0,0,1,0,0,0,1)\;\;\text{and}\;\;\widetilde{\mathbf{b}}=(1,0,0,0,1,0,0,1,0),\]
and the probability of having the leading block \(\mathbf{b}\) in the values of the Lucas sequence is \(1\).
Recall uniform continuations from Definition 1.6. Since the distribution of the leading blocks of the Lucas sequence \(K\) is concentrated on one particular block in \(\mathcal{F}_{s}\) for each \(s\), there does not exist a uniform continuation \(f\), described in Theorem 5.6, whose equidistribution is associated with the leading block distributions of the Lucas sequence \(K\). For a uniform continuation to exist, the values of the leading block distributions must be put together into a continuous function, and below we formulate the requirement more precisely.
**Definition 5.11**.: Let \(\mathbf{I}\) denote the interval \((0,1)\) of real numbers. An infinite tuple \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\) is called a _Zeckendorf expression for \(\mathbf{I}\)_ if \(\mu(k)\leq 1\), \(\mu(k)\mu(k+1)=0\), and for all \(j\in\mathbb{N}_{0}\), the sequence \(\{\mu(j+n)\}_{n=1}^{\infty}\) is not equal to the sequence \(\{(1+(-1)^{n+1})/2\}_{n=1}^{\infty}=(1,0,1,0,\ldots)\). Let \(\mathcal{F}^{*}\) be the set of Zeckendorf expressions for \(\mathbf{I}\).
Given \(s\in\mathbb{N}\) and \(\mu\in\mathcal{F}^{*}\), let \(\mu|s:=(\mu(1),\ldots,\mu(s))\). Given \(s\in\mathbb{N}\) and \(\{\mu,\tau\}\subset\mathcal{F}^{*}\), we declare \(\mu|s<\tau|s\) if \(\mu|s\cdot\widehat{F}<\tau|s\cdot\widehat{F}\), which coincides with the lexicographical order on \(\mathcal{F}\).
**Notation 5.12**.: Given a sequence \(Q\) of real numbers, and \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\), we define \(\mu\cdot Q:=\sum_{k=1}^{\infty}\mu(k)Q_{k}\), which may or may not be a convergent series.
**Theorem 5.13** ([10], Zeckendorf Theorem for \(\mathbf{I}\)).: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathcal{F}^{*}\) such that \(\beta=\sum_{k=1}^{\infty}\mu(k)\omega^{k}=(\mu\cdot\widehat{F})\omega\)._
For the uniqueness of \(\mu\) in the theorem, we require that infinite tuples such as \((0,1,0,1,0,\ldots)\) not be members of \(\mathcal{F}^{*}\), since \(\sum_{k=1}^{\infty}\omega^{2k}=\omega\), which is analogous to \(0.0999\ldots=0.1\) in decimal expansion.

**Proposition 5.14** ([10]).: _Let \(\{\mu,\tau\}\subset\mathcal{F}^{*}\). Then, \(\mu\cdot\widehat{F}<\tau\cdot\widehat{F}\) if and only if \(\mu|s<\tau|s\) for some \(s\in\mathbb{N}\)._
Given a sequence with strong leading block distribution, we shall construct a function on \(\mathbf{I}\) in Definition 5.16 below, and it is well-defined by Lemma 5.15.
**Lemma 5.15**.: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathcal{F}^{*}\) such that \(\mu(1)=1\) and \(\phi(\mu\cdot\widehat{F}-1)=\beta\)._
Proof.: Let \(\widehat{F}^{*}\) be the sequence defined by \(\widehat{F}^{*}_{n}=\omega^{n}\). Given a real number \(\beta\in\mathbf{I}\), we have \(0<\omega+\beta\omega^{2}<1\). By Theorem 5.13, there is \(\mu\in\mathcal{F}^{*}\) such that \((\mu\cdot\widehat{F})\omega=\mu\cdot\widehat{F}^{*}=\omega+\beta\omega^{2}\), which implies \(\phi(\mu\cdot\widehat{F}-1)=\beta\). We claim that \(\mu(1)=1\). If \(\mu(1)=0\), then by Proposition 5.14, \(\omega+\beta\omega^{2}=\mu\cdot\widehat{F}^{*}=(0,\ldots)\cdot\widehat{F}^{*}<\omega=(1,0,0,\ldots)\cdot\widehat{F}^{*}\), which implies a false statement \(\beta\omega^{2}<0\). Thus, \(\mu(1)=1\).
Recall from Definition 5.11 the definition of inequalities on tuples.
**Definition 5.16**.: Let \(K\) be a sequence of positive integers with strong leading block distribution under \(\mathcal{F}\)-expansion such that, given \(\mu\in\mathcal{F}^{*}\) with \(\mu(1)=1\), the following limit exists:
\[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{ s}(K_{n})\leq\mu|s\,\big{\}} \tag{7}\]
where \(\mu|s\) is identified in \(\mathcal{F}_{s}\).
Let \(f^{*}_{K}:[0,1]\to[0,1]\) be the function given by \(f^{*}_{K}(0)=0\), \(f^{*}_{K}(1)=1\), and \(f^{*}_{K}(\phi(\mu\cdot\widehat{F}-1))\) is equal to the value in (7). If \(f^{*}_{K}\) is continuous and increasing, then \(K\) is said to _have continuous leading block distribution under \(\mathcal{F}\)-expansion_.
**Lemma 5.17**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathcal{F}\)-expansion, and let \(f^{*}_{K}\) be the function defined in Definition 5.16. Let \(\mu\in\mathcal{F}^{*}\) such that there is \(t\in\mathbb{N}\) such that \(\mu(1)=1\) and \(\mu(k)=0\) for all \(k>t\). Then, \(f^{*}_{K}(\phi(\mu|t\cdot\widehat{F}-1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\, \mathrm{LB}_{t}(K_{n})\leq\mu|t\,\big{\}}\)._
Proof.: Notice that if \(s>t\), then
\[\{n\in\mathbb{N}\,:\,\mathrm{LB}_{s}(K_{n})\leq\mu|s\}\subset\{n \in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t\}\] \[\Rightarrow \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{s}(K _{n})\leq\mu|s\,\big{\}}\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t \,\big{\}}\] \[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,: \,\mathrm{LB}_{s}(K_{n})\leq\mu|s\,\big{\}}=f^{*}_{K}(\phi(\mu\cdot\widehat{F} -1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t \,\big{\}}\]
Since \(\mu|t\cdot\widehat{F}=\mu\cdot\widehat{F}\),
\[\Rightarrow f^{*}_{K}(\phi(\mu|t\cdot\widehat{F}-1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\, \mathrm{LB}_{t}(K_{n})\leq\mu|t\,\big{\}}\,.\]
Recall uniform continuations from Definition 1.6.
**Theorem 5.18**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathscr{F}\)-expansion. Let \(f_{K}^{*}\) be the function defined in Definition 5.16. Then, there is a uniform continuation \(f\) of \(F\) such that \({f_{\infty}}^{-1}=f_{K}^{*}\) and \(\operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed._
Proof.: Let \(f:[1,\infty)\to\mathbb{R}\) be the function given by \(f(x)=F_{n}+(F_{n+1}-F_{n})(f_{K}^{*})^{-1}(p)\) where \(x=n+p\) and \(p=\operatorname{\mathrm{frc}}(x)\). Then, \(f\) is a uniform continuation of \(F_{n}\) since \((f_{K}^{*})^{-1}\) is independent of \(n\). Then, \({f_{\infty}}=(f_{K}^{*})^{-1}\), i.e., \({f_{\infty}}^{-1}=f_{K}^{*}\).
Let \(\beta\in(0,1)\) be a real number, and below we show that \(\operatorname{\mathrm{Prob}}\big{\{}\,n\in\mathbb{N}:\operatorname{\mathrm{ frc}}\big{(}f^{-1}(K_{n})\big{)}\leq\beta\,\big{\}}\) exists, and it is equal to \(\beta\). Recall \(\mathfrak{F}\) from Definition 4.1 and \(\mathfrak{F}_{n}\) from Definition 5.1. Let \(n\in\mathbb{N}\), and let \(m\in\mathbb{N}\) such that \(F_{m}\leq K_{n}<F_{m+1}\). Then, \(K_{n}=f(m+p_{n}^{\prime})=\mathfrak{F}(m+p_{n})\) where \(p_{n},p_{n}^{\prime}\in[0,1]\), i.e., \(f_{\infty}(p_{n}^{\prime})=\mathfrak{F}_{m}(p_{n})\). By Theorem 5.13 and Lemma 5.15, there is a unique \(\mu\in\mathscr{F}^{*}\) such that \(f_{\infty}(\beta)=\phi(\mu\cdot\widehat{F}-1)\) and \(\mu(1)=1\). Recall \(\mathfrak{F}_{\infty}\) from Definition 5.1. Notice that
\[\operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{n})\big{)}\,=\,p_{n}^ {\prime}\,\leq\,\beta\,{\Rightarrow}\,{f_{\infty}}^{-1}(\mathfrak{F}_{m}(p_{n }))\,\leq\,\beta\,\Rightarrow\,p_{n}\,{\leq}\,{\mathfrak{F}_{m}}^{-1}(f_{ \infty}(\beta))\] \[\Rightarrow \operatorname{\mathrm{frc}}\big{(}\mathfrak{F}^{-1}(K_{n})\big{)} \,\leq\,{\mathfrak{F}_{m}}^{-1}(f_{\infty}(\beta))\,=\,{\mathfrak{F}_{ \infty}}^{-1}(f_{\infty}(\beta))+o(1)\,=\,{\log_{\phi}(\mu\cdot\widehat{F})}+ o(1).\]
Fix an integer \(t\geq 2\). By Proposition 5.14, we have \(\mu\cdot\widehat{F}=\mu|t\cdot\widehat{F}+\gamma_{t}<\widetilde{\mu|t}\cdot\widehat{F}\) where \(\gamma_{t}\geq 0\) and \(\widetilde{\mu|t}\in\mathscr{F}_{t}\) is as defined in Definition 3.7. Since \({\log_{\phi}(\widetilde{\mu|t}\cdot\widehat{F})}-{\log_{\phi}(\mu\cdot\widehat{F})}>0\), there is \(M_{t}\in\mathbb{N}\) such that for all \(n\geq M_{t}\),
\[\Rightarrow \operatorname{\mathrm{frc}}\big{(}\mathfrak{F}^{-1}(K_{n}))\big{)}\, \leq\,{\log_{\phi}(\mu\cdot\widehat{F})}+o(1)\,<\,{\log_{\phi}(\widetilde{\mu| t}\cdot\widehat{F})}.\]
By Lemma 4.4, this implies \(\operatorname{\mathrm{LB}}_{t}(K_{n})\leq\mu|t\). Recall \(\Omega_{n}=\{k\in\mathbb{N}:k\leq n\}\);
\[\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}:\operatorname{ \mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}+o(1)\,\leq\, \operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}:\operatorname{\mathrm{ LB}}_{t}(K_{k})\leq\mu|t\,\big{\}}+o(1)\] \[\Rightarrow \,\limsup_{n}\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}: \operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}\,\leq \,\operatorname{\mathrm{Prob}}\big{\{}\,n\in\mathbb{N}:\operatorname{\mathrm{ LB}}_{t}(K_{n})\leq\mu|t\,\big{\}}.\]
Let us work on the \(\liminf\) of the probability. Since \(\beta\neq 0\), there is \(t_{0}>1\) such that \(\mu(t_{0})>0\). Thus, if \(t>t_{0}\) is sufficiently large, then there are at least two entries \(1\) in \(\mu|t\), and \(\mu|t\) has more entries after the second entry of \(1\) from the left. Recall the product \(*\) from Definition 2.2. This choice of \(t\) allows us to have the unique coefficient functions \(\tilde{\mu}\) and \(\widehat{\mu}\) in \(\mathscr{F}_{t}\) such that \(1+\tilde{\mu}*F=\widehat{\mu}*F\) and \(1+\tilde{\mu}*F=\mu|t*F\). Then, by Lemma 4.4,
\[\operatorname{\mathrm{LB}}_{t}(K_{n})\,{\leq}\,\tilde{\mu}\, \Rightarrow\,\operatorname{\mathrm{frc}}\big{(}\mathfrak{F}^{-1}(K_{n})\big{)} \,<\,{\log_{\phi}(\widehat{\mu}\cdot\widehat{F})}+o(1)\] \[\Rightarrow \,p_{n}\,<\,{\mathfrak{F}_{m}}^{-1}(\phi(\widehat{\mu}\cdot \widehat{F}-1))+o(1)\] \[\Rightarrow \,{\mathfrak{F}_{m}}(p_{n})\,=\,f_{\infty}(p_{n}^{\prime})\,<\, {\phi}(\widehat{\mu}\cdot\widehat{F}-1)+o(1)\] \[\Rightarrow \,{p_{n}^{\prime}}\,=\,\operatorname{\mathrm{frc}}\big{(}f^{-1}(K _{n})\big{)}\,<\,{f_{\infty}}^{-1}(\phi(\widehat{\mu}\cdot\widehat{F}-1))+o(1)\] \[\,<\,{f_{\infty}}^{-1}(\phi(\mu|t\cdot\widehat{F}-1))\quad\text{by Proposition \ref{prop:1},}\] \[\,\leq\,{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\,=\,\beta\] \[\Rightarrow \,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}: \operatorname{\mathrm{LB}}_{t}(K_{k})\,{\leq}\,\tilde{\mu}\,\big{\}}+o(1)\, \leq\,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}:\operatorname{ \mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}+o(1)\] \[\Rightarrow \,\operatorname{\mathrm{Prob}}\big{\{}\,n\in\mathbb{N}: \operatorname{\mathrm{LB}}_{t}(K_{n})\,{\leq}\,\tilde{\mu}\,\big{\}}\,\leq\, \liminf_{n}\,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}: \operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}\]
By Lemma 5.17,
\[{f_{\infty}}^{-1}(\phi(\tilde{\mu}\cdot\widehat{F}-1))\,\leq\,\liminf_{n}\,\,\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{frc}\big(f^{-1}(K_{k})\big)\leq\beta\,\big\}\,.\]
It is given that \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{t}(K_{n})\leq \mu|t\,\big{\}}\to{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\) as \(t\to\infty\). Let us calculate the other bound;
\[2+\tilde{\mu}*F\,=\,\mu|t*F\,\,\Rightarrow\,\,\,2+\sum_{k=1}^{t }\tilde{\mu}(k)F_{t-k+1}=\sum_{k=1}^{t}\mu(k)F_{t-k+1}\] \[\,\,\Rightarrow\,\,\,2+\sum_{k=1}^{t}\tilde{\mu}(k)\Big{(} \alpha\phi^{t-k+1}+O(\phi^{-t+k-1})\Big{)}\,=\,\,\sum_{k=1}^{t}\mu(k)\Big{(} \alpha\phi^{t-k+1}+O(\phi^{-t+k-1})\Big{)}\] \[\,\,\Rightarrow\,\,O(1)+\alpha\sum_{k=1}^{t}\tilde{\mu}(k)\phi^ {t-k+1}\,=\,\,\alpha\sum_{k=1}^{t}\mu(k)\phi^{t-k+1}\] \[\,\,\Rightarrow\,\,O(\phi^{-t})+\sum_{k=1}^{t}\tilde{\mu}(k) \omega^{k-1}\,=\,\,\sum_{k=1}^{t}\mu(k)\omega^{k-1}\] \[\,\,\Rightarrow\,\,{o(1)}+\tilde{\mu}\cdot\widehat{F}\,=\,\,\mu |t\cdot\widehat{F}\,\,\Rightarrow\,\,\tilde{\mu}\cdot\widehat{F}\to\mu\cdot \widehat{F}\] \[\,\,\Rightarrow\,{f_{\infty}}^{-1}(\phi(\hat{\mu}\cdot\widehat{ F}-1))\to{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\,=\,\,\beta.\]
It is clear that if \(f\) is a uniform continuation of \(F\), and \(K\) is a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed, then, by Lemma 4.4, \(K\) has continuous leading block distribution under \(\mathscr{F}\)-expansion. Therefore, we have the following.
**Theorem 5.19**.: _Let \(K\) be a sequence of positive integers approaching \(\infty\). Then, \(K\) has continuous leading block distribution under \(\mathscr{F}\)-expansion if and only if there is a uniform continuation \(f\) of \(F\) such that \(\operatorname{frc}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed._
## 6 Benford's Law under generalized Zeckendorf expansion
The contents in Sections 3, 4, and 5 are for Zeckendorf expansion, but the arguments of the proofs apply to the setup for generalized Zeckendorf expansion without difficulties. In this section, we introduce definitions and results for generalized Zeckendorf expansion without proofs, but only refer to the corresponding theorems for Zeckendorf expansion proved in the earlier sections.
### Generalized Zeckendorf expansion
Let us review the generalized Zeckendorf expansion. Recall \(\mathbb{N}_{0}\) from Definition 2.1
**Definition 6.1**.: Given a tuple \(L=(a_{1},a_{2},\ldots,a_{N})\in\mathbb{N}_{0}^{N}\) where \(N\geq 2\) and \(a_{1}>0\), let \(\Theta\) be the following infinite tuple in \(\prod_{k=1}^{\infty}\mathbb{N}_{0}\):
\[(a_{1},a_{2},\ldots,a_{N-1},a_{N},a_{1},a_{2},\ldots,a_{N-1},a_{N},\ldots)\]
where the finite tuple \((a_{1},a_{2},\ldots,a_{N-1},a_{N})\) repeats. Let \(\Theta(k)\) denote the \(k\)th entry of \(\Theta\), and let \(\Theta|s=(\Theta(1),\ldots,\Theta(s))\) for \(s\in\mathbb{N}\).
Recall len from Definition 2.2. Let \(\mathscr{H}^{\circ}\) be the recursively-defined set of tuples \(\epsilon\) with arbitrary finite length such that \(\epsilon\in\mathscr{H}^{\circ}\) if and only if there is smallest \(s\in\mathbb{N}_{0}\) such that \(\epsilon|s=\Theta|s\), \(\epsilon(s+1)<\Theta(s+1)\), and \((\epsilon(s+2),\ldots,\epsilon(n))\in\mathscr{H}^{\circ}\) where \(n=\operatorname{len}(\epsilon)\) and \(s\) is allowed to be \(\operatorname{len}(\epsilon)\). Let \(\mathscr{H}:=\{\epsilon\in\mathscr{H}^{\circ}:\epsilon(1)>0\}\). The set \(\mathscr{H}\) is called a _periodic Zeckendorf collection of coefficient functions for positive integers_, and \(L\) is called _a principal maximal block of the periodic Zeckendorf collection \(\mathscr{H}\)_.
Notice that if \(L=(1,0,1,0)\) is a principal maximal block of the periodic Zeckendorf collection \(\mathscr{H}\), then \(L^{\prime}=(1,0)\) is a principal maximal block of \(\mathscr{H}\) as well. For this reason, the indefinite article was used in the statement of the definition of principal maximal blocks.
**Example 6.2**.: Let \(\mathscr{H}\) be the (periodic) Zeckendorf collection determined by the principal maximal block \(L=(3,2,1)\). Then, \(\Theta=(3,2,1,3,2,1,\ldots)\), and \((0)\) and \((3,2,1)\) are members of \(\mathscr{H}^{\circ}\). For \((0)\in\mathscr{H}^{\circ}\), we set \(s=0\) in Definition 6.1, and for \((3,2,1)\in\mathscr{H}^{\circ}\), we set \(s=3\).
Let \(\epsilon=(3,2,0)\) and \(\mu=(3,1,3,2,0)\). For \(\epsilon\), if \(s=2\), by the definition, we have \(\epsilon\in\mathscr{H}\). For \(\mu\), if \(s=1\), then \(\mu|1=\Theta|1\), \(\mu(2)<\Theta(2)\), and \((\mu(3),\ldots,\mu(5))=\epsilon\in\mathscr{H}^{\circ}\). Listed below are more examples of members of \(\mathscr{H}\):
\[(3,2,1,3,2,1),\,(3,0,0,3),\,(1,2,3,1,0,3),\,(1,2,3,1,1,0).\]
Recall the product notation from Definition 2.2
**Definition 6.3**.: Let \(\mathscr{H}\) be a set of coefficient functions, and let \(H\) be an increasing sequence of positive integers. If given \(n\in\mathbb{N}\), there is a unique \(\epsilon\in\mathscr{H}\) such that \(\epsilon*H=n\), then \(H\) is called a _fundamental sequence_ of \(\mathscr{H}\), and the expression \(\epsilon*H\) is called an \(\mathscr{H}\)-_expansion_.
If \(\mathscr{H}\) is a periodic Zeckendorf collection for positive integers, then, by Theorem 6.4 below, there is a unique fundamental sequence of \(\mathscr{H}\).
**Theorem 6.4** ([10, 17]).: _Let \(\mathscr{H}\) be a periodic Zeckendorf collection, and let \(L=(a_{1},\ldots,a_{N})\) be its principal maximal block. Then, there is a unique fundamental sequence \(H\) of \(\mathscr{H}\), and it is given by the following recursion:_
\[H_{n+N}\,=\,a_{1}H_{n+N-1}+\cdots+a_{N-1}H_{n+1}+(1+a_{N})H_{n} \ \ \text{for all}\ n\in\mathbb{N},\ \text{and} \tag{8}\] \[H_{n}\,=\,1+\sum_{k=1}^{n-1}a_{k}H_{n-k}\ \ \text{for all}\ \ 1\leq n\leq N+1.\]
If \(L=(1,0)\), then its periodic Zeckendorf collection is \(\mathcal{F}\) defined in Definition 3.1, and its fundamental sequence is the Fibonacci sequence. If \(L=(9,9)\), then the fundamental sequence \(H\) is given by \(H_{n}=10^{n-1}\), and \(\epsilon*H\) for \(\epsilon\in\mathcal{H}\) are base-10 expansions.
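As a small illustration of the recursion (8), the following Python sketch (the function name is ours) reproduces the two fundamental sequences just mentioned, together with the one determined by \(L=(3,2,1)\) from Example 6.2.

```python
def fundamental_sequence(L, count):
    """First `count` terms of the fundamental sequence H of the periodic
    Zeckendorf collection with principal maximal block L, via recursion (8)."""
    N = len(L)
    H = []
    for n in range(1, count + 1):
        if n <= N + 1:
            # initial values: H_n = 1 + sum_{k=1}^{n-1} a_k H_{n-k}
            H.append(1 + sum(L[k - 1] * H[n - k - 1] for k in range(1, n)))
        else:
            # H_n = a_1 H_{n-1} + ... + a_{N-1} H_{n-N+1} + (1 + a_N) H_{n-N}
            H.append(sum(L[k - 1] * H[-k] for k in range(1, N))
                     + (1 + L[N - 1]) * H[-N])
    return H

print(fundamental_sequence((1, 0), 8))     # [1, 2, 3, 5, 8, 13, 21, 34]  (Fibonacci)
print(fundamental_sequence((9, 9), 5))     # [1, 10, 100, 1000, 10000]    (base 10)
print(fundamental_sequence((3, 2, 1), 5))  # [1, 4, 15, 55, 203]
```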
**Definition 6.5**.: Let \(L=(a_{1},\ldots,a_{N})\) be the list defined in Definition 6.1. Let \(\psi=\psi_{\mathcal{H}}=\psi_{L}\) be the dominant real zero of the polynomial \(g=g_{\mathcal{H}}=g_{L}(x):=x^{N}-\sum_{k=1}^{N-1}a_{k}x^{N-k}-(1+a_{N})\), and \(\theta:=\psi^{-1}\). Let \(\widehat{H}\) be the sequence given by \(\widehat{H}_{n}=\theta^{n-1}\).
By (8), the sequence \(\widehat{H}\) in Definition 6.5 satisfies
\[\widehat{H}_{n}\,=\,a_{1}\widehat{H}_{n+1}+\cdots+a_{N-1}\widehat{H}_{n+N-1}+ (1+a_{N})\widehat{H}_{n+N}\quad\text{for all }n\in\mathbb{N}. \tag{9}\]
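For instance, in the Zeckendorf case \(L=(1,0)\) the identity (9) reads \(\omega^{n-1}=\omega^{n}+\omega^{n+1}\), i.e., \(1=\omega+\omega^{2}\), the defining relation of \(\omega\).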
The following proposition is proved in [10, Lemma 43] and [16, Lemma 2.1].
**Proposition 6.6**.: _Let \(L=(a_{1},\ldots,a_{N})\) be the list defined in Definition 6.1, and let \(g=x^{N}-\sum_{k=1}^{N-1}a_{k}x^{N-k}-(1+a_{N})\) be the polynomial. Then, \(g\) has one and only one positive real zero \(\psi\), it is a simple zero, and there are no other complex zeros \(z\) such that \(|z|\geq\psi\)._
**Theorem 6.7**.: _Let \(\mathcal{H}\) be a periodic Zeckendorf collection with a principal maximal block \(L=(a_{1},\ldots,a_{N})\), and let \(H\) be the fundamental sequence of \(\mathcal{H}\). Then \(H_{n}=\delta\psi^{n}+O(\psi^{rn})\) for \(n\in\mathbb{N}\) where \(\delta\) and \(r\) are positive (real) constants, \(r<1\), and \(\psi\) is the dominant zero defined in Definition 6.5._
Proof.: Let \(g\) be the characteristic polynomial of degree \(N\) defined in Definition 6.5, and let \(\{\lambda_{1},\ldots,\lambda_{m}\}\) be the set of \(m\) distinct (complex) zeros of \(g\) where \(m\leq N\) and \(\lambda_{1}=\psi\). Then, by Proposition 6.6, we have \(|\lambda_{k}|<\psi\) for \(2\leq k\leq m\). Since \(\psi\) is a simple zero, by the generalized Binet's formula [15], there are polynomials \(h_{k}\) for \(2\leq k\leq m\) and a constant \(\delta\) such that \(H_{n}=\delta\psi^{n}+\sum_{k=2}^{m}h_{k}(n)\lambda_{k}^{n}\) for \(n\in\mathbb{N}\). Thus, there is a positive real number \(r<1\) such that \(H_{n}=\delta\psi^{n}+O(\psi^{rn})\) for \(n\in\mathbb{N}\).
Notice that \(\lim_{n\to\infty}H_{n}/\psi^{n}=\delta\), and let us show that \(\delta\) is a positive real number, and in particular, it is non-zero. By [11, Theorem 5.1],
\[\delta\,=\,\lim_{n\to\infty}\frac{H_{n}}{\psi^{n}}\,=\,\frac{1}{\psi g^{\prime}( \psi)}\sum_{k=1}^{N}\frac{H_{k}}{(k-1)!}\left[\frac{d^{k-1}}{dx^{k-1}}\frac{g( x)}{x-\psi}\,\right]_{x=0}. \tag{10}\]
By the product rule, we have
\[\left[\frac{d^{k-1}}{dx^{k-1}}\frac{g(x)}{x-\psi}\,\right]_{x=0}\,=\,\left[ \sum_{j=0}^{k-1}\binom{k-1}{j}g^{(j)}(x)(x-\psi)^{-1-j}\prod_{t=1}^{j}(-t) \right]_{x=0}.\]
Notice that if \(1\leq j\leq N-1\), then \(g^{(j)}(0)=-a_{N-j}j!\leq 0\), and \(g(0)=-(1+a_{N})<0\). The inequality \((-\psi)^{-1-j}\prod_{t=1}^{j}(-t)<0\) for all \(0\leq j\leq k-1\) follows immediately from considering the cases of \(j\) being even or odd. Thus, the summands in (10) are non-negative, and some are positive. This concludes the proof of \(\delta\) being a positive real number.
For the remainder of the paper, let \(\mathcal{H}\), \(H\), and \(\psi\) be as defined in Definitions 6.1, 6.3, and 6.5.
### Strong Benford's Law
Let us begin with definitions related to leading blocks under \(\mathcal{H}\)-expansion.
**Definition 6.8**.: Let \(n=\epsilon*H\) for \(n\in\mathbb{N}\) and \(\epsilon\in\mathcal{H}\). If \(s\leq\operatorname{len}(\epsilon)\), then \((\epsilon(1),\ldots,\epsilon(s))\in\mathcal{H}\) is called _the leading block of \(n\) with length \(s\) under \(\mathcal{H}\)-expansion_. Recall that \(N=\operatorname{len}(L)\). If \(N\leq s\leq\operatorname{len}(\epsilon)\), let \(\operatorname{LB}_{s}^{\mathcal{H}}(n)\), or simply \(\operatorname{LB}_{s}(n)\) if the context is clear, denote the leading block of length \(s\), and if \(s\leq\operatorname{len}(\epsilon)\) and \(s<N\), then let \(\operatorname{LB}_{s}^{\mathcal{H}}(n)\) or simply \(\operatorname{LB}_{s}(n)\) denote \((\epsilon(1),\ldots,\epsilon(s),0,\ldots,0)\in\mathbb{N}_{0}^{N}\). If \(s>\operatorname{len}(\epsilon)\), \(\operatorname{LB}_{s}(n)\) is declared to be undefined.
Recall the product \(*\) from Definition 2.2. Given an integer \(s\geq N\), let \(\mathcal{H}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(s\) occurring in the \(\mathcal{H}\)-expansions of \(\mathbb{N}\) such that \(1+\mathbf{b}_{k}*H=\mathbf{b}_{k+1}*H\) for all \(k\leq\ell-1\). Recall the truncation notation from Definition 4.3. If \(1\leq s<N\), then let \(\mathcal{H}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(N\) occurring in the \(\mathcal{H}\)-expansions of \(\mathbb{N}\) such that \(\mathbf{b}_{k}(j)=0\) for all \(1\leq k\leq\ell\) and \(j>s\) and \(1+\mathbf{b}_{k}|s*H=\mathbf{b}_{k+1}|s*H\) for all \(k\leq\ell-1\). The leading block \(\mathbf{b}_{\ell}\) is called _the largest leading block in \(\mathcal{H}_{s}\)_.
The exclusive block \(\mathbf{b}_{\ell+1}\) is a coefficient function of length \(s\) defined as follows. If \(s\geq N\), \(s\equiv p\pmod{N}\), and \(0\leq p<N\), then
\[\mathbf{b}_{\ell+1}:=(a_{1},\ldots,a_{N-1},a_{N},\ldots,a_{1},\ldots,a_{N-1},1 +a_{N},c_{1},\ldots,c_{p})\]
where \(c_{k}=0\) for all \(k\). If \(1\leq s<N\), then \(\mathbf{b}_{\ell+1}:=(a_{1},\ldots,a_{N-1},1+a_{N})\). If \(\mathbf{b}\) is a leading block \(\mathbf{b}_{k}\in\mathcal{H}_{s}\), then we denote \(\mathbf{b}_{k+1}\) by \(\widetilde{\mathbf{b}}\).
If \(s<N\), then the leading blocks \(\mathbf{b}\) in \(\mathcal{H}_{s}\) have length \(N\) with the last \(N-s\) entries equal to \(0\), and this case is treated as above in order to make \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\) in the statement and proof of Lemma 4.4 fit into the case of periodic Zeckendorf collections; see Lemma 6.13.
By [10, Definition 2 & Lemma 3] and Theorem 6.4, the subscript numbering of \(\mathbf{b}_{k}\in\mathcal{H}_{s}\) for \(1\leq k\leq\ell\) coincides with the lexicographical order on the coefficient functions. If \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\) where \(s\geq N\), then
\[\mathbf{b}=(\ldots,a_{1},\ldots,a_{N},a_{1},\ldots,a_{p})\text{ if }s\equiv p \pmod{N}\text{ and }0\leq p<N\text{,}\]
and \(1+\mathbf{b}*H=\widetilde{\mathbf{b}}*H=(\ldots,a_{1},\ldots,1+a_{N},0,\ldots,0 )*H=H_{s+1}\) where the last \(p\) entries of \(\widetilde{\mathbf{b}}\) are zeros. If \(s\equiv 0\pmod{N}\) and \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\), then
\[\widetilde{\mathbf{b}}=(a_{1},\ldots,a_{N-1},a_{N},\ldots,a_{1},\ldots,a_{N-1},1+a_{N}).\]
If \(s<N\) and \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\), then \(\widetilde{\mathbf{b}}=(a_{1},\ldots,a_{N-1},1+a_{N})\). Recall \(\widehat{H}\) from Definition 6.5. For all cases, if \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\), then \(\widetilde{\mathbf{b}}\cdot\widehat{H}=\psi\).
The proof of Theorem 6.9 below follows immediately from Lemma 6.12 and Theorem 6.14.
**Theorem 6.9**.: _Let \(K\) be a sequence of positive integers such that \(K_{n}=ab^{n}(1+o(1))\) where \(a\) and \(b\) are positive real numbers such that \(\log_{\psi}b\) is irrational. Then, given \(\mathbf{b}\in\mathcal{H}_{s}\),_
\[\operatorname{Prob}\left\{\,n\in\mathbb{N}\,\colon\operatorname{LB}_{s}(K_{n })=\mathbf{b}\,\right\}\;=\;\log_{\psi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {H}}{\mathbf{b}\cdot\widehat{H}}.\]
Motivated from the leading block distributions of the exponential sequences considered in Theorem 6.9, we declare strong Benford's Law under \(\mathscr{H}\)-expansion as follows.
**Definition 6.10**.: A sequence \(K\) of positive integers is said to _satisfy strong Benford's Law under \(\mathscr{H}\)-expansion_ if given \(\mathbf{b}\in\mathscr{H}_{s}\),
\[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\right\}\;=\;\log_{\psi}\frac{\widetilde{\mathbf{b}}\cdot \widehat{H}}{\mathbf{b}\cdot\widehat{H}}.\]
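For example, for the principal maximal block \(L=(9,9)\), i.e., base-\(10\) expansions, we have \(\psi=10\) and \(\widehat{H}_{n}=10^{-(n-1)}\), so for a leading block \(\mathbf{b}=(d_{1},d_{2})\) of length \(2\) the condition above becomes \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{2}(K_{n})=(d_{1},d_{2})\,\right\}\;=\;\log_{10}\frac{d_{1}+(d_{2}+1)/10}{d_{1}+d_{2}/10}\;=\;\log_{10}\Big(1+\frac{1}{10d_{1}+d_{2}}\Big),\] which is the classical strong Benford's Law for the first two significant decimal digits.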
### Benford continuation of \(H\)
We used a real analytic continuation of the Fibonacci sequence for Zeckendorf expansion, but as demonstrated in the earlier sections, the leading block distributions are determined by its limit \(\mathfrak{F}_{\infty}\). Thus, rather than using a real analytic continuation of \(H\), we may use the limit version directly, which is far more convenient. By Theorem 6.7, \(H_{n}=\delta\psi^{n}+O(\psi^{rn})=\delta\psi^{n}(1+o(1))\) where \(\delta\) and \(r<1\) are positive real constants, and we define the following:
**Definition 6.11**.: Let \(\mathfrak{H}:[1,\infty)\to\mathbb{R}\) be the function given by
\[\mathfrak{H}(x)=H_{n}+(H_{n+1}-H_{n})\frac{\psi^{p}-1}{\psi-1}\]
where \(x=n+p\) and \(p=\operatorname{frc}(x)\), and it is called _a Benford continuation of \(H\)_.
Recall Definition 1.6. Then, \(\mathfrak{H}\) is a uniform continuation of \(H\), and \(\mathfrak{H}_{\infty}(p)=\frac{\psi^{p}-1}{\psi-1}\) for all \(p\in[0,1]\). We leave the proof of the following to the reader.
**Lemma 6.12**.: _For real numbers \(x\in[1,\infty)\), we have \(\mathfrak{H}(x)=\delta\psi^{x}(1+o(1))\), and \(\mathfrak{H}^{-1}(x)=\log_{\psi}(x)-\log_{\psi}\delta+o(1)\)._
Recall \(\mathscr{H}_{s}\) from Definition 6.8 and \(\widehat{H}\) from Definition 6.5.
**Lemma 6.13**.: _Let \(K\) be a sequence of positive real numbers approaching \(\infty\). Let \(\mathbf{b}\in\mathscr{H}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). Then, there are real numbers \(\gamma_{n}=o(1)\) and \(\widetilde{\gamma}_{n}=o(1)\) such that \(n\in A_{\mathbf{b}}\) if and only if_
\[\log_{\psi}\mathbf{b}\cdot\widehat{H}+\gamma_{n}\;\leq\;\operatorname{frc} \left(\mathfrak{H}^{-1}(K_{n})\right)\;<\;\log_{\psi}\widetilde{\mathbf{b}} \cdot\widehat{H}+\widetilde{\gamma}_{n}, \tag{11}\]
_where \(\widetilde{\gamma}_{n}=0\) when \(\mathbf{b}\) is the largest leading block of length \(s\)._
There is no difficulty in applying the arguments of the proof of Lemma 4.4 to Lemma 6.13, and we leave the proof to the reader.
Recall Definition 6.10.
**Theorem 6.14**.: _Let \(K\) be an increasing sequence of positive integers such that \(\operatorname{frc}\left(\mathfrak{H}^{-1}(K_{n})\right)\) is equidistributed. Then, \(K\) satisfies strong Benford's Law under the \(\mathscr{H}\)-expansion._
There is no difficulty in applying the arguments of the proof of Theorem 4.5 to Theorem 6.14, and we leave the proof to the reader.
### Absolute Benford's Law
Introduced in [10] is a full generalization of Zeckendorf expressions, which is based on the very principle of how Zeckendorf expressions are constructed in terms of lexicographical order. In this most general sense, the collection \(\mathcal{H}\) in Definition 6.1 is called a periodic Zeckendorf collection of coefficient functions. We believe that a property concerning all periodic Zeckendorf collections may be noteworthy, and as in the notion of normal numbers, we introduce the following definition.
**Definition 6.15**.: A sequence \(K\) of positive integers is said to _satisfy absolute Benford's Law_ if \(K\) satisfies strong Benford's Law under \(\mathcal{H}\)-expansion for each periodic Zeckendorf collection \(\mathcal{H}\).
Recall the Lucas sequence \(K=(2,1,3,4,\ldots)\) from Example 5.10. It satisfies strong Benford's Law under all base-\(b\) expansions, but it does not satisfy strong Benford's Law under Zeckendorf expansion. Thus, the Lucas sequence does not satisfy absolute Benford's Law.
**Theorem 6.16**.: _Let \(\gamma>1\) be a real number such that \(\gamma\) is not equal to \(\psi^{r}\) for any \(r\in\mathbb{Q}\) and any dominant real zero \(\psi\) of \(g_{\mathcal{H}}\) where \(\mathcal{H}\) is as defined in Definition 6.5. Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\gamma^{n}\right\rfloor\). Then, \(K\) satisfies absolute Benford's Law._
Proof.: Let \(H\) and \(\psi\) be as defined in Definitions 6.3 and 6.5, and let \(\mathfrak{H}\) be the Benford continuation defined in Definition 6.11. Note that \(\psi\) is algebraic. Notice that \(\left\lfloor\gamma^{n}\right\rfloor=\gamma^{n+o(1)}\), and \(\log_{\psi}(\gamma)\) is irrational. Thus, by Lemma 6.12,
\[\mathfrak{H}^{-1}(K_{n})\,=\,(n+o(1))\log_{\psi}(\gamma)-\log_{\psi}(\delta)+ o(1)\,=\,n\log_{\psi}(\gamma)-\log_{\psi}(\delta)+o(1).\]
By Weyl's Equidistribution Theorem,
\[\Rightarrow\,\operatorname{Prob}\left\{\,n\in\mathbb{N}\,\colon\operatorname{ frc}\left(\mathfrak{H}^{-1}(K_{n})\right)\leq\beta\,\right\}\,=\,\operatorname{Prob} \left\{\,n\in\mathbb{N}\,\colon\operatorname{frc}\left(n\log_{\psi}(\gamma) \right)\leq\beta\,\right\}\,=\,\beta.\]
By Theorem 6.14, \(K\) satisfies Benford's Law under \(\mathcal{H}\)-expansion.
**Corollary 6.17**.: _Let \(\gamma>1\) be a real number that is not an algebraic integer. Then, the sequence \(K\) given by \(K_{n}=\left\lfloor\gamma^{n}\right\rfloor\) satisfies absolute Benford's Law._
Proof.: The dominant real zero \(\psi\) defined in Definition 6.5 is an algebraic integer, and so is \(\psi^{r}\) for all \(r\in\mathbb{Q}\). Thus, if \(\gamma\in\mathbb{R}\) is not an algebraic integer, then by Theorem 6.16, \(K\) satisfies absolute Benford's Law.
**Example 6.18**.: Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\frac{\phi}{\sqrt{5}}(\frac{89}{55})^{n}\right\rfloor\), which is considered in the introduction. Since \(\frac{89}{55}\) is not an algebraic integer, by Corollary 6.17, the sequence \(K\) satisfies absolute Benford's Law.
### Other Continuations
Recall Definition 1.6, and that \(H\) is the fundamental sequence of \(\mathcal{H}\) defined in Definition 6.3. As in Section 5, we relate other continuations of \(H\) to the distributions of leading blocks under \(\mathcal{H}\)-expansion.
Recall the Benford continuation \(\mathfrak{H}\) from Definition 6.11, uniform continuations \(h\) and \(h_{\infty}\) from Definition 1.6, and the definition of \(\widetilde{\mathbf{b}}\) from Definition 6.8.
**Theorem 6.19**.: _Let \(h:[1,\infty)\to\mathbb{R}\) be a uniform continuation of \(H\). Then, there is a sequence \(K\) of positive integers approaching \(\infty\), e.g., \(K_{n}=\left\lfloor\mathfrak{H}\big(n+\mathfrak{H}_{n}{}^{-1}\circ h_{n}(\mathrm{frc}(n\pi))\big)\right\rfloor\), such that \(\mathrm{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed._

_Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\mathrm{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed. Let \(\mathbf{b}\in\mathcal{H}_{s}\). Then,_
\[\mathrm{Prob}\big{\{}\,n\in\mathbb{N}:\mathrm{LB}_{s}(K_{n})= \mathbf{b}\,\big{\}} =\,h_{\infty}{}^{-1}\circ\mathfrak{H}_{\infty}(\log_{\psi} \widetilde{\mathbf{b}}\cdot\widehat{H})-h_{\infty}{}^{-1}\circ\mathfrak{H}_ {\infty}(\log_{\psi}\mathbf{b}\cdot\widehat{H})\] \[=\,h_{\infty}{}^{-1}\Bigg{(}\frac{\widetilde{\mathbf{b}}\cdot \widehat{H}-1}{\psi-1}\Bigg{)}-h_{\infty}{}^{-1}\Bigg{(}\frac{\mathbf{b}\cdot \widehat{H}-1}{\psi-1}\Bigg{)}.\]
There is no difficulty in applying the arguments of the proof of Theorem 5.6 to Theorem 6.19, and we leave the proof to the reader.
Recall that \(\mathbf{I}=(0,1)\). As in Definition 5.11, we introduce expressions for \(\mathbf{I}\) that are associated with \(\mathcal{H}\). Recall also the infinite tuple \(\Theta\), \(\theta\), and \(\widehat{H}\), from Definitions 6.1 and 6.5.
**Definition 6.20**.: An infinite tuple \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\) is called an \(\mathcal{H}\)-_expression for \(\mathbf{I}\)_ if there is a smallest \(i\in\mathbb{N}\) such that \(\mu(i)>0\), \((\mu(i),\ldots,\mu(k))\in\mathcal{H}\) for all \(k\geq i\), and for all \(j\in\mathbb{N}_{0}\), the sequence \(\{\mu(j+n)\}_{n=1}^{\infty}\) is not equal to the sequence \(\{\Theta(n)\}_{n=1}^{\infty}\). Let \(\mathcal{H}^{*}\) be the set of \(\mathcal{H}\)-expressions for \(\mathbf{I}\).
Given \(s\in\mathbb{N}\) and \(\{\mu,\tau\}\subset\mathcal{H}^{*}\), we declare \(\mu|s<\tau|s\) if \(\mu|s\cdot\widehat{H}<\tau|s\cdot\widehat{H}\), which coincides with the lexicographical order on \(\mathbb{N}_{0}^{s}\). We define \(\mu\cdot\widehat{H}:=\sum_{k=1}^{\infty}\mu(k)\theta^{k-1}\), which is a convergent series.
Theorem 6.21 and Proposition 6.22 below are proved in [10].
**Theorem 6.21** (Zeckendorf Theorem for \(\mathbf{I}\)).: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathcal{H}^{*}\) such that \(\beta=\sum_{k=1}^{\infty}\mu(k)\theta^{k}=(\mu\cdot\widehat{H})\theta\)._
**Proposition 6.22**.: _Let \(\{\mu,\tau\}\subset\mathcal{H}^{*}\). Then, \(\mu\cdot\widehat{H}<\tau\cdot\widehat{H}\) if and only if \(\mu|s<\tau|s\) for some \(s\in\mathbb{N}\)._
By Theorem 6.21, Proposition 6.22 and (9), the function from \(\{\mu\in\mathcal{H}^{*}:\mu(1)=1\}\) to \([0,1)\) given by the following is bijective:
\[\mu\mapsto\frac{\mu\cdot\widehat{H}-1}{\psi-1},\]
and hence, \(h_{K}^{*}\) defined in Definition 6.23 is well-defined.
**Definition 6.23**.: Let \(K\) be a sequence of positive integers approaching \(\infty\) such that, given \(\mu\in\mathscr{H}^{*}\) with \(\mu(1)=1\), the following limit exists:
\[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB} _{s}(K_{n})\leq\mu|s\,\big{\}}\,. \tag{12}\]
Let \(h_{K}^{*}:[0,1]\to[0,1]\) be the function given by \(h_{K}^{*}(0)=0\), \(h_{K}^{*}(1)=1\), and \(h_{K}^{*}\left(\frac{\mu\cdot\hat{H}-1}{\psi-1}\right)\) is equal to the value in (12). If \(h_{K}^{*}\) is continuous and increasing, then \(K\) is said to _have continuous leading block distribution under \(\mathscr{H}\)-expansion_.
**Theorem 6.24**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathscr{H}\)-expansion. Let \(h_{K}^{*}\) be the function defined in Definition 6.23. Then, there is a uniform continuation \(h\) of \(H_{n}\) such that \(h_{\infty}{}^{-1}=h_{K}^{*}\) and \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._
There is no difficulty in applying the arguments of the proof of Theorem 5.18 to Theorem 6.24, and we leave the proof to the reader.
## 7 Benford behavior within expansions
As mentioned in the introduction, Benford's Law under base-\(b\) expansion arises within Zeckendorf expansions; let us review this result, which is available in [4].
Let \(\mathscr{K}\) be a periodic Zeckendorf collection defined in Definition 6.1, and let \(K\) be the fundamental sequence of \(\mathscr{K}\), defined in Definition 6.3. Let \(S\) be an infinite subset of \(\{K_{n}:n\in\mathbb{N}\}\) such that \(q(S):=\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:K_{n}\in S\,\big{\}}\) exists. Recall the product \(*\) from Definition 2.2. For a randomly selected integer \(n\in[1,K_{t+1})\), let \(\mu*K\) be the \(\mathscr{K}\)-expansion of \(n\), let \(M=\operatorname{len}(\mu)\), and define
\[P_{t}(n):=\frac{\sum_{k=1}^{M}\mu(k)\chi_{S}(K_{k})}{\sum_{k=1}^{M}\mu(k)} \tag{13}\]
where \(\chi_{S}\) is the characteristic function on \(\{K_{k}:k\in\mathbb{N}\}\), i.e., \(\chi_{S}(K_{k})=1\) if \(K_{k}\in S\) and \(\chi_{S}(K_{k})=0\) otherwise. Proved in [3] is that given a real number \(\epsilon>0\), the probability of \(n\in[1,K_{t+1})\) such that \(|P_{t}(n)-q(S)|<\epsilon\) is equal to \(1+o(1)\) as a function of \(t\). For Benford behavior, we let \(S\) be the set of \(K_{n}\) that have a fixed leading decimal digit \(d\). Then, \(q(S)=\log_{10}(1+\frac{1}{d})\), and the probability of having a summand \(K_{n}\) with leading digit \(d\) within the \(\mathscr{K}\)-expansion is nearly \(q(S)\) most of the time.
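As a small illustration, the following is a minimal Python sketch with \(\mathscr{K}\) the Zeckendorf collection and \(S\) the set of Fibonacci numbers with leading decimal digit \(d=1\) (helper names are ours); the average proportion of such summands across random integers comes out near \(\log_{10}2\approx 0.301\), in line with the statement above.

```python
from math import log10
import random

def zeckendorf_summands(n):
    """Greedy Zeckendorf decomposition of n into the Fibonacci numbers 1, 2, 3, 5, ..."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    summands = []
    for f in reversed(fibs):
        if f <= n:
            summands.append(f)
            n -= f
    return summands

def proportion_leading_digit(n, d):
    s = zeckendorf_summands(n)
    return sum(str(f)[0] == str(d) for f in s) / len(s)

random.seed(0)
d, q = 1, log10(2)
samples = [proportion_leading_digit(random.randrange(1, 10 ** 200), d)
           for _ in range(1000)]
print(sum(samples) / len(samples), q)   # both close to 0.30
```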
This result immediately applies to our setup. Let \(\mathscr{H}\) and \(H\) be as defined in Definition 6.1 different from \(\mathscr{K}\) and \(K\). For example, let \(\mathscr{H}\) be the base-\(b\) expressions, and let \(\mathscr{K}\) be the Zeckendorf expressions. Then, \(H\) is the sequence given by \(H_{n}=b^{n-1}\) and \(K=F\) is the Fibonacci sequence. Recall from Definition 6.8 that \(\mathscr{H}_{s}\) is a set of leading blocks under \(\mathscr{H}\)-expansion, and that \(\operatorname{LB}_{s}^{\mathscr{H}}(n)\) denotes the leading block of \(n\) in \(\mathscr{H}_{s}\) under \(\mathscr{H}\)-expansion. By Corollary 4.7, the sequence \(K\) satisfies (strong) Benford's Law under \(\mathscr{H}\)-expansion, i.e.,
\[\operatorname{Prob}\Big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}^{\mathscr{H }}(K_{n})=\mathbf{b}\,\Big{\}}\,=\,\log_{\psi}\frac{\tilde{\mathbf{b}}\cdot \widehat{H}}{\mathbf{b}\cdot\widehat{H}}\]
where \(\mathbf{b}\in\mathscr{H}_{s}\) and \(\psi=b\), and this is Benford's Law under base-\(b\) expansion. The case considered in the introduction is that \(\mathscr{H}\) is the Zeckendorf expansion and \(\mathscr{K}\) is the binary expansion. The following is a corollary of [4, Theorem 1.1]. Recall Definition 6.5.
**Theorem 7.1**.: _Let \(\mathscr{H}\) and \(H\) be as defined in Definition 6.1, and let \(K\) be the fundamental sequence of a periodic Zeckendorf collection \(\mathscr{K}\) such that \(\psi^{r}_{\mathscr{H}}\neq\psi_{\mathscr{K}}\) for all \(r\in\mathbb{Q}\) where \(\psi_{\mathscr{H}}\) and \(\psi_{\mathscr{K}}\) are the dominant real zeros of \(g_{\mathscr{H}}\) and \(g_{\mathscr{K}}\), respectively. Given \(\mathbf{b}\in\mathscr{H}_{s}\), let \(S_{\mathbf{b}}:=\{K_{n}:\mathrm{LB}_{s}^{\mathscr{H}}(K_{n})=\mathbf{b},\ n\in\mathbb{N}\}\). For a randomly selected integer \(n\in[1,K_{t+1})\), let \(P_{t}(n)\) be the proportion defined in (13) with respect to \(S=S_{\mathbf{b}}\). Then, given a real number \(\epsilon>0\), the probability of \(n\in[1,K_{t+1})\) such that_

\[\left|P_{t}(n)-\log_{\psi_{\mathscr{H}}}\frac{\widetilde{\mathbf{b}}\cdot\widehat{H}}{\mathbf{b}\cdot\widehat{H}}\right|\,<\,\epsilon\]
_is equal to \(1+o(1)\) as a function of \(t\)._
## 8 Future work
Instead of the leading digit, one can look at the distribution of the digit in the second, third, or generally any location. For a sequence that is strong Benford, the further to the right we move in location, the more uniform is the distribution of digits. A natural question is to ask whether or not a similar phenomenon happens with Zeckendorf decompositions, especially as there is a natural furthest to the right one can move.
We can also look at signed Zeckendorf decompositions. Alpert [1] proved that every integer can be written uniquely as a sum of Fibonacci numbers and their additive inverses, where if two consecutive summands have the same sign then their indices differ by at least \(4\), and if they are of opposite sign then their indices differ by at least \(3\). We now have more possibilities for the leading block, and one can ask about the various probabilities. More generally, one can consider the \(f\)-decompositions introduced in [13], or the non-periodic Zeckendorf collections introduced in [10].
Additionally, one can explore sequences where there is no longer a unique decomposition, see for example [5, 6, 7, 8, 9], and ask what is the distribution of possible leading blocks. There are many ways we can formulate this question. We could look at all legal decompositions, we could look at what happens for specific numbers, we could look at what happens for specific types of decompositions, such as those arising from the greedy algorithm or those that use the fewest or most summands.
| In the literature, Benford's Law has been considered for base-b expansions where b > 1 is an integer. In this paper, we investigate the distribution of the leading "digits" of sequences of positive integers under other expansions such as the Zeckendorf expansion, and declare how Benford's Law is defined under generalized Zeckendorf expansion. |
2309.11823 | Low luminosity observation of BeXRB source IGR J21347+4737 | In this paper, we report the results of the detailed temporal and spectral
studies of the BeXRB J21347+4737 based on the data from the NuSTAR and
_SWIFT/XRT_ in a wide energy range of 0.5-50 keV. Coherent pulsation
with a period of 322.738$\;\pm\;0.018$ s was found in the light curve,
implying that the source pulsation has spun down by 0.341 s $yr^{-1}$ when compared
with the coherent pulsation estimated from XMM Newton more than 7 years ago.
The pulse profile of the source demonstrates energy dependence and has
evolved with time. The pulse fraction of the source observed by NuSTAR
initially decreases with energy up to $\sim$15 keV, followed by a non-monotonic
increasing trend above 15 keV. The source spectrum can be well approximated by
an absorbed power-law model with modification by an exponential cutoff at high
energies. The absorbed flux of the source is
$4\times10^{-11}\;erg\;cm^{-2}\;s^{-1}$ and its corresponding luminosity is
$3.5\times10^{35}\;erg\;s^{-1}$. The study of pulse-phase resolved spectroscopy
shows a strong variation of spectral parameters on the phase. No additional
emission or absorption features in the form of Fe line or Cyclotron lines were
observed both in the phase-averaged and phase-resolved spectra of IGR
J21347+4737. | Manoj Ghising, Ruchi Tamang, Binay Rai, Mohammed Tobrej, Bikash Chandra Paul | 2023-09-21T06:49:31 | http://arxiv.org/abs/2309.11823v1 | # Low luminosity observation of BeXRB source IGR J21347+4737
###### Abstract
In this paper, we report the results of the detailed temporal and spectral studies of the BeXRB J21347+4737 based on the data from the NuSTAR and _SWIFT/XRT_ in a wide energy range of 0.5-50 keV. Coherent pulsation with a period of 322.738 \(\pm\) 0.018 s was found in the light curve, implying that the source pulsation has spun down by 0.341 s \(yr^{-1}\) when compared with the coherent pulsation estimated from XMM Newton more than 7 years ago. The pulse profile of the source demonstrates energy dependence and has evolved with time. The pulse fraction of the source observed by NuSTAR initially decreases with energy up to \(\sim\)15 keV, followed by a non-monotonic increasing trend above 15 keV. The source spectrum can be well approximated by an absorbed power-law model with modification by an exponential cutoff at high energies. The absorbed flux of the source is \(4\times 10^{-11}\;erg\;cm^{-2}\;s^{-1}\) and its corresponding luminosity is \(3.5\times 10^{35}\;erg\;s^{-1}\). The study of pulse-phase resolved spectroscopy shows a strong variation of spectral parameters on the phase. No additional emission or absorption features in the form of Fe line or Cyclotron lines were observed both in the phase-averaged and phase-resolved spectra of IGR J21347+4737.
accretion, accretion discs - stars: neutron - pulsars: individual: IGR J21347+4737
## 1 Introduction
High Mass X-ray Binaries (HMXBs) are binary systems consisting of an early-type massive star, denoted as the donor star, and a compact object that is either a black hole or a neutron star. Multi-wavelength studies of HMXBs provide exceptional astrophysical laboratories for understanding stellar evolution, accretion physics, and gravitational wave events. HMXBs are categorized into two classes, _viz._ Be/X-ray Binaries (BeXRBs) and Super Giant X-ray Binaries (SGXBs) (Reig et al. 2011). The BeXRB system is known for harboring a neutron star and a fast spinning early-type star with an equatorial circumstellar disc (Reig et al. 2011). Be stars are a subset of B-type stars: non-supergiant, fast rotating, luminosity class III-IV stars which at some point in the past have shown spectral lines in emission (Porter & Rivinius 2003). These systems represent an extreme case of X-ray variability, spanning up to four orders of magnitude. They are most of the time in a quiescent state and show transient character. They are best studied by various observatories during bright outbursts, when the count rate is significantly enhanced. These systems undergo two types of outbursts, Type I and Type II (Reig & Nespoli 2013). The luminosity during Type-II outburst activity is 1-2 orders of magnitude higher than in Type-I outbursts. Type-II outbursts are less frequent than Type-I outbursts, which are periodic, occurring as a function of the orbital period. Type-II outbursts also last relatively longer than Type-I, covering a large fraction of an orbital period or several orbital periods (sources reported to undergo Type-II outbursts include EXO 2030+375 (Wilson, Finger & Camero 2008) and GRO J2058+42 (Molkov et al. 2019)), while Type-I outbursts cover a small fraction of the orbital period (0.2-0.3 \(P_{orb}\)) (sources reported to undergo Type-I outbursts include LXP 38.55 (Vasilopoulos et al. 2016a) and GX 304-1 (Jaisawal et al. 2016)).
The BeXRB IGR J21347+4737 was discovered using INTEGRAL/IBIS in 2002 (Krivonos 2007; Bird 2007). Chandra observations (Sazonov et al. 2008) helped in determining an accurate source position, which allowed identification of the optical counterpart using observations in the optical band; the optical counterpart has also been observed by Masetti et al. (2009), with an optical spectral appearance completely different from the findings of Bikmaev et al. (2008). Masetti et al. (2009) report a source distance of about 5.8 kpc. The optical counterpart is present in the Gaia DR3 catalogue (Gaia DR3 1978365123143522176), where it has a distance of 5.1 kpc. An important property of this star is that it is a shell Be star, implying that it is observed almost edge on (e.g. Reig & Fabregat 2015). The coherent pulsation of the source was first reported at 320.35 s by Reig & Zezas (2014). During the all-sky X-ray monitoring campaign of IBIS, the source
went from the active state (2002 Dec - 2004 Feb) to inactivity (2004 Mar. - 2007 Feb.). The average X-ray flux (17-60 keV) of the source during the period of activity was \((2.3\pm 0.4)\times~{}10^{-11}~{}erg~{}cm^{-2}~{}s^{-1}\) (Bikmaev et al. 2008). The source flux in the 4-12 keV band was reported to have increased to \(7\times 10^{-11}~{}erg~{}cm^{-2}~{}s^{-1}\), in comparison to \(1.3\times 10^{-11}~{}erg~{}cm^{-2}~{}s^{-1}\), during the second all-sky survey of the ART-XC telescope onboard the SRG observatory. The SRG/ART-XC reported a possible beginning of a new outburst from the source; however, later NuSTAR observations, about 14 days later, found that the source flux had decreased to 1.49 \(\times~{}10^{-11}~{}erg~{}cm^{-2}~{}s^{-1}\), which hinted that the source was not entering an outburst state.
In this paper, we probe the detailed X-ray timing and spectral properties of the BeXRB IGR J21347+4737 in the broadband 0.5-50 keV energy range. The _Swift_ and NuSTAR observations have been considered for the analysis.
### NuSTAR
The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA space-based X-ray telescope for studying high-energy X-rays from astrophysical sources. It was the first hard X-ray focusing telescope; it operates in the energy range (3-79) keV and consists of two identical X-ray telescope modules that have their own focal plane modules, referred to as fpma and fpmb (Harrison et al. 2013). The telescope provides X-ray imaging, timing, and spectroscopy with an angular resolution of 18 arcsec and a spectral resolution of 400 eV at 10 keV. The light curves and the spectra were analyzed using the latest version of heasoft _v6.30.1_. The data reduction of the source IGR J21347+4737 was processed using the standard software nustardas v1.9.7. In order to get the clean event files for the respective modules, we ran the mission-specific task nupipeline. Using the standard _ftool_ xselect combined with the SAOImage ds9 application software, the source and background regions were selected: a circular region of 80" around the source center was taken as the source region file, and a background region of the same size, taken away from the source, was used as the background region file. The light curve and spectral files were obtained by running the script nuproducts, making use of the region files obtained earlier. The background subtraction of the light curves for both instruments, fpma and fpmb, was done using the _ftool_ lcmath, and finally, barycentric corrections were performed with the help of the _ftool_ barycorr.
### XMM-Newton
The XMM-Newton data were extracted so that we could conduct a comparative analysis in the soft energy band. The XMM-Newton epic instrument data were reduced using the Science Analysis System software sas _version 20.0.0_. In order to minimize the pile-up effect, both the mos and pn data were taken in small window mode. The epic data were screened and filtered using the script epchain for the pn detector and emchain for the mos 1 and mos 2 detectors. We excluded all events at the edge of the ccd and from bad pixels by setting flag=0 and selecting pn events with pattern in the range 0-4 and mos data with pattern\(\leq\)12. The source photons were extracted by considering a circular aperture of radius 25 arcsec, and the background was also taken from the same ccd chip with a circular region of radius 25 arcsec. Light curves at different energies were extracted using the concatenated and calibrated epic event lists available in the pps pipeline products. The light curves were barycentred to the solar frame by using the task barycen.
### Swift
The _Swift_/X-ray Telescope (XRT) data were utilised to constrain the spectral fit in the soft energy band (1-10) keV. The _Swift_ observatory also includes two other instruments, _viz._ the Burst Alert Telescope (BAT) and the UV/Optical Telescope (UVOT), which are not considered in the present manuscript. The three instruments XRT, BAT and UVOT combined cover an energy range of 0.002-150 keV. The _Swift_ observatory performs regular monitoring of the X-ray sky in the energy range (15-50) keV (Krimm et al. 2013). The _Swift/XRT_ operates in the energy range 0.5-10 keV. Its data were extracted by running the mission-specific task xrtpipeline. The event files were extracted using the _ftool_ xselect. The extracted image was viewed using the astronomical imaging software ds9. A circular source region of 30 arcsec and a background region of the same size were considered for observations in photon counting mode.
## 2 Timing Analysis
### Pulse period estimation
The source and background light curves with a bin size of 2 s were considered for the _NuSTAR_ pulse period estimation. An approximate pulse period of the source was detected using the heasarc task powspec, and the precise value was determined by making use of the epoch folding technique (Leahy et al. 1983) through the efsearch tool. The precise pulse period of the source IGR J21347+4737 was detected at 322.738\(\pm\)0.018 s. The
uncertainties in the spin period were estimated by simulating light curves following the methods outlined in Boldin et al. (2013). For this, we generated 1000 simulated light curves such that the count rates are within the errors of the original data. Next, we applied the epoch folding technique to each simulated light curve to obtain the spin period distribution. The mean value and the standard deviation of the distribution were then obtained, and the standard deviation thus obtained was taken as the spin period uncertainty.
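For illustration, a minimal NumPy sketch of the epoch-folding search and the Monte Carlo uncertainty estimate described above is given below; the array names and the light-curve input are placeholders, and the actual analysis was performed with the HEASoft tools (efsearch, lcmath) mentioned in the text.

```python
import numpy as np

def fold_chi2(time, rate, period, nbins=16):
    """Epoch-folding statistic: chi^2 of the folded profile against a constant."""
    phase = (time / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    mean_all = rate.mean()
    chi2 = 0.0
    for b in range(nbins):
        sel = bins == b
        if sel.sum() < 2:
            continue
        mean_b = rate[sel].mean()
        err_b = rate[sel].std(ddof=1) / np.sqrt(sel.sum())
        chi2 += (mean_b - mean_all) ** 2 / max(err_b ** 2, 1e-12)
    return chi2

def best_period(time, rate, trial_periods, nbins=16):
    chi2 = [fold_chi2(time, rate, p, nbins) for p in trial_periods]
    return trial_periods[int(np.argmax(chi2))]

def period_uncertainty(time, rate, err, trial_periods, nsim=1000, nbins=16):
    """Boldin et al. (2013)-style estimate: perturb the rates within their
    errors, repeat the period search, and take the spread of the results."""
    rng = np.random.default_rng(0)
    periods = [best_period(time, rate + rng.normal(0.0, err), trial_periods, nbins)
               for _ in range(nsim)]
    return np.mean(periods), np.std(periods)
```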
In order to develop an understanding of the spin evolution of the source, we carried out the same analysis as above for the _XMM-Newton_ observation taken more than 7 years earlier. The analysis of the light curve (bin time 40 s) estimates the spin period of the source at 320.351\(\pm\)0.022 s. Therefore, it turns out that over the gap of more than 7 years the pulsations of the source have spun down at a rate of 1.08\(\times 10^{-8}\ s\ s^{-1}\), consistent with the work of Pike et al. (2020). A combined plot of the periodogram is furnished in Figure 1 for more clarity.
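The quoted spin-down rate follows from simple arithmetic on the two measured periods and the roughly 7 yr baseline between the XMM-Newton (2013-11-24) and NuSTAR (2020-12-17) epochs listed in Table 1:

```python
P_xmm, P_nustar = 320.351, 322.738      # s, measured spin periods
baseline_yr = 7.06                      # yr between 2013-11-24 and 2020-12-17
baseline_s = baseline_yr * 3.156e7      # seconds

dP = P_nustar - P_xmm                   # 2.387 s
print(dP / baseline_yr)                 # ~0.34 s / yr
print(dP / baseline_s)                  # ~1.07e-8 s/s, consistent with the quoted 1.08e-8 s/s
```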
### Pulse Profile and Pulse Fraction
The pulse profile of the source was analyzed by resolving the NuSTAR light curves in the 3-79 keV energy range into several energy bands of 3-7 keV, 7-12 keV, 12-18 keV, 18-30 keV, and 30-50 keV. A similar analysis of the XMM-Newton light curves was carried out in the 0-3 keV, 3-7 keV, and 0.5-10 keV bands. The pulse profiles shown in Figures 2 and 3 have been folded by defining the zero-point at the minimum flux. The pulse profiles of both observations are in general single-peaked and asymmetric, demonstrating weak dependence on energy. Owing to limited statistics, no significant variations in the higher energy bands are observed. The peak emission in the lowest energy band lies in the phase interval 0.4-0.5, with an additional emission component at phase interval 0.6-0.7. When compared to the XMM-Newton observation, e.g. for the 3-7 keV pulse profile, the peak emission has shifted towards lower phase intervals with time. This firmly establishes that the pulse profile has evolved with time.
The Pulse Fraction (PF) is defined as the ratio of the difference between the maximum and minimum intensity (\(F_{max}-F_{min}\)) to the sum of the maximum and minimum intensity (\(F_{max}+F_{min}\)) of the pulse profile, i.e. \(PF~{}=(F_{max}-F_{min})/(F_{max}+F_{min})\). If we ignore the XMM-Newton data, the pulse fraction of the NuSTAR observation initially decreases up to 15 keV and then, above 15 keV, increases with energy, which is quite typical of these X-ray pulsars (Lutovinov & Tsygankov 2009).
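A minimal NumPy implementation of this PF definition, applied to a folded-profile array (a placeholder input), is:

```python
import numpy as np

def pulse_fraction(profile):
    """PF = (Fmax - Fmin) / (Fmax + Fmin) of a folded pulse profile."""
    return (np.max(profile) - np.min(profile)) / (np.max(profile) + np.min(profile))

# e.g. pf = pulse_fraction(profile_3_7_keV) for the 3-7 keV folded profile
```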
### Hardness Ratio (HR)
The HR is defined as the ratio of the unnormalized pulse profiles in the corresponding energy bands, 10-20 keV / 3-10 keV and 3-15 keV / 15-50 keV, respectively. It is observed from Figure 5 that the HR exhibits a two-peaked structure, with a broad maximum at the initial phase \(\sim\)0.1 and a narrow one at \(\sim\)0.5. The HR shows a significant anti-correlation with the continuum pulse profile.
| Observatory | Date of observation | Obs ID | Exposure (in ksec) | Count rate (\(c\ s^{-1}\)) |
| --- | --- | --- | --- | --- |
| NuSTAR | 2020-12-17 | 90601339002 | 27.10 | 1.54\(\pm\)0.01 |
| _Swift/XRT_ | 2020-12-17 | 00089189001 | 1.63 | 0.077\(\pm\)0.006 |
| XMM-Newton | 2013-11-24 | 0727961301 | 30 | 0.14\(\pm\)0.02 (mos1), 0.13\(\pm\)0.01 (mos2), 0.38\(\pm\)0.03 (pn) |

Table 1: Observation details of the source IGR J21347+4737.
Figure 1: Periodogram of the source IGR J21347+4737 corresponding to both NuSTAR (black) and XMM-Newton (orange) observation.
## 3 Spectral Analysis
### Phase-average spectroscopy
The combined _Swift_-NuSTAR phase-averaged spectra of the BeXRB IGR J21347+4737 were fitted in the energy range (0.5-50) keV (see Figure 6). We ignored the NuSTAR spectrum below 3 keV, because the instrument is well calibrated only above 3 keV, and above 50 keV, due to background domination. The _Swift/XRT_ spectra have been considered in the energy range 0.5-10 keV in order to cover the part of the spectrum missed by NuSTAR below 3 keV. The spectra of both FPMA and FPMB were grouped to have at least 30 counts per bin using the tool grppha. Inaccuracies in the calibration between _Swift_ and NuSTAR were accounted for by introducing cross-calibration multiplicative factors (the constant model in xspec). The constant parameter of FPMA was fixed to unity, while the parameters for FPMB and _Swift_ were left free. Several phenomenological models that are broadly used to approximate the spectra of X-ray pulsars were applied. In particular, the broadband spectra can be best fitted by a simple cutoffpl model (model i). The contribution of the neutral hydrogen column density (\(n_{H}\)) was modelled by the photoabsorption model (tbabs in xspec) with the solar abundances from Wilms et al. (2000). No additional modification with a soft blackbody was required to improve the fit. A different continuum component, in the form of powerlaw\(\times\)highecut (model ii), was tested in place of cutoffpl; it also fits the spectra very well, with similar quality, however the cutoffpl continuum makes it slightly better (see Table 2). The spectrum of the source has a typical shape for X-ray pulsars (Coburn et al. 2002; Filippova et al. 2005). None of the model combinations revealed a significant presence of the fluorescent iron line (Fe \(K_{\alpha}\) line). The absorbed flux of the source in the 0.5-50 keV energy range was found to be \(\sim 4\times 10^{-11}\)_erg cm\({}^{-2}\)_s\({}^{-1}\)_ and the corresponding luminosity is \(3.45\times 10^{35}\)_erg s\({}^{-1}\)_, assuming a distance of 8.5 kpc (Reig & Zezas 2014).
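The quoted luminosity follows directly from \(L=4\pi d^{2}F\); a quick numerical check with the distance assumed in the text:

```python
import numpy as np

kpc_cm = 3.086e21
d = 8.5 * kpc_cm          # assumed distance (Reig & Zezas 2014), in cm
F = 4.0e-11               # absorbed 0.5-50 keV flux, erg cm^-2 s^-1

L = 4.0 * np.pi * d**2 * F
print(f"{L:.2e} erg/s")   # ~3.5e35 erg/s, as quoted for the NuSTAR epoch
```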
For a comparative study and to understand the evolution of the spectral parameters with luminosity and time, we have fitted the XMM-Newton observation in the 0.5-10 keV range. We considered the joint fitting of mos1, mos2, and pn in our analysis (see Figure 7). The calibration uncertainty was accounted for by keeping the constant parameter of mos1 fixed while leaving it free for the other two instruments. We suitably fitted the spectra by an absorbed power-law model with the addition of a bbodyrad component, _i.e._ constant\(\times\)tbabs\(\times\)(powerlaw+blackbody). The details of the spectral parameters can be seen in Table 3. The absorbed flux of the source in the 0.5-10 keV energy range was found to be \(\sim 4\times 10^{-12}\)_erg cm\({}^{-2}\)_s\({}^{-1}\)_ and
Figure 4: Variation of Pulse Fraction (PF) with energy of IGR J21347+4737. The two symbols marked in black color correspond to XMM-Newton observations while blue color corresponds to NuSTAR observation.
Figure 3: Folded pulse profile of IGR J21347+4737 in several energy bands corresponding to XMM-Newton observation.
| Parameters | Model I (cutoffpl) | Model II (highecut) |
| --- | --- | --- |
| \(C_{FPMA}\) | 1 (fixed) | 1 (fixed) |
| \(C_{FPMB}\) | 1.003\(\pm\)0.013 | 1.003\(\pm\)0.013 |
| \(C_{XRT}\) | 0.84\(\pm\)0.02 | 0.84\(\pm\)0.03 |
| \(n_{H}\) (\(\times 10^{22}\ cm^{-2}\)) | 4.37\(\pm\)0.54 | 5.03\(\pm\)0.92 |
| \(\Gamma\) | 1.30\(\pm\)0.07 | 1.34\(\pm\)0.09 |
| \(E_{CUT}\) (keV) | 20.06\(\pm\)2.26 | 5.98\(\pm\)0.69 |
| \(E_{fold}\) (keV) | - | 21.78\(\pm\)3.08 |
| Flux (\(\times 10^{-11}\ erg\ cm^{-2}\ s^{-1}\)) | 4.13\(\pm\)0.09 | 4.17\(\pm\)0.08 |
| \(\chi^{2}_{\nu}\) | 1.09 | 1.08 |

Table 2: Fit parameters of IGR J21347+4737 for the _Swift/XRT_ and NuSTAR observations in the energy range 0.5-50 keV, using the continuum models constant\(\times\)tbabs\(\times\)cutoffpl and constant\(\times\)tbabs\(\times\)highecut\(\times\)po. \(n_{H}\) is the equivalent hydrogen column density, \(\Gamma\) and \(E_{CUT}\) are the photon index and cutoff energy of the cutoffpl model, and \(E_{fold}\) is the folding energy of the highecut model. Fluxes were calculated within the energy range 0.5-50 keV. The fit statistic \(\chi^{2}_{\nu}\) is the reduced \(\chi^{2}\) (\(\chi^{2}\) per degree of freedom). Errors quoted for each parameter are within the 90% confidence interval.
Figure 5: Variation of Hardness Ratio (HR) for the IGR J21347+4737 pulse profiles as a function of the spin phase. The averaged pulse profile in a wide energy range 3-50 keV is superimposed in gray color for visual comparison.
the corresponding luminosity is \(3.45\times 10^{34}\ erg\ s^{-1}\) assuming a distance of 8.5 kpc (Reig & Zezas 2014).
### Phase-Resolved Spectroscopy
In order to understand the evolution of the spectral parameters as a function of the rotation phase, we performed a phase-resolved spectral analysis of the source in this section. The source pulse period was divided into 10 evenly distributed rotational phase bins. We created good time interval (gti) files based on the folding epoch and rotational period corresponding to each phase. Using gti files merged by mgtime, we ran nuproducts for each phase to get the final spectral files corresponding to the 10 phases.
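Conceptually, each phase bin corresponds to a set of time intervals built from the folding epoch and the spin period; a simplified sketch of that bookkeeping is given below (the epoch, period and observation span are placeholders, and the actual GTI files were produced with the HEASoft tools mentioned above):

```python
import numpy as np

def phase_bin_gtis(t_start, t_stop, epoch, period, nphase=10):
    """Return, for each of nphase phase bins, a list of (start, stop) intervals."""
    gtis = [[] for _ in range(nphase)]
    n_first = int(np.floor((t_start - epoch) / period))
    n_last = int(np.ceil((t_stop - epoch) / period))
    for n in range(n_first, n_last + 1):
        cycle_start = epoch + n * period
        for k in range(nphase):
            a = max(cycle_start + k * period / nphase, t_start)
            b = min(cycle_start + (k + 1) * period / nphase, t_stop)
            if b > a:
                gtis[k].append((a, b))
    return gtis

# e.g. gtis = phase_bin_gtis(t0, t0 + 27.1e3, epoch=t0, period=322.738)
```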
The 3-50 keV spectra corresponding to all 10 phases were approximated with the best-fit model constant\(\times\)tbabs\(\times\)cutoffpl used for the phase-averaged spectrum. Figure 8 shows the evolution of the spectral parameters Flux, \(\Gamma\), and \(E_{cut}\) with the rotational phase. The hydrogen column density (\(n_{H}\)) was fixed at the phase-averaged value of \(4.37\times 10^{22}\ cm^{-2}\). It is obvious that all the spectral parameters feature strong variations with a large amplitude over the pulse phase. The flux of the source varied between (3-5) \(\times\) 10\({}^{-11}\ erg\ cm^{-2}\ s^{-1}\) and follows the continuum pulse profile, shown in the background in grey color (see Figure 8). The Photon Index (\(\Gamma\)) has a maximum value of 1.29 in the phase interval (0.5-0.6) and a minimum of 0.97 in the phase interval (0.3-0.4). The cutoff energy (\(E_{cut}\)) parameter of the cutoffpl model shows variability at all phases, with a maximum value of 29.83 keV in the
| Parameters | XMM-Newton data |
| --- | --- |
| \(C_{MOS1}\) | 1 (fixed) |
| \(C_{MOS2}\) | 1.06\(\pm\)0.04 |
| \(C_{pn}\) | 1.03\(\pm\)0.03 |
| \(n_{H}\) (\(\times 10^{22}\ cm^{-2}\)) | 0.96\(\pm\)0.07 |
| bbodyrad kT (keV) | 1.70\(\pm\)0.19 |
| bbodyrad norm | 0.019\(\pm\)0.004 |
| \(\Gamma\) | 0.81\(\pm\)0.12 |
| Flux (\(\times 10^{-12}\ erg\ cm^{-2}\ s^{-1}\)) | 3.85\(\pm\)0.16 |
| \(\chi^{2}_{\nu}\) | 1.10 |

Table 3: Fit parameters of IGR J21347+4737 for the XMM-Newton observation, using the continuum model constant\(\times\)tbabs\(\times\)(powerlaw+blackbody). \(n_{H}\) is the equivalent hydrogen column density, \(\Gamma\) is the photon index and kT is the blackbody temperature. Fluxes were calculated within the energy range 0.5-10 keV. The fit statistic \(\chi^{2}_{\nu}\) is the reduced \(\chi^{2}\) (\(\chi^{2}\) per degree of freedom). Errors quoted for each parameter are within the 90% confidence interval.
Figure 6: Folded spectrum of IGR J21347+4737 and its approximation with the model constant\(\times\)tbabs\(\times\)cutoffpl in the energy range 0.5-50 keV. Red and black colors are for the FPMA and FPMB telescopes of the NuSTAR observatory, while green color corresponds to _Swift/XRT_.
Figure 7: Folded spectrum of XMM-Newton in the energy range 0.5-10 keV. The figure shows the spectrum of IGR J21347+4737 and its approximation with the model constant\(\times\)tbabs\(\times\)(powerlaw+blackbody). Red and black colors correspond to MOS 1 and MOS 2, while green color is for pn.
phase interval (0.2-0.3) and minimum value of 12.03 keV in the initial phase.
## 4 Discussion and Conclusion
In this paper, we have investigated the BeXRB IGR J21347+4737 in the energy band 0.5-50 keV. Previous work by Reig & Zezas (2014) presented the timing and spectral properties of the source in the 1-12 keV energy range by analyzing the XMM-Newton observation. The timing analysis of the light curve detects coherent pulsations of the source at 322.738 \(\pm\) 0.018 s. In the same analysis using the XMM-Newton observation, we found the spin period of the source at 320.351\(\pm\)0.022 s. This leads to the conclusion that the source has undergone spin down over the span of more than 7 years at a rate of 0.341 s \(year^{-1}\). When the source pulsation was first discovered, the source was in a low optical state, meaning that the \(H_{\alpha}\) line was in absorption, which implies the disappearance of the Be star's circumstellar disc (Reig & Zezas 2014). However, the detection of pulsations in both the XMM-Newton and the NuSTAR observations indicates that the accretion mechanism still remains active. The main contribution to the X-ray emission in the absence of the disc is accretion from the stellar wind (Reig & Zezas 2014).
The pulse profiles of IGR J21347+4737 in different energy bands demonstrate a single peak and are asymmetric in nature, indicating that the pulsar is probably turned towards the observer by one of its poles, leaving the other one practically invisible. Various theoretical model formulations exist that explain the asymmetric nature of pulse profiles. One such formulation invokes a distorted magnetic dipole field: in this formalism, the magnetic poles not being exactly opposite to one another may be one of the probable reasons for the asymmetric shape of pulse profiles (Parmar et al. 1989; Leahy 1991; Riffert et al. 1993; Bulik et al. 1995). The other reason considered is an asymmetric accretion stream (Basko & Sunyaev 1976; Wang & Welter 1981; Miller 1996). As evident from Figures 2 and 3, the profile is single-peaked, reflecting a pencil beam pattern in the neutron star radiation. The simple single-peaked profile of X-ray pulsars in the broadband energy range 3-79 keV is supported by the argument of Vasilopoulos et al. (2017). The source luminosity in our spectral analysis is of the order of \(10^{35}\) erg s\({}^{-1}\) for the NuSTAR observation. At this luminosity, we cannot expect the formation of an extended accretion column (Mushtukov et al. 2015a,b). Therefore, most of the X-ray photons in this scenario should originate from the region very close to the surface of the neutron star. As a result, the emission pattern in such a case is characterized by a pencil beam shape (Basko & Sunyaev 1976). Thus, it is convincing to say that the source may be accreting in the sub-critical regime. Generally, a pencil beam pattern is characterized by luminosities lower than the critical luminosity (\(L_{\rm c}\)). \(L_{\rm c}\) is defined as the luminosity below which the accretion phenomenon is characterized by a pencil-beamed pattern and above which it is characterized by a fan-beamed pattern. The pencil-beamed pattern indicates that the source may be accreting in the sub-critical regime, while the fan-beamed pattern indicates that the source may be accreting in the super-critical regime. In the sub-critical regime, the accreted matter reaches the surface of the neutron star through nuclear collisions with atmospheric protons or Coulomb collisions with thermal electrons (Harding et al. 1994), whereby the emission occurs from the top of the column (Burnard, Arons & Klein 1991). It is also clear that the observed source luminosity in the NuSTAR observation is much greater than the minimum luminosity (2.2 \(\times\) 10\({}^{32}\)\(erg\)\(s^{-1}\); Reig & Zezas 2014) at which the propeller effect sets in.
The pulse profile of the source is represented in Figures 2 and 3 and evolves with time indicating a change in the accretion geometry.
The PF of the source shows a dramatic variation with energy that is rarely observed in low-luminosity X-ray pulsars. Initially, the PF is found to decrease steadily with energy up to 15 keV, followed by a non-
Figure 8: Variation of spectral parameters with pulse phase. \(\Gamma\) and \(E_{cut}\) represent the Photon Index and cutoff energy of the cutoffpl model. Fluxes were calculated within the energy range (3-50) keV for the NuSTAR observation. Errors quoted for each parameter are within the 90% confidence interval. The averaged pulse profile in a wide energy range 3-50 keV is superimposed in gray color for visual comparison.
monotonic increasing trend above 15 keV, which is typical for X-ray pulsars (Lutovinov & Tsygankov 2009). Such variations are usually seen at higher luminosities but are rarely observed at low luminosities.
The broadband 0.5-50 keV spectrum of the source is well fitted by an absorbed power law modified by a cutoff at high energy. Another continuum model, in the form of highecut, was tested in place of cutoffpl, and this combination also fits the spectra very well. No other emission components in the form of a blackbody or a fluorescent iron line were required to modify the spectra. A detection of the iron line would have provided strong evidence for the presence of material in the vicinity of the source. In addition, no absorption lines in the form of a Cyclotron line were present in the phase-averaged spectra. There was no significant presence of features like a CRSF, which would offer a straightforward way to determine the magnetic field in such systems. The lack of a CRSF suggests that the magnetic field of the source is either weaker than \(\sim 5\times 10^{11}\ G\) or stronger than \(\sim 6\times 10^{12}\ G\), considering the lower and upper limits of the full NuSTAR energy range (3-79) keV, below and above which the sensitivity does not allow detection of such features.
When a source undergoes an outburst, it quickly reaches its peak luminosity and then the rate of mass accretion slows down. After an outburst, the pulsar could go through a phase called the propeller phase, which causes the luminosity to suddenly decrease. \(L_{prop}\) stands for the corresponding luminosity at which the source transitions to the propeller phase. One of the causes of the luminosity's abrupt decline is the magnetosphere's centrifugal barrier, which is produced when the magnetosphere rotates faster than the local Keplerian velocity. The expression for the propeller luminosity (\(L_{prop}\)) is,
\[L_{prop}\ =\ 4\times 10^{37}k^{7/2}B_{12}^{2}P_{s}^{-7/3}M_{1.4}^{-2/3}R_{6}^ {5}[erg/s] \tag{1}\]
where k=0.5 is used to account for the interaction between the magnetosphere and the accretion flow (Ghosh & Lamb 1978), \(B_{12}\) is the magnetic field in units of \(10^{12}G\), \(P_{s}\) is the spin period in seconds, \(M_{1.4}\) is the mass of the NS in units of 1.4 solar masses and \(R_{6}\) is the radius of the NS in units of \(10^{6}\) cm. If the magnetic field of the source is constrained at \(10^{14}\ G\), using the source pulsation of \(\sim\)322 s, we estimated \(L_{prop}\sim 5\times 10^{32}\ erg\ s^{-1}\). Considering instead a magnetic field of \(10^{12}\) G, we found \(L_{prop}\sim 5\times 10^{30}\ erg\ s^{-1}\). Thus the calculated \(L_{prop}\) is roughly about \(10^{3}-10^{5}\) times less than the observed luminosity of the source. It is therefore reasonable to conclude that the source, with a pulse period of 322 s and a magnetic field of \(10^{12}-10^{13}\ G\), may not reach the propeller phase. As a result, the source's magnetic field must be of the order of \(10^{14}\ G\) for it to enter the propeller phase at the measured luminosity of the source.
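Equation (1) is straightforward to evaluate; as a sanity check, for \(k=0.5\), \(M_{1.4}=R_{6}=1\), \(P_{s}=322.738\) s and \(B_{12}=1\) it reproduces the \(\sim 5\times 10^{30}\ erg\ s^{-1}\) value quoted above:

```python
def L_prop(B12, Ps, k=0.5, M14=1.0, R6=1.0):
    """Propeller luminosity of Eq. (1), in erg/s."""
    return 4e37 * k**3.5 * B12**2 * Ps**(-7.0 / 3.0) * M14**(-2.0 / 3.0) * R6**5

print(f"{L_prop(1.0, 322.738):.1e}")   # ~5e30 erg/s for B = 1e12 G
```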
## Data availability
The observational data used in this study can be accessed from the HEASARC data archive and is publicly available for carrying out research work.
## 5 Acknowledgement
This research work has utilized the NuSTAR data archived by the NASA High Energy Astrophysics Science Archive Research Center (HEASARC) online service, which is maintained by the Goddard Space Flight Center. This work has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Space Science Data Center (SSDC, Italy) and the California Institute of Technology (Caltech, USA). We further acknowledge the use of public data from the XMM-Newton observatory. We would like to thank the anonymous reviewer for their kind suggestions, which helped us improve the manuscript to its present form.
 | In this paper, we report the results of detailed temporal and spectral studies of the BeXRB J21347+4737 based on NuSTAR and \textit{SWIFT/XRT} data over the wide 0.5-50 keV energy range. Coherent pulsations with a period of 322.738$\;\pm\;$0.018 s were detected in the light curve, implying that the source has spun down by 0.341 s $yr^{-1}$ when compared with the coherent pulsation measured with XMM-Newton more than 7 years earlier. The pulse profile of the source is energy dependent and has evolved with time. The pulse fraction observed by NuSTAR tends to decrease with energy up to 15 keV, and then increases non-monotonically above 15 keV. |
2306.00173 | Discovering Love numbers through resonance excitation during extreme
mass ratio inspirals | General Relativity predicts that black holes do not possess an internal
structure and consequently cannot be excited. This leads to a specific
prediction about the waveform of gravitational waves, which they emit during a
binary black hole inspiral and to the vanishing of their Love numbers. However,
if astrophysical black holes do possess an internal structure, their Love
numbers would no longer vanish, and they could be excited during an inspiral by
the transfer of orbital energy. This would affect the orbital period and lead
to an observable imprint on the emitted gravitational waves waveform. The
effect is enhanced if one of the binary companions is resonantly excited. We
discuss the conditions for resonant excitation of a hypothetical internal
structure of black holes and calculate the phase change of the gravitational
waves waveform that is induced due to such resonant excitation during
intermediate- and extreme-mass-ratio inspirals. We then relate the phase change
to the electric quadrupolar Love number of the larger companion, which is
resonantly excited by its smaller companion. We discuss the statistical error
on measuring the Love number by LISA and show that, because of this phase
change, the statistical error is small even for small values of the Love
number. Our results provide a strong indication that the Love number could be
detected by LISA with remarkable accuracy, much higher than what can be
achieved via tidal deformation effects. Our results further indicate that
resonant excitation of the central black hole during an extreme- or
intermediate-mass-ratio inspirals is the most promising effect for putting
bounds on, or detecting, non-vanishing tidal Love numbers of black holes. | Shani Avitan, Ram Brustein, Yotam Sherf | 2023-05-31T20:42:13 | http://arxiv.org/abs/2306.00173v1 | # Discovering Love numbers through resonance excitation during extreme mass ratio inspirals
###### Abstract
General Relativity predicts that black holes do not possess an internal structure and consequently cannot be excited. This leads to a specific prediction about the waveform of gravitational waves which they emit during a binary black hole inspiral and to the vanishing of their Love numbers. However, if astrophysical black holes do possess an internal structure, their Love numbers would no longer vanish, and they could be excited during an inspiral by the transfer of orbital energy. This would affect the orbital period and lead to an observable imprint on the emitted gravitational waves waveform. The effect is enhanced if one of the binary companions is resonantly excited. We discuss the conditions for resonant excitation of a hypothetical internal structure of black holes and calculate the phase change of the gravitational waves waveform that is induced due to such resonant excitation during intermediate- and extreme-mass-ratio inspirals. We then relate the phase change to the electric quadrupolar Love number of the larger companion, which is resonantly excited by its smaller companion. We discuss the statistical error on measuring the Love number by LISA and show that, because of this phase change, the statistical error is small even for small values of the Love number. Our results provide a strong indication that the Love number could be detected by LISA with remarkable accuracy, much higher than what can be achieved via tidal deformation effects. Our results further indicate that resonant excitation of the central black hole during an extreme- or intermediate-mass-ratio inspirals is the most promising effect for putting bounds on, or detecting, non-vanishing tidal Love numbers of black holes.
Department of Physics, Ben-Gurion University, Beer-Sheva 84105, Israel
[email protected], [email protected], [email protected]
Introduction
General Relativity predicts that black holes (BHs) do not possess an internal structure. They are "bald" and can be characterized solely by their mass and angular momentum [1]. Coalescing BHs radiate gravitational waves (GWs), which have been detected by the LIGO and VIRGO observatories since September 2015 [2]. The calculational efforts for improving the accuracy of the general relativistic (GR) predictions for the emitted GW waveform could hopefully provide an opportunity for testing the baldness of BHs. In particular, the inclusion of tidal interactions may allow us to probe the hypothetical interior structure of the binary companions and quantitatively test the predictions of GR [3, 4, 5, 6, 7, 8].
In spite of the increasing precision of ground-based detectors, their limited frequency band enables observations of only a few cycles in the inspiral phase of a binary-BH (BBH) coalescence event, and only for a limited range of masses. The LISA space detector [9, 10], whose design sensitivity is maximal in the mHz region, is expected to be able to detect and track many BBH coalescence events from the early stages of the inspiral through the merger to the late post-merger ringdown.
In GR, the interior of BHs is vacuum, except for a possibly singular core. But is this their true description? A common expectation is that quantum effects at the Planck length scale, or at a somewhat larger length scale, such as the string length scale, will be sufficient to resolve all singularities. However, there are strong indications to the contrary when it comes to the resolution of BH singularities. First, a seemingly necessary condition for evading the singularity theorems [11, 12] and the closely related "Buchdahl-like" bounds [13, 14, 15, 16] is that the geometry is sourced by matter that has the maximal negative radial pressure permitted by causality, \(p_{r}=-\rho\), all the way to the surface of the star [17]. Furthermore, if one also considers the emitted Hawking radiation from such a quantum-regularized BH, one finds an untenable violation of energy conservation: When the scale of resolution is parametrically smaller than that of the Schwarzschild radius, the emitted energy of Hawking particles will greatly exceed the original mass of the collapsing matter [18, 19]. Thus, the tentative conclusion that we will adopt in our following discussion, is that deviations from GR must extend throughout the object's interior, that is, horizon-scale deviations from GR.
The Love numbers of an object encode its response to an external tidal field. These numbers could provide some limited information about the mass distribution and the compactness of the object. The Love numbers determine the possible corrections to the GW signal due to tidal interactions of the companions in a binary system. The quadrupolar Love number \(k_{2}\) identically vanishes for GR BHs in four spacetime dimensions [20, 21, 22, 23, 24, 25, 26], making it a key observable. Measuring non-zero values would indicate a deviation from the GR predictions [27, 28, 29, 30, 31, 32]. If horizon-scale deviations from the GR predictions do occur, then the expectation is that the Love numbers will be small, but not extremely small, suppressed only by some additional perturbative parameter that quantifies the strength of the deviations. The reason for such an expectation is that the Love numbers are normalized such that they are of order unity if all the dimensional scales are of order of the object's radius [29, 30].
Previous studies have primarily focused on measuring the Love numbers using tidal deformability, which constitutes a subleading correction to the emitted GW waveform and enters at 5PN order compared to the dominant point-particle term. Tidal-deformability effects are more pronounced during the late inspiral phase. This makes the measurement of the Love number more challenging, since other finite-size effects are also of similar magnitude, requiring the construction of more accurate GW waveforms and detectors with better sensitivity [3, 33, 34, 35, 36].
For GR BHs, the inspiral evolution is dominated by the point-particle GW emission. For BHs which possess an internal structure, a different, interesting effect can dominate the evolution if the orbital frequency becomes comparable to a characteristic frequency of some internal mode. In this case, this mode is resonantly excited, resulting in a rapid energy transfer from the orbit to the internal mode. The loss of orbital energy effectively advances the inspiral evolution, bringing the companions to a closer and faster orbit. The abrupt energy transfer significantly changes the emitted GW waveform compared to the point-particle waveform, since it leads to an instantaneous phase jump and a secondary, subleading accumulated dephasing due to the differences in orbital velocities. Such resonant energy transfer can only be realized when the internal modes are non-relativistic. The reason is that the Keplerian orbital frequency is much smaller than the relativistic frequency \(c/R\) (\(c\) is the speed of light and \(R\) is the radius of the compact object) when the two objects are far from each other.
Tidal resonant excitations were first discussed in the context of ordinary polytropic stars [37] and, much later, for binaries with at least one of the companions being a neutron star [38, 39, 40]. In these cases, the effect was related to the tidal Love numbers [51, 52, 53, 54, 55, 56]. However, as already mentioned, since the corrections enter the GW waveform at 5PN order, the effect becomes significant during the late inspiral phase, where additional effects are also significant, making it difficult to detect the Love number with high confidence.
More recent studies related to BH mimickers [57, 58, 59, 60], treat the BBH as if they were horizonless ultra-compact objects (UCOs). In [57, 59, 60], the tidal field was exciting some additional spacetime modes of a hypothetical spacetime structure outside the UCO. In [59], the resulting phase shift due to the resonant excitation of these additional spacetime modes was related to the tidal Love numbers, and the detectability of the quadrupolar Love number \(k_{2}\) using observations of ground-based GW detectors and the proposed Einstein telescope was discussed. In [58], the detectability prospects of the resonance effects were discussed, but without connecting the effect to the tidal Love numbers. In this study, no evidence for resonance was found in the observations of the first two runs of Advanced LIGO and Advanced Virgo.
Here, in contrast to previous studies, we discuss the tidal excitation of hypothetical non-relativistic internal modes of the BH, relate the resulting phase shift to the Love numbers and discuss the possible detectability of \(k_{2}\) in LISA observations of IMRIs and EMRIs. We find that this is, by far, the most promising way to put bounds on, or detect Love numbers of astrophysical BHs.
In the following, contrary to GR, we assume that astrophysical BHs do have an internal structure that supports non-relativistic fluid modes. We keep the calculations as model-independent as possible by expressing the model-dependent quantities through the dimensionless tidal Love number. We follow the discussion in [29, 59] to relate the resonance phase shift of the excited modes to the quadrupolar tidal Love number \(k_{2}\), to the internal mode frequencies of quantum black holes [29, 30, 31, 61] and to recent frozen star model results [62, 63, 64]. We estimate the statistical error in the measurement of \(k_{2}\) through resonance excitations during the inspiral of slowly rotating EMRIs and IMRIs, using the design noise spectrum of LISA [9, 65]. We find that the statistical error is small even for small values of the Love number, providing a strong
indication that the Love number could be detected with impressive accuracy. We end with an explicit comparison between the detection prospects of the Love numbers with tidal deformability and tidal resonance, and conclude that resonance excitations are the most promising effect for detecting the Love numbers.
## 2 Tidal-Resonance interaction
Here, we examine the tidal interaction in a binary system, focusing on the central object that is subjected to the weak periodic tidal force exerted by the smaller companion. Following the ideas presented in [38, 42, 54, 56] and more recently in [30], we describe the response of the object to the tidal force from the perspective of an asymptotic observer. The idea is that the object possesses a set of non-relativistic fluid modes which are driven by the tidal force and can therefore be described as a collection of driven harmonic oscillators.
The spectrum of the interior fluid modes depends on the radial number \(n\) and the angular numbers \(l,m\), so their frequencies depend on the three numbers, \(\omega_{nlm}\). Here, we are particularly interested in the dominant effect, which is due to the excitation of the \(n=1\) mode by the quadrupolar tidal field, so we focus on the case \(l=m=2\) [42]. As for the other modes, the spherically symmetric static \(m=0\) mode cannot generate the pressure gradients that are needed for resonance excitation and is therefore not relevant to our discussion. The \(m=1\) mode can be resonantly excited in the case that the spin-orbit configuration is misaligned [42, 44]. Here, we restrict our attention to spin vectors that are aligned with the orbital angular momentum.
The mode corresponding to \(n,\ l,\ m=1,\ 2,\ 2\) is non-relativistic, meaning that, as for neutron stars, \(\omega_{122}\) is parametrically smaller than \(c/R\). The orbital frequency which determines the frequency of the driving tidal force is determined by the Kepler law. It follows, as explained in the Introduction, that as the smaller object gets closer to the central object, the orbital and internal frequencies can match.
When the frequency of one of the interior modes of the central, larger, object, matches the orbital frequency of the companion, it is resonantly excited and efficiently absorbs energy from the orbital motion. The instantaneous energy absorption increases the orbital velocity and shortens
the inspiral duration, thus leading to a phase difference in the emitted GW waveform, when compared to the emitted waveform in the absence of a resonance. To calculate the dephasing of the GW waveform, we adopt the derivation in [39, 43], resulting in the following phase evolution,
\[\begin{cases}\Phi(t)=\Phi_{PP}(t)&t<t_{R}\\ \Phi(t)=\Phi_{PP}(t+\Delta t)-\Delta\Phi&t>t_{R}+\Delta t,\end{cases} \tag{1}\]
where \(\Phi_{PP}(t)\) is the point particle phase, \(t_{R}\) is the time at which the resonance starts, \(\Delta t\) is the resonance duration and \(\Delta\Phi\) is the instantaneous resonance phase difference, which in general depends on the object's properties, as demonstrated below. The point particle phase \(\Phi_{PP}\) is, by definition, independent of the object's composition. In particular, it has the same value for a GR black hole and for one endowed with an internal excitation spectrum, such as the objects we are discussing.
Then, assuming that the resonance duration is short compared to the inspiral duration and under adiabatic evolution, we arrive at the frequency domain resonance phase [39, 43],
\[\Phi(f)\ =\ \Phi(f)_{PP}+\Theta(f-f_{R})\left(\frac{f}{f_{R}}-1\right)\Delta \Phi_{Res}\, \tag{2}\]
where \(f_{R}\) is the internal mode frequency, which satisfies the resonance condition \(2\pi f_{R}=m\Omega\), \(\Omega\) being the orbital angular velocity. Resonance corrections to the phase, \(\Delta\Phi_{Res}\), are composed of two terms: a dominant term that enters at 2.5PN order higher than the leading order point-particle term and a subleading contribution that is 4PN orders higher. The dominant contribution, which is frequency independent and proportional to \(\Delta\Phi\), originates from the instantaneous energy absorption during resonance. The subleading term, which is proportional to the frequency, is a secular effect that increases towards the late stages of the inspiral.
### The phase shift
Fluid perturbations of compact objects are described by the displacement vector \(\xi^{i}\), of a fluid element from its unperturbed position, which is given by the orthonormal base decomposition,
\[\xi^{i}\ =\ \sum_{n}a_{n}\xi^{i}_{n}, \tag{3}\]
\(\xi_{n}\) being the normal displacement vectors, and \(a_{n}\) are the dimensionless displacement amplitudes.1 In the presence of tidal forces, the fluid modes satisfy the damped-driven harmonic oscillator equation [38, 44],
Footnote 1: We use relativistic units \(G,c=1\).
\[\ddot{a}_{nlm}+2\gamma_{n}\dot{a}_{nlm}+\omega_{n}^{2}a_{nlm}\ =\ \mathcal{F}(t)_{nlm}, \tag{4}\]
where \(\gamma_{n}=-\text{Im}\ \omega_{n}\) is the damping rate of the mode. The source of the damping and its precise magnitude are irrelevant for the resulting resonant excitation and dephasing. So, \(\gamma\) can be neglected altogether (see below).
The external periodic force \(\mathcal{F}(t)_{nlm}\) that excites the \(n\)th interior fluid mode is given by
\[\mathcal{F}(t)_{nlm}\ =\ N_{lm}\frac{\mathcal{E}_{l}Q_{nl}}{MR^{2}}e^{-im \phi(t)}\, \tag{5}\]
where \(M\) and \(R\) are the mass and radius of the central object. The order unity factor \(N_{lm}\) is proportional to the Wigner function and is specified below. The tidal field of the \(l\) mode is denoted by \(\mathcal{E}_{l}\), which for the \(l=2,m=\pm 2\) satisfies \(\mathcal{E}_{ij}x^{i}x^{j}=\mathcal{E}r^{2}Y_{2\pm 2}\). The mass moment of the quadrupolar \(n\)th mode \(Q_{n}\) is given by the overlap integral [39],
\[Q_{n}=-\int d^{3}r\delta\rho_{n}r^{2}\, \tag{6}\]
where \(\delta\rho_{n}\) is the corresponding energy density perturbation.
Next, we aim to find the instantaneous phase shift \(\Delta\Phi\) and the corresponding phase evolution
in Eq. (1). We start by solving Eq. (4) for the amplitudes \(a_{n}\), which at resonance is given by [44],
\[a_{n}(t)\ =\ \left(\frac{\pi}{m\ddot{\phi}}\right)^{1/2}\frac{\mathcal{F}(t)_{nlm} }{\gamma_{nl}-i\omega_{nl}}e^{-i\omega_{nl}t}, \tag{7}\]
where \(\ddot{\phi}\) denotes the rate of change of the orbital frequency at resonance. The transferred energy to the mode \(nlm\) during the resonance is a sum of kinetic and potential terms [38, 44],
\[E_{nlm}(t)\ =\ \left(\frac{1}{2}\dot{a}_{nlm}(t)^{2}+\frac{1}{2}\omega_{nl}^ {2}a_{nlm}^{2}(t)\right)MR^{2}. \tag{8}\]
The total energy absorbed by the mode, neglecting \(\gamma_{nl}\), is given by
\[\Delta E_{nlm}\ =\ \ N_{lm}^{2}\frac{\pi}{4m\ddot{\phi}}\frac{(\mathcal{E}_ {l}Q_{nl})^{2}}{MR^{2}}. \tag{9}\]
The resonance excitations lead to a phase shift, since the orbital energy decreases as it excites the interior modes. Accordingly, the orbital velocity increases and the inspiral duration decreases by a time \(\Delta t\). To estimate \(\Delta t\), we follow [43]. The energy absorbed by the central objects decreases the energy of the orbit by the same amount. In the absence of resonance, such a decrease in energy can only occur by the emission of GW and the time that it would take the orbit to emit GW with such energy \(\Delta t\) would be determined by the equality \(\dot{E}_{GW}\Delta t=\Delta E_{nlm}\). The rate of GW emission \(\dot{E}_{GW}\) is, to a very good approximation, the same rate as in the absence of resonance, which to leading order is given by \(\dot{E}_{GW}=\frac{32}{5}(\mathcal{M}_{c}\ \Omega)^{10/3}\), with \(\mathcal{M}_{c}\) being the chirp mass. The resulting phase shift \(\Delta\Phi=m\Omega\Delta t\) is the following,
\[\Delta\Phi_{nlm}\ =\ m\Omega\frac{\Delta E_{nlm}}{\dot{E}_{GW}}=\frac{5}{32} \ m\Omega\ \frac{\Delta E_{nlm}}{(\mathcal{M}_{c}\ \Omega)^{10/3}}. \tag{10}\]
For IMRIs or EMRIs \(\mathcal{M}_{c}\approx M\) and \(\dot{E}_{GW}\sim v^{10}\).
Using Eq. (9), we may calculate the phase shift induced by the leading order quadrupolar
mode \(l=m=2\)[39, 59],
\[\Delta\Phi_{n22}\ =\ \frac{25\pi}{1024q(1+q)}\frac{1}{R_{1}^{5}}\frac{|Q_{n2}|^{2} }{M_{1}\omega_{22}^{2}R_{1}^{2}}=\frac{25\pi}{2048q(1+q)}\frac{1}{R_{1}^{5}} \frac{|Q_{n2}|^{2}}{\Delta E^{int}}, \tag{11}\]
where we used that \(N_{22}=\sqrt{3/32}\). Here \(q=M_{2}/M_{1}\) is the mass ratio and \(\Delta E^{int}=\frac{1}{2}M_{1}\omega_{22}^{2}R_{1}^{2}\) is the internal energy of oscillations which is related to the energy stored in the \(n\)th mode by \(\Delta E^{int}=\sum\limits_{n}\Delta E_{n22}\), [54].
We wish to justify our estimate of \(\Delta t\) using only \(\dot{E}_{GW}\) and neglecting other dissipation effects. In general, the time difference \(\Delta t\) should include all types of dissipation channels, mainly the dominant dissipation due to tidal friction and the subleading tidal deformation. However, the rate of work of tidal friction is given by [66, 67] \(\dot{E}_{TF}=\frac{1}{2}Q_{ij}\dot{\mathcal{E}}^{ij}\sim k_{2}v^{15}\nu/M\), where \(\nu\) is the kinematic viscosity giving rise to viscous dissipation. In [67], it is demonstrated that, under reasonable assumptions, the contribution of viscous dissipation is negligibly small compared to the leading order GW emission and, therefore, can be ignored. For example, for cold neutron stars, which are considered to be highly viscous, \(\nu/M\approx 10^{-7}\), whereas for BHs \(\nu/M=1\) [68]. During the inspiral, when the orbital velocity is non-relativistic, the ratio of the different emission rates scales as \(\dot{E}_{TF}/\dot{E}_{GW}\sim v^{5}\ll 1\), which shows that the internal dissipation effects can indeed be neglected.
## 3 Fluid-origin Love numbers
Here we follow [29, 30] to determine the relationship between the Love number and the spectrum of internal fluid modes. We focus on the static tidal Love number, ignoring dissipative effects.
Following [30] (see also [54, 56]), we wish to find the static response of the object to an external tidal field. At low frequencies, away from resonance, the amplitude in Eq. (7) reduces to
\[a_{n}=\frac{\mathcal{E}Q_{n}}{M\omega_{n}^{2}R^{2}}. \tag{12}\]
Then, using the definition of the Love number, \(k_{2}R^{5}=3Q/(2\mathcal{E})\), we apply the normal mode
decomposition identities \(Q=\sum_{n}a_{n}Q_{n}\) and \(k_{2}=\sum_{n}a_{n}k_{2n}\), where the \(n\)th mode Love number, which is associated with the \(n\)th mode quadrupolar moment, is given by
\[k_{2n}R^{5}=\frac{3Q_{n}}{2\mathcal{E}}. \tag{13}\]
When substituting the explicit form of \(a_{n}\) from Eq. (12), the Love number becomes
\[k_{2}\ =\ \sum_{n}\frac{3}{2R^{5}}\frac{Q_{n}^{2}}{M\omega_{n}^{2}R^{2}}. \tag{14}\]
We now approximate \(k_{2}\) by the first term in the sum in Eq. (14) relying on a physically motivated assumption. The sum in Eq. (14) is dominated by the fundamental \(n=1\) mode. The justification is that the number of nodes in the overlap integral in Eq. (6) increases as \(n\) increases. It follows that the contribution of \(Q_{n}\) decreases as \(n\) increases. Using the \(l=2\)-mode excitation energy \(\Delta E_{n}^{int}=\frac{1}{2}M\omega_{n2}^{2}R^{2}\), the sum in Eq. (14) can be approximated as
\[k_{2}\ \simeq\ \frac{3}{4R^{5}}\frac{Q_{1}^{2}}{\Delta E_{1}^{int}}. \tag{15}\]
We now observe that an expression similar to the one in Eq. (15) appears in Eq. (11), which determines the phase shift \(\Delta\Phi_{122}\). This allows us to express \(\Delta\Phi_{122}\) in terms of \(k_{2}\),
\[\Delta\Phi_{Res}\ =\ \frac{25\pi}{1536}\frac{k_{2}}{q(1+q)}. \tag{16}\]
We are interested in the case of small mass ratios, \(q\lesssim 1/1000\) and a small but not extremely small \(k_{2}\), \(k_{2}\lesssim 1/10\). Then we can parameterize the resonance dephasing by
\[\Delta\Phi_{Res}\ \simeq\ 5\times\left(\frac{k_{2}}{10^{-1}}\right)\left( \frac{q}{10^{-3}}\right)^{-1}. \tag{17}\]
The resonance-induced dephasing is governed by the dimensionless tidal Love number and the companion's mass ratio. Generally, the detection threshold for the instantaneous phase jump requires \(\Delta\Phi_{Res}\gtrsim 1\)[69]. Thus, for typical values of Love numbers \(k_{2}\lesssim 10^{-1}\), it is more likely to
observe resonances for moderate to extreme mass-ratio binaries, \(10^{-5}\leq q\leq 10^{-3}\).
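Equations (16)-(17) are easy to evaluate numerically; the values of \(k_{2}\) and \(q\) used below are illustrative only:

```python
import numpy as np

def dphi_res(k2, q):
    """Resonance phase shift of Eq. (16)."""
    return 25.0 * np.pi / 1536.0 * k2 / (q * (1.0 + q))

print(dphi_res(0.1, 1e-3))   # ~5.1, matching the estimate in Eq. (17)
print(dphi_res(0.1, 1e-4))   # ~51: smaller q strongly enhances the dephasing
print(dphi_res(0.01, 1e-3))  # ~0.5: below the DeltaPhi_Res ~ 1 detection threshold
```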
We can also express \(k_{2}\) in terms of the frequency \(\omega_{12}\equiv\omega_{2}\) of the \(n=1\), \(l=2\) mode. At resonance, from Eq. (6), \(Q\sim\Delta E^{int}\), where \(\Delta E^{int}=\frac{1}{2}M\omega_{2}^{2}R^{2}\) is the energy of the oscillating star at resonance. Thus, on dimensional grounds, we get \(Q\sim\Delta E^{int}R^{2}\). For example, for a constant energy density perturbation \(Q=\frac{3}{5}\Delta E^{int}R^{2}\), while typical non-constant energy density profiles result in a numerical prefactor \(\lesssim 1\)[56] (see also [29]). Substituting the expressions for \(Q\) and \(\Delta E^{int}\), we arrive at our final result for the Love number
\[k_{2}\ \simeq\ \mathcal{N}\omega_{2}^{2}R^{2}\, \tag{18}\]
where \(\mathcal{N}\) is an order unity dimensionless number that depends on the object's energy density profile and contains the numerical factors in the definition of the Love number [29]. We will use Eq. (18) to determine the detectability of \(k_{2}\) in the next section.
Remarkably, in [29], it is shown that the gravitational polarizability of objects which possess a discrete spectrum of quantum mechanical energy levels is similar to that of classical stars. This follows from the fact that the wavelength of the oscillation is of order of the star's radius. We shall refer to these objects as "quantum black holes" (QBHs), meaning the quantum state that corresponds to a classical BH. The idea is justified on the grounds of the Bohr correspondence principle, whereby, for macroscopic excitations, expectation values are replaced by classical observables. Therefore, an excited quantum macroscopic object can be treated as a semi-classical oscillating fluid-like object that satisfies Eq. (4). Using standard time-independent quantum perturbation theory, the Love number of QBHs is given by [29, 30]
\[k_{2}\simeq\frac{3}{4R^{5}}\frac{|\langle\Psi_{0}|\hat{Q}|n=1,l=2\rangle|^{2}} {\Delta E_{1}^{int}}. \tag{19}\]
where \(\Psi_{0}\) is the QBH ground state and \(\hat{Q}\) is the mass moment operator, which obeys the no-hair theorem: \(\langle\Psi_{0}|\hat{Q}|\Psi_{0}\rangle=0\). The definition of Eq. (15) is recovered by applying the Bohr correspondence principle and replacing expectation values with classical observables, \(\langle\Psi_{0}|\hat{Q}|n,l=2\rangle\leftrightarrow Q_{n}\). In this form, Eq. (19) can be treated in a similar way to the classical treatment of Eqs. (15), (18),
which eventually recovers the result \(k_{2}\simeq{\cal N}\omega_{2}^{2}R^{2}\). The result is valid for any object of radius \(R\), quantum or classical, which has a quadrupole internal mode whose non-relativistic frequency is \(\omega_{2}\).
## 4 Detectability
In this section, using the Fisher method, we give a quantitative estimate of the statistical error in measuring the Love number. We discuss the prospects for detection of a non-vanishing Love number with the future space LISA detector and demonstrate that, during the inspiral, it is more likely to detect the Love number with resonances than with tidal deformability.
We evaluate the detectability of the Love numbers through resonant excitations with the planned space telescope LISA, which according to [9], could track and observe moderate to extreme mass-ratio binaries from the early stages of the inspiral and up to the merger with high SNR. Before addressing the precise statistical analysis, we wish to emphasize that for most of the range of the binary masses and spins and for Love numbers \(k_{2}\lesssim 10^{-1}\), the leading order 2.5PN resonance phase term is comparable to the other effects entering at 2.5PN, such as the PP 2.5PN term and the leading order tidal heating term. For smaller values of \(q\), the resonance phase term becomes significant. Since it is established that LISA can detect the other 2.5PN effects, we expect that LISA could be able to detect the Love numbers with high confidence.
To evaluate the statistical error, we employ the Fisher information method. We assume a signal \(s(t)=h(t,\theta^{i})+n\), where \(n\) is uncorrelated noise and \(h(t,\theta^{i})\) is a model signal with model parameters \(\theta^{i}\). For high SNR events, the posterior distribution takes the form
\[p(\theta^{i}|s)\propto e^{-\frac{1}{2}\Delta\theta^{i}\Delta\theta^{j}\Gamma_{ij}}\,, \tag{20}\]
where \(\Gamma_{ij}\) is the Fisher matrix defined as
\[\Gamma_{ij}\ =\ \left(\frac{\partial h}{\partial\theta^{i}}\Big{|}\frac{\partial h}{\partial\theta^{j}}\right)\,, \tag{21}\]
where the inner product is defined by \((h_{1}|h_{2})=4\text{Re}\int_{f_{min}}^{f_{max}}\frac{\tilde{h}_{1}(f)\tilde{h}_{2}^{*}(f)}{S_{n}(f)}df\), and \(S_{n}(f)\) is LISA's design noise spectral density. We choose \(f_{max}=f_{ISCO}(\chi)\), where \(f_{ISCO}\) is the orbital frequency at the innermost stable circular orbit (ISCO), and \(f_{min}=10^{-5}\text{Hz}\) as the lowest frequency in the LISA frequency band. The model parameters are \(\theta^{i}=(\ln\mathcal{A},\ln\mathcal{M}_{c},\eta,\Phi_{c},t_{c},\chi_{1},\chi_{2},k_{2})\), where \(\mathcal{A}\) is the amplitude, \(\mathcal{M}_{c}\) is the chirp mass, \(\eta\) is the symmetric mass ratio, \(\Phi_{c}\) and \(t_{c}\) are the phase and time at coalescence, \(\chi_{i}\) are the companions' spin parameters, and \(k_{2}\) is the Love number given in Eq. (18). The statistical error in measuring \(k_{2}\) is related to the Fisher matrix,
\[\sigma_{k_{2}}\ =\ \sqrt{\langle(\Delta k_{2})^{2}\rangle}\ =\ \sqrt{(\Gamma^{-1})_{k_{2}k_{2}}}\,. \tag{22}\]
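As an illustration of Eqs. (20)-(22), the Fisher-matrix estimate can be sketched numerically as follows. This is a minimal sketch: the frequency-domain waveform and noise curve below are generic placeholders rather than the TaylorF2 model and the LISA design sensitivity used in the actual analysis, and the parameter values are arbitrary.

```python
import numpy as np

def waveform(f, params):
    """Toy frequency-domain signal h(f) = A f^(-7/6) exp(i Phi(f)).
    This is a generic placeholder, not the TaylorF2 phase of Eq. (2)."""
    amp, phi0, c_phase = params
    phase = phi0 + c_phase * (np.pi * f) ** (-5.0 / 3.0)
    return amp * f ** (-7.0 / 6.0) * np.exp(1j * phase)

def inner(a, b, f, Sn):
    """Noise-weighted inner product (a|b) = 4 Re int a b* / S_n df, cf. Eq. (21)."""
    return 4.0 * np.real(np.trapz(a * np.conj(b) / Sn, f))

def fisher(params, f, Sn, rel_step=1e-6):
    """Gamma_ij = (dh/dtheta_i | dh/dtheta_j) via central finite differences."""
    derivs = []
    for i in range(len(params)):
        dp = np.zeros_like(params)
        dp[i] = rel_step * max(abs(params[i]), 1.0)
        derivs.append((waveform(f, params + dp) - waveform(f, params - dp))
                      / (2.0 * dp[i]))
    return np.array([[inner(di, dj, f, Sn) for dj in derivs] for di in derivs])

f = np.linspace(1e-4, 1e-1, 4000)        # toy frequency band [Hz]
Sn = 1e-40 * (1.0 + (1e-3 / f) ** 4)     # toy noise spectral density, not LISA's
theta = np.array([1e-21, 0.3, 2.0])      # toy parameter values
Gamma = fisher(theta, f, Sn)
sigmas = np.sqrt(np.diag(np.linalg.inv(Gamma)))
print("1-sigma statistical errors:", sigmas)   # cf. Eq. (22)
```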
We consider quasi-circular orbits and employ the analytical frequency domain post-Newtonian approximation TaylorF2, which accurately describes the binary evolution of the inspiral up to the ISCO [70, 71, 72]. The frequency domain GW waveform describing the binary inspiral is of the form \(\tilde{h}(f,\theta_{i})\ =\ \mathcal{A}e^{i\Phi}\), where \(\Phi\) is the phase evolution in Eq. (2). From Eq. (18), for \(q\ll 1\), the instantaneous phase shift at resonance becomes
\[\Delta\Phi_{Res}\ \approx\ \mathcal{N}\ \frac{\omega_{2}^{2}R^{2}}{20q}. \tag{23}\]
In our analysis we included correction terms up to 3PN order and neglected the higher order tidal deformability terms that depend on the Love number and enter at 5PN and 6PN order (See Sec. 4.1).
Additionally, since our model is valid only until the ISCO, the frequency range \(\omega_{2}>\omega_{ISCO}\) is not included in our analysis. Consequently, it is beneficial to parameterize the oscillation frequencies in terms of the ISCO frequency \(\omega_{2}=\alpha\omega_{ISCO}\), where \(0<\alpha\leq 1\), and \(\omega_{ISCO}(\chi)\) is spin-dependent. This also means that resonance at the ISCO sets the maximal value of the Love number that can be detected \(k_{2}^{max}=\mathcal{N}\omega_{ISCO}^{2}R^{2}\).
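For concreteness, the parameterization \(\omega_{2}=\alpha\,\omega_{ISCO}(\chi)\) and the bound \(k_{2}^{max}=\mathcal{N}\omega_{ISCO}^{2}R^{2}\) can be sketched as follows. This is a minimal sketch that assumes \(\omega_{ISCO}\) denotes twice the orbital frequency at the prograde Kerr ISCO (the \(l=m=2\) driving frequency) and that \(R\) is the outer horizon radius \(r_{+}=M(1+\sqrt{1-\chi^{2}})\); both identifications are illustrative assumptions rather than statements taken from the analysis. The Kerr ISCO expressions are the standard Bardeen-Press-Teukolsky formulas, written in units \(G=c=M=1\).

```python
import numpy as np

def kerr_isco_radius(chi):
    """Prograde Kerr ISCO radius in units of M (Bardeen-Press-Teukolsky)."""
    z1 = 1 + (1 - chi**2) ** (1/3) * ((1 + chi) ** (1/3) + (1 - chi) ** (1/3))
    z2 = np.sqrt(3 * chi**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def omega_isco(chi):
    """l = m = 2 driving frequency at the ISCO in units of 1/M:
    twice the prograde orbital frequency (an identification assumed here)."""
    return 2.0 / (kerr_isco_radius(chi) ** 1.5 + chi)

def k2_max(chi, N=1.0, alpha=1.0):
    """k2_max = N * (alpha * omega_ISCO)^2 * R^2, with R assumed to be the
    outer horizon radius r_+ = M * (1 + sqrt(1 - chi^2))."""
    R = 1.0 + np.sqrt(1.0 - chi**2)
    return N * (alpha * omega_isco(chi)) ** 2 * R ** 2

for chi in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    print(f"chi = {chi:.1f}:  k2_max ~ {k2_max(chi):.3f}")
# For chi = 0.5 this gives k2_max of order 0.16, the same order of magnitude
# as the value quoted in the text; the exact number depends on the
# conventions adopted for omega_ISCO and R.
```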
We consider moderate to extreme mass-ratio binaries with \(q=[10^{-3},10^{-4},10^{-5}]\), where the central object mass is \(M_{1}=10^{6}M_{\odot}\), and small to moderate Kerr spin parameters \(\chi^{i}=[0,0.1,0.2,0.3,0.4,0.5]\), at a luminosity distance \(D_{l}=2\)Gpc. We also average over the sky location
parameters [72]. We assume equal spins \(\chi_{1}=\chi_{2}\) that are aligned with the orbital angular velocity vector. For the model-dependent order unity coefficient \(\mathcal{N}\), we use the estimation derived in [29], and consider \(\mathcal{N}\in[0.1,1]\).
In Fig. 1, the purple region shows the analytical Love-resonance-spin relation described in Eq. (18) that is determined by our model, where a given Love number corresponds to a specific resonance frequency and a spin parameter. This region describes the parameter space accessible to our model and is independent of the detector properties. In our analysis, the largest accessible \(k_{2}\) is reached for \(\mathcal{N}=1\), \(\alpha=1\) and \(\chi=0.5\), resulting in \(k_{2}^{max}\approx 0.159\); larger values are
Figure 1: The solid blue lines correspond to a potential measurement of \(k_{2}\) for a given mass ratio \(q\), with relative error \(\sigma_{k_{2}}/k_{2}=1/3\). The region above each solid line corresponds to a potential measurement of \(k_{2}\) with a relative error smaller than \(1/3\). As anticipated by Eq. (16), for a smaller mass ratio, the error in measuring a specific \(k_{2}\) is smaller and it is possible to measure smaller values of \(k_{2}\). The purple region describes the parameter space accessible to our model for values of the spin parameter between \(0\) and \(0.5\), taking into account the Love-resonance-spin relation: \(k_{2}\propto\omega_{ISCO}(\chi)R(\chi)\), such that a given Love number corresponds to a specific resonance frequency and spin parameter. The gray region describes the parameter space which is not accessible to our model for these values of the spin parameters.
inaccessible to our model. The gray region is the parameter space region that our model cannot describe.
### Comparison to Tidal-deformability
We now turn to estimate the relative magnitude of the resonance phase shift effects compared to the magnitude of tidal deformation effects on the phase evolution. To leading PN order, the tidal deformability contribution to the phase for \(q\ll 1\) takes the form \(\Phi_{TD}(f)\sim k_{2}v^{5}/q\), where \(v=(\pi Mf)^{1/3}\) is the orbital velocity. The accumulated phase throughout the inspiral is given by
\[\Delta\Phi_{TD}\ =\ \int_{f_{min}}^{f_{ISCO}}f\frac{d^{2}\Phi_{TD}(f)}{df^{2}}df \sim\frac{k_{2}}{q}v_{ISCO}^{5}. \tag{24}\]
Figure 2: The figure displays the relative statistical errors in measuring \(k_{2}\) with resonance excitations and tidal deformation. For a given \(k_{2}\) and \(\chi\) we calculate \(\sigma_{k_{2}}^{\rm res}\) without tidal deformation effects and \(\sigma_{k_{2}}^{TD}\) without resonances. The results show a preference for detecting \(k_{2}\) with resonance effects. The preference is more apparent for a smaller mass ratio \(q\). The colored regions enclosed by the solid and the dashed lines mark the additional parameter space that resonances can probe compared to tidal deformation.
For the case in which the central object mass is \(M_{1}=10^{6}M_{\odot}\) and for small to moderate spin parameters, we find \(v_{ISCO}^{5}\sim 0.01\). Comparing to the instantaneous resonance phase jump Eq. (11), \(\Delta\Phi_{TD}/\Delta\Phi_{Res}\sim v_{ISCO}^{5}\). Therefore, we would expect a larger error in the measurement of the Love number when relying on tidal deformability.
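As a quick check of this estimate (assuming, as is standard, that \(f\) denotes the dominant-mode GW frequency), at the Schwarzschild ISCO one has \(\pi Mf_{ISCO}=M\Omega_{ISCO}=6^{-3/2}\), so that
\[v_{ISCO}^{5}\ =\ (\pi Mf_{ISCO})^{5/3}\ =\ 6^{-5/2}\ \approx\ 0.011\ \sim\ 0.01\,,\]
consistent with the value used above.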
We calculated the statistical error in measuring the Love number through tidal deformability and compared it to a measurement via resonance effects and found that the previous estimate is indeed correct. We repeated the statistical evaluation performed above, excluding resonance effects and including the leading tidal deformation terms entering the phase at 5PN and 6PN order [73, 74, 35].
The results of the calculation of the ratio of the relative errors in measuring the Love numbers, denoted by \(\sigma_{k_{2}}^{Res}/\sigma_{k_{2}}^{TD}\) for different spin parameters \(0\leq\chi\leq 0.5\) are presented in Fig. 2.
## 5 Summary and conclusion
The future measurement of GWs produced during BBH inspirals by the planned GW detector LISA will present an unprecedented opportunity to test GR. Hypothetical tidal interactions between the inspiraling objects would affect the waveform of the emitted GWs in a way that could only be possible if astrophysical BHs were actually ultra-compact objects possessing an internal structure rather than the structureless objects predicted by GR.
We discussed how the resonant excitation of the hypothetical non-relativistic interior modes of astrophysical BHs changes the phase of the emitted GW waveform when compared to the phase predicted by GR. The non-relativistic nature of the modes was crucial to the possibility of resonantly exciting them, because in this case they could be excited when the two objects are still far apart. As a result, the resonance occurs a long time before the ISCO is reached and leads to a significant dephasing. We find that, regardless of the specific details of the primary's interior composition, the phase shift is governed by a single intrinsic quantity - the dimensionless tidal Love number \(k_{2}\).
We evaluated the statistical error in measuring the Love number \(k_{2}\) by LISA using the resonance effect. We concluded that the smallness of the resulting statistical error indicates that
\(k_{2}\) could actually be detected by LISA with impressive accuracy by observing intermediate and extreme mass-ratio inspirals. We compared the statistical error for detection of the Love number relying on tidal deformation effects with the error when using resonance effects and concluded that prospects of measuring \(k_{2}\) using resonance effects are much better. The results reveal additional sensitivity-enhancement factors whose origin is the Love-resonance-spin relation. First, the statistical error in measuring the Love number reduces for BHs with higher spin, because for such BHs, the inspiral duration is longer. Second, the statistical error in measuring the Love number reduces if the inspiral includes a range of higher orbital velocities, which could lead to excitation of higher internal frequencies, which, in turn, correspond to the BH having a larger Love number.
Our conclusion is that the effects of resonant excitation of astrophysical BHs during intermediate and extreme mass-ratio inspirals provide the best opportunity for putting bounds on, or detecting, the tidal Love number of astrophysical BHs and thus providing evidence of physics beyond GR. Nevertheless, we stress that the results of our statistical analysis should be viewed as preliminary estimates for the detection prospects. A comprehensive statistical treatment requires more accurate waveform modeling and should consider LISA's ability to track and discriminate several EMRIs simultaneously [9].
Our analysis is based on a general theoretical framework which only requires the existence of a set of non-relativistic internal modes, and does not require specifying the detailed properties of the central object. The entire dependence on the interior composition is parameterized in terms of the dimensionless tidal Love numbers. Therefore our results can be applied to a wide range of ultra-compact objects or BH mimickers.
## Acknowledgments
We thank Vitor Cardoso, Tanja Hinderer and Ely Kovetz for useful discussions. The research is supported by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland" and by VATAT (Israel planning and budgeting committee) grant for supporting theoretical high energy physics. | ブラックホールは内部構造を持たないことが予測されており、従って励起されないと考えられます。これは、重力波の波形に特異的な予測をもたらし、二つのブラックホールが相互に引き寄せられている際に放出する重力波の波形と、ブラックホールの「愛数」の消失につながります。しかし、天体ブラックホールが内部構造を持っていると仮定すると、その愛数は消失しません。また、軌道エネルギーの伝達により、その際に励起される可能性があります。これは、軌道周期に影響を与え、放出される重力波の波形の可視的な印跡を生み出す可能性があります。この効果は、二つの伴星のうちの1つが共鳴的に励起されると強化されます。私たちは、ブラックホールの Hypothetical 内部構造の共鳴励起条件について議論し、中間および極端質量比のインスピレーション時に生じる重力波の波形に |
2307.16438 | Coexistence of Superconductivity and ferromagnetism in high entropy
carbide ceramics | Generally, the superconductivity was expected to be absent in magnetic
systems, but this reception was disturbed by unconventional superconductors,
such as cuprates, iron-based superconductors and recently discovered nickelate,
since their superconductivity is proposed to be related to the
electron-electron interaction mediated by the spin fluctuation. However, the
coexistence of superconductivity and magnetism is still rare in conventional
superconductors. In this work, we reported the coexistence of these two quantum
orderings in high entropy carbide ceramics (Mo0.2Nb0.2Ta0.2V0.2W0.2)C0.9,
(Ta0.25Ti0.25Nb0.25Zr0.25)C, and they are expected to be conventional
superconductors. Clear magnetic hysteresis loop was observed in these high
entropy carbides, indicating a ferromagnetic ground state. A sharp
superconducting transition is observed in (Mo0.2Nb0.2Ta0.2V0.2W0.2)C0.9 with a
Tc of 3.4 K and upper critical field of ~3.35 T. Meanwhile, superconductivity
is suppressed to some extent and zero-resistance state disappears in
(Ta0.25Ti0.25Nb0.25Zr0.25)C, in which stronger magnetism is presented. The
upper critical field of (Ta0.25Ti0.25Nb0.25Zr0.25)C is only ~1.5 T, though they
show higher transition temperature near 5.7 K. The ferromagnetism stems from
the carbon vacancies which occurs often during the high temperature synthesis
process. This work not just demonstrate the observation of superconductivity in
high entropy carbide ceramics, but also provide alternative exotic platform to
study the correlation between superconductivity and magnetism, and is of great
benefit for the design of multifunctional electronic devices. | Huchen Shu, Wei Zhong, Jiajia Feng, Hongyang Zhao, Fang Hong, Binbin Yue | 2023-07-31T06:47:12 | http://arxiv.org/abs/2307.16438v1 | # Coexistence of Superconductivity and ferromagnetism in high entropy carbide ceramics
###### Abstract
**Generally, superconductivity was expected to be absent in magnetic systems, but this perception was disturbed by unconventional superconductors, such as cuprates, iron-based superconductors and the recently discovered nickelates, since their superconductivity is proposed to be related to the electron-electron interaction mediated by spin fluctuations. However, the coexistence of superconductivity and magnetism is still rare in conventional superconductors. In this work, we report the coexistence of these two quantum orderings in the high entropy carbide ceramics (Mo\({}_{0.2}\)Nb\({}_{0.2}\)Ta\({}_{0.2}\)V\({}_{0.2}\)W\({}_{0.2}\))C\({}_{0.9}\) and (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C, which are expected to be conventional superconductors. A clear magnetic hysteresis loop is observed in these high entropy carbides, indicating a ferromagnetic ground state. A sharp superconducting transition is observed in (Mo\({}_{0.2}\)Nb\({}_{0.2}\)Ta\({}_{0.2}\)V\({}_{0.2}\)W\({}_{0.2}\))C\({}_{0.9}\) with a \(T_{c}\) of 3.4 K and an upper critical field of \(\sim\)3.35 T. Meanwhile, superconductivity is suppressed to some extent and the zero-resistance state disappears in (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C, in which stronger magnetism is present. The upper critical field of (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C is only \(\sim\)1.5 T, though it shows a higher transition temperature near 5.7 K. The ferromagnetism stems from carbon vacancies, which often occur during the high temperature synthesis process. This work not only demonstrates the observation of superconductivity in high entropy carbide ceramics, but also provides an alternative exotic platform to study the correlation between superconductivity and magnetism, which is of great benefit for the design of multifunctional electronic devices.**
In 2004, the concept of high-entropy alloys (HEAs) was proposed by Yeh et al. and Cantor et al. [1, 2], which rapidly attracted a significant amount of research interest due to their highly disordered and homogeneous single-phase characteristic [3, 4]. Subsequently, Rost et al.[5] synthesized a stable face-centered cubic structured high-entropy oxide (Mg\({}_{0.2}\)Co\({}_{0.2}\)Ni\({}_{0.2}\)Cu\({}_{0.2}\)Zn\({}_{0.2}\))O in 2015, demonstrating for the first time the entropy stabilization of oxides and introducing the concept of high-entropy into the field of ceramics [5, 6]. Since then, new types of high-entropy materials such as borides [7], carbides [8], nitrides [9], silicides [10], fluorides [11], and hydrides [12] have continuously emerged, and their applications have been found in thermal protection [13], supercapacitors [14], wear-resistant coatings [15], biocompatible coatings [16], water splitting [17], nuclear reactor cladding [18], and so on.
As early as 1972, transition metal carbides (TMCs) were recognized for their significant solid solubility [19]. In addition, the coexistence of metallic and covalent bonds gives them excellent properties such as a high melting point, high hardness, good electrical/thermal conductivity, and so on [20, 21]. High-entropy carbides (HECs) are generally a simple single-phase system composed of multiple metal carbides (\(\geq\)5) dissolved in equimolar proportions, and often exhibit superior performance compared to single-component systems. Studies have found that HECs display superior properties such as higher hardness [22, 23], better wear resistance [24], better high-temperature stability [25], and oxidation resistance [26] than traditional monolithic TMCs. Furthermore, the superconductivity in TMCs was reported several decades ago [21]. In 1962, Giorgi et al. [27] investigated the relationship between the superconducting transition temperature (\(T_{c}\)) and carbon contents in Ta-C and Nb-C carbides, and found that the closer the C/X (X represents a metal atom) molar ratio is to 1, the higher the \(T_{c}\). Soon after, Willens et al. [28] claimed that the influence on \(T_{c}\) in binary TMC alloys may be related to lattice disorder scattering, based on the study of the mutual solubility in various binary TMC phases, and measured the \(T_{c}\) of NbC, MoC, TaC, and WC, which were almost equal to previously reported data. The superconducting and topological properties of TaC and NbC were studied by Shang et al. [29] recently. The \(T_{c}\) values are found to be 10.3 K and 11.5 K, respectively, and band structure calculations show that the density of states at the Fermi level is mainly controlled by the \(d\) orbitals of Ta or Nb, which strongly hybridize with the C \(p\) orbitals to form a large cylindrical Fermi surface, similar to high-\(T_{c}\) iron-based superconductors. In the same year, Yan et al. [30] prepared single-crystal NbC using a solid-phase reaction method and measured
its \(T_{c}\) as high as 12.3 K. The experiment showed that NbC is a type-II superconductor, judging from the behavior of the upper and lower critical magnetic fields. It exhibits strong Fermi surface nesting, which is beneficial for the strong electron-phonon interaction and ultimately enhances the superconductivity. In addition, Ge et al. [31] measured the \(T_{c}\) of \(\alpha\)-Mo\({}_{2}\)C to be 7.5 K. Hence, it is reasonable to believe that an HEC should possess superconductivity if it includes one or more superconducting metal carbides. The exploration of superconductivity in high-entropy compounds was first initiated in HEAs. Kozelj et al. [32] discovered the superconductivity of Ta\({}_{34}\)Nb\({}_{33}\)Hf\({}_{8}\)Zr\({}_{14}\)Ti\({}_{11}\) in 2014, proving it to be a type-II superconductor with a measured \(T_{c}\) of 7.3 K. Since then, many superconducting phenomena have been reported in HEMs [33-35], and subsequently, superconductivity has also been found in high-entropy oxides [36, 37]. Recently, Liu et al. [38] reported the discovery of superconductivity in high-entropy silicides for the first time, with a relatively high \(T_{c}\) value (3.2-3.3 K). Unfortunately, research on HECs has mainly focused on their thermodynamic properties [22-26], and studies of their superconducting properties remain rare to date.
In this work, we synthesized and characterized two types of HECs: (Mo\({}_{0.2}\)Nb\({}_{0.2}\)Ta\({}_{0.2}\)V\({}_{0.2}\)W\({}_{0.2}\))C\({}_{0.9}\) and (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C (denoted Mo-HEC and Ta-HEC, respectively). X-ray diffraction (XRD) analysis showed that both of them are single NaCl-type phases (_Fm-3m_, No.225), in which the transition metal atoms are randomly distributed on the cationic positions and the carbon atoms occupy the anionic positions. Superconductivity was observed in Mo-HEC at 3.4 K through low temperature electrical transport measurements, with an upper critical field of \(\sim\)3.35 T, which is below the weak-coupling Pauli limit (1.84 T/K * 3.4 K \(\approx\) 6.26 T) and suggests a typical conventional superconductor. Meanwhile, magnetic measurements show a ferromagnetic ground state in the whole temperature range. Previous studies demonstrate that such ferromagnetic ordering is an intrinsic behavior in carbides with metal elements and carbon vacancies [39, 40]. A previous theoretical calculation claims that the \(p\) electrons of the nearest-neighbor carbon atoms near the vacancies are responsible for the long-range ferromagnetic ordering [39]. Hence, it is of great interest to observe the coexistence of ferromagnetism and superconductivity. To examine the universality of this kind of phenomenon in HECs, we studied another compound, Ta-HEC. A much higher \(T_{c}\simeq 5.7\) K is observed in Ta-HEC, but the zero-resistance state is absent. Magnetic measurements demonstrate a much stronger ferromagnetism in Ta-HEC (at least 20 times that in Mo-HEC), which is expected to suppress the superconductivity.
Microstructure analysis shows that Ta-HEC has a smaller average grain size than Mo-HEC (\(\sim\)12 \(\upmu\)m vs 42 \(\upmu\)m). This means that Ta-HEC should have a higher formation energy and a higher vacancy rate. The carbon content was investigated by an infrared carbon/sulfur analyzer, which verified that 96.55% of the ideal carbon content is found in Mo-HEC, while only 81.9% is found in Ta-HEC. Our work provides a promising way to realize the coexistence of ferromagnetism and superconductivity in HECs, which may also apply to other high-entropy superconducting systems with light elements (C, B, O, N, etc.).
**Fig.1 The structure and morphology characterization of (Mo\({}_{0.2}\)Nb\({}_{0.2}\)Ta\({}_{0.2}\)V\({}_{0.2}\)W\({}_{0.2}\))C\({}_{0.9}\) (Mo-HEC) and (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C (Ta-HEC) high entropy carbides.****a.** The SEM image and corresponding individual element distribution analyzed by EDX in Mo-HEC, **b.** the X-ray diffraction pattern of sintered Mo-HEC, pristine mixture powder and original phases of various binary carbides used in this work, **c.** the EBSD image of Mo-HEC showing the grain size, grain boundaries and orientation, **d.** statistical result of grain size distribution for Mo-HEC, **e.** statistical result of grain size distribution for Ta-HEC, **f.** the Vickers-hardness testing image on a Mo-HEC, **g.** the nanoindentation results for both Mo-HEC and Ta-HEC.
The Mo-HEC and Ta-HEC high entropy carbides were synthesized by the spark plasma sintering method. The mole fractions of the five (or four) metal elements are equal. Fig.1 shows the elemental analysis, morphology and structure characterization results. The EDX results show an even distribution of each principal metal, as seen in Fig.**1a**. The sintered sample is a simple cubic phase with only five diffraction peaks in our experimental range of x-ray diffraction, while the pristine powder mixture shows a complex pattern with contributions from each binary metal carbide, as seen in Fig.**1b**. The starting powder was ground by ball milling and the grain size was only 1-2 microns or even submicron, as shown in the Supplementary information, but the final product shows a large grain size, as shown in Fig.**1c** and Fig.**1d**. The statistical grain size is 42.4\(\pm\)4.1 microns for Mo-HEC. The grains show random orientation, as seen in the Electron Backscatter Diffraction (EBSD) image in Fig.**1c**. For comparison, the grain size of Ta-HEC is only 11.9\(\pm\)0.1 microns. The different grain size distributions suggest that Ta-HEC has a higher formation energy than Mo-HEC. The smaller grain size also signals that there would be more grain boundaries and defects in Ta-HEC, and this will be discussed later. The hardness of Mo-HEC and Ta-HEC was briefly checked by a Vickers-hardness tester, and the HECs are very brittle, as seen in Fig.**1f**. Then, a delicate nanoindentation measurement was carried out, as seen in Fig.**1g**; Mo-HEC has a hardness of \(\sim\)26 GPa at 100 mN loading while Ta-HEC has a slightly higher hardness of \(\sim\)30 GPa.
The electrical transport properties of Mo-HEC were investigated by a commercial Physical Property Measurement System equipped with a 9 T magnet (PPMS-9). In the high temperature range, the resistivity of Mo-HEC does not change much and is almost constant below 40-50 K (Fig.2a). A sharp drop of the resistivity is observed near 3.4 K, and a zero-resistance state is present as well, suggesting the occurrence of superconductivity. The magnetic field suppresses the superconductivity, as \(T_{c}\) shifts to lower temperature with increasing field (Fig.2b). The zero-temperature upper critical field is extracted by fitting the upper critical field-\(T_{c}\) relation with the Ginzburg-Landau (G-L) formula, as presented in Fig.2c, and is about 3.37 T. Such an upper critical field is smaller than the Bardeen-Cooper-Schrieffer (BCS) weak coupling Pauli paramagnetic limit of \(\mu_{0}\)H\({}_{\text{p}}=1.84T_{c}\approx 6.26\) T for \(T_{c}\approx\) 3.40 K, suggesting the absence of Pauli pair breaking.
**Fig.2 The electrical transport properties of Mo-HEC.****a.** The RT curve is collected from 2 K to 300 K, a sharp superconducting/SC transition is observed near 3.4 K, **b.** the magnetic field effect on the SC transition, **c.** the upper critical field-Tc relation extracted from **b**, and the curve was fitted by Ginzburg-Landau (G-L) formula, giving a zero-temperature upper critical field of \(\sim\)3.37 T.
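For reference, the fitting step can be sketched as below, assuming the commonly used G-L expression \(H_{c2}(T)=H_{c2}(0)\,[1-(T/T_{c})^{2}]/[1+(T/T_{c})^{2}]\) (the exact form used for Fig.2c is an assumption here). The \((T,H)\) points are illustrative placeholders generated to be roughly consistent with \(H_{c2}(0)\approx 3.4\) T and \(T_{c}\approx 3.4\) K; they are not the measured Mo-HEC data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gl_hc2(T, hc2_0, Tc):
    """Ginzburg-Landau form Hc2(T) = Hc2(0) * (1 - t^2) / (1 + t^2), t = T/Tc."""
    t = T / Tc
    return hc2_0 * (1.0 - t**2) / (1.0 + t**2)

# Placeholder (Tc(H), H) points, NOT the measured data of Fig.2b/2c
T_c_under_field = np.array([3.2, 3.0, 2.8, 2.5, 2.2, 2.0])        # K
H_applied       = np.array([0.20, 0.42, 0.65, 1.00, 1.38, 1.64])  # T

popt, pcov = curve_fit(gl_hc2, T_c_under_field, H_applied, p0=[3.0, 3.4])
print("Hc2(0) = %.2f T, Tc = %.2f K" % (popt[0], popt[1]))
```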
To further verify the superconductivity, magnetic measurements were carried out. As shown in Fig.3a, Mo-HEC shows a clear diamagnetic behavior below the superconducting transition. It is noted that there is a small deviation (\(\sim\)0.2 K) of \(T_{c}\) from the R-T measurement, which can be ignored since the magnetic measurement is carried out in an MPMS system, in which the temperature sensor set-up is different from that in the PPMS. A typical diamagnetic hysteresis loop is also observed, as seen in Fig.3b. These magnetic results prove that the Meissner effect is present in Mo-HEC and that it is truly a superconductor. Meanwhile, the magnetic behavior is also studied above the SC transition, as shown in Fig.3c and Fig.3d. Mo-HEC shows a clear ferromagnetic signal at 4 K, just above the \(T_{c}\). The ferromagnetic signal should not come from any impurity but from the carbon vacancy induced magnetism, which has been proposed in previous work on carbon-based compounds [39, 40].
**Fig.3 The magnetic measurement of the Mo-HEC high entropy carbide.****a.** The ZFC-FC curves near the SC transition, showing a typical Meissner effect with a clear diamagnetic signal, which is further confirmed by the magnetic hysteresis loop at 1.8 K presented in **b**. **c.** The M-H curve measured at 4 K, which is above the SC transition temperature; a zoom-in region with a clear ferromagnetic loop is seen in **d**.
To check whether the coexistence of superconductivity and magnetism in an HEC is an accidental result, the other high entropy carbide, (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C (Ta-HEC), has also been tested. The transport results are displayed in Fig.4. The resistivity of Ta-HEC shows a somewhat stronger temperature dependence at high temperature. Similarly, its resistivity is also almost constant below 40-50 K, while there is a sharp drop near 5.7 K, signaling a possible SC transition, since the zero-resistance state is absent this time (Fig.4a). Meanwhile, there is also another drop below \(\sim\)4 K,
suggesting a two-phase SC transition. The magnetic field effect verifies the existence of superconductivity (Fig.4b), since the \(T_{c}\) shifts to lower temperature while the resistivity of the normal metal state is constant, excluding the possibility of a magnetoresistance behavior. The G-L fitting gives a zero-temperature upper critical field of \(\sim\)1.59 T (Fig.4c), which is obviously lower than the Bardeen-Cooper-Schrieffer (BCS) weak coupling Pauli paramagnetic limit.
**Fig.4 The electrical transport properties of (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C (Ta-HEC).****a.** The RT curve is collected from 2 K to 300 K; a sharp drop of the resistivity is observed near 5.7 K, while another drop is near 4 K, suggesting possible superconducting/SC transitions, **b.** The magnetic field effect verifies the SC transition, **c.** the upper critical field-\(T_{c}\) relation extracted from **b**, and the curve was fitted by the Ginzburg-Landau (G-L) formula, giving a zero-temperature upper critical field of \(\sim\)1.59 T for the higher SC phase starting near 5.7 K.
To further verify the superconductivity, magnetic measurements were carried out. Fig.5 shows the ZFC-FC curves and magnetic hysteresis loops measured at various temperatures. It is clear that Ta-HEC has a stronger ferromagnetic background than Mo-HEC. The separation of the ZFC and FC signals is even observed near room temperature, as seen in Fig.5a, and a sharp drop of the magnetic moment is observed at low temperature. After zooming in on this region, it is found that the drop behavior in the
temperature-dependent magnetic moment is consistent with the SC transitions seen in the transport measurement. If the ferromagnetic background were subtracted from the ZFC-FC curves, the modified ZFC-FC curves would show a typical diamagnetic behavior (not shown). Therefore, the net effect of the SC diamagnetic behavior and the strong ferromagnetic background finally gives an overall positive moment rather than a negative one. The ferromagnetic background can be tracked by the magnetic hysteresis loops measured at various temperatures, as displayed in Fig.5c and Fig.5d. It is clear that there is also a strong ferromagnetic signal even in the SC state. At 200 K, the FM signal is similar to that at 1.8 K. It is noted that the remanent magnetic moment in Ta-HEC is much higher than that in Mo-HEC (2 emu/mol vs 0.05 emu/mol), suggesting a strong FM ground state, which affects the superconductivity much more strongly and causes the absence of the zero-resistance state in Ta-HEC.
**Fig.5 The magnetic measurement of Ta-HEC.****a.** The ZFC-FC curves show a relatively strong ferromagnetic background down to 1.8 K. The ferromagnetic background persists in the high temperature range, while the magnetic moment shows a clear drop at low temperature near the SC transition, **b.** the zoom-in region near the SC transition; a Meissner effect is present, which is embedded in the FM background, and the magnetic measurement finally gives an overall positive moment rather than a typical diamagnetic negative moment. Two transitions can be seen, consistent with the transport measurement. **c.** The M-H curves measured at 1.8 K, 8 K and 200 K respectively, which cover the SC transition region and the normal metallic state as well, **d.** the zoom-in region, where a clear ferromagnetic loop is seen at all temperatures, suggesting a strong FM ground state.
To verify the correlation between carbon vacancies and ferromagnetism in these two HECs, we performed further analysis of the carbon content and vacancies using an infrared carbon/sulfur analyzer (CS-800, ELTRA GmbH, Germany). As shown in Table I, Mo-HEC has a theoretical 1:0.9 molar ratio of metal to carbon, while the experimental result gives a value of 1:0.869, which means a 96.55% carbon content and a 3.45% carbon vacancy rate. For Ta-HEC, the ideal metal to carbon molar ratio is 1:1; however, the measured metal/carbon ratio is only 1:0.819, which means only 81.9% carbon content and an 18.1% carbon vacancy rate. This testing result demonstrates that there are many more vacancies in Ta-HEC. This is also consistent with the grain size distribution analysis: Ta-HEC has a much smaller average grain size, which means more grain boundaries and defects. Such a large difference can well explain the large ferromagnetic signal in Ta-HEC.
\begin{table}
\begin{tabular}{l l l l l l l} \hline
**Sample** & **Theoretical metal/carbon molar ratio** & **Testing sample weight (mg)** & **Carbon weight (wt\%)** & **Experimental metal/carbon molar ratio** & **Carbon content (Exp/Theory)** & **Carbon vacancy** \\ \hline Mo-HEC & 1:0.9 & 22.6 & 7.8785\% & 1:0.869 & 96.55\% & 3.45\% \\ \hline Ta-HEC & 1:1 & 17.5 & 8.6964\% & 1:0.819 & 81.9\% & 18.1\% \\ \hline \multicolumn{7}{l}{Molecular weight (g/mol): Mo-HEC, 131.73; Ta-HEC, 115.25} \\ \hline \end{tabular}
\end{table}
Table I: The carbon content and carbon vacancy analysis
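A minimal sketch of the arithmetic behind Table I is given below: the carbon index \(x\) in MC\(_{x}\) is inferred from the measured carbon weight fraction \(w_{C}\) through \(w_{C}=12.011\,x/(m_{metal}+12.011\,x)\), where \(m_{metal}\) is the metal mass per formula unit. Whether this is exactly the reduction used for Table I is an assumption; small deviations from the quoted values can arise from the atomic masses and rounding conventions adopted there.

```python
# Standard atomic masses (g/mol); the carbon index x is inferred from the
# measured carbon weight fraction.
masses = {"Mo": 95.95, "Nb": 92.906, "Ta": 180.948, "V": 50.942,
          "W": 183.84, "Ti": 47.867, "Zr": 91.224, "C": 12.011}

def carbon_index(metal_fracs, w_carbon):
    """Solve w_C = m_C * x / (m_metal + m_C * x) for x."""
    m_metal = sum(masses[el] * frac for el, frac in metal_fracs.items())
    return w_carbon * m_metal / (masses["C"] * (1.0 - w_carbon))

# Mo-HEC: (Mo,Nb,Ta,V,W)_0.2 C_0.9 ideal; measured 7.8785 wt% carbon
x_mo = carbon_index({"Mo": 0.2, "Nb": 0.2, "Ta": 0.2, "V": 0.2, "W": 0.2}, 0.078785)
# Ta-HEC: (Ta,Ti,Nb,Zr)_0.25 C ideal; measured 8.6964 wt% carbon
x_ta = carbon_index({"Ta": 0.25, "Ti": 0.25, "Nb": 0.25, "Zr": 0.25}, 0.086964)

print(f"Mo-HEC: x = {x_mo:.3f}, carbon content = {x_mo / 0.9:.1%}, vacancy = {1 - x_mo / 0.9:.1%}")
print(f"Ta-HEC: x = {x_ta:.3f}, carbon content = {x_ta / 1.0:.1%}, vacancy = {1 - x_ta / 1.0:.1%}")
```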
Therefore, the coexistence of superconductivity and magnetism could be a prevailing phenomenon in HECs. The magnetism in HECs is generally ferromagnetic and does not favor the superconductivity. However, the \(T_{c}\) values in different HECs may vary a lot, and the coexistence behavior may inspire the prototype design of multifunctional devices. Meanwhile, as we finished our work, we also noted that there is a report of superconductivity in the HEC ceramic (Ti\({}_{0.2}\)Zr\({}_{0.2}\)Nb\({}_{0.2}\)Hf\({}_{0.2}\)Ta\({}_{0.2}\))C, which has a \(T_{c}\) of \(\sim\)2.5 K and an upper critical field of \(\sim\)0.51 T [41]. Clearly, the \(T_{c}\) and upper critical fields of our Mo-HEC and Ta-HEC are much higher. Since there is a ferromagnetic background in Mo-HEC and Ta-HEC, the \(T_{c}\) should be enhanced if the ferromagnetic behavior is suppressed by tuning the carbon vacancies and synthesis conditions, which is worthy of further study.
Such vacancy-induced magnetism may also exist in other compounds showing superconductivity, such as borides or oxides in the form of simple compounds or high entropy compounds (since the light element can easily escape during the high temperature synthesis process), especially for transition metal-based compounds. Meanwhile, the manipulation of vacancies and the corresponding SC and/or magnetism by external methods, such as liquid ion gating, will also be of great interest, and the intercalation of small ions (such as Li\({}^{+}\), H\({}^{+}\)) into the vacancy positions would be a promising way to achieve a higher \(T_{c}\) in high entropy ceramics.
## Methods
### Sample preparation
In this study, two types of HEC were prepared: (Mo\({}_{0.2}\)Nb\({}_{0.2}\)Ta\({}_{0.2}\)V\({}_{0.2}\)W\({}_{0.2}\))C\({}_{0.9}\) and (Ta\({}_{0.25}\)Ti\({}_{0.25}\)Nb\({}_{0.25}\)Zr\({}_{0.25}\))C (The following are referred to as Mo-HEC and Ta-HEC respectively). The initially selected carbide powders were TaC (99.5%, 1-4 \(\upmu\)m), TiC (99.99%, 1-12 \(\upmu\)m), NbC (99%, 1-4 \(\upmu\)m), ZrC (99%, 1 \(\upmu\)m), Mo\({}_{2}\)C (99.95%, 1-4 \(\upmu\)m), and WC (99.9%, 400 nm), all purchased from Shanghai Aladdin Biochemical Technology Co., Ltd., China. VC (99.9%, 1-4 \(\upmu\)m) was purchased from Adamas-beta, Shanghai Titan Scientific Co., Ltd., China. Each single carbide precursor was weighed in equimolar amounts of metal atoms, and then the precursor powders were poured into a 50 ml stainless steel vacuum ball mill jar lined with ZrO\({}_{2}\) and configured with different diameter ZrO\({}_{2}\) grinding balls (diameter 3.15 mm-10 mm) to achieve a ball-to-powder ratio of approximately 5:1,
using ethanol as the milling medium. The ball mill jar was evacuated using a mechanical pump and milled at 500 rpm for 6 h using a planetary ball mill (TJX-410, Tianjin Oriental Tianjing Technology Development Co. Ltd., China), with a 10 min pause every 0.5 h to reduce the possibility of oxide formation due to overheating during milling. After milling, the powder was dried in a 70 \({}^{\circ}\)C drying box for 8 h, and then sieved and placed into a graphite mold with a diameter of 10 mm, with a layer of carbon paper between the powder and the mold to facilitate demolding. The mold was placed in a spark plasma sintering furnace (SPS-3T-3-MINI (H), Shanghai Chenhua Technology Co., Ltd., China), and the sample chamber was evacuated and filled with Ar. Under the initial pre-pressure of 11.25 MPa, the temperature was set to rise from room temperature to 700 \({}^{\circ}\)C within 5 min, and then to 1800 \({}^{\circ}\)C at a rate of 100 K/min, and kept for 10 min to remove residual gas in the powder. The temperature continued to rise to 2050 \({}^{\circ}\)C within 3 min, while the pressure was increased to 30 MPa and kept under this condition for 3 min, and then within 4 min, the temperature was raised to 2200 \({}^{\circ}\)C, and the pressure was increased to 50 MPa, and maintained for 6 min. The sample was then cooled at a rate of 100 K/min to 800 \({}^{\circ}\)C and then to room temperature with the furnace, resulting in an HEC sample with a diameter of 10 mm and a thickness of approximately 2 mm.
### (Micro)structure characterization
X-ray diffraction (XRD) measurements were performed on ground samples using a PANalytical Empyrean X-ray diffractometer (Holland) with a copper target and a wavelength of \(\lambda\)(K\({}_{\alpha}\)) = 1.5406 Å. The detector is a PIXcel3D, and the Cu-target x-ray tube operates at a voltage of 40 kV and a current of 40 mA. Scanning was conducted in the range of 5\({}^{\circ}\)-80\({}^{\circ}\) with a step size of 0.013\({}^{\circ}\) within 8 min.
Microstructural analysis and element distribution uniformity were investigated using a scanning electron microscope (SEM, JSM-7900F of JEOL, Japan) equipped with an Oxford X-Max N50 Aztec EDS detector at 15 kV through secondary electron scattering. Due to the large error of the EDS detector in the determination of the content of light elements such as C, we chose to use an infrared carbon/sulfur analyzer (CS-800, ELTRA GmbH, Germany) to measure the carbon content of the HECs. The detection is based on GB/ T 20123-2006, the working carrier gas is oxygen, compressed air is used as the power gas, and the detection limit is 0.1 ppm.
Furthermore, crystal grain size distribution and orientation determination were carried out using SEM (VERSA 3D, FEI, Hillsboro, OR, USA) equipped with an electron backscatter diffraction (EBSD) detector at 20 kV. Raw EBSD data were analyzed and post-processed using the Tango module in HKL Channel 5 software. Mo-HEC and Ta-HEC were characterized with scanning step sizes of 4 \(\upmu\)m and 1.4 \(\upmu\)m, respectively.
### Electrical transport and magnetic property measurement
A piece of stick-like sample with well-defined geometry was cut from the as-prepared HEC sample by using a diamond sawing system. Then, the electrical transport measurements from 2 K to 300 K were carried out on a commercial Physical Property Measurement System (PPMS, Quantum Design), based on a standard four-probe method.
The magnetic measurements were carried out on a Vibrating Sample Magnetometer (VSM) incorporated into the PPMS. A small magnetic field of 5 Oe was used to study the temperature dependent magnetism during zero-field cooling and field cooling, which helps to probe the Meissner effect and verify the superconductivity. Meanwhile, the magnetic hysteresis loops were measured at representative temperatures, showing the ferromagnetic behavior.
### Mechanical property testing
The Vickers hardness \(H_{\mathrm{V}}\) and fracture toughness \(K_{\mathrm{IC}}\) were measured using a micro-Vickers hardness tester (Qness 60A\({}^{+}\), Germany) equipped with a Vickers diamond indenter under a load of 9.8 N held for 15 s. The \(H_{\mathrm{V}}\) and \(K_{\mathrm{IC}}\) were determined by the following equations [39]:
\[H_{\mathrm{V}}\mathrm{=}\frac{1854.4F}{L^{2}} \tag{1}\]
\[K_{\mathrm{IC}}\mathrm{=}\frac{0.016\left(E/H_{\mathrm{V}}\right)^{0.5}F}{C^{ 1.5}} \tag{2}\]
where \(F\) (N) is the applied load, \(L\) (\(\upmu\)m) is the arithmetic mean diagonal length of the indentation, \(C\) (\(\upmu\)m) is the average length of the radial cracks, and \(E\) is the Young's modulus of the HEC.
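A direct evaluation of Eqs. (1) and (2) can be sketched as follows; the diagonal length, crack length and Young's modulus below are illustrative placeholders rather than measured values, and the crack length is converted to metres so that \(K_{\mathrm{IC}}\) is returned in MPa\(\cdot\)m\(^{1/2}\).

```python
import numpy as np

def vickers_hardness(F_N, L_um):
    """Eq. (1): H_V in GPa for F in N and mean indent diagonal L in um."""
    return 1854.4 * F_N / L_um**2

def fracture_toughness(F_N, C_um, E_GPa, Hv_GPa):
    """Eq. (2) evaluated in SI units; returns K_IC in MPa*m^0.5."""
    C_m = C_um * 1e-6                                      # crack length in m
    K_Pa = 0.016 * np.sqrt(E_GPa / Hv_GPa) * F_N / C_m**1.5
    return K_Pa / 1e6

F = 9.8      # N, load used in the text
L = 26.0     # um, hypothetical mean diagonal
C = 60.0     # um, hypothetical mean radial crack length
E = 450.0    # GPa, hypothetical Young's modulus
Hv = vickers_hardness(F, L)
print(f"H_V  ~ {Hv:.1f} GPa")
print(f"K_IC ~ {fracture_toughness(F, C, E, Hv):.2f} MPa*m^0.5")
```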
Nanoindentation tests were performed using a nanoindentation system (KLA G200, Milpitas, CA,
USA) equipped with a standard Berkovich diamond indenter. Prior to testing, the Berkovich indenter was calibrated using fused silica. In the nanoindentation tests, four maximum loads of 8 mN, 50 mN, 100 mN and 300 mN were selected for each sample, the loading rate was 0.5 mN/s, and the maximum load was held for 5 s. The thermal drift was kept below 0.05 nm/s and corrected using the value measured at 10% of the full load during unloading. The Continuous Stiffness Measurement (CSM) mode was used with a 5 \(\times\) 5 indentation array for indentation mapping under each load, with a 20 \(\upmu\)m distance between adjacent indentations to avoid the residual stress fields of neighboring indentations, and indentation data for grains of several different orientations were collected. SEM (VERSA 3D, FEI, Hillsboro, OR, USA) observations were performed on the collected indentation arrays, and indentation data at grain boundaries or defects (such as pores) were eliminated to make the results more consistent. The Young's modulus (_E_) was obtained from the loading/unloading-displacement curves according to the Oliver-Pharr method [40]. The values of the Vickers hardness, nanoindentation hardness and Young's modulus (_E_) are the average values of at least five measurements.
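For reference, a minimal sketch of the Oliver-Pharr reduction of a single load-displacement curve is given below. The ideal Berkovich area function and the diamond-indenter constants are standard, but the load, depth, unloading stiffness and Poisson ratio are illustrative placeholders, not data from this work.

```python
import numpy as np

def oliver_pharr(P_max, h_max, S, nu=0.25, E_i=1141.0, nu_i=0.07,
                 eps=0.75, beta=1.034):
    """P_max [mN], h_max [nm], unloading stiffness S [mN/nm];
    returns (hardness, Young's modulus) in GPa."""
    h_c = h_max - eps * P_max / S          # contact depth [nm]
    A = 24.56 * h_c**2                     # ideal Berkovich projected area [nm^2]
    H = P_max / A * 1e6                    # mN/nm^2 -> GPa
    E_r = np.sqrt(np.pi) / (2.0 * beta) * S / np.sqrt(A) * 1e6   # reduced modulus [GPa]
    E = (1 - nu**2) / (1.0 / E_r - (1 - nu_i**2) / E_i)
    return H, E

# Hypothetical single indent at 100 mN maximum load
H, E = oliver_pharr(P_max=100.0, h_max=500.0, S=1.0)
print(f"H ~ {H:.1f} GPa, E ~ {E:.0f} GPa")
```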
## Acknowledgment
This work was supported by the National Key R&D Program of China (Grants No. 2021YFA1400300), the Major Program of National Natural Science Foundation of China (Grants No. 22090041), the National Natural Science Foundation of China (Grants No. 12004014, No. U1930401). Part of the experimental work was carried out at the Synergic Extreme Conditions User Facility.
| 一般的な場合、磁性系では、超伝導性が欠如すると予想されていたが、この予想は、クーパー、鉄系超伝導体、そして近年発見されたニッケートなどの非従来の超伝導体が、その超伝導性が電子-電子相互作用を介してスピン揺らぎによって媒介されることが提案されていることから、この予想は破綻した。しかしながら、従来の超伝導体では、超伝導と磁気共存は稀である。この研究では、高 entropy カーブ 複合体 (Mo0.2Nb0.2Ta0.2V0.2W0.2)C0.9、(Ta0.25Ti0.25Nb0.25Zr0.25)C を報告した。これらの材料は、従来の超伝導体と予想される。これらの高entropy カーブ複合体には、明瞭な磁気 |
2305.00423 | Tropical mirror for toric surfaces | We describe the tropical mirror for complex toric surfaces. In particular we
provide an explicit expression for the mirror states and show that they can be
written in enumerative form. Their holomorphic germs give an explicit form of
good section for Landau-Ginzburg-Saito theory. We use an explicit form of
holomorphic germs to derive the divisor relation for tropical Gromov-Witten
invariants. We interpret the deformation of the theory by a point observable as
a blow up of a point on the toric surface. We describe the implication of such
interpretation for the tropical Gromov-Witten invariants. | Andrey Losev, Vyacheslav Lysov | 2023-04-30T08:23:00 | http://arxiv.org/abs/2305.00423v2 | # Tropical mirror for toric surfaces
###### Abstract
We describe the tropical mirror for complex toric surfaces. In particular we provide an explicit expression for the mirror states and show that they can be written in enumerative form. Their holomorphic germs give an explicit form of good section for Landau-Ginzburg-Saito theory. We use an explicit form of holomorphic germs to derive the divisor relation for tropical Gromov-Witten invariants. We interpret the deformation of the theory by a point observable as a blow up of a point on the toric surface. We describe the implication of such interpretation for the tropical Gromov-Witten invariants.
###### Contents
* 1 Introduction
* 2 Geometry of toric surfaces
* 2.1 Projective toric surface
* 2.2 Rays and stars
* 2.3 Intersection of rays and stars
* 3 Tropical mirror for toric surfaces
* 3.1 Mirror relation
* 3.2 Mirror states and holomorphic germs
* 3.3 Tropical good section
* 3.4 Mirror state for point observable
* 3.5 Mirror state for star-observable
* 3.6 Mirror for tropical curve observable
* 4 Divisor relation
* 4.1 Divisor relation for Gromov-Witten invariants
* 4.2 Tropical divisor relation from LGS
* 5 Mirror for selected toric surfaces
* 5.1 \(\mathbb{P}^{2}\)
* 5.2 \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)
* 5.3 Blow up of a point on \(\mathbb{P}^{2}\)
* 6 Recursion for point observables
* 6.1 Recursion for point observables on \(\mathbb{P}^{2}\)
* 6.2 Enumerative description of recursion
* 6.3 Double deformation and contact terms
* 6.4 Conclusion and open questions
## 1 Introduction
Tropical mirror symmetry has all features of the mirror symmetry while providing a much simpler description for most of them. In particular, holomorphic curves become graphs and
topological string theory becomes topological quantum mechanics. In our paper we argue that the same level of simplification holds for the mirror of the evaluation observables.
The conventional mirror symmetry [1] focuses on the superpotential and the choice of special coordinates on its space of deformations. The choice of special coordinates is encoded as a solution to a certain dynamical system (starting from pioneering work by K. Saito [2]), which can be phrased as a flatness and torsionlessness condition for some connection. The Christoffel symbols for this connection can be encoded as contact terms determined by K. Saito's good section. Using this method, in order to evaluate the \(n\)-point invariant we need to differentiate the 3-point invariant, given by the residue formula, \(n-3\) times with respect to the special coordinates.
In our approach to the tropical mirror we focus on observables rather than the superpotential. The contact terms naturally emerge as distinguished deformations of the mirror states in topological quantum mechanics. Such distinguished deformations for polynomial superpotentials were constructed in [3], for a holomorphic germination of harmonic-form states. Hence, we can immediately describe the tropical good section for Landau-Ginzburg-Saito theory. Moreover, we can directly evaluate the correlation functions using the mirror states for the evaluation observables.
Given the various simplifications of the mirror map in the tropical approach, we can expect that the mirror states could also have an explicit description. In our work [4, 5] we provided an integral representation for the mirror states. Moreover, for the case of \(\mathbb{P}^{1}\) the integrals evaluate to indicator functions. However, the simplicity of the answers might be a feature of the simplest example, hence we evaluated the mirror states for the observables on 2-dimensional toric surfaces.
In this paper we show that the mirror states can be written using the indicator functions on cones, which are standard objects in toric (algebraic) geometry [6, 7]. Moreover, we showed that the sum over the indicator functions can be rewritten as a weighted sum over intersection points of particular graphs. Similar sums were introduced by Mikhalkin [8, 9, 10] to define the intersection number for tropical curves.
Given an explicit form of the holomorphic germs we can use the Landau-Ginzburg-Saito theory to check one of the universal relations for the Gromov-Witten invariants: the divisor relation. In the present paper we derive the divisor relation from the recursion formula for the correlation functions in Landau-Ginzburg-Saito theory. In particular, we use our expression for the holomorphic germs of the hyperplane observables to show that they change the moduli of the superpotential, while preserving the topology of the toric space.
An explicit form of the holomorphic germs allows us to give an explicit form of the tropical good section for toric surfaces. Note that already for polynomial superpotentials with more than one variable it is possible to have more than one good section, so it might be hard to choose the one relevant for mirror symmetry. Our construction of the good section uses the holomorphic germs of the mirror states.
The last but not least application of the mirror states in explicit form allows us to describe a (novel?) relation between the Gromov-Witten invariants on \(\mathbb{P}^{2}\) and the \(Bl_{0}(\mathbb{P}^{2})\). We call it the "cutting corners" relation. The relation is similar to the divisor relation. The \((n+1)\)-point function with a point observable on \(\mathbb{P}^{2}\) is related to the \(n\)-point function on \(Bl_{0}(\mathbb{P}^{2})\).
The structure of our paper is as follows: In section 2 we briefly review the relevant information on the geometry of smooth complex toric surfaces. In section 3 we briefly review the tropical mirror map and describe the mirror states and holomorphic germs of the observables on a toric surface. In section 4 we derive the divisor relation from the recursion formula in Landau-Ginzburg-Saito theory and our explicit expression for the holomorphic germ of the hypersurface observable. In section 5 we describe the mirror for several simple toric surfaces: \(\mathbb{P}^{2},\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(Bl_{0}(\mathbb{P}^{2})\). In section 6 we present the cutting corners procedure for \(\mathbb{P}^{2}\) and formulate the related open questions and conjectures.
## 2 Geometry of toric surfaces
In this section we will briefly review the geometry for 2-dimensional toric varieties, equivalently complex toric surfaces.
### Projective toric surface
A toric surface \(X\) is a compactification of \(\mathbb{C}^{*2}\). We can represent \(\mathbb{C}^{*2}=\mathbb{R}^{2}\times\mathbb{T}^{2}\) in the form of a radial part \(\mathbb{R}^{2}\), equipped with standard coordinates \(r^{i},\ i=1,2\), and an angular part, the 2-dimensional torus \(\mathbb{T}^{2}=S^{1}\times S^{1}\), with standard angular coordinates \(\phi^{i}\). Equivalently, we can say that \(\mathbb{C}^{*2}\) is a trivial 2-dimensional toric fibration over \(\mathbb{R}^{2}\). We describe the compactification of \(\mathbb{C}^{*2}\) using the fibration data.
* The radial part \(\mathbb{R}^{2}\) is compactified by a _convex rational polytope_. We will describe the polytope by a collection of _supporting hyperplanes_.
* Each hyperplane is given in terms of the inside-pointing 2-dimensional (normal) vector
with components \(b^{i},i=1,2\). For rational polytope each vector has integer components i.e. \(b^{i}\in\mathbb{Z}\). For toric space \(X\) we will denote the set of corresponding vectors by \(B_{X}\).
* In order to get a compactification of a complex manifold, we require that one of the circles \(S^{1}\subset\mathbb{T}^{2}\) inside the toric fibration shrinks to zero when we approach each of the compactifying hypersurfaces. The choice of a circle is given by a class in \(\pi_{1}(\mathbb{T}^{2})\) defined by a normal vector \(\vec{b}\) of the hyperplane.
In toric geometry [6, 7] the collection of normal vectors \(B_{X}\) defines a _fan_ of \(X\), hence we will adopt this notation for \(B_{X}\). Let us order the vectors \(\vec{b}\in B_{X}\) counterclockwise on \(\mathbb{R}^{2}\). The consecutive pairs form the cones of the fan for \(X\). A 2-dimensional cone, formed by a pair of vectors \(\vec{b}_{1}\) and \(\vec{b}_{2}\), is
\[\text{Cone}(\vec{b}_{1},\vec{b}_{2})=\{\vec{b}_{1}\;t_{1}+\vec{b}_{2}\;t_{2} \mid t_{1},t_{2}\in\mathbb{R}^{\geq 0}\}\subset\mathbb{R}^{2}. \tag{2.1}\]
We will restrict our consideration to smooth toric surfaces. The fan for a smooth toric surface consists of smooth cones. Smoothness of \(\text{Cone}(\vec{b},\vec{c})\) requires that the generating vectors form a basis of \(\mathbb{Z}^{2}\), which is equivalent to
\[\det(\vec{b},\vec{c})=\pm 1. \tag{2.2}\]
It is convenient to introduce a cross product for two vectors, so that
\[\det(\vec{b},\vec{c})=b^{1}c^{2}-b^{2}c^{1}=\vec{b}\times\vec{c}. \tag{2.3}\]
Note that the sign of the cross product \(\vec{b}\times\vec{c}\) is determined by the relative orientation of the two vectors. The sign is positive if we can rotate (angle less than \(\pi\)) from \(\vec{b}\) to \(\vec{c}\) in counterclockwise direction and negative otherwise.
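The smoothness condition (2.2) is easy to check algorithmically; a minimal sketch is given below, using the standard fan of \(\mathbb{P}^{2}\), \(B_{\mathbb{P}^{2}}=\{(1,0),(0,1),(-1,-1)\}\) (the surface discussed in section 5.1), as an illustration.

```python
def cross(b, c):
    """Cross product b x c = b^1 c^2 - b^2 c^1 of Eq. (2.3)."""
    return b[0] * c[1] - b[1] * c[0]

def fan_is_smooth(fan):
    """Check det = +-1 (Eq. (2.2)) for every pair of consecutive generators,
    where `fan` is a list of integer vectors ordered counterclockwise."""
    n = len(fan)
    return all(abs(cross(fan[i], fan[(i + 1) % n])) == 1 for i in range(n))

B_P2 = [(1, 0), (0, 1), (-1, -1)]   # standard fan of P^2
print(fan_is_smooth(B_P2))          # True: every cone of the fan is smooth
```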
### Rays and stars
By construction, a genus-0 smooth tropical curve is an embedding of a 3-valent tree into \(\mathbb{R}^{2}\) by a piece-wise linear map. For more details see Mikhalkin [8, 9, 10]. The leaves of the tree map to infinite rays along the normal vectors of the compactifying polytope. Moreover, each tropical curve satisfies a balancing condition: the sum of all vectors on the leaves equals zero. Below there are four examples of tropical curves, drawn in the corresponding compactifying polytopes.
For any tropical curve we can construct its (maximally) degenerate version by shrinking the images of all internal edges of a tree to zero size. The resulting tropical curve will have a star shape. Below we provide the results of shrinking procedure for the four curves above.
**Definition**: Given a point \(\vec{\rho}\) and a vector \(\vec{l}\) we define a ray \(R_{l,\rho}\), starting at \(\rho\) and directed along \(\vec{l}\), i.e.
\[R_{l,\rho}=\vec{\rho}+\mathbb{R}^{+}\vec{l}=\left\{(\rho^{1}+t\;l^{1},\rho^{2 }+t\;l^{2})\in\mathbb{R}^{2}\;|\;t\in\mathbb{R}^{+}\right\}. \tag{2.4}\]
A ray \(R_{l,\rho}\) describes a holomorphic disc with Poincare-dual form
\[\gamma_{R_{l,\rho}}=\frac{1}{(2\pi)^{2}}\int_{S^{1}}\int_{0}^{\infty}\;\delta^ {2}(\vec{r}-\vec{\rho}-\vec{l}t)(dr^{1}-l^{1}dt)(d\phi^{1}-l^{1}d\varphi)(dr^{ 2}-l^{2}dt)(d\phi^{2}-l^{2}d\varphi), \tag{2.5}\]
which can be simplified into
\[\gamma_{R_{l,\rho}}=\frac{1}{2\pi}(\vec{l}\times d\vec{r})(\vec{l}\times d \vec{\phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec{r}-\vec{\rho}-\vec{l}\;t). \tag{2.6}\]
**Definition**: A _star_ \(S_{\rho}\) on a complex toric surface \(X\) is the union of rays with a common end point \(\vec{\rho}\),
\[S_{\rho}=\bigcup_{\vec{l}\in S_{\rho}}R_{l,\rho}\;\;\;, \tag{2.7}\]
such that each vector \(\vec{l}\) of the star satisfies \(\vec{l}=-\vec{b}\) for some \(\vec{b}\in B_{X}\), and the sum of all vectors equals zero,
\[\sum_{\vec{l}\in S_{\rho}}\vec{l}=0. \tag{2.8}\]
The equality (2.8) is known as the _balancing condition_. Note that there could be multiple rays in the same direction as depicted in examples above. The Poincare-dual of a star is a sum of the Poincare-duals of all its rays, i.e.
\[\gamma_{S_{\rho}}=\sum_{\vec{l}\in S_{\rho}}\gamma_{R_{l,\rho}}\;. \tag{2.9}\]
### Intersection of rays and stars
A pair of rays \(R_{l,\rho}\) and \(R_{n,0}\) in the plane \(\mathbb{R}^{2}\) may intersect in at most one point. We can express the number of intersection points using the Poincare-duals of the rays
\[\begin{split} R_{l,\rho}\cdot_{\mathbb{R}}R_{n,0}& =\int_{\mathbb{R}^{2}}(\vec{l}\times d\vec{r})\int_{0}^{\infty}dt_ {1}\;\delta(\vec{r}-\vec{\rho}-\vec{l}\;t_{1})\wedge(\vec{n}\times d\vec{r}) \int_{0}^{\infty}dt_{2}\;\delta(\vec{r}-\vec{n}\;t_{2})\\ &=(\vec{l}\times\vec{n})\int_{(\mathbb{R}^{+})^{2}}dt_{1}\;dt_{2} \;\delta(\vec{\rho}+\vec{l}\;t_{2}-\vec{n}\;t_{1})\;=\frac{(\vec{l}\times\vec {n})}{|\vec{l}\times\vec{n}|}\chi_{-\vec{l},\vec{n}}(\vec{\rho}\;)\;,\end{split} \tag{2.10}\]
where we introduced an indicator function, which equals one inside the cone and zero outside
\[\chi_{\vec{l}_{1},\vec{l}_{2}}(\vec{\rho}\;)=\left\{\begin{array}{ll}1,& \rho\in\text{Cone}(\vec{l_{1}},\vec{l_{2}})\\ 0,&\rho\notin\text{Cone}(\vec{l_{1}},\vec{l_{2}}).\end{array}\right. \tag{2.11}\]
The denominator in (2.10) is due to the Jacobian for the change of variables in the integral representation of the indicator function
\[\chi_{\vec{l}_{1},\vec{l}_{2}}(\vec{r})=\int_{\text{Cone}(\vec{l}_{1},\vec{l}_ {2})}d^{2}\vec{s}\;\delta(\vec{r}-\vec{s})=|\vec{l}_{1}\times\vec{l}_{2}|\; \int_{0}^{\infty}\int_{0}^{\infty}dt_{1}dt_{2}\;\delta(\vec{r}-\vec{l}_{1}\;t _{1}-\vec{l}_{2}\;t_{2}). \tag{2.12}\]
The sign factor for the intersection number (2.10) is a common feature of intersections of real cycles in real spaces.
Our formula (2.10) tells us that the question of whether two rays intersect is the same as the question of whether \(\vec{\rho}\) belongs to the cone \(\text{Cone}(\vec{n},-\vec{l})\). Below we present the graphical proof of the relation (2.10).
From the picture we see that all vectors \(\vec{l}_{+}\) are related to \(\vec{l}^{\prime}_{0}\) by a counterclockwise rotation, while all \(\vec{l}_{-}\) are related by a clockwise rotation, hence
\[\vec{l}_{+}\times\vec{l}^{\prime}_{0}>0,\ \ \vec{l}_{-}\times\vec{l}^{\prime}_{0}<0\ . \tag{2.16}\]
We can rewrite the difference of intersection numbers
\[S_{\rho}\cdot S^{\prime}_{0}-S_{\rho^{\prime}}\cdot S^{\prime}_{0}=\sum_{\vec{ l}_{+}}\vec{l}_{+}\times\vec{l}^{\prime}_{0}-\sum_{\vec{l}_{-}}-(\vec{l}_{-} \times\vec{l}^{\prime}_{0})=\sum_{\vec{l}\in S}(\vec{l}\times\vec{l}^{\prime}_ {0})=0. \tag{2.17}\]
The last equality is due to the balancing condition (2.8) for the star \(S\). \(\blacksquare\)
We can use relation (2.10) to rewrite sum (2.14) over indicator functions on cones as a sum over intersection points of corresponding rays to give an enumerative description for the intersection number. Hence an intersection number \(S_{\rho}\cdot S^{\prime}_{0}\) becomes the weighted sum over intersection points \(p\in S_{\rho}\cap S^{\prime}_{0}\) for pairs of corresponding rays. The weight at intersection point \(p\) is equal to the absolute value of the cross product for directional vectors \(\vec{l}_{p}\) and \(\vec{l}^{\prime}_{p}\) of the two rays intersecting at \(p\), i.e.
\[S_{\rho}\cdot S^{\prime}_{0}=\sum_{p\;\in\;S_{\rho}\cap S^{\prime}_{0}}|\vec{ l}_{p}\times\vec{l}^{\prime}_{p}|. \tag{2.18}\]
**Example**: On the picture below we present the intersection of two stars. There are three intersection points; we zoomed in on the circled region around one of the points and labeled the vectors of the two rays intersecting at this point. The absolute value of the cross product for the two rays at the circled point equals one, and the same is true for the remaining two points. Hence we conclude that the intersection number of the two stars equals 3.
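The weighted count (2.18) is easy to implement directly. Below is a minimal Python sketch (the helper names are ours): a star is given by its apex and a list of integer direction vectors, each pair of rays is tested for an intersection point, and the absolute cross products are summed. For a balanced three-ray star with directions \((1,0),(0,1),(-1,-1)\) (the direction convention is an assumption) and its degree-3 multiple, the sketch reproduces the counts \(1\) and \(3\).

```python
def cross(u, v):
    # 2d cross product u x v
    return u[0] * v[1] - u[1] * v[0]

def ray_weight(p, l, q, m):
    """|l x m| if the rays p + t*l and q + s*m (t, s >= 0) intersect, else 0."""
    c = cross(l, m)
    if c == 0:                      # parallel rays: for generic apexes they never meet
        return 0
    d = (q[0] - p[0], q[1] - p[1])
    t = cross(d, m) / c             # solve p + t*l = q + s*m
    s = cross(d, l) / c
    return abs(c) if t >= 0 and s >= 0 else 0

def star_intersection(apex1, rays1, apex2, rays2):
    """Intersection number (2.18): weighted sum over intersection points of rays."""
    return sum(ray_weight(apex1, l, apex2, m) for l in rays1 for m in rays2)

# a balanced 3-ray star (tropical line) and its degree-3 multiple
line = [(1, 0), (0, 1), (-1, -1)]
deg3 = [(3 * a, 3 * b) for a, b in line]
print(star_intersection((0, 0), line, (0.7, 0.3), line))   # 1
print(star_intersection((0, 0), deg3, (0.7, 0.3), line))   # 3
```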
**Remark**: The enumerative expression for the intersection of stars can be naturally extended to the intersection of two tropical curves. For more details see Mikhalkin [8, 9, 10].
**Remark**: We can refine the self-intersection of a tropical curve \(\Gamma\) from being defined only on cohomology classes to a definition on the representative. The self-intersection of a curve \(\Gamma\) is then the weighted union of its vertex points \(V(\Gamma)\).
## 3 Tropical mirror for toric surfaces
In this section we will adapt the construction of the tropical mirror from [4, 5] to toric surfaces. In particular, we will describe the mirror superpotential, mirror states, holomorphic germs and the tropical good section.
### Mirror relation
The mirror of the complex toric surface \(X\) is a non-compact 2-dimensional Calabi-Yau \(X^{\vee}=\mathbb{C}^{*2}\) with a holomorphic superpotential. We will use the toric representation \(\mathbb{C}^{*2}=\mathbb{R}^{2}\times\mathbb{T}^{2}\) with radial coordinates \(r^{j}\) and angular (holomorphic) coordinates \(Y_{j}\). The holomorphic top form in these coordinates is
\[\Omega=dY_{1}\wedge dY_{2}. \tag{3.1}\]
The mirror superpotential is
\[W_{X}=\sum_{\vec{b}\in B_{X}}q_{\vec{b}}\ e^{i\langle\vec{b},\vec{Y}\rangle}. \tag{3.2}\]
where we used the pairing
\[\langle\vec{b},Y\rangle=b^{1}Y_{1}+b^{2}Y_{2}. \tag{3.3}\]
The form (3.1) is invariant under \(SL(2,\mathbb{Z})\), the linear transformations with determinant equal to one and integer coefficients. Let us arrange vectors of the fan \(B_{X}\) in a counter-clockwise order and label them \(\vec{b}_{1},\vec{b}_{2},...\). A smooth projective toric variety is a collection of smooth cones \(\text{Cone}(\vec{b}_{k},\vec{b}_{k+1})\), i.e. cones with \(|\vec{b}_{k}\times\vec{b}_{k+1}|=1\). Hence, we can use an \(SL(2,\mathbb{Z})\)-rotation to rotate the pair of vectors \(\vec{b}_{1},\vec{b}_{2}\) to the standard basis of \(\mathbb{Z}^{2}\), i.e.
\[\vec{b}_{1}\rightarrow(1,0),\ \ \ \vec{b}_{2}\rightarrow(0,1),\ \ \vec{b}_{k} \rightarrow\vec{b}_{k}^{\prime},\ \ k>2. \tag{3.4}\]
The superpotential in new basis becomes
\[W_{X}=q_{\vec{b}_{1}}\ e^{iY_{1}}+q_{\vec{b}_{2}}\ e^{iY_{2}}+\sum_{k>2}q_{ \vec{b}_{k}}\ e^{i\langle\vec{b}_{k}^{\prime},Y\rangle}. \tag{3.5}\]
The holomorphic top form (3.1) is also invariant under constant shifts of \(Y\)-variables, hence we can use
\[Y_{1}\to Y_{1}-i\ln q_{\vec{b}_{1}},\ \ \ Y_{2}\to Y_{2}-i\ln q_{\vec{b}_{2}} \tag{3.6}\]
to simplify the superpotential into
\[W_{X}=e^{iY_{1}}+e^{iY_{2}}+\sum_{k>2}q^{\prime}_{\vec{b}_{k}}\ e^{i\langle\vec{b}^{\prime}_{k},Y\rangle}. \tag{3.7}\]
The new toric moduli
\[q^{\prime}_{\vec{b}_{k}}=\frac{q_{\vec{b}_{k}}}{q_{\vec{b}_{1}}^{\,b^{\prime 1}_{k}}\cdot q_{\vec{b}_{2}}^{\,b^{\prime 2}_{k}}} \tag{3.8}\]
where \(b^{\prime 1}_{k},b^{\prime 2}_{k}\) are the components of \(\vec{b}^{\prime}_{k}\), refine the Kahler moduli of \(X\). If we formally set all Kahler moduli to zero we arrive at the superpotential for the non-compact toric variety \(\mathbb{C}^{2}\). Hence, the superpotential in the form (3.7) describes the toric variety \(X\) as a compactification of \(\mathbb{C}^{2}\).
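As an illustration of the normalization (3.4)-(3.8), the following sketch (hypothetical helper name; numpy and sympy assumed available) rotates \(\vec{b}_{1},\vec{b}_{2}\) to the standard basis and rescales the remaining moduli. For the standard fan \(\{(1,0),(0,1),(-1,-1)\}\) of \(\mathbb{P}^{2}\) the surviving modulus is the combination \(q_{1}q_{2}q_{3}\).

```python
import numpy as np
import sympy as sp

def refined_moduli(fan, moduli):
    """Sketch of (3.4)-(3.8): map b_1, b_2 to the standard basis by an SL(2,Z)
    rotation and absorb q_{b_1}, q_{b_2} into the remaining toric moduli."""
    M = np.rint(np.linalg.inv(np.column_stack([fan[0], fan[1]]))).astype(int)
    refined = {}
    for b, q in zip(fan[2:], moduli[2:]):
        b1, b2 = (int(x) for x in M @ np.array(b))          # rotated vector b'_k
        refined[(b1, b2)] = sp.simplify(q / (moduli[0]**b1 * moduli[1]**b2))
    return refined

q1, q2, q3 = sp.symbols('q1 q2 q3')
fan_P2 = [(1, 0), (0, 1), (-1, -1)]
print(refined_moduli(fan_P2, [q1, q2, q3]))   # {(-1, -1): q1*q2*q3}
```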
### Mirror states and holomorphic germs
**Definition**: The Jacobi ring for superpotential \(W\) is
\[J_{W}=R_{\mathbb{C}^{*2}}/I_{W}, \tag{3.9}\]
where \(R_{\mathbb{C}^{*2}}\) is the ring of holomorphic functions on \(\mathbb{C}^{*2}\). In our coordinates \(R_{\mathbb{C}^{*2}}\) is the ring of periodic functions of \(Y\). The ideal \(I_{W}\) is generated by the partial derivatives of \(W\)
\[I_{W}=\left\{\frac{\partial W}{\partial Y_{j}}\right\}. \tag{3.10}\]
Let us consider a graded vector space of Landau-Ginzburg-Saito theory
\[V_{LGS}=R_{\mathbb{C}^{*2}}\otimes\mathbb{C}[\psi_{\Phi}^{i}] \tag{3.11}\]
for parity-odd variables \(\psi_{\Phi}^{i}\). On \(V_{LGS}\) there is a pair of graded-commuting differentials
\[\mathbf{Q}_{W}=\frac{\partial W}{\partial Y_{j}}\frac{\partial}{\partial\psi_ {\Phi}^{j}},\ \ \ \mathbf{G}_{-}=\frac{\partial}{\partial Y_{j}}\frac{\partial}{\partial\psi_{ \Phi}^{j}}. \tag{3.12}\]
The mirror map for observables is the map from the de Rham cohomology of the toric space \(X\) to the \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology, i.e.
\[\Phi:H^{*}_{dR}(X)\to H^{*}({\bf Q}_{W}+z{\bf G}_{-}):\gamma\mapsto\Phi_{\gamma}. \tag{3.13}\]
The mirror map is constructed in the following way: we turn an observable \(\gamma\) into an A-type HTQM state \(\Psi_{\gamma}\), then construct the corresponding mirror state \(\Psi^{X}_{\gamma}\) and take its holomorphic germ \(\Phi_{\gamma}\).
Let us introduce the notation \(\Psi^{\vec{b}}\) for dressing of a state \(\Psi\) by a single compactifying divisor, labeled by \(\vec{b}\), i.e.
\[\Psi^{\vec{b}}=2\pi KG_{-}\mu_{2}(\Psi_{\vec{b}},\Psi)=2\pi\int e^{-tH}dt\;G_{+ }G_{-}\mu_{2}(\Psi_{\vec{b}},\Psi). \tag{3.14}\]
The double dressing by vectors \(\vec{b}_{1},\vec{b}_{2}\) in these notations is
\[\Psi^{\vec{b}_{1},\vec{b_{2}}}=2\pi KG_{-}\mu_{2}(\Psi_{\vec{b}_{2}},\Psi^{\vec {b}_{1}}). \tag{3.15}\]
The mirror state on the toric surface is given by
\[\Psi^{X}_{\gamma}=\Psi_{\gamma}+\sum_{\vec{b}_{1}\in B_{X}}\Psi^{\vec{b}_{1}}_ {\gamma}+\sum_{\vec{b}_{1},\vec{b}_{2}\in B_{X}}\Psi^{\vec{b}_{1},\vec{b}_{2}} _{\gamma}. \tag{3.16}\]
The holomorphic germ \(\Phi_{\gamma}\) for a mirror state \(\Psi^{X}_{\gamma}\) is the lowest component in the \(\psi\)-expansion evaluated at \(\vec{r}=0\), i.e.
\[\Phi_{\gamma}=\Psi^{X}_{\gamma}\Big{|}_{\psi=0,r=0}. \tag{3.17}\]
### Tropical good section
The construction of the Jacobi ring comes with a canonical projection \(\pi_{W}:R_{{\mathbb{C}}^{*2}}\to J_{W}\). Given a pair of holomorphic functions \(\Phi_{1}\) and \(\Phi_{2}\) we can project their product \(\Phi_{1}\Phi_{2}\) to the class \(\pi_{W}(\Phi_{1}\Phi_{2})\) in the Jacobi ring \(J_{W}\). The section (which inverts \(\pi_{W}\)) \(S_{W}:J_{W}\to R_{{\mathbb{C}}^{*2}}\) turns this class into the holomorphic function \(S_{W}\;\pi_{W}(\Phi_{1}\Phi_{2})\). The difference
\[\Phi_{1}\Phi_{2}-S_{W}\;\pi_{W}(\Phi_{1}\Phi_{2}) \tag{3.18}\]
is trivial in Jacobi ring. An isomorphism between the \(J_{W}\) and \(H^{*}({\bf Q}_{W})\) means that there exists a map \({\bf\Sigma}_{W}:R_{{\mathbb{C}}^{*2}}\to V_{LGS}\) such that
\[\Phi_{1}\Phi_{2}-S_{W}\pi_{W}(\Phi_{1}\Phi_{2})={\bf Q}_{W}{\bf\Sigma}_{W}(\Phi_ {1}\Phi_{2}), \tag{3.19}\]
and
\[{\bf\Sigma}_{W}S_{W}=0. \tag{3.20}\]
The choice of such \({\bf\Sigma}_{W}\) is known as the choice of homotopy for \({\bf Q}_{W}\).
**Definition**: We define a contact term for \(\Phi_{1}\) and \(\Phi_{2}\) in LGS theory with section \(S_{W}\)
\[C^{S}_{W}(\Phi_{1},\Phi_{2})={\bf G}_{-}{\bf\Sigma}_{W}(\Phi_{1}\Phi_{2}). \tag{3.21}\]
In other terms the product of two functions \(\Phi_{1}\Phi_{2}\) can be decomposed into the sum of the image of \(S_{W}\) and a linear combination of \(\partial^{1}W,\partial^{2}W\), i.e.
\[\Phi_{1}\Phi_{2}=S_{W}\pi_{W}(\Phi_{1}\Phi_{2})+\sigma_{k}\partial^{k}W \tag{3.22}\]
The \({\bf\Sigma}_{W}(\Phi_{1}\Phi_{2})\) has the form \(\sigma_{k}(Y)\psi^{k}_{\Phi}\), so \({\bf G}_{-}\)-action on it is
\[{\bf G}_{-}{\bf\Sigma}_{W}(\Phi_{1}\Phi_{2})=\frac{\partial\sigma_{k}(Y)}{ \partial Y_{k}}, \tag{3.23}\]
i.e. just a divergence of the vector field \(\sigma_{k}(Y)\partial_{Y_{k}}\). Note that for a given \(S_{W}\) the decomposition in (3.22) does not uniquely fix the \(\sigma_{k}(Y)\). The freedom in the choice of \(\sigma\) is fixed by the choice of homotopy \({\bf\Sigma}_{W}\).
Note that the dependence of contact term \(C_{W}\) on the choice of homotopy \({\bf\Sigma}_{W}\) is \(({\bf Q}_{W}+z{\bf G}_{-})\)-exact. It was shown that the correlation functions are well-defined in \(H^{*}({\bf Q}_{W}+z{\bf G}_{-})\), so the choice of homotopy does not affect the recursion formula.
The image of the tropical good section for Landau-Ginzburg-Saito theory is the linear space spanned by the identity germ \(\Phi_{1}^{X}=1\), the point germ \(\Phi_{\rho}^{X}\), and the germs \(\Phi_{S}^{X}\) for a basis in the space of stars.
\[{\rm Im}\ S^{trop}={\mathbb{C}}\langle 1,\Phi_{\rho}^{X},\Phi_{S}^{X}\rangle. \tag{3.24}\]
### Mirror state for point observable
The A-model state for the \(U(1)^{2}\)-invariant Poincare-dual of the point evaluation observable located at a point \(\rho\) is
\[\Psi_{\rho}=\frac{1}{(2\pi)^{2}}\delta^{2}(\vec{r}-\vec{\rho}\;)\;\psi^{1}_{ \Phi}\psi^{1}_{R}\psi^{2}_{\Phi}\psi^{2}_{R}. \tag{3.25}\]
The single dressing of the state \(\Psi_{\rho}\) by a divisor state is
\[\begin{split}\Psi^{\vec{b}}_{\rho}&=2\pi\int e^{-tH }dt\;G_{+}G_{-}\mu_{2}(\Psi_{\vec{b}},\Psi_{\rho})\\ &=\frac{1}{2\pi}q_{\vec{b}}\;e^{i(\vec{b},Y)}(\vec{b}\times\vec{ \psi}_{R})(\vec{b}\times\vec{\psi}_{\Phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec {r}-\vec{\rho}-\vec{b}t).\end{split} \tag{3.26}\]
We used
\[G_{-}\left(\psi^{2}_{\Phi}\psi^{1}_{\Phi}\;e^{i(\vec{b},Y)}\right)=(b^{2}\psi^ {1}_{\Phi}-b^{1}\psi^{2}_{\Phi})\;e^{i(\vec{b},Y)}=(\vec{\psi}_{\Phi}\times \vec{b})\;e^{i(\vec{b},Y)} \tag{3.27}\]
and similar relation for \(G_{+}\). The integral of a delta function implies that the single dressed state \(\Psi^{\vec{b}}_{\rho}\) has support on the ray \(R_{b,\rho}\). Moreover, the inclusion of \(\psi\)-dependence describes the \(\Psi^{\vec{b}}_{\rho}\) as the multiple of the state for Poincare-dual (2.6) of the ray \(R_{b,\rho}\), hence we can write
\[\Psi^{\vec{b}}_{\rho}=q_{\vec{b}}\;e^{i(\vec{b},Y)}\Psi_{R_{b,\rho}}\;\;. \tag{3.28}\]
We can represent the dressing of the state \(\Psi_{\rho}\) by all divisors from the fan \(B_{X}\), i.e.
\[\sum_{\vec{b}\in B_{X}}\Psi^{\vec{b}}_{\rho}=\sum_{\vec{b}\in B_{X}}q_{\vec{b }}\;e^{i(\vec{b},Y)}\Psi_{R_{b,\rho}} \tag{3.29}\]
as the evaluation state for the quasi-star (no balancing condition) \(S_{\rho}\) with rays identical to the rays of \(B_{X}\), equipped with holomorphic functions. The ray along vector \(\vec{b}\) is equipped with the function \(q_{\vec{b}}\;e^{i(\vec{b},Y)}\).
The dressing of \(\Psi_{\rho}\) by two divisor states
\[\begin{split}\Psi^{\vec{b}_{1},\vec{b}_{2}}_{\rho}& =q_{\vec{b}_{1}}q_{\vec{b}_{2}}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y)}( \vec{b}_{1}\times\vec{b}_{2})^{2}\int_{0}^{\infty}\int_{0}^{\infty}dt_{1}dt_{2 }\;\delta(\vec{r}-\vec{\rho}-\vec{b}_{1}t_{1}-(\vec{b}_{1}+\vec{b}_{2})t_{2})\\ &=q_{\vec{b}_{1}}q_{\vec{b}_{2}}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y )}|\vec{b}_{1}\times\vec{b}_{2}|\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}( \vec{r}-\vec{\rho}\;).\end{split} \tag{3.30}\]
We used an integral representation (2.12) for indicator function on a cone and
\[\vec{b}_{1}\times(\vec{b}_{2}+\vec{b}_{1})=\vec{b}_{1}\times\vec{b}_{2}. \tag{3.31}\]
Note that the dressing is not symmetric under exchange of \(\vec{b}_{1}\) and \(\vec{b}_{2}\), because the indicator functions have support on different regions, i.e.
\[\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}(\vec{r}\,)\neq\chi_{\vec{b}_{2},\vec {b}_{1}+\vec{b}_{2}}(\vec{r}\,). \tag{3.32}\]
We can notice that the two orders of performing the double dressing come with the same holomorphic function, so we can naturally simplify the sum using the equality
\[\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}(\vec{r}\,)+\chi_{\vec{b}_{2},\vec{b }_{1}+\vec{b}_{2}}(\vec{r}\,)=\chi_{\vec{b}_{1},\vec{b}_{2}}(\vec{r}\,)\quad. \tag{3.33}\]
The graphical representation of this equality is given in the picture above. The holomorphic germ for the point observable is
\[\Phi^{X}_{\rho}=\Psi^{\vec{b}_{1},\vec{b}_{2}}_{\rho}\Big{|}_{r=0}=\frac{1}{2} \sum_{\vec{b},\vec{b}^{\prime}\in B_{X}}|\vec{b}\times\vec{b}^{\prime}|\;q_{ \vec{b}}q_{\vec{b}^{\prime}}\;e^{i(\vec{b}+\vec{b}^{\prime},Y)}\chi_{\vec{b}, \vec{b}^{\prime}}(-\vec{\rho}\,)\;. \tag{3.34}\]
Our construction for the holomorphic germ gives different holomorphic functions depending on the location of the point \(\rho\) on \(\mathbb{R}^{2}\). However, different holomorphic germs represent the same cohomology class.
**Proposition (cone crossing)**: Holomorphic germs (3.34) represent the same class in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology for all values of \(\rho\).
**Proof**: The holomorphic germ (3.34) changes each time the point \(\rho\) crosses a ray of the fan \(B_{X}\). Let us consider the change of the germ under the crossing of a single vector \(\vec{b}_{0}\). On the picture below we colored in blue all vectors \(\vec{b}_{+}\) such that the cones \({\rm Cone}(\vec{b}_{0},\vec{b}_{+})\) give a non-zero contribution to \(\Phi^{X}_{-\rho}\). We colored in green all vectors \(\vec{b}_{-}\), such that the cones \({\rm Cone}(\vec{b}_{0},\vec{b}_{-})\) contribute to \(\Phi^{X}_{-\rho^{\prime}}\).
The difference between two functions is given by
\[\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}=\sum_{\vec{b}_{+}}|\vec{b}_{+}\times \vec{b}_{0}|\;q_{\vec{b}_{+}}q_{\vec{b}_{0}}\;e^{(\vec{b}_{0}+\vec{b}_{+},Y)}- \sum_{\vec{b}_{-}}|\vec{b}_{-}\times\vec{b}_{0}|\;q_{\vec{b}_{0}}q_{\vec{b}_{- }}\;e^{i(\vec{b}_{0}+\vec{b}_{-},Y)}. \tag{3.35}\]
All vectors \(\vec{b}_{+}\) are related to \(\vec{b}_{0}\) by a counterclockwise rotation, while \(\vec{b}_{-}\) are related by clockwise rotation hence
\[\vec{b}_{+}\times\vec{b}_{0}>0,\;\;\;\vec{b}_{-}\times\vec{b}_{0}<0, \tag{3.36}\]
and we can rewrite
\[\begin{split}\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}& =\sum_{\vec{b}_{+}}(\vec{b}_{+}\times\vec{b}_{0})\;q_{\vec{b}_{+}} q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0}+\vec{b}_{+},Y)}-\sum_{\vec{b}_{-}}-(\vec{b}_{-} \times\vec{b}_{0})\;q_{\vec{b}_{0}}q_{\vec{b}_{-}}\;e^{i(\vec{b}_{0}+\vec{b}_ {-},Y)}\\ &=\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{b}_{0})\;q_{\vec{b}}q_{\vec{b}_{0}}\;e^{i(\vec{b}+\vec{b}_{0},Y)}.\end{split} \tag{3.37}\]
We can express the sum over \(\vec{b}\) as a derivative of the superpotential (3.2), i.e.
\[\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}=\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{b}_{0})\;q_{\vec{b}}q_{\vec{b}_{0}}\;e^{i(\vec{b}+\vec{b}_{0},Y)}=q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\,(-i\vec{\partial}_{Y}W\times\vec{b}_{0})=\mathbf{Q}_{W}\chi_{\vec{b}_{0}}, \tag{3.38}\]
for the state
\[\chi_{\vec{b}_{0}}=-iq_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}(\vec{\psi}_{\Phi} \times\vec{b}_{0}). \tag{3.39}\]
The state \(\chi_{\vec{b}_{0}}\) is \(\mathbf{G}_{-}\)-closed, i.e.
\[\mathbf{G}_{-}\chi_{\vec{b}_{0}}=\frac{\partial}{\partial Y_{k}}\frac{\partial }{\partial\psi^{k}_{\Phi}}\left(q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}(\vec{ \psi}_{\Phi}\times\vec{b}_{0})\right)=iq_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}( \vec{b}_{0}\times\vec{b}_{0})=0. \tag{3.40}\]
Hence the change of the holomorphic germ under the crossing of a single ray along \(\vec{b}_{0}\) is exact, i.e
\[\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}=(\mathbf{Q}_{W}+z\mathbf{G}_{-}) \chi_{\vec{b}_{0}}. \tag{3.41}\]
For general translation from \(\rho\) to \(\rho^{\prime}\) we may need to perform several cone-crossings.
**Proposition (Enumerative expression for germs)**: The holomorphic germ for the point observable can be constructed in the following way: we perform the weighted sum over the intersection points \(p\) of the fan \(B_{X}\) at the origin and the reflected fan \(-B_{X}^{-\rho}\) placed at the point \(-\rho\). The weights are determined by the direction vectors of the intersecting rays.
\[\Phi_{\rho}^{X}=\frac{1}{2}\sum_{p\;\in\;B_{X}\cap-B_{X}^{-\rho}}q_{\vec{b}_{p }}q_{\vec{b}_{p}^{\prime}}|\vec{b}_{p}\times\vec{b}_{p}^{\prime}|\;e^{i(\vec{b}_{p}+ \vec{b}_{p}^{\prime},Y)}. \tag{3.42}\]
**Proof**: The relation (2.13) tells us that the step functions in the sum (3.34) describe the intersection points of two rays: one from \(B_{X}\) and the other from \(-B_{X}^{-\rho}\). Hence the sum over step functions is the same as the sum over intersection points. The multiplicative factors in front of the indicator functions give us the weight factors in (3.42). Hence the proof is complete. \(\blacksquare\)
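The proposition can be checked on small examples. The following minimal sketch (Python with sympy; the helper names are ours) evaluates the germ by testing the cone condition \(-\vec{\rho}\in\mathrm{Cone}(\vec{b},\vec{b}^{\prime})\) for each unordered pair of fan vectors, which is equivalent to the intersection count above. For the standard fan of \(\mathbb{P}^{2}\) and \(-\vec{\rho}\) inside \(\mathrm{Cone}(\vec{b}_{1},\vec{b}_{2})\) it returns the single term \(q_{1}q_{2}\,e^{i(Y_{1}+Y_{2})}\).

```python
from itertools import combinations
import sympy as sp

def in_cone(v, b, bp):
    """True if v = alpha*b + beta*bp with alpha, beta >= 0 (non-degenerate cone)."""
    det = b[0] * bp[1] - b[1] * bp[0]
    if det == 0:
        return False
    alpha = (v[0] * bp[1] - v[1] * bp[0]) / det
    beta = (b[0] * v[1] - b[1] * v[0]) / det
    return alpha >= 0 and beta >= 0

def point_germ(fan, moduli, rho, Y):
    """Holomorphic germ (3.34) for the point observable at rho
    (unordered pairs, so the factor 1/2 is already accounted for)."""
    germ = 0
    for (b, qb), (bp, qbp) in combinations(list(zip(fan, moduli)), 2):
        if in_cone((-rho[0], -rho[1]), b, bp):
            weight = abs(b[0] * bp[1] - b[1] * bp[0])
            phase = (b[0] + bp[0]) * Y[0] + (b[1] + bp[1]) * Y[1]
            germ += weight * qb * qbp * sp.exp(sp.I * phase)
    return germ

Y1, Y2, q1, q2, q3 = sp.symbols('Y1 Y2 q1 q2 q3')
fan_P2 = [(1, 0), (0, 1), (-1, -1)]
print(point_germ(fan_P2, [q1, q2, q3], (-2, -1), (Y1, Y2)))  # q1*q2*exp(I*(Y1 + Y2))
```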
### Mirror state for star-observable
The A-model state \(\Psi_{R_{l,\rho}}\) for a ray \(R_{l,\rho}\) is constructed from the Poincare-dual form (2.6) by the replacement \(dr\to\psi_{R}\) and \(d\phi\to\psi_{\Phi}\). Namely,
\[\Psi_{R_{l,\rho}}=\frac{1}{2\pi}(\vec{l}\times\vec{\psi}_{R})(\vec{l}\times \vec{\psi}_{\Phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec{r}-\vec{\rho}-t\;\vec{l}). \tag{3.43}\]
The single divisor dressing of the state
\[\Psi_{R_{l,\rho}}^{\vec{b}}=(\vec{l}\times\vec{b})^{2}q_{\vec{b}}\;e^{i(\vec{ b},Y)}\int_{0}^{\infty}dt\int_{0}^{\infty}ds\;\;\delta^{2}(\vec{r}-\vec{\rho}-s\; \vec{l}-t\;\vec{b}). \tag{3.44}\]
The integral is the step function on a cone (2.13), hence we can further simplify
\[\Psi_{R_{l,\rho}}^{\vec{b}}=|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)}\;\chi_{\vec{l},\vec{b}}(\vec{r}-\vec{\rho}\;). \tag{3.45}\]
The mirror state for the ray-observable
\[\Psi^{X}_{R_{l,\rho}}=\Psi_{R_{l,\rho}}+\sum_{\vec{b}\in B_{X}}\Psi^{\vec{b}}_{R_ {l,\rho}}=\Psi_{R_{l,\rho}}+\sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{ \vec{b}}\;e^{i(\vec{b},Y)}\chi_{\vec{l},\vec{b}}(\vec{r}-\vec{\rho}\;). \tag{3.46}\]
The A-model state for the star observable \(S_{\rho}\) is a sum of the states for its rays, i.e.
\[\Psi_{S_{\rho}}=\sum_{\vec{l}\in S}\Psi_{R_{l,\rho}}, \tag{3.47}\]
while the corresponding mirror state is
\[\Psi^{X}_{S_{\rho}}=\Psi_{S_{\rho}}+\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}} |\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)}\chi_{\vec{l},\vec{b}}( \vec{r}-\vec{\rho}\;). \tag{3.48}\]
The holomorphic germ for the star observable \(S_{\rho}\) is
\[\Phi^{X}_{S_{\rho}}=\Psi^{X}_{S_{\rho}}\Big{|}_{\psi=r=0}=\sum_{\vec{l}\in S} \sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)} \chi_{\vec{l},\vec{b}}(-\vec{\rho}\;). \tag{3.49}\]
**Proposition**: Holomorphic germs \(\Phi^{X}_{S_{\rho}}\) in (3.49) represent the same class in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology for all values of \(\rho\).
**Proof**: The holomorphic germ (3.49) changes each time the point \(\rho\) crosses either a ray of the fan \(B_{X}\) or a ray of the star \(S_{\rho}\). Let us consider the change of the germ under the crossing of the vector \(\vec{l}_{0}\in S_{\rho}\). On the picture below we colored in blue all vectors \(\vec{b}_{+}\), such that the cones \({\rm Cone}(\vec{l}_{0},\vec{b}_{+})\) give a non-zero contribution to \(\Phi^{X}_{S_{-\rho}}\). We colored in green all vectors \(\vec{b}_{-}\), such that the cones \({\rm Cone}(\vec{l}_{0},\vec{b}_{-})\) contribute to \(\Phi^{X}_{S_{-\rho^{\prime}}}\).
(Figure: the fan vectors \(\vec{b}_{+}\) (blue) and \(\vec{b}_{-}\) (green) on the two sides of the crossed ray \(\vec{l}_{0}\).)
All vectors \(\vec{b}_{+}\) are related to \(\vec{l}_{0}\) by a counterclockwise rotation, while \(\vec{b}_{-}\) by a clockwise one, hence
\[\vec{b}_{+}\times\vec{l}_{0}>0,\ \ \vec{b}_{-}\times\vec{l}_{0}<0. \tag{3.51}\]
We can simplify the absolute values in the sum
\[\begin{split}\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}& =\sum_{\vec{b}_{+}}(\vec{b}_{+}\times\vec{l}_{0})\;q_{\vec{b}_{+}} \;e^{i(\vec{b}_{+},Y)}-\sum_{\vec{b}_{-}}-(\vec{b}_{-}\times\vec{l}_{0})\;q_{ \vec{b}_{-}}\;e^{i(\vec{b}_{-},Y)}\\ &=\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{l}_{0})\;q_{\vec{b}} \;e^{i(\vec{b},Y)}.\end{split} \tag{3.52}\]
We can express the sum over \(\vec{b}\) as a derivative of the superpotential (3.2), i.e.
\[\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{l}_{0})\;q_{\vec{b}}\;e^{i(\vec{b},Y)}=-i\,\vec{\partial}_{Y}W_{X}\times\vec{l}_{0}={\bf Q}_{W}\chi_{\vec{l}_{0}}, \tag{3.53}\]
for the state
\[\chi_{\vec{l}_{0}}=-i(\vec{\psi}_{\Phi}\times\vec{l}_{0}). \tag{3.54}\]
The state \(\chi_{\vec{l}_{0}}\) is \({\bf G}_{-}\)-closed, i.e.
\[{\bf G}_{-}\chi_{\vec{l}_{0}}=\frac{\partial}{\partial Y_{k}}\frac{\partial}{ \partial\psi_{\Phi}^{k}}\left(\vec{\psi}_{\Phi}\times\vec{l}_{0}\right)=0, \tag{3.55}\]
hence
\[\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}=({\bf Q}_{W}+z{\bf G}_{-}) \chi_{\vec{l}_{0}}. \tag{3.56}\]
The other possibility is the crossing of some vector \(\vec{b}_{0}\in B_{X}\). The difference between the two functions is given by the \({\rm Cone}(\vec{l},\vec{b}_{0})\)-contributions. We can repeat the analysis of orientations to simplify the absolute values
\[\begin{split}\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}& =\sum_{\vec{l}_{+}}|\vec{l}_{+}\times\vec{b}_{0}|\;q_{\vec{b}_{0} }\;e^{i(\vec{b}_{0},Y)}-\sum_{\vec{l}_{-}}|\vec{l}_{-}\times\vec{b}_{0}|\;q_{ \vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\\ &=\sum_{\vec{l}_{+}}(\vec{l}_{+}\times\vec{b}_{0})\;q_{\vec{b}_{0 }}\;e^{i(\vec{b}_{0},Y)}-\sum_{\vec{l}_{-}}-(\vec{l}_{-}\times\vec{b}_{0})\;q_{ \vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\\ &=\;q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\sum_{\vec{l}\in S}( \vec{l}\times\vec{b}_{0})=0.\end{split} \tag{3.57}\]
The last equality is due to the balancing condition (2.8) for the star-observable.
The general translation of the star \(S\) from \(\rho\) to \(\rho^{\prime}\) can be split into finitely many crossings of a single vector, either from \(S\) or from \(B_{X}\). Since a single crossing preserves the class of the holomorphic germ in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology, the same is true for finitely many crossings. \(\blacksquare\)
### Mirror for tropical curve observable
We can use the relation (2.10) to replace indicator functions by intersection points of rays, which gives an enumerative formulation for the holomorphic germ (3.49) of a star observable: the sum runs over the intersection points of the star \(S_{\rho}\) and the reflected fan \(-B_{X}\), with each point weighted by the cross product of the corresponding vectors and by the holomorphic function \(q_{\vec{b}}\,e^{i\langle\vec{b},Y\rangle}\).
\[\Phi^{X}_{S_{\rho}}=\sum_{p\;\in\;S_{\rho}\cap-B_{X}}|\vec{l}_{p}\times\vec{b}_{p}|\;q_{\vec{b}_{p}}\,e^{i\langle\vec{b}_{p},Y\rangle}. \tag{3.58}\]
We can generalize the formula for the holomorphic germ from the star (maximally degenerate tropical curve) to an arbitrary tropical curve (possibly of higher genus). Namely,
\[\Phi^{X}_{\Gamma}=\sum_{p\;\in\;\Gamma\cap-B_{X}}|\vec{l}_{p}\times\vec{b}_{p }|\;q_{\vec{b}_{p}}\;e^{i\langle\vec{b}_{p},Y\rangle}. \tag{3.59}\]
Each intersection point \(p\) is an intersection of a ray along \(\vec{b}_{p}\) from \(B_{X}\) and an edge of a graph, representing tropical curve \(\Gamma\) equipped with integer vector \(\vec{l}_{p}\).
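A sketch of the enumerative formula (3.58)-(3.59) is given below (Python with sympy; the helper names are ours): each ray of the star placed at \(\rho\) is intersected with the rays of the reflected fan \(-B_{X}\) at the origin, and every intersection point contributes \(|\vec{l}\times\vec{b}|\,q_{\vec{b}}\,e^{i\langle\vec{b},Y\rangle}\). For the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) fan used in section 5 and a vertical two-ray star with \(\rho^{1}<0\) it returns \(q_{1}e^{iY_{1}}\), in agreement with (5.20) below.

```python
import sympy as sp

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def rays_meet(p, l, q, m):
    """True if the rays p + t*l and q + s*m (t, s >= 0) intersect."""
    c = cross(l, m)
    if c == 0:
        return False
    d = (q[0] - p[0], q[1] - p[1])
    return cross(d, m) / c >= 0 and cross(d, l) / c >= 0

def star_germ(star_rays, rho, fan, moduli, Y):
    """Holomorphic germ (3.58): intersections of the star at rho
    with the reflected fan -B_X placed at the origin."""
    germ = 0
    for l in star_rays:
        for b, qb in zip(fan, moduli):
            if rays_meet(rho, l, (0, 0), (-b[0], -b[1])):
                germ += abs(cross(l, b)) * qb * sp.exp(sp.I * (b[0] * Y[0] + b[1] * Y[1]))
    return germ

Y1, Y2, q1, q2, q3, q4 = sp.symbols('Y1 Y2 q1 q2 q3 q4')
fan = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # fan of P^1 x P^1, cf. (5.17)
vertical = [(0, 1), (0, -1)]               # vertical 2-ray star
print(star_germ(vertical, (-1, 0.3), fan, [q1, q2, q3, q4], (Y1, Y2)))  # q1*exp(I*Y1)
```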
**Proposition**: The class of holomorphic germ \(\Phi^{X}_{\Gamma}\) in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology is independent of the moduli of tropical curve.
**Proof**: There are two types of events which can change the holomorphic germ as we change the moduli of the tropical curve:
* **Ray of \(B_{X}\) crosses vertex of \(\Gamma\)**: The change in holomorphic germ is controlled by the cones \({\rm Cone}(\vec{b},\vec{l})\) for vector \(\vec{b}\) on the ray of \(B_{X}\) and vectors \(\vec{l}\) connecting to the vertex \(V\) of \(\Gamma\). The analysis of the change is identical to the analysis we did for stars in section 3.5. In particular, the change was proportional to the balancing condition for a star. In the case of tropical curve the difference will be proportional to the balancing condition for the vertex \(V\), i.e. \[\Phi^{X}_{\Gamma}-\Phi^{X}_{\Gamma^{\prime}}=q_{\vec{b}}\:e^{i(\vec{b},Y)}\; \sum_{\vec{l}\in V}\left(\vec{l}\times\vec{b}\right)=0.\] (3.60) The last equality is due to balancing condition which holds for all vertices of tropical curve \(\Gamma\). For more details see Mikhalkin [8, 9, 10]
* **Edge of \(\Gamma\) crosses vertex of \(B_{X}\)**: The change in holomorphic germ is controlled by the cones \({\rm Cone}(\vec{b},\vec{l})\) for vector \(\vec{l}\) assigned to the edge of \(\Gamma\) and vectors \(\vec{b}\) from the fan of \(X\). The analysis of the change is identical to the analysis we did for stars in section 3.5. In particular, the change of a holomorphic germ is given by \[\Phi^{X}_{\Gamma}-\Phi^{X}_{\Gamma^{\prime}}=({\bf Q}_{W}+z{\bf G}_{-})\chi_{ \vec{l}}\,,\;\;\;\chi_{\vec{l}}\!=-i(\vec{\psi}_{\Phi}\times\vec{l}).\] (3.61)
Both events change the holomorphic germ by at most a \(({\bf Q}_{W}+z{\bf G}_{-})\)-exact term, hence preserve the cohomology class of the germ. Any change of moduli for a tropical curve is a chain of finitely many crossing events, hence the proof is complete. \(\blacksquare\)
**Example**: On the pictures below we present the intersection of the green tropical curve and the toric fan depicted in blue. Each consecutive picture describes a translation of the toric fan to the left. The first, second and third pictures describe crossings of the (vertical) ray of the fan through vertices of the curve. The holomorphic germ does not change, since the intersection points stay on the very same rays of the green curve and the cross products of the corresponding ray-vectors are the same. In the fourth and fifth pictures we observe the crossing of the blue vertex of the fan through edges of the green curve. The intersection points on the horizontal rays of the fan move from one edge of the curve to another, hence the holomorphic germ changes.
## 4 Divisor relation
### Divisor relation for Gromov-Witten invariants
Let us recall the following property of the Gromov-Witten invariants. For a hypersurface \(H\) with Poincare-dual form \(\gamma_{H}\) and classes \(\gamma_{1},..,\gamma_{n}\in H^{*}(X)\) the following relation holds
\[\langle\gamma_{H},\gamma_{1},..,\gamma_{n}\rangle_{0,\beta}^{X}=\left(\int_{ \Sigma_{\beta}}\gamma_{H}\right)\cdot\langle\gamma_{1},..,\gamma_{n}\rangle_{ 0,\beta}^{X}, \tag{4.1}\]
where \(\beta\) is the degree of curve and \(\Sigma_{\beta}\) is a curve representing class \(\beta\) in the Kahler cone of \(H_{2}(X)\).
Let us give an equivalent formulation of the divisor relation for tropical mirror of toric surfaces. The hyperplane \(H\) in 2 dimensions becomes a tropical curve. Moreover, we can turn a tropical curve into a star by shrinking the lengths of internal edges. A tropical curve and the corresponding star are in the same cohomology class on \(X\), hence without loss of generality we will assume that \(H\) is a star. Our expression (2.18) for the intersection number implies that two stars \(S,S^{\prime}\) have positive intersection number \(S\cdot S^{\prime}\). Stars form a cone under the union operation: we can take a union of two stars. Equivalently, we can add the corresponding Poincare-dual forms
\[\gamma_{S\cup S^{\prime}}=\gamma_{S}+\gamma_{S^{\prime}}. \tag{4.2}\]
Hence, we can use stars to define a Kahler cone for \(X\) and express the intersection number in (4.1) as an intersection number for two stars
\[\int_{\Sigma_{\beta}}\gamma_{H}=S_{\beta}\cdot H. \tag{4.3}\]
The weighted sum over the Gromov-Witten invariants can be written
\[\sum_{\beta}\langle\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\;q^{\beta}=\sum_ {S_{\beta}}\langle\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\;\prod_{\vec{l} \in S_{\beta}}q_{\vec{l}}\;\;, \tag{4.4}\]
where \(q_{\vec{l}}\) are toric moduli associated to the rays \(\vec{l}\) of a star \(S_{\beta}\).
### Tropical divisor relation from LGS
The tropical mirror [5] allows us to express the weighted sum in terms of correlation function in B-type HTQM
\[\sum_{\beta}q^{\beta}\langle\gamma_{S},\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}=\langle\Psi^{X}_{\gamma_{S}},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_ {n}}\rangle_{Q_{W}}. \tag{4.5}\]
We can replace the mirror state for a star observable \(\Psi^{X}_{\gamma_{S}}\) by its holomorphic germ \(\Phi_{S}\) and use the recursion relation in B-type HTQM to arrive at
\[\langle\Psi^{X}_{\gamma_{S}},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_{n}} \rangle_{Q_{W}}=\langle\Phi_{S},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_{n}} \rangle_{Q_{W_{X}}}=\frac{d}{d\epsilon}\Big{|}_{\epsilon=0}\langle\Psi^{\epsilon} _{\gamma_{1}},..,\Psi^{\epsilon}_{\gamma_{n}}\rangle_{Q_{W^{\epsilon}_{X}}}, \tag{4.6}\]
for deformed superpotential
\[W^{\epsilon}_{X}=W_{X}+\epsilon\;\Phi^{X}_{S}, \tag{4.7}\]
and deformed states
\[\Psi^{\epsilon}_{\gamma_{k}}=\Psi^{W^{\epsilon}_{X}}_{\gamma_{k}},\;\;\;k=1,..,n. \tag{4.8}\]
The holomorphic germ of the mirror state for the star observable
\[\Phi^{X}_{S_{\rho}}=\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}}|\vec{l}\times \vec{b}|\;q_{\vec{b}}\;e^{i\langle\vec{b},Y\rangle}\chi_{\vec{l},\vec{b}}(-\rho) \tag{4.9}\]
gives us the deformed superpotential
\[W^{\epsilon}_{X}(q_{\vec{b}})=\sum_{\vec{b}\in B_{X}}q_{\vec{b}}\;e^{i\langle \vec{b},Y\rangle}+\epsilon\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}}|\vec{l} \times\vec{b}|\;q_{\vec{b}}\;e^{i\langle\vec{b},Y\rangle}\chi_{\vec{l},\vec{b }}(-\rho)=W_{X}(q^{\epsilon}_{\vec{b}}). \tag{4.10}\]
The deformed superpotential has the same \(e^{i\langle\vec{m},Y\rangle}\)-terms, but with \(\epsilon\)-dependent coefficients. Hence, it describes the same toric fan \(B_{X}\), but modified toric moduli. In particular, moduli
before and after deformation are related by the multiplicative factor
\[q^{\epsilon}_{\vec{b}}=q_{\vec{b}}\left(1+\epsilon\sum_{\vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.11}\]
The recursion relation (4.6) is an equality between polynomials in \(q\), which implies equality between coefficients of corresponding monomials. Let us look at coefficients for monomial
\[q^{\beta}=\prod_{\vec{b}\in S_{\beta}}q_{\vec{b}}. \tag{4.12}\]
Note that the monomials \((q^{\epsilon})^{\beta}\) and \(q^{\beta}\) are related by the multiplicative factor
\[(q^{\epsilon})^{\beta}=\prod_{\vec{b}\in S_{\beta}}q^{\epsilon}_{\vec{b}}=q^{ \beta}\prod_{\vec{b}\in S_{\beta}}\left(1+\epsilon\sum_{\vec{l}\in S}|\vec{l} \times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.13}\]
Hence the coefficients in polynomial expansion are related by the multiplicative factor as well.
The coefficient in the expansion of the first expression in (4.6), by construction, is the tropical \((n+1)\)-point Gromov-Witten invariant, while the coefficient of the last expression in (4.6) is a multiple of the \(n\)-point Gromov-Witten invariant. In particular
\[\langle\gamma_{S},\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}=\langle\gamma _{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\left.\frac{d}{d\epsilon}\right|_{ \epsilon=0}\prod_{\vec{b}\in S_{\beta}}\left(1+\epsilon\sum_{\vec{l}\in S}| \vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.14}\]
The derivative evaluates into the intersection number for stars
\[\sum_{\vec{b}\in S_{\beta},\ \vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l}, \vec{b}}(-\rho)=S_{\beta}\cdot S. \tag{4.15}\]
Hence, we derived the divisor relation for tropical Gromov-Witten invariants on a toric surface from the recursion formula in B-type HTQM.
## 5 Mirror for selected toric surfaces
### \(\mathbb{P}^{2}\)
The compactification polytope for \(\mathbb{P}^{2}\) and the corresponding fan are presented below
(Figure: the compactification polytope and the fan for \(\mathbb{P}^{2}\).)
There are three holomorphic germs of the mirror state (5.4), labeled by three cones
\[\Phi_{S^{FS}_{-\rho}}=\left\{\begin{array}{ll}q_{3}\;e^{i\langle\vec{b}_{3},Y \rangle}=q_{3}\;e^{-iY_{1}-iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{l}_{1},\vec{ l}_{2});\\ q_{1}\;e^{i\langle\vec{b}_{1},Y\rangle}=q_{1}\;e^{iY_{1}},&\vec{\rho}\in\text{ Cone}(\vec{l}_{2},\vec{l}_{3});\\ q_{2}\;e^{i\langle\vec{b}_{2},Y\rangle}=q_{2}\;e^{iY_{2}},&\vec{\rho}\in\text{ Cone}(\vec{l}_{1},\vec{l}_{3}).\end{array}\right. \tag{5.6}\]
The enumerative description of the holomorphic germ for the star observable \(S\) is constructed from the diagrams below
All three functions represent the same cohomology class i.e
\[\Phi_{S^{FS}_{\rho}}=q_{1}\;e^{iY_{1}}=q_{2}\;e^{iY_{2}}=q_{3}\;e^{-iY_{1}-iY_ {2}}\in H^{*}(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-}). \tag{5.7}\]
Indeed, we can perform the cone crossings to determine the exact terms
\[\begin{split}&(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-})(-i \psi^{1}_{\Phi})=-i\partial_{1}W=q_{1}\;e^{iY_{1}}-q_{3}\;e^{-iY_{1}-iY_{2}},\\ &(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-})(-i\psi^{2}_{\Phi}) =-i\partial_{2}W=q_{2}\;e^{iY_{2}}-q_{3}\;e^{-iY_{1}-iY_{2}}.\end{split} \tag{5.8}\]
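These exact terms can be checked symbolically: the right-hand sides are \(-i\partial_{Y_{j}}W_{\mathbb{P}^{2}}\). A minimal sympy sketch, assuming the superpotential \(W_{\mathbb{P}^{2}}=q_{1}e^{iY_{1}}+q_{2}e^{iY_{2}}+q_{3}e^{-iY_{1}-iY_{2}}\):

```python
import sympy as sp

Y1, Y2, q1, q2, q3 = sp.symbols('Y1 Y2 q1 q2 q3')
W = q1*sp.exp(sp.I*Y1) + q2*sp.exp(sp.I*Y2) + q3*sp.exp(-sp.I*Y1 - sp.I*Y2)

# right-hand sides of (5.8) equal -i dW/dY_j
print(sp.simplify(-sp.I*sp.diff(W, Y1) - (q1*sp.exp(sp.I*Y1) - q3*sp.exp(-sp.I*Y1 - sp.I*Y2))))  # 0
print(sp.simplify(-sp.I*sp.diff(W, Y2) - (q2*sp.exp(sp.I*Y2) - q3*sp.exp(-sp.I*Y1 - sp.I*Y2))))  # 0
```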
The three possible choices of holomorphic germs give us the following deformations of toric moduli
\[\begin{split}\text{Cone}(\vec{l}_{1},\vec{l}_{2}):& \;(q_{1},q_{2},q_{3})\rightarrow(q_{1},q_{2},q_{3}(1+\epsilon)),\\ \text{Cone}(\vec{l}_{2},\vec{l}_{3}):&\;(q_{1},q_{2},q _{3})\rightarrow(q_{1}(1+\epsilon),q_{2},q_{3}),\\ \text{Cone}(\vec{l}_{1},\vec{l}_{3}):&\;(q_{1},q_{2},q _{3})\rightarrow(q_{1},q_{2}(1+\epsilon),q_{3}).\end{split} \tag{5.9}\]
The three deformations above describe the same Kahler moduli deformation. Indeed, the weight factor for the degree-d curves, written in terms of toric moduli using the star representative
\[q^{\beta}=(q_{1}q_{2}q_{3})^{d}=(q_{1}q_{2}q_{3})^{d}(1+\epsilon)^{d}=(q_{1}q _{2}q_{3})^{d}(1+d\cdot\epsilon+\mathcal{O}(\epsilon^{2})). \tag{5.10}\]
The \({\cal O}(\epsilon)\) terms in the last equality match the intersection of the degree star \(S_{\beta}=d\;S^{FS}\) with the star \(S=\epsilon\;S^{FS}\). Indeed, we can evaluate
\[S_{\beta}\cdot S=d\cdot\epsilon\;S^{FS}\cdot S^{FS}=d\cdot\epsilon. \tag{5.11}\]
In the last equality we used the self-intersection number for the Fubini-Study star
\[S^{FS}\cdot S^{FS}=\sum_{\vec{l},\;\vec{l}^{\prime}\in S^{FS}}|\vec{l}\times\vec{l}^{\prime}|\;\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1. \tag{5.12}\]
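The \(\mathcal{O}(\epsilon)\) coefficient in (5.10) can also be verified in one line; a sympy sketch, writing \(q=q_{1}q_{2}q_{3}\):

```python
import sympy as sp

eps, d, q = sp.symbols('epsilon d q', positive=True)
# coefficient of epsilon in (q*(1 + epsilon))**d, cf. (5.10)-(5.11)
print(sp.diff((q * (1 + eps))**d, eps).subs(eps, 0))   # d*q**d
```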
There are three possible germs for the mirror state of the point observable at point \(-\vec{\rho}\), labeled by three cones
\[\Phi_{-\rho}=\left\{\begin{array}{ll}|\vec{b}_{1}\times\vec{b}_{2}|\;q_{1}q _{2}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y)}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}},&\vec{ \rho}\in\mbox{Cone}(\vec{b}_{1},\vec{b}_{2});\\ |\vec{b}_{1}\times\vec{b}_{3}|\;q_{1}q_{3}\;e^{i(\vec{b}_{1}+\vec{b}_{3},Y)}= q_{1}q_{3}\;e^{-iY_{2}},&\vec{\rho}\in\mbox{Cone}(\vec{b}_{2},\vec{b}_{3});\\ |\vec{b}_{2}\times\vec{b}_{3}|\;q_{2}q_{3}\;e^{i(\vec{b}_{2}+\vec{b}_{3},Y)}= q_{2}q_{3}\;e^{-iY_{1}},&\vec{\rho}\in\mbox{Cone}(\vec{b}_{1},\vec{b}_{3}). \end{array}\right. \tag{5.13}\]
We can perform the cone crossing to derive the relations
\[\begin{split}&({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-})(-iq_{2}e^ {iY_{2}}\psi^{1}_{\Phi})=-iq_{2}e^{iY_{2}}\partial_{1}W=q_{1}q_{2}e^{iY_{1}+iY _{2}}-q_{2}q_{3}e^{-iY_{1}},\\ &({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-})(-iq_{1}e^{iY_{1}}\psi^{2}_{ \Phi})=-iq_{1}e^{iY_{1}}\partial_{2}W=q_{1}q_{2}e^{iY_{1}+iY_{2}}-q_{1}q_{3}e^ {-iY_{2}}.\end{split} \tag{5.14}\]
which imply that all three holomorphic germs are in the same class
\[q_{2}q_{3}\;e^{-iY_{1}}=q_{1}q_{3}\;e^{-iY_{2}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}} \in H^{*}({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-}). \tag{5.15}\]
Using holomorphic germs for trivial, point and hyperplane observables we can describe the tropical good section
\[\mbox{Im}\;S^{trop}_{\mathbb{P}^{2}}=\mathbb{C}\langle 1,\Phi_{S^{FS}},\Phi_{ \rho}\rangle=\mathbb{C}\langle 1,q_{3}\;e^{-iY_{1}-iY_{2}},q_{1}q_{2}\;e^{iY_{1}+iY _{2}}\rangle. \tag{5.16}\]
### \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)
The compactifying polyhedron and the fan for \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) are presented below
The generators of the fan
\[B_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=\{\vec{b}_{1}=(1,0),\vec{b}_{2}=(0,1), \vec{b}_{3}=(-1,0),\vec{b}_{4}=(0,-1)\} \tag{5.17}\]
give us the mirror superpotential (3.2) of the form
\[W^{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY_{2}}+q_{ 3}\;e^{-iY_{1}}+q_{4}\;e^{-iY_{2}}. \tag{5.18}\]
The \(H^{2}_{dR}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) is 2-dimensional and we can use the Fubini-Study forms on \(\mathbb{P}^{1}\)-factors as a basis. The tropical limit of the Fubini-Study forms is the pair of 2-ray stars: horizontal labeled by \(h\), depicted in blue and vertical, labeled by \(v\), depicted in green on the picture above. The corresponding A-model states
\[\begin{split}\Psi_{v}&=\delta(r^{1})\;\psi^{1}_{ \Phi}\psi^{1}_{R},\\ \Psi_{h}&=\delta(r^{2})\;\psi^{2}_{\Phi}\psi^{2}_{R}. \end{split} \tag{5.19}\]
The holomorphic germs are determined from the four intersections depicted below
There is a single intersection point in all four cases, so the corresponding holomorphic germs contain a single term. A straightforward evaluation gives us
\[\Phi_{v}=\left\{\begin{array}{ll}q_{1}\;e^{i\langle\vec{b}_{1},Y\rangle}=q_ {1}\;e^{iY_{1}},&\rho^{1}<0;\\ q_{3}\;e^{i\langle\vec{b}_{3},Y\rangle}=q_{3}\;e^{-iY_{1}},&\rho^{1}>0;\end{array}\right. \tag{5.20}\]
and
\[\Phi_{h}=\left\{\begin{array}{ll}q_{2}\;e^{i\langle\vec{b}_{2},Y\rangle}=q_{2}\;e^{iY_{2}},&\rho^{2}<0;\\ q_{4}\;e^{i\langle\vec{b}_{4},Y\rangle}=q_{4}\;e^{-iY_{2}},&\rho^{2}>0.\end{array}\right. \tag{5.21}\]
The cone crossing procedure gives us the relations between pairs of germs
\[\begin{split}&(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z \mathbf{G}_{-})(-i\psi_{\Phi}^{2})=-i\partial_{Y_{2}}W^{\mathbb{P}^{1}\times \mathbb{P}^{1}}=q_{2}\;e^{iY_{2}}-q_{4}\;e^{-iY_{2}},\\ &(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-})(-i \psi_{\Phi}^{1})=-i\partial_{Y_{1}}W^{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{ 1}\;e^{iY_{1}}-q_{3}\;e^{-iY_{1}}.\end{split} \tag{5.22}\]
Indeed we can see that the holomorphic germs belong to the two classes
\[\begin{split}&\Phi_{v}=q_{1}\;e^{iY_{1}}=q_{3}\;e^{-iY_{1}}\in H^{*}(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}),\\ &\Phi_{h}=q_{2}\;e^{iY_{2}}=q_{4}\;e^{-iY_{2}}\in H^{*}(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}).\end{split} \tag{5.23}\]
We can deform the mirror superpotential by the holomorphic germs of the vertical and horizontal stars, i.e.
\[W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}\to W_{\mathbb{P}^{1}\times\mathbb{P}^ {1}}+\epsilon_{v}\Phi_{v}+\epsilon_{h}\Phi_{h}+\mathcal{O}(\epsilon^{2}). \tag{5.24}\]
The choice of holomorphic germs gives us four different deformations of the toric moduli
\[\begin{split}&\rho^{1}<0,\rho^{2}<0\;:\;(q_{1},q_{2},q_{3},q_{4}) \rightarrow((1+\epsilon_{v})q_{1},(1+\epsilon_{h})q_{2},q_{3},q_{4}),\\ &\rho^{1}>0,\rho^{2}<0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( q_{1},(1+\epsilon_{h})q_{2},(1+\epsilon_{v})q_{3},q_{4}),\\ &\rho^{1}<0,\rho^{2}>0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( (1+\epsilon_{v})q_{1},q_{2},q_{3},(1+\epsilon_{h})q_{4}),\\ &\rho^{1}>0,\rho^{2}>0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( q_{1},q_{2},(1+\epsilon_{v})q_{3},(1+\epsilon_{h})q_{4}).\end{split} \tag{5.25}\]
The four deformations above describe the same Kahler moduli deformation. The degree vector \(\beta\) is two-dimensional and we will parametrize it by \(\beta=(d_{v},d_{h})\). The star basis representative for the degree is \(S_{\beta}=d_{v}S_{v}+d_{h}S_{h}\). The weight factor evaluates into
\[\begin{split} q^{\beta}&=(q_{1}q_{3})^{d_{h}}(q_{ 2}q_{4})^{d_{v}}=(q_{1}q_{3}(1+\epsilon_{v}))^{d_{h}}(q_{2}q_{4}(1+\epsilon_{h }))^{d_{v}}\\ &=(q_{1}q_{3})^{d_{h}}(q_{2}q_{4})^{d_{v}}(1+d_{h}\epsilon_{v}+d_ {v}\epsilon_{h}+\mathcal{O}(\epsilon^{2})).\end{split} \tag{5.26}\]
The \(\mathcal{O}(\epsilon)\) terms in the last equality match the intersection of the degree star \(S_{\beta}\) with the star \(S=\epsilon_{v}S_{v}+\epsilon_{h}S_{h}\). Indeed, we can evaluate
\[S_{\beta}\cdot S=d_{v}\epsilon_{v}\;S_{v}\cdot S_{v}+(d_{h}\epsilon_{v}+d_{v} \epsilon_{h})\;S_{h}\cdot S_{v}+d_{h}\epsilon_{h}\;S_{h}\cdot S_{h}=d_{h} \epsilon_{v}+d_{v}\epsilon_{h}. \tag{5.27}\]
The intersection numbers for the star observables \(S_{v}\) and \(S_{h}\) are
\[\begin{split} S_{v}\cdot S_{v}&=\sum_{\vec{l},\vec{l}^{\prime}\in S_{v}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=0,\\ S_{h}\cdot S_{h}&=\sum_{\vec{l},\vec{l}^{\prime}\in S_{h}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=0,\\ S_{v}\cdot S_{h}&=\sum_{\vec{l}\in S_{v},\ \vec{l}^{\prime}\in S_{h}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1.\end{split} \tag{5.28}\]
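These values follow from the same ray-intersection count (2.18) as before. A short standalone sketch (helper names are ours; the ray directions \((0,\pm 1)\) for \(S_{v}\) and \((\pm 1,0)\) for \(S_{h}\) are the natural reading of (5.19)), with the stars placed at generic points:

```python
def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def meet(p, l, q, m):
    c = cross(l, m)
    if c == 0:
        return False
    d = (q[0] - p[0], q[1] - p[1])
    return cross(d, m) / c >= 0 and cross(d, l) / c >= 0

def intersection(apex1, rays1, apex2, rays2):
    # weighted count of ray intersections, as in (2.18)
    return sum(abs(cross(l, m)) for l in rays1 for m in rays2
               if meet(apex1, l, apex2, m))

S_v, S_h = [(0, 1), (0, -1)], [(1, 0), (-1, 0)]
print(intersection((0, 0), S_v, (0.3, 0.7), S_v))  # 0
print(intersection((0, 0), S_h, (0.3, 0.7), S_h))  # 0
print(intersection((0, 0), S_v, (0.3, 0.7), S_h))  # 1
```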
The holomorphic germ for the point observable
\[\begin{split}\Phi_{\rho}&=q_{1}q_{2}\ \chi_{\vec{b}_{1},\vec{b}_{2}}(-\vec{\rho}\ )\ e^{iY_{1}+iY_{2}}+q_{1}q_{4}\ \chi_{\vec{b}_{1},\vec{b}_{4}}(-\vec{\rho}\ )\ e^{iY_{1}-iY_{2}}\\ &\qquad+q_{2}q_{3}\ \chi_{\vec{b}_{3},\vec{b}_{2}}(-\vec{\rho}\ )\ e^{-iY_{1}+iY_{2}}+q_{3}q_{4}\ \chi_{\vec{b}_{3},\vec{b}_{4}}(-\vec{\rho}\ )\ e^{-iY_{1}-iY_{2}}.\end{split} \tag{5.29}\]
Let us provide the exact terms for the pairs
\[\begin{split}(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z \mathbf{G}_{-})(-ie^{\pm iY_{1}}\psi_{\Phi}^{2})&=-ie^{\pm iY_{1 }}\partial_{Y_{2}}W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{2}e^{\pm iY_{1}+iY _{2}}-q_{4}e^{\pm iY_{1}-iY_{2}},\\ (\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-})( -ie^{\pm iY_{2}}\psi_{\Phi}^{1})&=-ie^{\pm iY_{2}}\partial_{Y_{1 }}W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{1}e^{iY_{1}\pm iY_{2}}-q_{3}e^{-iY _{1}\pm iY_{2}}.\end{split}\]
Using the lemma, we conclude that the holomorphic germ can be written in the form
\[\Phi_{\rho}=q_{1}q_{2}\ e^{iY_{1}+iY_{2}}=q_{2}q_{3}\ e^{-iY_{1}+iY_{2}}=q_{1} q_{4}\ e^{iY_{1}-iY_{2}}=q_{3}q_{4}\ e^{-iY_{1}-iY_{2}}\in H^{*}(\mathbf{Q}_{ \mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}).\]
The tropical good section is
\[\text{Im}\ S_{\mathbb{P}^{1}\times\mathbb{P}^{1}}^{trop}=\mathbb{C}\langle 1, \Phi_{v},\Phi_{h},\Phi_{\rho}\rangle=\mathbb{C}\langle 1,q_{1}\ e^{iY_{1}},q_{2}\ e^{iY_{2}},q_{1}q_{2 }\ e^{iY_{1}+iY_{2}}\rangle. \tag{5.30}\]
### Blow up of a point on \(\mathbb{P}^{2}\)
We can depict the blow-up of a point on \(\mathbb{P}^{2}\) by cutting a corner of the compactifying polyhedron for \(\mathbb{P}^{2}\). Similarly, the corresponding fan is a refinement of the fan for \(\mathbb{P}^{2}\).
The compactifying divisors for \(\widehat{\mathbb{P}^{2}}\) are
\[B_{\widehat{\mathbb{P}^{2}}}=\{\vec{b}_{1}=(1,0),\vec{b}_{2}=(0,1),\vec{b}_{3}=(- 1,-1),\vec{b}_{4}=(1,1)\}. \tag{5.31}\]
The mirror superpotential (3.2) is
\[W_{\widehat{\mathbb{P}^{2}}}=q_{1}\ e^{iY_{1}}+q_{2}\ e^{iY_{2}}+q_{3}\ e^{-iY_{1}-iY_{2}}+q_{4}\ e^{iY_{1}+iY_{2}}. \tag{5.32}\]
The size of the \(\mathbb{P}^{1}\) at the blow-up point is controlled by \(q_{4}\). The limit \(q_{4}\to 0\) describes a blow down of \(\widehat{\mathbb{P}^{2}}\) to \(\mathbb{P}^{2}\), while the superpotential in this limit becomes the mirror superpotential for \(\mathbb{P}^{2}\).
The second Betti number is \(\dim H^{2}(\widehat{\mathbb{P}^{2}})=2\), hence there are two independent hypersurface observables. We can choose a basis consisting of the Fubini-Study star \(S^{FS}\) of \(\mathbb{P}^{2}\) (depicted in blue) and a two-ray star \(S^{bl}\), related to the blow up, depicted in green.
The holomorphic germ for the \(S^{bl}\)-observable is
\[\Phi_{S^{bl}_{-\rho}}=\left\{\begin{array}{ll}q_{1}\ e^{i\langle\vec{b}_{1},Y\rangle}=q_{1}e^{iY_{1}},&\rho^{2}>\rho^{1}\\ q_{2}\ e^{i\langle\vec{b}_{2},Y\rangle}=q_{2}e^{iY_{2}},&\rho^{2}<\rho^{1} \end{array}\right. \tag{5.34}\]
The cone crossing relations are
\[\begin{split}(\mathbf{Q}_{\widehat{\mathbb{P}^{2}}}+z\mathbf{G}_ {-})(-i\psi^{1})&=-i\partial_{1}W_{\widehat{\mathbb{P}^{2}}}=q_{1}e^{iY_{1} }-q_{3}e^{-iY_{1}-iY_{2}}+q_{4}e^{iY_{1}+iY_{2}}\\ (\mathbf{Q}_{\widehat{\mathbb{P}^{2}}}+z\mathbf{G}_{-})(-i\psi^{2})&=-i \partial_{2}W_{\widehat{\mathbb{P}^{2}}}=q_{2}e^{iY_{2}}-q_{3}e^{-iY_{1}-iY_{ 2}}+q_{4}e^{iY_{1}+iY_{2}}\end{split} \tag{5.35}\]
Hence we can express the holomorphic germs for the line observables in the form
\[\begin{split}\Phi_{S^{FS}}=& q_{3}e^{-iY_{1}-iY_{2}}=q_{1}e ^{iY_{1}}+q_{4}e^{iY_{1}+iY_{2}}=q_{2}e^{iY_{2}}+q_{4}e^{iY_{1}+iY_{2}},\\ \Phi_{S^{bl}}=& q_{1}e^{iY_{1}}=q_{2}e^{iY_{2}}.\end{split} \tag{5.36}\]
We can deform the mirror superpotential by the holomorphic germs of the stars \(S^{FS}\) and \(S^{bl}\), i.e.
\[W_{\widehat{\mathbb{P}^{2}}}\to W_{\widehat{\mathbb{P}^{2}}}+\epsilon\; \Phi_{S^{FS}}+\epsilon_{bl}\;\Phi_{S^{bl}}+\mathcal{O}(\epsilon^{2}). \tag{5.37}\]
Hence we have six possible deformations of toric moduli depending on the choice of holomorphic germs
\[\begin{split}(q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1 +\epsilon_{bl}),q_{2},q_{3}(1+\epsilon),q_{4}),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon_{bl} )(1+\epsilon),q_{2},q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon_{bl} ),q_{2}(1+\epsilon),q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1},q_{2}(1+\epsilon _{bl}),q_{3}(1+\epsilon),q_{4}),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1},q_{2}(1+\epsilon _{bl})(1+\epsilon),q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon),q_{2},q_{3}(1+\epsilon_{bl}),q_{4}(1+\epsilon)).\end{split} \tag{5.38}\]
The six deformations above describe the same Kahler moduli deformation. The degree vector \(\beta\) is two-dimensional and we will parametrize it by \(\beta=(d,d_{bl})\). The star basis representative for the degree is \(S_{\beta}=d\;S^{FS}+d_{bl}\;S^{bl}\). The weight factor evaluates into
\[\begin{split} q^{\beta}&=(q_{1}q_{2}q_{3})^{d}(q_{ 3}q_{4})^{d_{bl}}=(q_{1}q_{2}q_{3}(1+\epsilon)(1+\epsilon_{bl}))^{d}(q_{3}q_{4 }(1+\epsilon))^{d_{bl}}\\ &=(q_{1}q_{2}q_{3})^{d}(q_{3}q_{4})^{d_{bl}}(1+d\cdot\epsilon+d_ {bl}\cdot\epsilon+d\cdot\epsilon_{bl}+\mathcal{O}(\epsilon^{2}))\end{split} \tag{5.39}\]
The \(\mathcal{O}(\epsilon)\) terms in the last equality match the intersection of the degree star \(S_{\beta}\) with the star observable \(S=\epsilon\;S^{FS}+\epsilon_{bl}\;S^{bl}\). Indeed, we can evaluate
\[\begin{split} S_{\beta}\cdot S&=d\cdot\epsilon\; S^{FS}\cdot S^{FS}+(d\cdot\epsilon_{bl}+d_{bl}\cdot\epsilon)S^{FS}\cdot S^{bl}+d_{bl} \cdot\epsilon_{bl}\;S^{bl}\cdot S^{bl}\\ &=d\cdot\epsilon+d_{bl}\cdot\epsilon+d\cdot\epsilon_{bl}.\end{split} \tag{5.40}\]
The intersection numbers for the star observables \(S^{FS}\) and \(S^{bl}\) are
\[\begin{split} S^{bl}\cdot S^{bl}&=\sum_{\vec{l},\vec{l}^{\prime}\in S^{bl}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=0,\\ S^{FS}\cdot S^{FS}&=\sum_{\vec{l},\vec{l}^{\prime}\in S^{FS}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1,\\ S^{FS}\cdot S^{bl}&=\sum_{\vec{l}\in S^{FS},\ \vec{l}^{\prime}\in S^{bl}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1.\end{split} \tag{5.41}\]
The holomorphic germ for the point observable at the point \(\vec{\rho}\) is given by four possible expressions, labeled by four cones
\[\Phi_{\rho}=\left\{\begin{array}{ll}q_{1}q_{2}\ e^{iY_{1}+iY_{2}}+q_{2}q_{4} \ e^{iY_{1}+2iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{b}_{2},\vec{b}_{4});\\ q_{1}q_{2}\ e^{iY_{1}+iY_{2}}+q_{1}q_{4}\ e^{2iY_{1}+iY_{2}},&\vec{\rho}\in \text{Cone}(\vec{b}_{1},\vec{b}_{4});\\ q_{1}q_{3}\ e^{-iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{b}_{1},\vec{b}_{3}); \\ q_{2}q_{3}\ e^{-iY_{1}},&\vec{\rho}\in\text{Cone}(\vec{b}_{2},\vec{b}_{3}). \end{array}\right. \tag{5.42}\]
The tropical good section
\[\text{Im}\ S^{trop}_{\widehat{\mathbb{P}}^{2}}=\mathbb{C}\langle 1,\Phi_{ \rho},\Phi_{S^{FS}},\Phi_{S^{bl}}\rangle=\mathbb{C}\langle 1,q_{2}q_{3}\ e^{-iY_{1} },q_{3}\ e^{-iY_{1}-iY_{2}},q_{1}\ e^{iY_{1}}\rangle. \tag{5.43}\]
## 6 Recursion for point observables
The holomorphic germs for hypersurface observables and point observables are quite similar. Both are linear combinations of finitely many factors \(e^{i\langle\vec{m},Y\rangle}\), with a minor difference: in the case of line observables the vectors \(\vec{m}=\vec{b}\) belong to the fan \(B_{X}\) of \(X\), while in the case of the point observable \(\vec{m}=\vec{b}+\vec{b}^{\prime}\) is the sum of two vectors \(\vec{b},\vec{b}^{\prime}\in B_{X}\) from the fan of \(X\). The deformation of the superpotential by such holomorphic germs
\[W_{X}\to W_{X}^{\epsilon}=W_{X}+\epsilon\Phi_{P}=\sum_{\vec{b}\in B_{X}}q_{ \vec{b}}\ e^{i\langle\vec{b},Y\rangle}+\sum_{\vec{b},\vec{b}^{\prime}\in B_{X} }c_{\vec{b}\vec{b}^{\prime}}\ e^{i\langle\vec{b}+\vec{b}^{\prime},Y\rangle} \tag{6.1}\]
in some cases can be thought of as a superpotential for a different toric variety \(X_{\epsilon}\), defined by the extension of the fan \(B_{X}\) by the vectors \(\vec{b}+\vec{b}^{\prime}\) for each non-zero \(c_{\vec{b}\vec{b}^{\prime}}\).
An extension of the fan by a sum of two vectors in some cases describes a blow up of a point in a toric variety. The simplest example of such a phenomenon is the blow up of a point on \(\mathbb{P}^{2}\). Indeed, the fan for \(\widehat{\mathbb{P}^{2}}\) is an extension of the fan for \(\mathbb{P}^{2}\) by adding a vector \(\vec{b}_{4}=\vec{b}_{1}+\vec{b}_{2}\), as shown in the picture below.
In the rest of this section we provide an explicit example of the superpotential deformation by the holomorphic germ for the point observable and discuss potential implications for the tropical Gromov-Witten invariants.
### Recursion for point observables on \(\mathbb{P}^{2}\)
Let us consider a 4-point tropical Gromov-Witten invariant: the number of tropical curves (of degree 1 and genus 0) passing through two distinct points \(P_{1},P_{2}\) and two hypersurfaces \(H_{3},H_{4}\) in \(\mathbb{P}^{2}\). We can use the divisor relation to express the 4-point Gromov-Witten invariant via 3- and 2-point invariants
\[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1} ^{\mathbb{P}^{2}}=1\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}} \rangle_{d=1}^{\mathbb{P}^{2}}=1\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}} \rangle_{d=1}^{\mathbb{P}^{2}}=1. \tag{6.2}\]
Below we provide the enumerative proof of the relation (6.2). The tropical hypersurfaces \(H_{3}\) and \(H_{4}\) are 3-valent stars depicted in green and black. From the picture we observe that both stars always intersect the tropical curve of degree 1, depicted in blue, at a single point. Hence the 4-point invariant is determined by the tropical curves passing through the points \(P_{1},P_{2}\), and there is only one such curve.
We can express the 4-point Gromov-Witten invariant via B-model correlation function
\[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{ \mathbb{P}^{2}}=q_{1}q_{2}q_{3}\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}}, \gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1}^{\mathbb{P}^{2}}=\langle\Psi_{1}, \Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}} \tag{6.3}\]
of four mirror states
\[\Psi_{1}=\Psi_{P_{1}}^{W},\ \ \Psi_{2}=\Psi_{P_{2}}^{W},\ \ \Psi_{3}=\Psi_{H_{3}}^{W},\ \ \Psi_{4}=\Psi_{H_{4}}^{W}. \tag{6.4}\]
We can use the invariance of B-model correlation functions discussed in [5]
\[\langle\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}+Q_{W}\chi\rangle_{Q_{W}}=\langle \Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}} \tag{6.5}\]
to replace the mirror state \(\Psi_{2}\) by its holomorphic germ. In particular, let us choose the holomorphic germ in the form
\[\Phi_{2}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}, \tag{6.6}\]
to rewrite the B-model correlation function in the following form
\[\langle\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}=\langle\Psi_{1}, \Phi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}. \tag{6.7}\]
We can use the recursion formula from [5] to express the 4-point function as a derivative of a 3-point function
\[\langle\Psi_{1},\Phi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}=\frac{d}{d\epsilon} \Big{|}_{\epsilon=0}\langle\Psi_{1}^{\epsilon},\Psi_{3}^{\epsilon},\Psi_{4}^{ \epsilon}\rangle_{Q_{W^{\epsilon}}} \tag{6.8}\]
in B-model with deformed superpotential
\[W_{\mathbb{P}^{2}}^{\epsilon}=W_{\mathbb{P}^{2}}+\epsilon\;\Phi_{2}=q_{1}\;e^ {iY_{1}}+q_{2}\;e^{iY_{2}}+q_{3}\;e^{-iY_{1}-iY_{2}}+\epsilon q_{1}q_{2}\;e^{ iY_{1}+iY_{2}}=W_{X_{\epsilon}}. \tag{6.9}\]
The deformed superpotential is the mirror superpotential for the different toric manifold \(X_{\epsilon}=\widehat{\mathbb{P}^{2}}\). The polytopes for \(\mathbb{P}^{2}\) and \(X_{\epsilon}\) are depicted below
We showed in [5] that the deformed mirror states in the correlation function (6.8) are mirror states in the deformed theory, i.e.
\[\Psi^{\epsilon}_{\alpha}=\Psi^{W}_{\alpha}+2\pi K_{W}G_{-}\mu_{2}(\Psi^{W}_{ \alpha},\epsilon\;\Phi_{2})=\Psi^{W^{\epsilon}}_{\alpha}. \tag{6.10}\]
Hence we can represent the 3-point function \(\langle\Psi^{\epsilon}_{1},\Psi^{\epsilon}_{3},\Psi^{\epsilon}_{4}\rangle_{Q_{ W_{\epsilon}}}\) as the sum of A-model amplitudes for three observables \(P_{1},H_{3},H_{4}\) in the HTQM for \(X^{\epsilon}=\widehat{\mathbb{P}^{2}}\) and then convert them into tropical Gromov-Witten invariants on \(X^{\epsilon}=\widehat{\mathbb{P}^{2}}\). Namely
\[\langle\Psi^{\epsilon}_{P_{1}},\Psi^{\epsilon}_{H_{3}},\Psi^{\epsilon}_{H_{4} }\rangle_{Q_{W^{\epsilon}}}=\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4 }}\rangle^{\widehat{\mathbb{P}^{2}}}. \tag{6.11}\]
\(X_{\epsilon}=\widehat{\mathbb{P}^{2}}\) is a toric space, hence the Gromov-Witten invariant is a polynomial in toric moduli \(q_{1},q_{2},q_{3}\) and new module \(q_{4}(\epsilon)=\epsilon q_{1}q_{2}\). The \(\mathbb{P}^{2}\)-invariants are polynomials in Kahler module \(q=q_{1}q_{2}q_{3}\).
The \(\widehat{\mathbb{P}^{2}}\)-invariants are polynomials in the Kahler moduli \(q,q_{bl}\), where \(q_{bl}=q_{4}q_{3}\) is an additional Kahler module on \(\widehat{\mathbb{P}^{2}}\). Hence we can expand
\[\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}=\sum_{d,d_{bl}=0}^{\infty}q^{d}q_{bl}^{d_{bl}}\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{d,d_{bl}} \tag{6.12}\]
The product \(q_{bl}=q_{4}q_{3}\) is a Kahler module of \(\widehat{\mathbb{P}^{2}}\) associated to the size of the blow-up \(\mathbb{P}^{1}\). The derivative at \(\epsilon=0\) picks up monomials, linear in \(\epsilon\), hence linear in \(q_{4}(\epsilon)\) and Kahler module \(q_{bl}\), i.e.
\[\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}=\frac{d}{d\epsilon}q_{bl} \cdot\sum_{d=0}^{\infty}q^{d}\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{ 4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{d,d_{bl}=1}=q\langle\gamma_{P_{1}}, \gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{0,d_{bl}=1}, \tag{6.13}\]
where we used
\[\frac{d}{d\epsilon}q_{bl}=q_{3}\frac{d}{d\epsilon}q_{4}=q_{1}q_{2}q_{3}=q \tag{6.14}\]
and the degree selection argument. The dimension of moduli space of tropical curves of bi-degree \((d,d_{bl})\) on \(\widehat{\mathbb{P}^{2}}\) with 3 marked points should be equal to the total degree of three observables, which implies that
\[3d+2d_{bl}+2=\sum_{\alpha=1}^{3}\deg\gamma_{\alpha}=4. \tag{6.15}\]
Hence the Gromov-Witten invariant \(\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}\) is non-zero only for bi-degree \((d,d_{bl})=(0,1)\), the unique non-negative integer solution of \(3d+2d_{bl}=2\).
The result of this procedure relates the 4-point degree-1 Gromov-Witten invariant on \(\mathbb{P}^{2}\) to the 3-point invariant on \(\widehat{\mathbb{P}^{2}}\) of bi-degree \((0,1)\), i.e.
\[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1}^ {\mathbb{P}^{2}}=\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d =0,d_{bl}=1}^{\widehat{\mathbb{P}^{2}}}. \tag{6.16}\]
### Enumerative description of recursion
We can give an enumerative interpretation of the relation (6.16) as a _cutting corners procedure_ for tropical Gromov-Witten invariants. In case the point \(P_{2}\) is close to the corner formed by the hyperplanes supported on \(\vec{b}_{1}\) and \(\vec{b}_{2}\), the tropical Gromov-Witten invariant \(\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\mathbb{P}^{2}}\) is supported by the diagram below. Let us cut the corner together with the part of the tropical curve and the marked point \(P_{2}\). The result of the cutting is the polyhedron for \(\widehat{\mathbb{P}^{2}}\) with a tropical curve and the remaining three observables: the point \(P_{1}\) and the two hyperplanes \(H_{3}\) and \(H_{4}\).
The remaining tropical curve is a curve of bi-degree \((d,d_{bl})=(0,1)\). The moduli space of such a curve is \(\mathbb{R}^{1}\times S^{1}\): the radial part corresponds to the parallel translation of the curve, as shown in the picture below
The three remaining observables completely fix the moduli hence there exists a tropical Gromov-Witten invariant on \(\widehat{\mathbb{P}^{2}}\), which counts the number of degree-\((0,1)\) tropical curves \(\Gamma\) which pass through the point \(P_{1}\) and two hyperplanes \(H_{3}\) and \(H_{4}\). Both hyperplanes \(H_{3}\) and \(H_{4}\) have degree-\((1,0)\) and intersect all curves \(\Gamma\) with intersection numbers \(H_{3}\cdot\Gamma=H_{4}\cdot\Gamma=1\)
hence we can reduce the original problem to counting curves \(\Gamma\) through point \(P_{1}\). There is a unique such tropical curve, hence
\[\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=0,d_{bl}=1}^{ \widehat{\mathbb{P}^{2}}}=1. \tag{6.17}\]
We can apply the cutting corners procedure to other tropical Gromov-Witten invariants. For example, we can consider degree-2 curves through 5 distinct points on \(\mathbb{P}^{2}\). Among the 4 distinct tropical curves of degree-2, let us consider the one presented below.
We performed a single corner cut to reduce the number of marked points to 4 and changed the target to \(\widehat{\mathbb{P}^{2}}\). We can continue the procedure and cut one more corner to reduce the number of points to 3. There are two possible cuts (up to an isomorphism) that we can perform:
* **two points are far away**: The resulting polytope describes the toric variety \(X_{\epsilon_{1}\epsilon_{2}}\) which is a blow up of \(\mathbb{P}^{2}\) at two points. In particular we have a network of blow down maps \(\pi_{1},\pi_{2}:X_{\epsilon_{1}\epsilon_{2}}\to\widehat{\mathbb{P}^{2}}\) which can be applied in any order. The cycles which are pre-images of blow up points do not intersect.
* **two points are nearby**: The resulting polyhedron describes the toric variety \(X_{\epsilon_{1}\epsilon_{2}}\) which is a bi-rational transformation of \(\mathbb{P}^{2}\) obtained by two consecutive blow-ups. We have a single chain of blow down maps.
### Double deformation and contact terms
Let us give a detailed description of the geometry for two cuts of \(\widehat{\mathbb{P}^{2}}\). We can use the polytopes to construct the corresponding fans
However, in order to describe the toric moduli \(q_{1},...,q_{5}\) in terms of the deformation parameters \(\epsilon_{1},\epsilon_{2}\) and the toric moduli of the base \(\mathbb{P}^{2}\), we need to construct the corresponding mirror superpotentials. Both superpotentials are \(\epsilon_{2}\)-deformations of the superpotential (6.9), i.e.
\[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W^{\epsilon_{1}}_{\mathbb{P}^{2} }+\epsilon_{2}\;\Phi_{4}^{\epsilon_{1}}. \tag{6.18}\]
where \(\Phi_{4}^{\epsilon_{1}}\) is the holomorphic germ on \(\widehat{\mathbb{P}^{2}}\). Expressing the holomorphic germs on \(\widehat{\mathbb{P}^{2}}\) through the holomorphic germs on \(\mathbb{P}^{2}\), the double deformed superpotential takes the form
\[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W_{\mathbb{P}^{2}}+\epsilon_{1} \;\Phi_{5}+\epsilon_{2}\;\Phi_{4}+\epsilon_{1}\epsilon_{2}\;C^{trop}_{W}(\Phi_ {4},\Phi_{5}). \tag{6.19}\]
The second equality describes a double deformation of the superpotential by a pair of holomorphic functions. The \(\epsilon_{1}\epsilon_{2}\)-term is the tropical contact term defined in section 3.3.
The two cutting corners cases correspond to the different choices of the holomorphic germs \(\Phi_{4},\Phi_{5}\) for point observable on \(\mathbb{P}^{2}\). We can use our analysis from section 5.3 for holomorphic germs to perform the superpotential analysis.
* **two points are far away**: The holomorphic germ for \(P_{4}\) observable is the same for \(\mathbb{P}^{2}\) and \(\widehat{\mathbb{P}^{2}}\) \[\Phi_{3}^{\epsilon_{1}}=\Phi_{3}=q_{1}q_{3}\;e^{-iY_{2}}\] (6.20) hence the double deformation of the mirror superpotential is \[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W^{\epsilon_{1}}_{\mathbb{P}^{2 }}+\epsilon_{2}\Phi_{3}^{\epsilon_{1}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY_{2}}+q_ {3}\;e^{-iY_{1}-iY_{2}}+\epsilon_{1}q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+\epsilon_{2} q_{1}q_{3}\;e^{-iY_{2}}.\] (6.21) There is no quadratic terms in \(\epsilon\) in our expression hence we expect that the contact term between \(\Phi_{3}\) and \(\Phi_{5}\) vanishes. Indeed the product \(\Phi_{3}\Phi_{5}\) is in image of good section (5.16) \[\Phi_{5}\cdot\Phi_{3}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}\cdot q_{1}q_{3}\;e^{-iY_{2 }}=q_{1}^{2}q_{2}q_{3}\;e^{iY_{1}}\in\text{Im}\;S^{trop}_{\mathbb{P}^{2}},\] (6.22)
hence contact term between two deformations is trivial, i.e. \[C_{W}^{trop}(\Phi_{5},\Phi_{3})=C_{W}^{trop}(e^{iY_{1}+iY_{2}},e^{-iY_{2}})=0.\] (6.23)
* **two points are nearby**: The holomorphic germs \[\Phi_{4}^{\epsilon_{1}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+q_{4}(\epsilon_{1})q_{2} \;e^{iY_{1}+2iY_{2}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+\epsilon_{1}q_{1}q_{2}^{2} \;e^{iY_{1}+2iY_{2}}\] (6.24) \[\Phi_{4}=\Phi_{4}^{\epsilon_{1}}\Big{|}_{\epsilon_{1}=0}=q_{1}q_{2}\;e^{iY_{1} +iY_{2}}\] (6.25) gives us a mirror superpotential \[W_{\mathbb{P}^{2}}^{\epsilon_{1}\epsilon_{2}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY _{2}}+q_{3}\;e^{-iY_{1}-iY_{2}}+(\epsilon_{1}+\epsilon_{2})q_{1}q_{2}\;e^{iY_{ 1}+iY_{2}}+\epsilon_{1}\epsilon_{2}q_{1}q_{2}^{2}\;e^{iY_{1}+2iY_{2}}.\] (6.26) Note that the \(\epsilon_{1}\) and \(\epsilon_{2}\) enter symmetrically. The quadratic term is a contact term for two (identical) deformations \[C_{W}^{trop}(\Phi_{5},\Phi_{4}) =C_{W}^{trop}(q_{1}q_{2}\;e^{iY_{1}+iY_{2}},q_{1}q_{2}\;e^{iY_{1}+ iY_{2}})\] \[=\mathbf{G}_{-}\mathbf{\Sigma}_{W}(\Phi_{4}\Phi_{5}-S_{W}\pi_{W}( \Phi_{4}\Phi_{5}))=\mathbf{G}_{-}(q_{1}^{2}q_{2}\;e^{2iY_{1}+iY_{2}}i\psi_{ \Phi}^{2})\] (6.27) \[=e^{2iY_{1}+iY_{2}}.\] We used \[\pi_{W}(\Phi_{4}\Phi_{5})=\pi_{W}(q_{1}^{2}q_{2}^{2}\;e^{2iY_{1}+2iY_{2}})=q_{ 1}^{2}q_{2}q_{3}\;e^{iY_{1}}\] (6.28) and \[\Phi_{4}\Phi_{5}-S_{W}\pi_{W}(\Phi_{4}\Phi_{5})=q_{1}^{2}q_{2}^{2}\;e^{2iY_{1 }+2iY_{2}}-q_{1}^{2}q_{2}q_{3}\;e^{iY_{1}}=\mathbf{Q}_{W}(q_{1}^{2}q_{2}e^{2iY _{1}+iY_{2}}i\psi_{\Phi}^{2}).\] (6.29)
We explicitly checked that the two ways (6.18) and (6.19) of constructing the double deformed mirror superpotential for \(\mathbb{P}^{2}\) give identical results when we use the tropical good section (5.16) for the contact terms.
### Conclusion and open questions
We described the cutting corners procedure and its application to 4- and 5-point tropical Gromov-Witten invariants on \(\mathbb{P}^{2}\). It is reasonable to conjecture that the cutting corners relation (6.16) for the 4-point correlation function generalizes to \(n\)-point functions
\[\langle\gamma_{1},\gamma_{2},...,\gamma_{n},\gamma_{P}\rangle_{d}^{\mathbb{P}^{2}}=\langle\gamma_{1},\gamma_{2},...,\gamma_{n}\rangle_{d-1,d_{bl}=1}^{\widehat{\mathbb{P}^{2}}}. \tag{6.30}\]
In our examples we cut up to two corners, but we conjecture that the procedure can be iterated. If so, then we can repeat the cutting corners procedure until we are down to a three-point correlation function, which we can evaluate using the residue formula in Landau-Ginzburg-Saito theory. In particular, it would be interesting to perform five corner cuts to evaluate the first nontrivial Gromov-Witten invariant: the 12 degree-3, genus-0 curves passing through 8 generic points on \(\mathbb{P}^{2}\).
There is a famous isomorphism between the blow up of two points on \(\mathbb{P}^{2}\) and the blow up of one point on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Such relation implies that the iterated cutting corners procedure after first few steps will give us the same toric spaces. Hence, we can use this relation as consistency check of the tropical Gromov-Witten invariants evaluation through the cutting corners procedure.
In case we can repeat the cutting corners procedure indefinitely, we can use it to give a non-perturbative definition of the Gromov-Witten invariants for point observables, in a way similar to what was done for the hyperplane observables.
### Acknowledgments
We are grateful to Yasha Neiman for many discussions on the topics presented in this paper. The work of A.L. is supported by the Wu Wen-Tsun Key Lab of Mathematics. The work of V.L. is supported by the Quantum Gravity Unit of the Okinawa Institute of Science and Technology Graduate University (OIST).
| ```
trópico mirrors と複雑なtoric surfaceについての記述をします。特に、ミラー状態を明示的に表現し、それらを Enumerative form に書けます。そのHolomorphic germ は Landau-Ginzburg-Saito theory の良い部分の明確な形を与えます。私たちはHolomorphic germ の明確な形を利用して、Tropcal Gromov-Witten invariants の割り当て関係を導き出します。点観察の変形を理論の解釈とします。それはtoric surface の点の吹付けと等しくなります。この解釈の影響をTropcal Gromov-Witten invariants に対する記述します。
```
2309.16194 | Thin current sheets in the magnetotail at lunar distances: statistics of
ARTEMIS observations | The magnetotail current sheet's spatial configuration and stability control
the onset of magnetic reconnection - the driving process for magnetospheric
substorms. The near-Earth current sheet has been thoroughly investigated by
numerous missions, whereas the midtail current sheet has not been adequately
explored. This is especially the case for the long-term variation of its
configuration in response to the solar wind. We present a statistical analysis
of 1261 magnetotail current sheet crossings by the Acceleration, Reconnection,
Turbulence and Electrodynamics of Moon's Interaction with the Sun (ARTEMIS)
mission orbiting the moon (X~-60 RE), collected during the entirety of Solar
Cycle 24. We demonstrate that the magnetotail current sheet typically remains
extremely thin, with a characteristic thickness comparable to the thermal ion
gyroradius, even at such large distances from Earth's dipole. We also find that
a substantial fraction (~one quarter) of the observed current sheets have a
partially force-free magnetic field configuration, with a negligible
contribution of the thermal pressure and a significant contribution of the
magnetic field shear component to the pressure balance. Further, we quantify
the impact of the changing solar wind driving conditions on the properties of
the midtail around the lunar orbit. During active solar wind driving
conditions, we observe an increase in the occurrence rate of thin current
sheets, whereas quiet solar wind driving conditions seem to favor the formation
of partially force-free current sheets. | S. R. Kamaletdinov, A. V. Artemyev, A. Runov, V. Angelopoulos | 2023-09-28T06:32:35 | http://arxiv.org/abs/2309.16194v1 | # Thin current sheets in the magnetotail at lunar distances: statistics of ARTEMIS observations
###### Abstract
We present a statistical analysis of magnetotail current sheets collected by the ARTEMIS mission for 11 years of observations at \(\sim 60\) R\({}_{E}\) tail
We observe a large population (\(\sim 56\%\)) of ion-kinetic scale current sheets and a smaller population of partially force-free current sheets (\(\sim 24\%\))
We show that the occurrence rates of intense current sheets and partially force-free current sheets correlate with the solar wind parameters | Magnetotail電流シートの空間配置と安定性が磁力再結合の発生を制御し、その駆動過程である磁気圏外への substorm。近地点電流シートは多数のミッションによって十分に調査されているが、中地点電流シートは十分に調査されておらず、特に太陽風からの長期的な変動に対応する配置の調査は不足している。この調査は、月周回軌道を持つAcceleration, Reconnection, Turbulence and Electrodynamics of Moon's Interaction with the Sun (ARTEMIS)ミッションから収集された、太陽周期24中の1261回の磁気尾電流シートの crossings を対象とした統計的分析に基づいている。この分析の結果、磁気尾電流シートは、地球の偶力からの距離に関係なく、典型的に非常に薄い状態であることが示された。また、観察された電流シートの多くは、その磁場配置が部分的に力freeな状態である |
2301.13816 | Execution-based Code Generation using Deep Reinforcement Learning | The utilization of programming language (PL) models, pre-trained on
large-scale code corpora, as a means of automating software engineering
processes has demonstrated considerable potential in streamlining various code
generation tasks such as code completion, code translation, and program
synthesis. However, current approaches mainly rely on supervised fine-tuning
objectives borrowed from text generation, neglecting unique sequence-level
characteristics of code, including but not limited to compilability as well as
syntactic and functional correctness. To address this limitation, we propose
PPOCoder, a new framework for code generation that synergistically combines
pre-trained PL models with Proximal Policy Optimization (PPO) which is a widely
used deep reinforcement learning technique. By utilizing non-differentiable
feedback from code execution and structure alignment, PPOCoder seamlessly
integrates external code-specific knowledge into the model optimization
process. It's important to note that PPOCoder is a task-agnostic and
model-agnostic framework that can be used across different code generation
tasks and PLs. Extensive experiments on three code generation tasks demonstrate
the effectiveness of our proposed approach compared to SOTA methods, achieving
significant improvements in compilation success rates and functional
correctness across different PLs. | Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, Chandan K. Reddy | 2023-01-31T18:02:26 | http://arxiv.org/abs/2301.13816v4 | # Execution-based Code Generation using Deep Reinforcement Learning
###### Abstract
The utilization of programming language (PL) models, pretrained on large-scale code corpora, as a means of automating software engineering processes has demonstrated considerable potential in streamlining various code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting specific sequence-level features of code, including but not limited to compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new framework for code generation that combines pretrained PL models with Proximal Policy Optimization (PPO) deep reinforcement learning and employs execution feedback as the external source of knowledge into the model optimization. PPOCoder is transferable across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our proposed approach compared to SOTA methods, improving the success rate of compilation and functional correctness over different PLs. Our code can be found at [https://github.com/reddy-lab-code-research/PPOCoder](https://github.com/reddy-lab-code-research/PPOCoder).
## 1 Introduction
Recent years have seen a surge of attention towards the use of deep learning and neural language models to automate code generation and other software engineering processes, as a means to enhance developer productivity. The software development process encompasses a variety of code generation tasks, including code completion (Code2Code) [19], code translation (Code2Code) [46], and program synthesis (NL2Code) [20]. Inspired by the great performance of pre-trained neural language models (LMs) in different natural language processing (NLP) tasks, these pretraining techniques have been recently employed on large-scale code corpuses to automate code generation tasks. Examples of such pretrained models include CodeBERT [11], CodeGPT [23], PLABRT [1], and CodeT5 [40]. However, the code domain faces some unique challenges. For example, given that the generated code is intended for machine execution as opposed to human comprehension, it is imperative that the generated code maintains syntactic and functional correctness, i.e., being able to pass compilation and unit tests.
Despite the advancements of pretrained code models, they are heavily influenced by NLP's self-supervised masked language modeling (MLM) and often struggle to ensure the syntactic and functional correctness of the generated codes. Authors of [9] have shown that up to 70% of codes generated by these models can be non-compilable. To improve code generation towards syntactic and functional correctness, several approaches are followed: \((i)\) filtering and repairing the non-compilable synthesized programs [17], \((ii)\) using energy-based generation models with compilability constraints [16], and \((iii)\) using reinforcement learning (RL) finetuning mechanisms [38, 44, 18]. However, existing approaches are often tailored to a specific programming language (PL) or task and are not easily transferable to other different code generation tasks and PLs. To tackle this challenge, we propose **PPOCoder**, illustrated in Fig.1, a PPO-based RL framework for code generation that employs compiler feedback (i.e., syntactic or functional correctness) as the external source of knowledge in model optimization. PPOCoder utilizes the PPO [34] algorithm for RL optimization which is based on
Figure 1: An overview of the proposed PPOCoder framework. The actor and critic networks are first initialized from the pretrained PL model for the desired task. Following the sampling of a synthetic program from the stochastic policy, the reward is determined using the execution feedback and the ground truth target code. The values are estimated by the critic network. Finally, both actor and critic networks are updated based on the obtained values and returns.
the proximal actor-critic advantage policy gradient objective and a trust region mechanism, making the model optimization more stable and less sensitive to new environments (tasks or datasets). Also, PPOCoder integrates discrete compiler feedback with the syntactic and semantic matching scores between the generated codes and executable targets. This integration reduces the sparsity of the reward function, leading to a better guidance of the policy to generate code that is more closely aligned with the correct targets. To control explorations and prevent large deviations from the distributions learned by the pretrained PL model, PPOCoder incorporates the KL-divergence penalty. This penalty helps to reduce the chance of memorization, which is often caused by the cross-entropy loss in previous approaches during pretraining and finetuning, resulting in a more controlled and efficient exploration that can generalize well to different code generation tasks and PLs. To summarize, the major contributions of this paper are as follows:
* We present a PPO-based RL framework for code generation, PPOCoder, that utilizes compiler feedback (i.e., syntactic or functional correctness) as the external source of knowledge in model optimization. PPOCoder provides a more stable and generalizable model optimization that is less sensitive to new environments (tasks, PLs, or datasets).
* We develop a new reward function based on the discrete compiler feedback (compilation or unit test signal when available) received at the end of the generation episode as well as the syntactic and semantic matching scores between the AST sub-trees and DFG edges of the sampled generations and the correct targets.
* We reduce the chance of memorization by incorporating a KL-divergence penalty into reward instead of a cross-entropy loss used in earlier works to control explorations and prevent deviations from the pretrained model.
* We demonstrate the effectiveness of PPOCoder through an extensive set of experiments across diverse code generation tasks (code completion, code translation, code synthesis) and PLs (C++, Java, Python, C#, PHP, C). PPOCoder outperforms the SOTA baselines, improving the compilation rate and functional correctness over different PLs. We also investigate the benefits of PPOCoder's reward elements and PPO optimization through ablation study.
The organization of the remainder of this paper is as follows: In Section 2, existing code generation methods utilizing pretrained models, structure-based approaches, and RL methods for sequence generation are summarized. Section 3 delves into the specifics of our proposed PPOCoder method, including its various components. The experimental evaluation of our method on three code generation tasks: code completion, code translation, and program synthesis tasks, as well as the ablation study and case study, can be found in Section 4. Finally, the paper concludes in Section 5.
## 2 Related Work
### Pretrained Models for Code Generation
Recent research has focused on using pretrained neural language models (LMs) in natural language processing (NLP) to automate code generation tasks using large-scale code corpus data from open-source repositories [23, 43, 25]. Notable examples of these pretrained models include CodeBERT [11] with encoder-only, CodeGPT [23] with decoder-only, as well as PLBART [1] and CodeT5 [40] with encoder-decoder transformer architectures. However, these pretrained PL models tend to rely heavily on self-supervised MLM for text generation and still struggle to ensure the syntactic and functional correctness of the generated codes.
### Leveraging Structure in Code Generation
Recently, there has been a growing interest in incorporating logical constructs such as abstract syntax trees (ASTs) [15, 29, 39], code sketches [26], and data-flow graphs (DFGs) [42, 12]. For example, GraphCodeBERT [12] uses DFGs to incorporate semantic information, but its decoder is completely unaware of the code structures. StructCoder [36] introduces a pretrained structure-aware encoder-decoder architecture. Despite these efforts, many code generation models still struggle to ensure the syntactic and functional correctness of the generated codes.
### RL for Sequence Generation
RL has been used to optimize non-differentiable metrics in sequence generation tasks [31, 3], such as using the REINFORCE [41] algorithm to improve BLEU [27] and ROUGE [21] scores in translation and summarization models. Unlike text generation, code generation requires not only syntactic but also functional correctness as the generated code must pass compilation and unit tests for machine execution. Recently, execution-guided approaches [7, 10, 8] and RL-based finetuning mechanisms [38, 44, 18] are used to enhance the quality of generated codes. For example, [18] has recently studied the integration of RL with unit test signals in the finetuning of the program synthesis models. However, existing RL-based methods still encounter several limitations. They are often designed for a particular task (e.g., only program synthesis) or a particular PL (e.g., only Python), receive a sparse and discrete compiler signal only at the end of the generation episode, and are susceptible to memorization and poor performance on unseen data due to the use of cross-entropy loss with the policy gradient objective in the RL optimization. Our model, PPOCoder, makes the RL framework transferable to diverse code generation tasks and PLs by incorporating a PPO-based framework that integrates compiler feedback with the syntactic and semantic matching scores in the reward and utilizes a KL-divergence penalty to prevent large deviations, while reducing the chance of memorization.
## 3 PPOCoder
PPOCoder provides a systematic mechanism for finetuning code generation models using deep reinforcement learning (RL) by effectively and efficiently incorporating compiler feedback as extra knowledge into the model optimization, thereby enhancing the quality of the generated codes in terms of code-specific sequence-level features such as syntactic and functional correctness. Fig. 2 shows the general structure of our proposed PPOCoder model with the policy network (actor) \(\pi_{\theta}\) responsible for code generation actions and the value
function (critic) \(V_{\pi}\) responsible for the return estimations. They are both learned with the proximal policy optimization (PPO) approach taking reward \(\mathcal{R}\). As shown in Fig. 2, the total reward is composed of four elements: (\(i\)) compiler feedback; (\(ii\)) syntactic match score; (\(iii\)) semantic match score; and (\(iv\)) KL-divergence penalty. We provide further details about each of these components in the subsections below.
### Problem Formulation
The code generation procedure can be formulated as a sequential discrete finite-horizon Markov Decision Process (MDP) with the use of RL in which an agent interacts with the compiler over discrete horizon \(T\) which is equivalent to the maximum number of generated code tokens. The proposed PPOCoder is formulated as follows:
**State \(\mathcal{S}\):** The state of environment at each time-step, denoted as \(s_{t}=(\hat{y}_{<t},x),s_{t}\in\mathcal{S}\), is determined by the source PL/NL data \(x\), as well as the set of generated tokens before \(t\), \(\hat{y}_{<t}\).
**Action \(\mathcal{A}\):** The PL model chooses the action at each time-step, denoted as \(a_{t}=\hat{y}_{t},a_{t}\in\mathcal{A}\), which is equivalent to the generated token at time-step \(t\).
**Policy \(\pi_{\theta}(a_{t}|s_{t})\):** The stochastic policy network parameterized by \(\theta\) is the downstream code generation model that predicts the next token conditioned on the previously generated tokens and the source data, so, \(\pi_{\theta}(\hat{y}_{t}|\hat{y}_{<t},x):\mathcal{S}\rightarrow\Delta( \mathcal{A})\) where \(\Delta(\mathcal{A})\) denotes the probability distribution over all actions (e.g., target vocabulary). The next action \(\hat{y}_{t}\) will be decided based on the _top-k_ sampling from this probability distribution. Policy is initialized with the pretrained reference PL model \(\rho\), i.e., \(\pi_{\theta}^{0}(.)=\rho\).
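As an illustration of the action step described above, a minimal top-\(k\) sampling routine is sketched below (tensor shapes and names are our assumptions, not the authors' released code); it returns the sampled token id together with its log-probability under the renormalized top-\(k\) distribution.

```python
import torch

def sample_next_token(logits, k=5, temperature=1.0):
    """Sample the next-token action from the policy's logits via top-k sampling.

    logits: 1-D tensor of unnormalized scores over the target vocabulary.
    Returns (token_id, log_prob) for the sampled action.
    """
    topk_vals, topk_ids = torch.topk(logits / temperature, k)
    probs = torch.softmax(topk_vals, dim=-1)        # renormalize over the top-k tokens
    idx = torch.multinomial(probs, num_samples=1)   # draw one action
    return topk_ids[idx].item(), torch.log(probs[idx]).item()
```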
**Reward \(\mathcal{R}\):** The reward \(\mathcal{R}(\hat{y},x,y)\) will be obtained at the end of the generation episode (i.e., after generating the \(\langle endoftokens\rangle\) token) based on the generated code's syntactic and functional correctness as well as its alignment with executable codes. The reward function \(\mathcal{R}(.)\) is composed of different components which are explained in Section 3.2.
**Advantage \(\hat{A}_{\pi}^{t}\):** Inspired by the Generalized Advantage Estimator (GAE) [33], the advantage at time-step \(t\) is defined as follows.
\[\hat{A}_{\pi}^{t}= \delta_{t}+\gamma\delta_{t+1}+\ldots+\gamma^{T-t+1}\delta_{T-1}, \tag{1}\] \[\delta_{t}= r_{t}-V_{\pi}(\hat{y}_{<t},x)+\gamma V_{\pi}(\hat{y}_{<t+1},x),\]
where \(\gamma\) is the discount rate; \(r_{t}\) is the reward at time-step \(t\); and \(V_{\pi}(s_{t})\) is the state value function at \(t\) which can be approximated by a dense token-level value head on top of the hidden states of PL model.
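For concreteness, the advantage of Eq. (1) can be computed with the usual generalized-advantage recursion; the sketch below is our own illustration (with \(\lambda=1\) it reduces to the discounted sum of TD residuals written above).

```python
import numpy as np

def gae_advantages(rewards, values, gamma=1.0, lam=1.0):
    """Advantage estimates for one generation episode (Eq. (1)).

    rewards: per-token rewards r_0..r_{T-1}.
    values:  value estimates V(s_0)..V(s_T); the final entry bootstraps the
             value after the last token (use 0.0 for a finished episode).
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual delta_t
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```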
**Objective:** The objective of PPOCoder is to find a policy that
Figure 2: Overview of the PPOCoder with actor and critic models. The action is sampled from the policy based on the given source data \(x\) (NL or PL). Then, a reward is obtained for each action to guide and control policy updates. The reward function is composed of four elements: (\(a\)) compiler feedback; (\(b\)) syntactic matching score based on ASTs; (\(c\)) semantic matching score based on DFGs; and (\(d\)) KL-divergence penalty between the active policy and the reference pretrained model. The critic model estimates value based on the obtained reward, and PPOCoder will be optimized with PPO, which takes into account both value and policy optimization.
maximizes the expected reward of generated codes sampled from the policy.
\[\max_{\theta}\mathbb{E}_{x\sim\mathcal{X},\hat{y}\sim\pi_{\theta}(.|x)}\big{[} \mathcal{R}(\hat{y},x,y)\big{]}, \tag{2}\]
where \(\mathcal{X}\) is the training set of source data; \(\pi_{\theta}(.)\) is the policy network; and \(\mathcal{R}(.)\) is the reward function. We formulate the objective function as a maximization of the advantage instead of reward, as shown in Eq. (3), in order to reduce the variability of predictions.
\[\max_{\theta}\mathbb{E}_{x\sim\mathcal{X},\hat{y}\sim\pi_{\theta}(.|x)}\left[ \sum_{t=0}^{T}\hat{A}_{\pi}^{t}\big{(}(\hat{y}_{<t},x),\hat{y}_{t}\big{)}\right], \tag{3}\]
We adopt the policy gradient to estimate the gradient of non-differentiable reward-based objectives in Eqs. (2) and (3). Therefore, updating policy parameters for a given source data \(x\) can be derived as:
\[\max_{\theta}\mathcal{L}_{\theta}^{PG}=\max_{\theta}\mathbb{E}_{ \hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left(log\pi_{\theta}(\hat{y}_{t} |\hat{y}_{<t},x)\;\hat{A}_{\pi}^{t}\right)\right], \tag{4}\] \[\text{where}\;\;\nabla_{\theta}\mathcal{L}_{\theta}^{PG}=\mathbb{ E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=1}^{T}\left(\nabla_{\theta}log\pi_{ \theta}(\hat{y}_{t}|\hat{y}_{<t},x)\;\hat{A}_{\pi}^{t}\right)\right], \tag{5}\]
where \(\nabla_{\theta}\mathcal{L}_{\theta}^{PG}\) refers to the estimated gradient of objective function based on the policy parameterized by \(\theta\). In order to further reduce the variations and avoid significantly changing the policy at each iteration, the objective function in Eq. (4) will be reformulated as shown in Eq. (6), called the conservative policy iteration.
\[\mathcal{L}_{\theta}^{CPI}= \mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left( \frac{log\pi_{\theta}(\hat{y}_{t}|\hat{y}_{<t},x)}{log\pi_{\theta_{old}}(\hat{ y}_{t}|\hat{y}_{<t},x)}\;\hat{A}_{\pi}^{t}\right)\right] \tag{6}\] \[= \mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left(c_{ \pi}^{t}(\theta)\;\hat{A}_{\pi}^{t}\right)\right],\]
where \(\theta_{old}\) is the policy parameters before the update; and \(c_{\pi}^{t}(\theta)\) is the ratio of log-probabilities from new and old policies.
### Reward Function
Figure 2 illustrates that the reward of PPOCoder is composed of four different components which are designed to guide and control actions simultaneously towards generating more executable codes. These components are designed due to (1) the sparsity of compiler feedback which is only received at the end of code generation episode; and (2) the high chance of policy divergence from the pretrained PL models. (check Section 4.4 for the reward ablation results). Eq. (7) shows the combination of these different reward terms in the final reward vector \(\mathcal{R}(\hat{y},x,y)\in\;\mathbb{R}^{T}\) with \(T\) as the generation episode length.
\[\mathcal{R}(\hat{y},x,y)=\{r_{t}:t=1,\ldots,T\}, \tag{7}\] \[r_{t}=\mathbb{1}(cond)\Big{[}R_{cs}(\hat{y})+R_{ast}(\hat{y},y)+R_{dfg}(\hat{y},y)-\beta R_{kl}(x,\hat{y}_{<t})\Big{]}+\mathbb{1}\left(\neg cond\right)\big{[}-\beta R_{kl}(x,\hat{y}_{<t})\big{]},\] \[cond=(\hat{y}_{t}==\langle endoftokens\rangle)\]
where \(r_{t}\) is the combined reward at time-step \(t\); \(R_{cs}(.)\), \(R_{ast}(.)\), and \(R_{dfg}(.)\) are the compiler signal, syntactic match score, and semantic match score reward terms, respectively. Note that these terms are received only at the end of the generation episode, where \(\hat{y}_{t}==\langle endoftokens\rangle\). The term \(R_{kl}(x,\hat{y}_{<t})\) is a KL-divergence penalty between the reference pretrained model and the active policy, which is imposed on the reward at each time-step to control actions. \(\beta\) is the penalty coefficient that balances the combination of the different reward terms.
**Compiler Signal**
For each source data \(x\), we sample multiple generated codes in the target language based on the current policy network, \(\hat{y}\sim\pi_{\theta}(.|x)\). Then, we pass these sampled codes \(\hat{y}\) to a compiler and determine the reward based on the parsing signal. In case unit tests are available for the source data, the reward is determined by the functional correctness of generated codes, i.e., passing all unit tests, as shown in Eq. (8). If unit tests are not provided, compiler returns the syntactic correctness of generated codes (i.e., compilable or non-compilable) as shown in Eq. (9). This reward term is designed to guide the model to take actions which can generate higher quality codes in terms of syntactic/functional correctness.
_Functional Correctness:_
\[R_{cs}(\hat{y})=\begin{cases}+1\;\;,\;\text{if}\;\hat{y}\text{ passed all unit tests}\\ -0.3,\;\text{if}\;\hat{y}\text{ failed any unit test}\\ -0.6,\;\text{if}\;\hat{y}\text{ received RunTime error}\\ -1\;\;,\;\text{if}\;\hat{y}\text{ received Compile error}\end{cases} \tag{8}\]
_Syntactic Correctness:_
\[R_{cs}(\hat{y})=\begin{cases}+1,\text{if}\;\hat{y}\text{ passed compilation test}\\ -1,\text{otherwise}\end{cases} \tag{9}\]
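A minimal sketch of how Eqs. (8)-(9) can be mapped to a scalar reward is shown below; the outcome labels are our own illustration of the possible compiler/unit-test results, not an interface from the authors' code.

```python
def compiler_signal_reward(outcome, has_unit_tests):
    """Map an execution outcome to the compiler-signal reward of Eq. (8) or Eq. (9)."""
    if has_unit_tests:  # functional correctness, Eq. (8)
        return {"pass": 1.0,            # all unit tests passed
                "test_failure": -0.3,   # ran but failed a unit test
                "runtime_error": -0.6,  # parsed/compiled but crashed at runtime
                "compile_error": -1.0}[outcome]
    # syntactic correctness only, Eq. (9)
    return 1.0 if outcome == "compiled" else -1.0
```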
**Syntactic Matching Score**
Since the compiler signal alone is too sparse, we also add additional information to better control and guide the structure of policy samples. To do so, we define a syntactic matching score \(R_{ast}(\hat{y},y)\) between the generated hypothesis \(\hat{y}\sim\pi_{\theta}(.|x)\) and the parallel executable target \(y\). The goal is to maximize this matching score for better compilability or syntactic correctness. We use the abstract syntax tree (AST) to find a tree representation of the code's abstract syntax structure. Then, we compare the sub-trees extracted from the hypothesis and reference target ASTs, respectively, and calculate the syntactic match score as a percentage of matched AST sub-trees.
\[R_{ast}(\hat{y},y)=Count(AST_{\hat{y}}\cap AST_{y})/Count(AST_{y}) \tag{10}\]
where \(Count(AST_{\hat{y}}\cap AST_{y})\) is the number of matched AST sub-trees between the hypothesis \(\hat{y}\) and reference \(y\); and \(Count(AST_{y})\) is the total number of reference AST sub-trees. This score can assess the syntactic quality of code since the differences between ASTs can be affected by syntactic issues such as token missing and data type errors.
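To make Eq. (10) concrete, the snippet below counts matched sub-tree shapes for Python code with the standard `ast` module; this is a simplified stand-in (the paper targets several PLs, so its actual parser and sub-tree definition may differ).

```python
import ast
from collections import Counter

def subtree_signatures(code):
    """Multiset of sub-tree shapes: (node type, tuple of child node types)."""
    sigs = Counter()
    for node in ast.walk(ast.parse(code)):
        children = tuple(type(c).__name__ for c in ast.iter_child_nodes(node))
        sigs[(type(node).__name__, children)] += 1
    return sigs

def ast_match_score(hypothesis, reference):
    """Fraction of reference AST sub-trees matched by the hypothesis (Eq. (10))."""
    try:
        hyp = subtree_signatures(hypothesis)
    except SyntaxError:
        return 0.0  # an unparsable hypothesis matches nothing
    ref = subtree_signatures(reference)
    matched = sum((hyp & ref).values())  # multiset intersection
    return matched / max(sum(ref.values()), 1)
```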
**Semantic Matching Score**
To improve the functional correctness, we need to also take into account the semantic matching between hypothesis \(\hat{y}\) and the executable target \(y\), in addition to their syntactic matching. In PLs, code semantics are closely related to the dependencies of its variables. As a result, in order to construct a semantic matching score, we make use of the data-flow graphs (DFGs), a graph representation of code in which the nodes stand in for variables and the edges for the sources of each variable's values. We denote DFG of a code \(Y\) as \(\mathcal{G}(Y)=(V;E)\) where \(V=\{v_{1},\ldots,v_{m}\}\) is the set of variables, and \(e_{i,j}=\langle v_{i},v_{j}\rangle\) is the \(i\to j\) edge showing that value of the \(j\)-th variable originates from the \(i\)-th variable. Then, we calculate the semantic match score as a percentage of matched data-flows in DFGs.
\[R_{dfg}(\hat{y},y)=Count(\mathcal{G}(\hat{y})\cap\mathcal{G}(y))/Count( \mathcal{G}(y)) \tag{11}\]
where \(Count(\mathcal{G}(\hat{y})\cap\mathcal{G}(y))\) represents the number of matched DFG edges between hypothesis \(\hat{y}\) and reference \(y\); and \(Count(\mathcal{G}(y))\) represents the total number of reference DFG edges. Maximizing this score can guide and control policy to generate codes which are more aligned with executable target codes in terms of variable relations, thus, enhancing the semantic quality and logical correctness of the generated codes.
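Assuming the data-flow edges have already been extracted for both programs (e.g., as pairs of variable occurrences), Eq. (11) reduces to a simple edge-set overlap, sketched below with our own function names.

```python
def dfg_match_score(hyp_edges, ref_edges):
    """Fraction of reference data-flow edges present in the hypothesis (Eq. (11)).

    Each edge is a hashable pair such as (source_variable, target_variable).
    """
    ref = set(ref_edges)
    if not ref:
        return 1.0  # no reference edges to match
    return len(set(hyp_edges) & ref) / len(ref)
```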
**KL-Divergence Constraint**
We incorporate a negative KL-divergence penalty \(KL(\pi||\rho)\) into the reward to prevent the active policy \(\pi\) deviating away from the pretrained PL model \(\rho\). The KL-penalty at time \(t\) can be approximated as:
\[R_{kl}\left(x,\hat{y}_{<t}\right)= KL\left(\pi||\rho\right)\approx\log\frac{\pi\left(.|x,\hat{y}_{<t} \right)}{\rho\left(.|x,\hat{y}_{<t}\right)} \tag{12}\] \[= \log\left(\pi\left(.|x,\hat{y}_{<t}\right)\right)-\log\left(\rho \left(.|x,\hat{y}_{<t}\right)\right)\]
where \(\log\left(\pi\left(.|x,\hat{y}_{<t}\right)\right)\) and \(log\left(\rho\left(.|x,\hat{y}_{<t}\right)\right)\) are the log-probabilities obtained from the active policy \(\pi\) and pretrained model \(\rho\) at time \(t\) given source data \(x\) and the previously predicted tokens \(\hat{y}_{<t}\). This reward term can control actions and play the role of entropy bonus in controlling exploration and exploitation where greater \(\beta\) in Eq. (7) provides less exploration and more exploitation.
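Putting the pieces of Eq. (7) and Eq. (12) together, the per-token reward can be assembled as below; this is a schematic of our own (tensor names, the value of \(\beta\), and the use of PyTorch are assumptions).

```python
import torch

def shaped_rewards(policy_logprobs, ref_logprobs, terminal_reward, beta=0.1):
    """Combine the dense KL penalty (Eq. (12)) with the terminal scores (Eq. (7)).

    policy_logprobs, ref_logprobs: log pi and log rho of the sampled tokens, shape (T,).
    terminal_reward: R_cs + R_ast + R_dfg obtained at the end of the episode.
    """
    kl = policy_logprobs - ref_logprobs          # approximate per-token KL term
    rewards = -beta * kl                         # dense penalty at every step
    rewards[-1] = rewards[-1] + terminal_reward  # sparse feedback at <endoftokens>
    return rewards
```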
### Loss Function
We employ proximal policy optimization (PPO) [34] and define the loss function of PPOCoder as follows.
\[\mathcal{L}_{\theta}=-\mathcal{L}_{\theta}^{CPI}+\alpha\mathcal{L}_{\theta}^{VF} \tag{13}\] \[\mathcal{L}_{\theta}^{CPI}=\mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\min\left(c_{\pi}^{t}(\theta)\hat{A}_{\pi}^{t},\;clip\left(c_{\pi}^{t}(\theta),1-\epsilon,1+\epsilon\right)\hat{A}_{\pi}^{t}\right)\right] \tag{14}\] \[\mathcal{L}_{\theta}^{VF}=\mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left(V_{\pi}(\hat{y}_{<t},x)-\left(\hat{A}_{\pi}^{t}+V_{\pi_{old}}(\hat{y}_{<t},x)\right)\right)^{2}\right] \tag{15}\]
where the loss function \(\mathcal{L}_{\theta}\) is the linear combination of the surrogate policy objective function \(\mathcal{L}_{\theta}^{CPI}\) and the value function squared error term \(\mathcal{L}_{\theta}^{VF}\). Therefore, minimizing the loss function leads to the maximization of the surrogate advantage policy objective (actor optimization) as well as the minimization of the value error (critic optimization). In other words, the actor is guided to maximize the advantage policy objective, which is correlated with maximizing the expected reward as explained in Eqs. (4)-(6); and the critic is enforced to minimize the token-level value estimation error, which is defined based on the difference between the values of the new policy \(V_{\pi}(\hat{y}_{<t})\) and the estimated dense returns of the old policy \(\hat{A}_{\pi}^{t}+V_{\pi_{old}}(\hat{y}_{<t})\). In Eqs. (13)-(15), \(\epsilon\) is the proximal policy ratio clip range, and \(\alpha\) is the linear combination weight between the loss terms of the actor and the critic.
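For reference, the clipped objective of Eqs. (13)-(15) in its standard PPO form can be written as follows (a sketch under our own naming; the exact token/batch reduction in the released implementation may differ). Here `returns` corresponds to \(\hat{A}_{\pi}^{t}+V_{\pi_{old}}\) from Eq. (15).

```python
import torch

def ppo_loss(new_logprobs, old_logprobs, advantages, values, returns,
             clip_eps=0.2, alpha=0.5):
    """Clipped surrogate policy loss plus value-function error (Eqs. (13)-(15))."""
    ratio = torch.exp(new_logprobs - old_logprobs)            # probability ratio c_t
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()       # -L^CPI
    value_loss = (values - returns).pow(2).mean()             # L^VF
    return policy_loss + alpha * value_loss
```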
Algorithm 1 provides the pseudocode of PPOCoder. For each source-target pair \((x,y)\), we sample multiple translated hypotheses from the policy network \(\hat{y}\sim\pi_{\theta}(.|x)\). After generating each hypothesis, we find the integrated reward based on the reward function defined in Section 3.2, estimate the advantage, calculate the corresponding PPO loss function, and update the policy and value head parameters based on the final gradients (as shown in lines 5-19).
## 4 Experiments
We evaluate PPOCoder on three different code generation tasks: \((i)\)_Code Completion_ automatically completes partial Python code snippets; \((ii)\)_Code Translation_ involves translating between any language-pair among six different PLs (Python, Java, C#, C++, PHP, C); and \((iii)\)_Program Synthesis_ (NL2Code) generates a Python function given a natural language (NL) description.
### Code Completion
For this downstream task, we employ the Python corpus in CodeSearchNet (CSN) 1[14]. We extract \(50\)k compilable Python methods with sufficient length (at least 64 tokens) and randomly split the data to train/val/test sets with \(40\)k\(/5\)k\(/5\)k samples. We mask the last 25 tokens of the source code and ask the model to complete it. To evaluate the quality of generated codes, three metrics are used: \((i)\)_Exact Match_ (xMatch) which checks if the prediction is the same as the ground truth, \((ii)\)_Levenshtein Edit Similarity_ (Edit Sim) [23, 35] which measures the number of single-character edits needed to match the generated code with the correct target, and \((iii)\)_Compilation Rate_ (Comp Rate) [17] that shows the success rate of compilation among completed programs. Since unit tests are not provided, we focus on the syntactic correctness
of the completed codes and take the compiler signal as reward.
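For reference, the Levenshtein edit similarity can be computed as one minus the edit distance normalized by the longer sequence; whether the benchmark uses exactly this normalization is our assumption.

```python
def edit_similarity(prediction, reference):
    """Levenshtein edit similarity in [0, 1]: 1 - edit_distance / max(len)."""
    m, n = len(prediction), len(reference)
    if max(m, n) == 0:
        return 1.0
    row = list(range(n + 1))  # distances for the previous DP row
    for i in range(1, m + 1):
        prev_diag, row[0] = row[0], i
        for j in range(1, n + 1):
            cur = row[j]
            cost = 0 if prediction[i - 1] == reference[j - 1] else 1
            row[j] = min(row[j] + 1,        # deletion
                         row[j - 1] + 1,    # insertion
                         prev_diag + cost)  # substitution / match
            prev_diag = cur
    return 1.0 - row[n] / max(m, n)
```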
Table 1 shows the results of PPOCoder along with the baselines on the code completion task. In this table, the BiLSTM [24] and Transformer [37] models are not pretrained. The GPT-2 [30] model was pretrained on text corpus, while CodeGPT [23] and CodeT5 [40] models are pretrained on the large-scale source code corpus. The reported results for these pretrained models are after the finetuning step on the code completion task. More details of the experimental setup are provided in Appendix A.1 It can be observed that CodeGPT and CodeT5 have a compilation rate of \(46.84\) and \(52.14\), respectively, indicating that about half of the generated codes are not compilable. By employing our proposed PPOCoder framework on the finetuned CodeT5 model (PPOCoder + CodeT5), the compilation rate improves significantly from \(52.14\) to \(97.68\), demonstrating the importance of incorporating compiler feedback into the model's optimization and the effectiveness of PPOCoder in code completion. We can also see that the PPOCoder performs similarly to other SOTA models in terms of Edit sim and xMatch scores, showing that the actor model effectively explores without deviating much from the pretrained model distributions.
### Code Translation
We use the XLCoST 2[45] dataset for the code translation task, which is a parallel dataset that includes solutions for problems related to data structures and algorithms in six languages: C++, Java, Python, PHP, C, and C#. In our experiments, we only use the compilable filtered parallel data in source and target language pairs. Table 6 in Appendix A.2 shows the detailed statistics of these compilable filtered samples across all six PLs. To evaluate the quality of translated codes, we use two metrics: \((i)\) _Comp Rate_, which measures the compilation success rate, and \((ii)\) _CodeBLEU_ [32] score, which combines the weighted BLEU [28] based on code-related keywords with the syntactic and semantic alignment measures. As unit tests are not available for parallel language pairs, we focus on syntactic correctness with the help of the compiler signal.
Footnote 2: [https://github.com/reddy-lab-code-research/XLCoST](https://github.com/reddy-lab-code-research/XLCoST)
Table 2 presents the results of PPOCoder on code translation along with the baselines. In this table, column and row headers represent the translation source and target PLs, respectively. The Naive Copy baseline [23] simply copies the source code as the output, showing how similar two PLs are. The reported results of pretrained CodeBERT and PLBART are after finetuning on the code translation task for each language pair. The experimental setup and implementation details are provided in Appendix A.1 Table 2 demonstrates that incorporating our proposed PPOCoder +CodeT5 improves the overall compilation rate across all language pairs, in comparison to the SOTA baseline CodeT5. Specifically, we observe an absolute increase of \(9.92\%\), \(22.22\%\), \(21.62\%\), \(13.20\%\), \(7.46\%\), and \(6.11\%\) in the compilation rate for C++, Java, Python, C#, PHP, and C target PLs, respectively. PPOCoder also obtains a comparable CodeBLEU score to other baselines, meaning that it does not deviate a lot from the pretrained code fluency distribution. Among high-resource languages, results show relatively greater compilation rate improvements for Python and Java as target PL. This is likely due to their high-level constructs, such as the absence of pointers and memory management constructs, which can be a source of errors in languages like C++ and C#. Additionally, Java and Python feature a more lenient compilation process and extensive runtime error checking, resulting in many errors that would cause C++ and C# compilation to fail, being detected only at runtime. The table shows a significantly lower compilation rate for code translation with C as target PL among all baselines. This is likely due to the limited number of samples with C as a target PL in the dataset (as shown in Table 6 in Appendix A.2).
### Program Synthesis
In this task, we use the APPS [13] dataset comprising \(10\)k coding problems of varying difficulty levels, split 50/50 for
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & _xMatch_ & _Edit Sim_ & _Comp Rate_ \\ \hline BiLSTM & 20.74 & 55.32 & 36.34 \\ Transformer & 38.91 & 61.47 & 40.22 \\ GPT-2 & 40.13 & 63.02 & 43.26 \\ CodeGPT & 41.98 & 64.47 & 46.84 \\ CodeT5 & 42.61 & 68.54 & 52.14 \\ PPOCoder + CodeT5 & **42.63** & **69.22** & **97.68** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on the code completion task for completing the last 25 masked tokens from CodeSearchNet.
train/test sets. The dataset consists of Introductory, Interview, and Competition level problems with respective train/test samples of 2639/1000, 2000/3000, and 361/1000. Each problem has \(23\) Python solutions and \(21\) unit tests on average. To evaluate the generated codes, we employ the _pass@k_ metric [6] which calculates the percentage of problems for which all unit tests are passed using \(k\) synthetically generated programs per problem. Since unit tests are provided in APPS, we use them in the PPOCoder's reward (as defined in Eq. (8)).
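The _pass@k_ values reported below are typically computed with the unbiased estimator introduced for Codex-style evaluation [6]; a minimal implementation is sketched here for reference.

```python
import math

def pass_at_k(n, c, k):
    """Unbiased pass@k given n generated samples of which c pass all unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```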
Table 3 demonstrates the results of program synthesis on the APPS dataset along with other baselines reported in [13] including GPT-2 [30], GPT-3 [5], GPT-Neo [4], Codex [6], AlphaCode [20] and CodeRL [18]. The reported results for various models are post-finetuning on APPS, except for GPT-3 and Codex. For the experimental setup details of all methods, please refer to Appendix A.1 The results indicate that the smaller encoder-decoder architecture of CodeT5 outperforms larger models, and PPOCoder with CodeT5 further improves performance, surpassing even larger pretrained LMs such as GPTs. As demonstrated in Table 3, PPOCoder +CodeT5 exhibits comparable or even superior _pass@k_ performance than CodeRL+CodeT5, another RL-based finetuning mechanism for program synthesis.
To further evaluate the generalizability of these models, the zero-shot performance of the APPS finetuned models was examined on the MBPP [2] program synthesis benchmark, which is a collection of 974 short (one sentence) problems, each including 1 correct Python solution and 3 corresponding unit tests. Table 4 shows the results of program synthesis on the MBPP benchmark. Both RL-based methods, PPOCoder +CodeT5 and CodeRL+CodeT5, finetuned on APPS, exhibit remarkable zero-shot performance on MBPP with a _pass@80_ of \(68\%\) and \(63\%\), respectively, surpassing even the largest GPT-137B's performance of \(61.4\%\). As observed in Table 4, the proposed PPOCoder +CodeT5 outperforms CodeRL+CodeT5 on MBPP by a significant margin of \(5.2\%\). This can be attributed to two factors. Firstly, CodeRL integrates the supervised cross-entropy loss into the RL policy gradient objective to maintain consistency in performance and prevent deviation from the pretrained model distribution. However, over-optimization of the supervised cross-entropy on synthetic data increases the chance of memorization on the training data and leads to inferior performance on unseen data. PPOCoder regulates deviation by employing the KL-divergence penalty for generation instead of the supervised cross-entropy loss. This can reduce the likelihood of memorization, resulting in improved generalizability on the MBPP benchmark. Secondly, CodeRL utilizes the actor-critic algorithm with the REINFORCE policy gradient objective, while PPOCoder employs the PPO algorithm with the actor-critic advantage policy gradient objective and a trust region mechanism to ensure minimal deviation from the previous policy. This leads to a more stable and generalizable model optimization for new environments (tasks or datasets).
### Ablation Study
To investigate the effect of different components of PPOCoder, we conduct ablation experiments with several variants of our model, including different reward terms, RL objective terms, action space size, and the number of synthetic samples. We take the Java-Python translation as a case study and present the results in Fig. 3. Please check Appendix A.3 for more ablation experiments with other target PLs.
**Reward Elements.** Fig. 3(a) shows the effect of including different reward terms in the performance of PPOCoder. Models tested include CodeT5 without RL training, and with
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{High Resource} & \multicolumn{4}{c}{Low Resource} & \multicolumn{4}{c}{Overall} \\ \cline{2-13} \multicolumn{1}{c}{\multirow{-2}{*}{Model}} & \multicolumn{2}{c}{C++} & \multicolumn{2}{c}{Java} & \multicolumn{2}{c}{Python} & \multicolumn{2}{c}{C\#} & \multicolumn{2}{c}{PHP} & \multicolumn{2}{c}{C} & \multicolumn{2}{c}{C} & \multicolumn{2}{c}{C\#} \\ \cline{2-13} \multicolumn{1}{c}{} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} & \multicolumn{1}{c}{_CodeRLLU_} & \multicolumn{1}{c}{_CompRule_} \\ \hline \multicolumn{13}{c}{Nava Copy} & – & – & 42.56 & 30.28 & 17.81 & 37.7 & 47.28 & 37.25 & 19.38 & 5.21 & 5.394 & 4.62 & 38.08 & 12.22 \\ \multirow{-13}{*}{C++} & CodeRLBERT & – & – & 62.56 & 37.12 & 36.41 & 26.72 & 67.12 & 38.52 & 38.77 & 12.23 & 21.84 & 2.34 & 45.34 & 23.38 \\ & PIBART & – & – & 71.23 & 44.51 & 60.99 & 45.92 & **47.44** & 51.66 & 62.35 & 53.63 & 52.76 & 36.22 & 66.03 & 46.42 \\ & Cadex & – & – & 80.17 & 59.0 & 72.83 & 53.33 & 73.11 & 60.31 & 67.47 & 68.21 & 71.44 & 71.92 & 62.46 \\ & PPOCoder + CodeT5 & – & & **81.14** & **70.33** & **74.03** & **63.38** & 72.93 & **69.18** & **68.24** & **80.62** & 64.21 & **79.03** & **72.11** & **73.28** \\ \hline \multirow{13}{*}{Java} & Naive Copy & 52.32 & 14.50 & – & – & 36.51 & 22.16 & 69.08 & 41.55 & 39.91 & 2.10 & 54.18 & 2.10 & 50.39 & 16.38 \\ & CodeBERT & 69.21 & 30.21 & – & – & 45.41 & 43.51 & 74.86 & 55.01 & 48.33 & 10.72 & 19.53 & 0 & 51.28 & 27.89 \\ & PLBERT & 72.41 & 47.12 & – & – & 70.31 & 53.79 & 76.19 & 45.75 & 64.06 & 21.47 & 46.21 & 72.22 & 65.52 & 35.67 \\ & CadexT5 & 78.59 & 59.81 & – & – & 75.98 & 60.61 & 83.14 & 70.66 & 63.54 & 61.67 & 67.41 & 67.89 & 79.18 & 64.73 \\ & PPOCoder + CodeT5 & **79.14** & **82.80** & – & – & **76.65** & **92.14** & **85.66** & **86.09** & **64.16** & **90.88** & 60.52 & **81.66** & **73.22** & **86.95** \\ \hline \multirow{13}{*}{PHOCoder} & 37.41 & 21.47 & 39.72 & 17.27 & – & – & 38.52 & 10.71 & 43.91 & 16.84 & 35.11 & 0 & 38.93 & 13.26 \\ & CodeBERT & 69.93 & 42.15 & 45.76 & 38.10 & – & – & 40.23 & 26.10 & 52.12 & 31.74 & 18.32 & 0 & 45.07 & 27.62 \\ \cline{1-1} & PHOBERT & 74.49 & 61.20 & 63.23 & 54.59 & – & – & 67.35 & 44.65 & 69.86 & 66.76 & 39.15 & 6.12 & 62.93 & 46.66 \\ \cline{1-1} & CadexT5 & 79.86 & 74.11 & 74.15 & 62.74 & – & – & 75.54 & 58.26 & **79.83** & **80.56** & **56.38** & 70.81 & **73.24** & **69.19** \\ \cline{1-1} & POOCoder + CodeT5 & **80.34** & **87.72** & **71.62** & **92.70** & – & **76.99** & **83.33** & 79.65 & **93.51** & 52.15 & **95.80** & 72.67 & **90.81** \\ \cline{1-1} & Naive Copy & 45.14 & 10.74 & 17.61 & 13.14 & 400.09 & – & – & 37.79 & 42.14 & 601.77 & 42.52 & 80.36 \\ \cline{1-1} & CodeBERT & 74.51 & 18.02 & 81.25 & 27.88 & 50.83 & 37.05 &
RL training utilizing different combinations of reward terms: _cs_ (compiler feedback), _kl_ (KL-divergence penalty), _dfg_ (semantic matching score from DFGs), and _ast_ (syntactic matching score from ASTs). Results show that the discrete compiler feedback alone is insufficient; however, integrating it with the KL-divergence penalty as well as the syntactic/semantic matching scores boosts the compilation rate. The best performance is achieved by utilizing all four reward terms.
**Loss Elements.** Fig. 3(b) represents the results of PPOCoder with different objective configurations. We observe that the policy gradient objective alone (_+PG_), i.e., the REINFORCE algorithm, can boost the performance of the CodeT5 model. The compilation rate further improves by introducing the value function as critic (_+PG+VF_), i.e., A2C algorithm. Results show that the best performance is achieved by utilizing proximal conservative policy iteration with value optimization (_+CPI+VF_), indicating that the PPO algorithm performs superior to others on code generation.
**Action Space Size.** We examine the effectiveness of action space size on PPOCoder's performance by adjusting the \(k\) parameter in the \(top-k\) policy synthetic sampling. Fig. 3(c) shows that when \(k=1\), PPOCoder may not be able to have enough exploration for the better possible policy updates. On the other hand, when \(k\) gets too large, PPOCoder may become overwhelmed by many different possible actions and struggle to learn the optimal policy, leading to degraded performance. Therefore, results reveal that a small value of \(k\) (\(k=1\)) may not provide sufficient exploration, while a large value (\(k=50265\) (vocab size) ) can hinder the learning of optimal policy. In the code generation experiments, we usually use the action space size \(5\) which provides a good balance for optimal exploration in most cases.
**No. of Synthetic Samples.** The effect of synthetic policy sample size on PPOCoder's performance is examined by modifying the \(num\_samples\) in Alg. 1. Fig. 3(d) shows that an increase in \(num\_samples\) from \(1\) to \(10\) improves performance, but further increases lead to a decline in performance. This suggests that while additional synthetic samples can enhance the ability to identify underlying patterns, a large number of synthetic samples may not be representative of the general population and can negatively impact performance by causing confusion in model updates.
### Case Study
Fig. 4 shows an example of Java-to-C++ translation for both CodeT5 and PPOCoder +CodeT5. Similar to the previous case, it can be observed that compilability is improved by PPOCoder. For this example, CodeT5's translation has these issues: (1) CodeT5 generates a non-standard data type called subset which takes in a pair of integers. Using this non-standard data type without importing or defining it causes a compilation error, while PPOCoder +CodeT5 generates the
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Size & State & _pass@80_ \\ \hline GPT & 224M & fine-tuned & 7.2 \\ GPT & 422M & fine-tuned & 12.6 \\ GPT & 1B & fine-tuned & 22.4 \\ GPT & 4B & fine-tuned & 33.0 \\ GPT & 8B & fine-tuned & 40.6 \\ GPT & 68B & fine-tuned & 53.6 \\ GPT & 137B & fine-tuned & 61.4 \\ CodeT5 & 60M & fine-tuned & 19.2 \\ CodeT5 & 220M & fine-tuned & 24.0 \\ CodeT5 & 770M & fine-tuned & 32.4 \\ \hline CodeRL+CodeT5 & 770M & zero-shot & 63.0 \\ PPOCoder +CodeT5 & 770M & zero-shot & **68.2** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of the zero-shot transferability on MBPP. Both zero-shot models are finetuned on APPS and evaluated on MBPP in the zero-shot setting.
Figure 3: Ablation experiment results on Java-Python translation with different configurations of (a) reward, (b) loss, (c) action space size, and (d) number of synthetic samples.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{_pass@1_} & \multicolumn{4}{c}{_pass@5_} & \multicolumn{4}{c}{_pass@1000_} \\ \cline{2-13} Model & Size & Intro & Inter & Comp & All & Intro & Inter & Comp & All & Intro & Inter & Comp & All \\ \hline Codex & 12B & 4.14 & 0.14 & 0.02 & 0.92 & 9.65 & 0.51 & 0.09 & 2.25 & 25.02 & 3.70 & 3.23 & 7.87 \\ AlphaCode & 1B & – & – & – & – & – & – & – & 17.67 & 5.24 & 7.06 & 8.09 \\ GPT-3 & 175B & 0.20 & 0.03 & 0.00 & 0.06 & – & – & – & – & – & – & – \\ GPT-2 & 0.1B & 1.00 & 0.33 & 0.00 & 0.40 & 2.70 & 0.73 & 0.00 & 1.02 & – & – & – & – \\ GPT-2 & 1.5B & 1.30 & 0.70 & 0.00 & 0.68 & 3.60 & 1.03 & 0.00 & 1.34 & 25.00 & 9.27 & 8.80 & 12.32 \\ GPT-Neo & 2.7B & 3.90 & 0.57 & 0.00 & 1.12 & 5.50 & 0.80 & 0.00 & 1.58 & 27.90 & 9.83 & 11.40 & 13.76 \\ CodeT5 & 60M & 1.40 & 0.67 & 0.00 & 0.68 & 2.60 & 0.87 & 0.10 & 1.06 & – & – & – & – \\ CodeT5 & 220M & 2.50 & 0.73 & 0.00 & 0.94 & 3.30 & 1.10 & 0.10 & 1.34 & – & – & – & – \\ CodeT5 & 770M & 3.60 & 0.90 & 0.20 & 1.30 & 4.30 & 1.37 & 0.20 & 1.72 & – & – & – & – \\ CodeRL+CodeT5 & 770M & 4.90 & **1.06** & **0.5** & 1.71 & 8.60 & **2.64** & 1.0 & 3.51 & **36.10** & 12.65 & 13.48 & 17.50 \\ PPOCoder +CodeT5 & 770M & **5.20** & 1.00 & **0.5** & **1.74** & **9.10** & 2.50 & **1.20** & **3.56** & 35.20 & **13.10** & **13.60** & **17.62** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the program synthesis task on the APPS dataset. | Automating software engineering processes with programming language (PL) models holds great promise, as it can efficiently carry out code generation tasks such as code completion, code translation, and program synthesis. However, current approaches are mainly centered on supervised fine-tuning borrowed from text generation and neglect unique sequence-level characteristics of code, including compilability as well as syntactic and functional correctness. To overcome this limitation, we propose PPOCoder, a new framework for code generation that combines pre-trained PL models with Proximal Policy Optimization (PPO) to seamlessly integrate the optimization process into code generation. By leveraging non-differentiable feedback obtained from code execution and structural consistency, PPOCoder |
2310.20545 | Optimizing accuracy and diversity: a multi-task approach to forecast
combinations | Forecast combination involves using multiple forecasts to create a single,
more accurate prediction. Recently, feature-based forecasting has been employed
to either select the most appropriate forecasting models or to optimize the
weights of their combination. In this paper, we present a multi-task
optimization paradigm that focuses on solving both problems simultaneously and
enriches current operational research approaches to forecasting. In essence, it
incorporates an additional learning and optimization task into the standard
feature-based forecasting approach, focusing on the identification of an
optimal set of forecasting methods. During the training phase, an optimization
model with linear constraints and quadratic objective function is employed to
identify accurate and diverse methods for each time series. Moreover, within
the training phase, a neural network is used to learn the behavior of that
optimization model. Once training is completed the candidate set of methods is
identified using the network. The proposed approach elicits the essential role
of diversity in feature-based forecasting and highlights the interplay between
model combination and model selection when optimizing forecasting ensembles.
Experimental results on a large set of series from the M4 competition dataset
show that our proposal enhances point forecast accuracy compared to
state-of-the-art methods. | Giovanni Felici, Antonio M. Sudoso | 2023-10-31T15:26:33 | http://arxiv.org/abs/2310.20545v2 | # Multi-task learning of convex combinations of forecasting models
###### Abstract
Forecast combination involves using multiple forecasts to create a single, more accurate prediction. Recently, feature-based forecasting has been employed to either select the most appropriate forecasting models or to learn the weights of their convex combination. In this paper, we present a multi-task learning methodology that simultaneously addresses both problems. This approach is implemented through a deep neural network with two branches: the regression branch, which learns the weights of various forecasting methods by minimizing the error of combined forecasts, and the classification branch, which selects forecasting methods with an emphasis on their diversity. To generate training labels for the classification task, we introduce an optimization-driven approach that identifies the most appropriate methods for a given time series. The proposed approach elicits the essential role of diversity in feature-based forecasting and highlights the interplay between model combination and model selection when learning forecasting ensembles. Experimental results on a large set of series from the M4 competition dataset show that our proposal enhances point forecast accuracy compared to state-of-the-art methods.
keywords: Forecasting, Forecast combination, Forecast diversity, Meta-learning, Time series features
## 1 Introduction
Forecasting is an essential component of operational research (Fildes et al., 2008; Nikolopoulos, 2021) having applications in different domains, such as finance (Tung and Wong, 2009), energy (Taylor, 2017), supply chains (Syntetos et al., 2016), and inventory management (Syntetos et al., 2009). Forecast combination - also known as ensemble forecasting - is a popular technique used in the field of forecasting, which involves the aggregation of multiple forecasts to produce a single and more accurate prediction (Clemen, 1989; Timmermann, 2006). Many studies have shown that forecast combinations lead to improvements in forecast accuracy, as they reduce the impact of biases in individual forecasts and considerably decrease their variance (Atiya, 2020). The M4 competition (Makridakis et al., 2020) has proven to be highly valuable for researchers, providing them with insights into the performance of various forecasting models and facilitating the establishment of best practices in the field (Hyndman, 2020). This competition has reaffirmed and built upon the knowledge gained from previous competitions, with a significant discovery being the success of forecast combinations. Out of the top 17 most accurate methods, 12 were combinations of forecasting models. This marks the first time that forecast combinations have demonstrated such strong dominance in a competition. However, it is important to note that the benefits of forecast combination are not guaranteed, and can depend on factors such as the quality of the individual forecasts, the nature of the data being forecasted, and the method used for combining the forecasts (Timmermann, 2006; Petropoulos et al., 2018; Atiya, 2020). The influential work by Bates and Granger (1969) indicates that combining forecasts can enhance accuracy, as long as the forecast sets incorporate independent information. Since then, researchers have demonstrated the effectiveness of forecast combinations through various weighting methods (Jose and Winkler, 2008; Andrawis et al., 2011; Cang and Yu, 2014; Lichtendahl and Winkler, 2020). The literature on forecast combinations covers a wide range of topics, including the methods used for combining forecasts, the benefits of using forecast combinations, and the potential pitfalls that can arise when using this technique (Bunn, 1988; De Menezes et al., 2000; Armstrong, 2001). One potential pitfall is overfitting, which occurs when the resulting method is excessively flexible and fits very well the individual forecasts in the training phase while failing in accurately forecasting out-of-sample data. Another pitfall is the risk of creating a forecast that is too complex or difficult to interpret, and hence harder to use in decision-making. Moreover, the success of ensemble forecasting heavily depends on the selection of component models as proven by many studies (Kourentzes et al., 2019; Atiya, 2020; Lichtendahl and Winkler, 2020). For a recent overview of forecast combinations, we direct the readers to the encyclopedic review article authored by Petropoulos et al. (2022).
One of the most commonly used methods for combining forecasts is the simple average (Jose and Winkler, 2008), which involves taking the arithmetic mean of the individual forecasts. This approach is easy to implement and is often effective. For this reason, it is used in practice as a benchmark against which more sophisticated combination techniques are tested. Such techniques
include weighted averages, where weights can be determined through basic regression techniques (Winkler & Clemen, 1992), or with more sophisticated model-based approaches based on machine learning algorithms (Petropoulos et al., 2022). Our paper centers its focus on model-based approaches designed to learn convex combinations of forecasting models. Formally, the goal is to combine time series predictions generated by a set of models, known as _base forecasters_. Let \(M\) and \(H\) be the number of algorithms in the forecast pools and the forecast horizon, respectively. For a given time series \(\{y_{1},\ldots,y_{T}\}\) of length \(T\), we denote the \(h\)-th step forecast produced by the \(i\)-th individual method as \(f_{i}^{h}\), where \(i=1,\ldots,M\) and \(h=1,\ldots,H\). We say that the combined forecast \(f_{\text{comb}}^{h}\) at the \(h\)-th step is a convex combination of \(M\) base models when
\[f_{\text{comb}}^{h}=\sum_{i=1}^{M}w_{i}f_{i}^{h}\quad\text{s.t.}\quad\sum_{i=1 }^{M}w_{i}=1,\ w_{i}\geq 0\quad\forall i\in\{1,\ldots,M\}, \tag{1}\]
where \(w_{i}\) is the weight of the forecast produced by the \(i\)-th method. Given this general framework, building an ensemble forecast involves two steps: selecting a methodology for training the base forecasters and choosing an appropriate method for combining their outputs based on the weights \(w_{i}\). The problem of choosing a suitable combination of forecasting methods is known in the literature as the "forecast combination puzzle" and it is generally accepted that not all combinations perform well. Lichtendahl & Winkler (2020) conducted a study to investigate why some combinations performed better than others in the recent M4 competition. Their findings highlighted the significance of _diversity_ and the _accuracy_ of individual models as crucial factors for effective forecast combinations. The accuracy of individual forecasts is important because the accuracy of the combined forecast will be limited by the accuracy of the weakest individual forecast. At the same time, diversity is important because it can help to reduce the impact of biases and errors that may be present in individual forecasts (Atiya, 2020). Diversity can be achieved by selecting a range of forecasting methods that use different techniques, assumptions, and models. For example, a combination of statistical methods, machine learning algorithms, and expert judgment may provide a diverse set of forecasts (Makridakis et al., 2020).
The ambiguity decomposition theory by Krogh & Vedelsby (1994) can be easily applied to the forecast combination task. It reveals the relationship between the error of the ensemble model and the error of base models for regression tasks and states that the overall mean squared error of a
weighted forecast combination model over the whole forecast horizon \(H\) can be written as
\[\text{MSE}_{\text{comb}} =\frac{1}{H}\sum_{h=1}^{H}\left(f_{\text{comb}}^{h}-y_{T+h}\right)^ {2} \tag{2}\] \[=\frac{1}{H}\sum_{h=1}^{H}\left(\sum_{i=1}^{M}w_{i}\left(f_{i}^{h}- y_{T+h}\right)^{2}-\sum_{i=1}^{M}w_{i}\left(f_{i}^{h}-f_{\text{comb}}^{h} \right)^{2}\right)\] (3) \[=\frac{1}{H}\sum_{h=1}^{H}\left(\sum_{i=1}^{M}w_{i}\left(f_{i}^{h} -y_{T+h}\right)^{2}-\sum_{i=1}^{M-1}\sum_{j=1,j>i}^{M}w_{i}w_{j}\left(f_{i}^{h} -f_{j}^{h}\right)^{2}\right). \tag{4}\]
The first term on the right-hand side is the weighted squared error of the individual forecasts \(f_{i}^{h}\) with respect to the true observation \(y_{T+h}\). The second term quantifies the diversity of the ensemble and is the weighted squared spread of the individual forecasts \(f_{i}^{h}\) around \(f_{\text{comb}}^{h}\), or equivalently, the degree of diversity between the \(i\)-th and \(j\)-th methods in the pool of base forecasters. If two ensembles have the same weighted average of individual squared errors, the one with more diversity will have a lower overall squared error. In other words, greater diversity among the forecasting methods in the pool results in improved overall forecast accuracy (Kang et al., 2022).
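As a quick numerical sanity check (ours, using random data), the short sketch below verifies that the three expressions in Eqs. (2)-(4) coincide for an arbitrary convex weight vector.

```python
import numpy as np

rng = np.random.default_rng(1)
M, H = 4, 6                              # number of methods and forecast horizon
w = rng.dirichlet(np.ones(M))            # convex weights: non-negative, sum to 1
F = rng.normal(size=(H, M))              # individual forecasts f_i^h
y = rng.normal(size=H)                   # actual observations y_{T+h}
f_comb = F @ w                           # combined forecast at each horizon step

lhs = np.mean((f_comb - y) ** 2)                           # Eq. (2)
acc = ((F - y[:, None]) ** 2) @ w                          # weighted individual errors
spread = ((F - f_comb[:, None]) ** 2) @ w                  # spread around the combination
rhs_3 = np.mean(acc - spread)                              # Eq. (3)
div = sum(w[i] * w[j] * (F[:, i] - F[:, j]) ** 2
          for i in range(M) for j in range(i + 1, M))      # pairwise diversity term
rhs_4 = np.mean(acc - div)                                 # Eq. (4)
print(np.allclose(lhs, rhs_3), np.allclose(lhs, rhs_4))    # True True
```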
Recent literature uses the term _meta-learning_ to describe the process of automatically acquiring knowledge for model selection and combination (Prudencio and Ludermir, 2004; Lemke and Gabrys, 2010). The concept of meta-learning for time series forecasting is formalized by Talagala et al. (2018), where a machine learning approach is employed to acquire meta-knowledge by establishing connections between summarized features extracted from time series data, and the forecasting performance of base forecasters. This acquired knowledge, replacing human expertise, is then used to select appropriate forecasting methods. However, opting for a single apparently best forecasting model is risky due to the possibility of selecting an inadequately specified model. Conversely, employing meta-learning to combine different forecasts to mitigate model risk is a more promising approach employed by recent meta-learning approaches, reviewed in the next section.
## 2 Forecast combination by meta-learning
Meta-learning can be employed to either select the most appropriate model or to estimate the weights used to combine the different base forecasts. Methods of such type may share a pool of forecasting techniques and differ in how they identify and combine features from the time series. Indeed, regardless of the task (model selection or model combination), a common challenge in feature-based forecasting is the selection, extraction, or estimation of the right features. These features can be, for example, statistical representations of time series characteristics, such as mean, standard deviation, autocorrelation, and seasonality. Using feature-based time series representations has gained interest in various time series mining tasks, including clustering, classification, and forecasting (Mancuso et al., 2021; Petropoulos et al., 2022). Successful applications of features in
time series forecasting are based on meta-learning. Talagala et al. (2018) developed a meta-learning approach called FFORMS (Feature-based FORecast Model Selection), which uses a Random Forest (RF) classifier to choose the best forecasting method from nine base forecasters based on a set of time series features. To build a reliable classifier, they proposed augmenting the set of observed time series by simulating new time series similar to those in the assumed population. Time series features are based on a set of manually selected 25 features for non-seasonal data and 30 features for seasonal data. Montero-Manso et al. (2020) improved FFORMS and proposed a meta-learning approach to learn the weights of convex combinations of forecasting models, resulting in the framework called FFORMA (Feature-based FORecast Model Averaging). Prior to forecasting with FFORMA, 42 hand-crafted features are extracted from the original time series and the overall weighted average error of each forecasting method in the pool is computed. To determine the optimal weights of the combination, the problem is framed as a non-linear regression where time series features are linked with the forecasting errors using the XGBoost algorithm (Chen & Guestrin, 2016). Di Gangi (2022) proposed a meta-learning system based on a Multilayer Perceptron (MLP) that takes as input the same pre-computed time series feature representations used in FFORMA and automatically provides sparse convex combinations of forecasting methods. One advantage of this approach is that it eliminates the need to compute forecasts for the excluded methods during the pre-processing stage. This leads to computational savings and improved reliability in real-time applications. However, the obtained results are worse in terms of forecast accuracy than the FFORMA approach by Montero-Manso et al. (2020). To demonstrate the importance of accuracy and diversity when selecting combination methods, Lichtendahl & Winkler (2020) analyzed the M4 competition's top strategies submitted by Montero-Manso et al. (2020). They used a screen on accuracy to eliminate inaccurate methods and a screen on diversity to eliminate methods with highly dependent forecast errors; they then found that a simple trimmed mean of the subset of surviving methods was nearly as effective as the combination produced by Montero-Manso et al. (2020) through FFORMA. To incorporate diversity, Kang et al. (2022) tailored the state-of-the-art FFORMA framework to allow for the diversity of the forecasts as the inputs. This results in a supervised approach where time series features are extracted by looking at the diversity of forecasts among the methods in the pool. The newly obtained hand-crafted features are then employed to train the weighted combination of forecasting models. The inclusion of diverse methods within the combination scheme enhanced the point forecast accuracy of FFORMA. In contrast to the methods mentioned earlier, there exists a body of research focusing on the use of Deep Neural Networks (DNN). Ma & Fildes (2021) proposed a meta-learning algorithm designed for retail forecasting where time series features are automatically extracted through a Convolutional Neural Network (CNN). These features are then linked with a set of weights which are used to combine the forecasting methods. Li et al. (2020) adopted a similar approach where a CNN is employed to learn time series features from recurrence plots. First, time series data is converted into images via recurrence plots. 
Then, computer vision algorithms are applied to extract features from these recurrence plots, and these features are further converted into weights for forecasting methods by minimizing the same weighted average loss function used in FFORMA. Table 1 provides a summary for a clear comparison of the recent research studies on time series forecasting with meta-learning.
Contributions and Outline. In this paper, we introduce a novel framework for combining forecasts. Building upon existing literature, we use meta-learning to optimize the weights of convex combinations of forecasting methods. We formulate this optimization problem as a _multi-task learning_ (MTL) problem and propose a tailored deep neural network. MTL is a machine learning paradigm where a single model is trained to perform multiple related tasks simultaneously with the goal of improving the performance of each individual task (Caruana, 1997). Instead of training separate models for each task, MTL leverages the commonalities and relationships between tasks to enhance overall performance. In our meta-learning approach, we focus on two tasks: regression and classification. On the one hand, the regression task aims to learn the optimal weights of the base forecasting methods by minimizing the error of combined forecasts. On the other hand, the classification task aims to learn and select suitable forecasting methods based on accuracy and diversity among the base forecasters. To perform the two tasks, we design a neural network with two branches. The outputs of these branches are then combined, and the entire network is jointly trained on both tasks by minimizing a custom loss function via gradient descent optimization. We empirically demonstrate the effectiveness of our meta-learning algorithm by testing it on a large
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Paper** & **Feature extraction** & **Meta-learner** & **Learning task** & **Diversity** \\ \hline Talagala et al. (2018) & Judgmental & RF & Classification & No \\ & Unsupervised & & & \\ \hline Montero-Manso et al. (2020) & Judgmental & XGBoost & Regression & No \\ & Unsupervised & & & \\ \hline Li et al. (2020) & Automatic & DNN & Regression & No \\ & Supervised & & & \\ \hline Lichtendahl \& Winkler (2020) & Judgmental & XGBoost & Regression & Yes \\ & Unsupervised & & & Post-processing \\ \hline Ma \& Fildes (2021) & Automatic & DNN & Regression or & No \\ & Supervised & & & \\ \hline Di Gangi (2022) & Judgmental & MLP & Regression or & No \\ & Unsupervised & & & \\ \hline Kang et al. (2022) & Judgmental & XGBoost & Regression & Yes \\ & Supervised & & & Pre-processing \\ \hline This paper & Automatic & DNN & Regression and & Yes \\ & Supervised & & classification & Learned \\ \hline \hline \end{tabular}
\end{table}
Table 1: Research studies on time series forecasting with meta-learning. In the “Feature extraction” column, the term “judgmental” denotes manually crafted features, while “automatic” signifies features identified by the meta-learner. The “Diversity” column assesses whether the meta-learner accounts for diversity and, if so, details how it is incorporated.
number of series from the M4 competition dataset. As is shown in Table 1, the meta-learning framework proposed in this paper contributes to this stream of literature mainly in three aspects:
1. _Learning task_: The main contribution of this paper is to define a classification problem from the native regression one and highlight the interplay between both classification and regression tasks in improving the accuracy of the forecast combinations through meta-learning. Our multi-task model jointly optimizes the weights used to combine the individual forecasts via regression and performs diversity-based model selection via classification. To tackle the classification task, we introduce a procedure for labeling each series. This involves solving an optimization problem where accuracy and diversity among the base learners are maximized.
2. _Diversity_: In the literature, diversity among the base forecasters has been considered either as a pre-processing or a post-processing step. More in detail, in Lichtendahl and Winkler (2020), diversity is taken into account by using a trimmed mean of the forecasts produced by the set of base forecasting methods identified by FFORMA. In Kang et al. (2022), instead, diversity is encoded through time series features in a pre-processing step and given as input to FFORMA, replacing the original hand-selected features identified by Montero-Manso et al. (2020). In our approach, we shift the focus and propose to learn diversity from the outcome of an optimization model during training. To some extent, we may say that diversity is learned by reproducing the solution of an optimization problem.
3. _Feature extraction_: Drawing inspiration from deep learning techniques applied to time series data, we use CNNs to learn a supervised feature representation from raw time series automatically. Similar methods can be found in existing research, such as the work by Ma and Fildes (2021). Nevertheless, what sets our approach apart is that the learned features are not only associated with forecasting accuracy but also with the diversity of the individual methods. Moreover, to compensate for the reduced interpretability, we leverage gradient-based visual explanations. This technique makes the meta-learning model more transparent and allows the identification of the most discriminative regions in the input data that contributed to the network's selection of a specific forecasting method in the pool.
Using meta-learning, we leverage a set of time series to capture the diversity of their forecasts and determine the combination weights. Once the model is trained, we can calculate such weights for any new series that requires forecasting. Empirically, we show that incorporating diversity into the learning process leads to more accurate and robust point forecasts. The remainder of the paper is organized as follows. Section 3 presents the evaluation metrics and a description of the employed pool of forecasting methods. Section 4 outlines the proposed multi-task learning methodology. Section 5 describes the implementation details. Section 6 describes the experimental validation. Finally, Section 7 concludes the paper with possible future research directions.
## 3 Forecasting methods and metrics
This section describes the forecasting pool of methods and the evaluation metrics. We start with some definitions of measures that were used in the M4 competition for evaluating the forecasting accuracy. Throughout the paper, we denote vectors and matrices in bold. Consider a collection of \(N\) univariate time series \(\left\{\mathbf{y}^{1},\ldots,\mathbf{y}^{N}\right\}\) where each individual time series, denoted as \(\mathbf{y}^{i}=\left[y_{1}^{i},\ldots,y_{T_{i}}^{i}\right]^{\top}\), has length \(T_{i}\). The value of the \(i\)-th time series at time \(t\in\left\{1,\ldots,T_{i}\right\}\) is given by \(y_{t}^{i}\). For each time series, the forecasting horizon is denoted as \(H_{i}\) and represents the number of future time steps to predict. The predicted values for the \(i\)-th series are denoted as \(\hat{\mathbf{y}}^{i}=\left[\hat{y}_{T_{i}+1}^{i},\ldots,\hat{y}_{T_{i}+H_{i}}^ {i}\right]^{\top}\). Here, \(\hat{y}_{T_{i}+h}^{i}\) represents the predicted value of the \(i\)-th time series at time \(h\in\left\{1,\ldots,H_{i}\right\}\). The Overall Weighted Average (OWA) score is the measure used in the M4 competition to determine the point forecasting winner (Makridakis et al., 2020). It is based on two commonly used forecasting error metrics that are the Symmetric Mean Absolute Percentage Error (SMAPE) and the Mean Absolute Scaled Error (MASE). For forecasts of the \(i\)-th time series at various forecast horizons, the SMAPE\({}_{i}\) is given by
\[\text{SMAPE}_{i}=\text{sMAPE}(\mathbf{y}^{i},\hat{\mathbf{y}}^{i})=\frac{1}{H_ {i}}\sum_{h=1}^{H_{i}}\frac{200\cdot\left|y_{T_{i}+h}^{i}-\hat{y}_{T_{i}+h}^{ i}\right|}{\left|y_{T_{i}+h}^{i}\right|+\left|\hat{y}_{T_{i}+h}^{i}\right|}. \tag{5}\]
The SMAPE is easy to interpret and has an upper bound of 200, which is reached when either the actual or the predicted value is zero or when the actual and predicted values have opposite signs. The \(\text{MASE}_{i}\) compares the forecast accuracy of a specific forecasting algorithm with that of the naive method, where the one-step-ahead forecast equals the most recent available observation (Hyndman and Koehler, 2006). It is defined as
\[\text{MASE}_{i}=\text{MASE}(\mathbf{y}^{i},\hat{\mathbf{y}}^{i})=\frac{1}{H_{i}}\frac{\sum_{h=1}^{H_{i}}\left|y_{T_{i}+h}^{i}-\hat{y}_{T_{i}+h}^{i}\right|}{\frac{1}{T_{i}-s_{i}}\sum_{t=s_{i}+1}^{T_{i}}\left|y_{t}^{i}-y_{t-s_{i}}^{i}\right|}, \tag{6}\]
where \(s_{i}\) is the frequency of the data considered by the organizers, e.g., 12 for monthly, 4 for quarterly, and 1 for yearly series (Makridakis et al., 2020). The numerator is the out-of-sample mean absolute error of the method evaluated across the forecast horizon \(H_{i}\), and the denominator is the in-sample mean absolute error of the one-step-ahead naive forecast with seasonal period \(s_{i}\). The \(\text{SMAPE}_{i}\) and \(\text{MASE}_{i}\) can be used to compare forecasting methods on a single series and, because they are scale-free, to compare the forecasting accuracy across series. Then, the OWA measure is given by
\[\text{OWA}=\frac{1}{2}\frac{\sum_{i=1}^{N}\text{sMAPE}(\mathbf{y}^{i},\hat{ \mathbf{y}}^{i})}{\sum_{i=1}^{N}\text{sMAPE}(\mathbf{y}^{i},\hat{\mathbf{z}}^{ i})}+\frac{1}{2}\frac{\sum_{i=1}^{N}\text{MASE}(\mathbf{y}^{i},\hat{ \mathbf{y}}^{i})}{\sum_{i=1}^{N}\text{MASE}(\mathbf{y}^{i},\hat{\mathbf{z}}^{ i})}, \tag{7}\]
where \(\hat{\mathbf{z}}^{i}\) is the vector obtained with the naive method for the \(i\)-th time series from one to \(H_{i}\) steps ahead (Makridakis et al., 1982). OWA provides a single score for the entire collection of time series, which can obscure how a method performs on individual series. For this reason, SMAPE and MASE can be normalized for each series
in the series-level OWA, introduced by Lichtendahl and Winkler (2020) as:
\[\text{sOWA}_{i}=\frac{1}{2}\frac{\text{sMAPE}(\mathbf{y}^{i},\hat{\mathbf{y}}^{i} )}{\text{sMAPE}(\mathbf{y}^{i},\hat{\mathbf{z}}^{i})}+\frac{1}{2}\frac{\text{ MASE}(\mathbf{y}^{i},\hat{\mathbf{y}}^{i})}{\text{MASE}(\mathbf{y}^{i},\hat{ \mathbf{z}}^{i})}. \tag{8}\]
Note that, having a score for each specific series permits an evaluation of the accuracy risk, which relates to the variability of the accuracy of a forecasting method or combination across the set of series. This offers insights into the potential risk of producing a highly inaccurate forecast for an individual series. We use the nine most popular time series forecasting methods as candidates for forecast combinations, which are also used in recent studies (Montero-Manso et al., 2020; Li et al., 2020; Di Gangi, 2022; Kang et al., 2022). The nine forecasting methods in the pool are described in Table 2 and implemented in the forecast package in R (Hyndman and Khandakar, 2008; Hyndman et al., 2023).
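For concreteness, the sketch below computes \(\text{sMAPE}_{i}\), \(\text{MASE}_{i}\), and \(\text{sOWA}_{i}\) for a single series following Eqs. (5), (6), and (8); the benchmark forecasts \(\hat{\mathbf{z}}^{i}\) are passed in by the caller, and the toy data and function names are illustrative only.

```python
import numpy as np

def smape(y, y_hat):
    """Symmetric MAPE, Eq. (5): bounded above by 200."""
    return np.mean(200.0 * np.abs(y - y_hat) / (np.abs(y) + np.abs(y_hat)))

def mase(y_train, y, y_hat, s=1):
    """MASE, Eq. (6): out-of-sample MAE scaled by the in-sample MAE of the
    naive forecast with seasonal period s."""
    scale = np.mean(np.abs(y_train[s:] - y_train[:-s]))
    return np.mean(np.abs(y - y_hat)) / scale

def sowa(y_train, y, y_hat, y_naive, s=1):
    """Series-level OWA, Eq. (8): average of the sMAPE and MASE ratios
    relative to a naive benchmark forecast y_naive."""
    return 0.5 * (smape(y, y_hat) / smape(y, y_naive)
                  + mase(y_train, y, y_hat, s) / mase(y_train, y, y_naive, s))

# toy quarterly example (s = 4, horizon 8); values are illustrative only
y_train = np.sin(np.arange(40) * np.pi / 2) + 0.1 * np.arange(40) + 5
y_test = np.sin(np.arange(40, 48) * np.pi / 2) + 0.1 * np.arange(40, 48) + 5
y_hat = y_test + 0.3                   # some candidate forecast
y_naive = np.full(8, y_train[-1])      # benchmark: repeat the last observation
print(round(sowa(y_train, y_test, y_hat, y_naive, s=4), 3))
```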
## 4 Meta-learner: regression and classification tasks
As anticipated, our meta-learning approach aims to determine a set of weights that combine forecasts generated from a pool of methods, with the goal of exploiting accuracy and diversity among these methods. The meta-learning approach in (Talagala et al., 2018) involves selecting the best method for each series from the pool of methods based on the smallest forecasting error. This transforms the problem into a traditional classification problem, where the individual forecasting methods are encoded as classes and the best method becomes the target class for each time series. However, there might be other methods that yield similar forecast errors to the best method, making the specific class chosen less relevant compared to the forecast error produced by each method. Indeed, the problem of finding a function that assigns weights to each forecasting method is usually framed as a regression task where the objective is to minimize the error of the combined
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
**Method** & **Description** & **R function** \\ \hline ARIMA & Automated ARIMA algorithm (Hyndman and Khandakar, 2008) & auto.arima() \\ ETS & Automated exponential smoothing algorithm (Hyndman et al., 2002) & ets() \\ NNETAR & Feed-forward neural networks with a single hidden layer and AR inputs & nnetar() \\ TBATS & TBATS model (De Livera et al., 2011) & tbats() \\ STLM & Seasonal and trend decomposition using Loess with AR seasonally adjusted series (Cleveland et al., 1990) & stlm(modelfunction=’ar’) \\ RW & Random walk with drift & rwf(drift=TRUE) \\ THETA & Theta method (Assimakopoulos and Nikolopoulos, 2000; Spiliotis et al., 2020) & thetaf() \\ NAIVE & Naïve method (forecasting based on the most recent observation) & naive() \\ SNAIVE & Seasonal naive method (forecasting based on the most recent seasonal period) & snaive() \\ \hline \hline \end{tabular}
\end{table}
Table 2: The nine individual forecasting methods of the pool, or base learners. The table provides the acronym of the method used throughout the paper (first column), the main reference where the method was introduced (second column), and the R function used for the experiments (third column).
forecasts. The regression approach can be seen as a classification task with varying per-class weights for each instance, combined with per-instance weights that put greater importance on certain series (Montero-Manso et al., 2020). This implies that the nature of the regression task is closely related to the classification one. Thus, in the following, we describe how to exploit the interplay between these two tasks within a multi-task learning methodology and provide a new meta-learning approach where both tasks are solved simultaneously.
As shown in Figure 1, the proposed meta-learning framework consists of two distinct phases: meta-data generation & model training (offline phase), and forecasting (online phase). Each time series is divided into training and testing periods, where the length of the testing period matches its forecasting horizon. For each time series, we fit the forecasting methods in the pool on the training period and extract the forecasts produced by different forecasting methods on the testing period. The forecasts from different methods are gathered into a matrix and then compared with the actual observations of the test period, leading to the matrix of forecasting errors. From this matrix, accuracy and diversity information are summarized as a binary vector of labels by solving an optimization problem. Subsequently, a meta-learner implemented by a deep neural network is trained using gradient descent optimization, minimizing a custom loss function. This step aims to estimate combination weights for each series, allowing the production of weights and hence the combined forecasts for any target series in the online phase.
### Neural network design
In our multi-task methodology, the meta-learning model is a deep neural network composed of two subnetworks, or branches: the first one solves a regression task while the second one solves a classification task. The goal of the regression task is to learn the weights of the base forecasting methods by minimizing the error of combined forecasts, while the selection of forecasting methods according to the diversity criterion is treated as an auxiliary classification task. The output of both tasks is then combined to obtain the final weights of the convex combination.
Extending the notation already used in the previous sections, we consider a collection of \(N\) time series \(\{\mathbf{s}^{1},\ldots,\mathbf{s}^{N}\}\) of length \(T\), a set of \(M\) forecasting methods and a forecasting horizon of length \(H\). Every time series \(\mathbf{s}^{i}\) is split into training \(\mathbf{x}^{i}=[y^{i}_{1},\ldots,y^{i}_{T}]^{\top}\) and test \(\mathbf{y}^{i}=[y^{i}_{T+1},\ldots,y^{i}_{T+H}]^{\top}\) periods. We denote by \(\hat{\mathbf{F}}^{i}\in\mathbb{R}^{H\times M}\) the matrix of forecasts produced by the \(M\) methods for the entire forecasting horizon \(H\), where \(\hat{F}^{i}_{hm}\) is the \(h\)-step ahead forecast produced by the method \(m\in\{1,\ldots,M\}\) for the series \(i\in\{1,\ldots,N\}\). Let \(\mathbf{1}_{M}\) be the vector of all ones of length \(M\), then \(\mathbf{F}^{i}=\mathbf{y}^{i}\mathbf{1}_{M}^{\top}\) denotes the matrix where each column is equal to ground-truth observations \(\mathbf{y}^{i}\), and \(\mathbf{E}^{i}=\mathbf{F}^{i}-\hat{\mathbf{F}}^{i}\) is the matrix of forecasting errors produced by the \(M\) methods over the forecasting horizon \(H\) for the series \(i\in\{1,\ldots,N\}\).
The regression subnetwork takes as input raw time series \(\mathbf{x}^{i}\) of length \(T\), extracts time series features by means of convolutional layers, and returns a set of \(M\) un-normalized weights. We
denote the un-normalized weights estimation model \(f_{\text{reg}}\colon\mathbb{R}^{T}\to\mathbb{R}^{M}\) as
\[\hat{\mathbf{o}}_{\text{reg}}^{i}=f_{\text{reg}}(\mathbf{x}^{i};\theta_{\text{reg}}).\]
The function \(f_{\text{reg}}\) is parameterized by the subnetwork weights \(\theta_{\text{reg}}\): it first maps a time series \(\mathbf{x}^{i}\in\mathbb{R}^{T}\) to a hidden representation \(\mathbf{h}_{\text{reg}}^{i}\in\mathbb{R}^{D}\), where \(D\) is the dimension of the learned feature vector, and then outputs a set of un-normalized weights \(\hat{\mathbf{o}}_{\text{reg}}^{i}\in\mathbb{R}^{M}\) of the base forecasters for that time series.
Figure 1: The meta-learning framework proposed in this paper. On the left, the offline meta-data generation and model training are depicted; on the right, the simpler steps that compose the online use of the method to provide forecasts for new series.
We emphasize that our network design choices are guided by the principles of accuracy and diversity. In this context, we present an approach aimed at learning the base models that are most appropriate for forecasting a specific time series. We frame the resulting learning problem as a multi-label classification problem, where the individual forecasting methods are encoded as classes and the most accurate and diverse methods become the target classes for each time series. To generate training labels for the classification task we adapt the Quadratic Programming (QP) feature selection method proposed by Rodriguez-Lujan et al. (2010). Given a set of \(M\) forecasting methods, our goal is to select a subset of them in order to maximize the combination's performance while satisfying two key requirements: accuracy and diversity among the methods. The resulting optimization problem for the \(i\)-th time series can be expressed as:
\[\begin{split}\min_{\mathbf{x}}&\frac{1}{2}(1- \alpha)\mathbf{x}^{\top}\mathbf{Q}^{i}\mathbf{x}-\alpha\mathbf{x}^{\top} \mathbf{c}^{i}\\ \text{s.t.}&\mathbf{1}_{M}^{\top}\mathbf{x}=1,\ \mathbf{x} \geq\mathbf{0}_{M}.\end{split}\] (QP-LAB)
where \(\mathbf{x}\) is an \(M\) dimensional vector representing the relative importance of each method, \(\mathbf{1}_{M}\) is the vector of all ones of size \(M\), \(\mathbf{Q}^{i}\) is an \(M\times M\) symmetric positive semidefinite matrix, which represents the redundancy among the forecasting methods, and \(\mathbf{c}^{i}\) is an \(M\) dimensional vector of non-negative values, which represents the accuracy of each forecasting methods for the time series \(i\in\{1,\ldots,N\}\). The diversity extraction procedure is based on the correlation between forecasting methods. More precisely, the pairwise forecast diversity among the \(M\) methods is evaluated using the Pearson correlation coefficient of the individual methods among their forecasting errors \(\mathbf{E}^{i}\) and stored in the matrix \(\mathbf{Q}^{i}\). The components of the relevance term \(\mathbf{c}^{i}\) are computed as the opposite of sOWA\({}_{i}\) and scaled between 0 and 1. The scalar quantity \(\alpha\in[0,1]\) represents the relative importance of non-redundancy amongst the methods and their relevance. Our choice of \(\alpha\) aims to achieve equilibrium between the quadratic and linear terms in the objective function. This balance is reached when \(\alpha\) satisfies the equation \((1-\alpha)\bar{q}=\alpha\bar{c}\), where \(\bar{q}\) is the mean value of the elements of the matrix \(\mathbf{Q}^{i}\) and \(\bar{c}\) is the mean value of the elements of vector \(\mathbf{c}^{i}\). Problem (QP-LAB) is convex since the correlation matrix \(\mathbf{Q}^{i}\) is positive semidefinite. By solving an optimization problem for each time series, we can identify a subset of accurate and diverse forecasting methods that can be combined to produce high-quality forecasts for that series. The label vector \(\mathbf{o}^{i}\in\{0,1\}^{M}\) for the \(i\)-th time series is then constructed from the optimal solution \(\mathbf{x}^{\star}\) as follows:
\[o_{j}^{i}=\begin{cases}1,&\text{if }x_{j}^{\star}\geq\tau\\ 0,&\text{otherwise}\end{cases}\qquad\forall j\in\{1,\ldots,M\},\]
where \(\tau\) is a user-defined threshold. Note that, if \(\tau\) is too large, the number of target forecasting methods may decrease up to the point where only one forecasting method is selected. This turns the multi-label classification problem into a conventional single-label one, thereby overlooking the
benefits of forecast combinations. In our experiments, we set \(\tau=\frac{1}{M}\) as it was observed to represent a good trade-off between the two cases and produce a balanced distribution of labels, with all methods being equally represented. The empirical analysis of this choice will be evident in the computational results. Thus, the auxiliary neural network \(f_{\text{cls}}\colon\mathbb{R}^{T}\to[0,1]^{M}\) solves a multi-label classification problem by learning the mapping
\[\hat{\mathbf{o}}^{i}_{\text{cls}}=f_{\text{cls}}(\mathbf{x}^{i};\theta_{\text {cls}}).\]
The function \(f_{\text{cls}}\) is parameterized by the subnetwork weights \(\theta_{\text{cls}}\). Similarly to the regression subnetwork, it first maps a time series \(\mathbf{x}^{i}\in\mathbb{R}^{T}\) to a time series future vector \(\mathbf{h}^{i}_{\text{cls}}\in\mathbb{R}^{D}\) and then outputs a set of predicted labels \(\hat{\mathbf{o}}^{i}_{\text{cls}}\in\mathbb{R}^{M}\). More in detail, each element in \(\hat{\mathbf{o}}^{i}_{\text{cls}}\) is a continuous value between 0 and 1 and represents the probability that a method is appropriate for forecasting the \(i\)-th time series based on accuracy and diversity principles.
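To make the offline label-generation step that produces the targets \(\mathbf{o}^{i}\) concrete, the sketch below builds \(\mathbf{Q}^{i}\) from the error correlations, balances the two objective terms through \(\alpha\), solves (QP-LAB), and thresholds the solution at \(\tau=1/M\). It uses SciPy's SLSQP solver as a stand-in for the R quadprog routine mentioned later in the paper, so the solver choice, function names, and random inputs are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def make_labels(E, c, tau=None):
    """Generate the binary label vector o^i for one series.
    E: (H, M) matrix of forecast errors of the M base methods.
    c: (M,) relevance vector (scaled negative sOWA, values in [0, 1])."""
    H, M = E.shape
    Q = np.corrcoef(E, rowvar=False)           # pairwise error correlations (redundancy)
    alpha = Q.mean() / (Q.mean() + c.mean())   # balances (1 - alpha)*q_bar = alpha*c_bar
    obj = lambda x: 0.5 * (1 - alpha) * x @ Q @ x - alpha * x @ c
    cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
    res = minimize(obj, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    tau = 1.0 / M if tau is None else tau
    return (res.x >= tau).astype(int)

rng = np.random.default_rng(2)
E = rng.normal(size=(18, 9))          # errors of 9 methods over an 18-step horizon
c = rng.uniform(size=9)               # relevance scores (illustrative)
print(make_labels(E, c))              # 0/1 flag per method: accurate and diverse subset
```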
Finally, we apply the softmax function to the combined output of the subnetworks which is obtained by multiplying the output of both branches as
\[\hat{w}^{i}_{j}=\text{softmax}_{j}(\hat{\mathbf{o}}^{i}_{\text{reg}}\odot \hat{\mathbf{o}}^{i}_{\text{cls}})=\frac{\exp\left((\hat{\mathbf{o}}^{i}_{ \text{reg}}\odot\hat{\mathbf{o}}^{i}_{\text{cls}})_{j}\right)}{\sum_{k=1}^{M} \exp\left((\hat{\mathbf{o}}^{i}_{\text{reg}}\odot\hat{\mathbf{o}}^{i}_{\text{ cls}})_{k}\right)},\quad\forall j\in\{1,\dots,M\},\]
where \(\odot\) represents the element-wise multiplication. The softmax function maps vectors from the Euclidean space to probability distributions, thus allowing to output the estimated weights \(\hat{\mathbf{w}}^{i}\in\mathbb{R}^{M}\) of the convex combination for the \(i\)-th time series. Note that, the element-wise multiplication enables to share the knowledge between the main regression and the auxiliary classification task. Generally speaking, if the \(j\)-th forecasting method is suitable for the \(i\)-th time series, i.e., \((\hat{\mathbf{o}}^{i}_{\text{cls}})_{j}=1\), the output of the overall network \(\hat{w}^{i}_{j}\) will be large. Similarly, the estimated weight \(\hat{w}^{i}_{j}\) of the \(j\)-th forecasting method should be close to zero if it is not suitable for that time series, i.e., \((\hat{\mathbf{o}}^{i}_{\text{cls}})_{j}=0\).
### Loss function design
Generally, for optimizing multi-task learning architectures, it is necessary to weigh the importance of each task. Specific to our problem, the regression task represents the main forecast combination task and hence should receive more attention, whereas the classification task is treated as an auxiliary task to enhance the performance of the main task. Given this requirement, we now present the components of the loss function used for training the overall model.
As pointed out by Ma & Fildes (2021), one limitation of FFORMA is that it minimizes a weighted combination of the errors of the base forecasters rather than the error of the combined forecast directly, which can lead to suboptimal combinations. Let \(\mathbf{\Theta}=\{\theta_{\text{reg}},\theta_{\text{cls}}\}\) be the parameters of the overall neural network. The meta-learner is trained by minimizing the scaled mean squared error with respect to
\(\mathbf{\Theta}\), which is defined as
\[\mathcal{L}_{\text{comb}}(\mathbf{x}^{i},\mathbf{y}^{i},\hat{\mathbf{F}}^{i};\boldsymbol{\Theta})=\frac{1}{N}\sum_{i=1}^{N}\frac{\|\hat{\mathbf{y}}^{i}-\mathbf{y}^{i}\|_{2}^{2}}{\|\bar{\mathbf{y}}^{i}-\mathbf{y}^{i}\|_{2}^{2}}=\frac{1}{N}\sum_{i=1}^{N}\frac{\|\hat{\mathbf{F}}^{i}\hat{\mathbf{w}}^{i}-\mathbf{y}^{i}\|_{2}^{2}}{\left\|\frac{1}{M}\hat{\mathbf{F}}^{i}\mathbf{1}_{M}-\mathbf{y}^{i}\right\|_{2}^{2}}, \tag{9}\]
where \(\|\cdot\|_{2}^{2}\) is the squared \(\ell_{2}\) norm, \(\hat{\mathbf{y}}^{i}=\hat{\mathbf{F}}^{i}\hat{\mathbf{w}}^{i}=[\hat{y}_{T+1}^{i},\ldots,\hat{y}_{T+H}^{i}]^{\top}\in\mathbb{R}^{H}\) contains the weighted combination of the forecasts, and \(\bar{\mathbf{y}}^{i}=\frac{1}{M}\hat{\mathbf{F}}^{i}\mathbf{1}_{M}\) is the forecast combination obtained by averaging the forecasts produced by the \(M\) methods. This scaling is useful in the training phase because it measures the performance of the meta-learner's forecast combination relative to the simple average combination, which is a strong baseline. Note that \(\mathcal{L}_{\text{comb}}\) is jointly determined by \(\hat{\mathbf{o}}^{i}_{\text{reg}}\) and \(\hat{\mathbf{o}}^{i}_{\text{cls}}\). We now perform some additional analysis to better understand the impact of \(\mathcal{L}_{\text{comb}}\) on the gradient. To compute the gradient of the combined loss with respect to the \(k\)-th parameter \(\Theta_{k}\in\mathbf{\Theta}\) of the network, we follow the chain rule of differentiation. We first rewrite the softmax activation function as
\[\hat{w}^{i}_{j}=\frac{\exp(\hat{z}^{i}_{j})}{\sum_{t=1}^{M}\exp(\hat{z}^{i}_{ t})},\quad\forall j\in\{1,\ldots,M\},\]
where \(\hat{z}^{i}_{j}=(\hat{\mathbf{o}}^{i}_{\text{reg}}\odot\hat{\mathbf{o}}^{i}_{ \text{cls}})_{j}\). Then, we have
\[\frac{\partial\mathcal{L}_{\text{comb}}}{\partial\Theta_{k}}=\sum_{i=1}^{N} \frac{\partial\mathcal{L}^{i}_{\text{comb}}}{\partial\Theta_{k}}=\sum_{i=1}^{ N}\sum_{j=1}^{M}\frac{\partial\mathcal{L}^{i}_{\text{comb}}}{\partial\hat{w}^{i}_{ j}}\sum_{t=1}^{M}\frac{\partial\hat{w}^{i}_{j}}{\partial\hat{z}^{i}_{t}}\cdot \frac{\partial\hat{z}^{i}_{t}}{\partial\Theta_{k}}. \tag{10}\]
One can verify that the derivative terms are given by:
\[\frac{\partial\mathcal{L}^{i}_{\text{comb}}}{\partial\hat{w}^{i}_ {j}} =\frac{2}{N}\sum_{h=1}^{H}\frac{\left(\sum_{k=1}^{M}\hat{F}^{i}_{hk} \hat{w}^{i}_{k}-y^{i}_{h}\right)\hat{F}^{i}_{hj}}{\left\|\frac{1}{M}\hat{ \mathbf{F}}^{i}\mathbf{1}_{M}-\mathbf{y}^{i}\right\|_{2}^{2}}, \tag{11}\] \[\frac{\partial\hat{w}^{i}_{j}}{\partial\hat{z}^{i}_{t}} =\hat{w}^{i}_{j}(\delta_{jt}-\hat{w}^{i}_{t}),\] (12) \[\frac{\partial\hat{z}^{i}_{t}}{\partial\Theta_{k}} =\frac{\partial(\hat{\mathbf{o}}^{i}_{\text{reg}})_{t}}{\partial \Theta_{k}}\cdot(\hat{\mathbf{o}}^{i}_{\text{cls}})_{t}+(\hat{\mathbf{o}}^{i}_ {\text{reg}})_{t}\cdot\frac{\partial(\hat{\mathbf{o}}^{i}_{\text{cls}})_{t}}{ \partial\Theta_{k}}, \tag{13}\]
where \(\delta_{jt}\) is the Kronecker delta function. Therefore, after plugging the above equations in Equation (10), it can be seen that the individual outputs of regression and classification tasks contribute to the parameter updates of the overall network through backpropagation.
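The softmax derivative used in Eq. (12) can be verified numerically; the following short sketch (illustrative only) compares the analytic Jacobian \(\hat{w}_{j}(\delta_{jt}-\hat{w}_{t})\) with central finite differences.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.random.default_rng(3).normal(size=5)
w = softmax(z)
analytic = np.diag(w) - np.outer(w, w)       # Eq. (12): dw_j/dz_t = w_j (delta_jt - w_t)

eps = 1e-6                                    # central finite differences
numeric = np.zeros((5, 5))
for t in range(5):
    dz = np.zeros(5); dz[t] = eps
    numeric[:, t] = (softmax(z + dz) - softmax(z - dz)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-8))   # True
```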
The classification subnetwork is trained to predict the output labels we are ultimately interested
in and naturally induces the following loss function:
\[\mathcal{L}_{\text{cls}}(\mathbf{x}^{i},\mathbf{o}_{\text{cls}}^{i}; \boldsymbol{\Theta})=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\Big{(}(\mathbf{o} _{\text{cls}}^{i})_{j}\log(\hat{\mathbf{o}}_{\text{cls}}^{i})_{j}+(1-(\mathbf{ o}_{\text{cls}}^{i})_{j})\log(1-(\hat{\mathbf{o}}_{\text{cls}}^{i})_{j})\Big{)}, \tag{14}\]
that is the binary cross-entropy loss for a multi-label classification task and should be minimized with respect to the overall neural network weights.
For the computation of time series features, we consider two distinct feature extractors placed independently within the overall network. One reason to learn distinct feature representations is that a single, shared feature extractor may not have enough expressive power for both tasks (Zhang and Yang, 2021). Another motivation is that our meta-learning model consists of two learners: the regression and the classification subnetworks. After these subnetworks make their respective predictions, their outputs are combined. Consequently, similar to the ideas behind ensemble methods, where combining diverse methods leads to better results, we let these distinct learners acquire different knowledge. Such considerations motivate the following orthogonality term, which penalizes redundant latent representations and encourages the two feature extractors to encode different aspects of the input time series - an addition whose effectiveness will be made empirically evident later:
\[\mathcal{L}_{\text{ort}}(\mathbf{x}^{i};\boldsymbol{\Theta})=\| \mathbf{H}_{\text{reg}}\mathbf{H}_{\text{cls}}^{\top}\|_{F}^{2}, \tag{15}\]
where \(\|\cdot\|_{F}^{2}\) is the squared Frobenius norm, \(\mathbf{H}_{\text{reg}}\in\mathbb{R}^{N\times D}\) and \(\mathbf{H}_{\text{cls}}\in\mathbb{R}^{N\times D}\) are two matrices, whose rows are the output of the task-specific feature extractors of an input time series. This loss is added to the overall training objective, in order to encourage the task-specific features to be orthogonal.
The overall cost function is given by the sum of the cost functions of the main task and the auxiliary task. The goal of training is to minimize the following loss with respect to the network's parameters \(\boldsymbol{\Theta}\):
\[\mathcal{L}(\mathbf{x}^{i},\mathbf{y}^{i},\hat{\mathbf{F}}^{i}, \mathbf{o}_{\text{cls}}^{i};\boldsymbol{\Theta}) =\mathcal{L}_{\text{comb}}(\mathbf{x}^{i},\mathbf{y}^{i},\hat{ \mathbf{F}}^{i};\boldsymbol{\Theta})+\mathcal{L}_{\text{cls}}(\mathbf{x}^{i}, \mathbf{o}_{\text{cls}}^{i};\boldsymbol{\Theta})+\lambda\ \mathcal{L}_{\text{ort}}(\mathbf{x}^{i}; \boldsymbol{\Theta}) \tag{16}\] \[=\frac{1}{N}\sum_{i=1}^{N}\frac{\|\hat{\mathbf{F}}^{i}\hat{ \mathbf{w}}^{i}-\mathbf{y}^{i}\|_{2}^{2}}{\big{\|}\frac{1}{M}\hat{\mathbf{F}}^ {i}\mathbf{1}_{M}-\mathbf{y}^{i}\big{\|}_{2}^{2}}\] (17) \[-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\Big{(}(\mathbf{o}_{\text {cls}}^{i})_{j}\log(\hat{\mathbf{o}}_{\text{cls}}^{i})_{j}+(1-(\mathbf{o}_{ \text{cls}}^{i})_{j})\log(1-(\hat{\mathbf{o}}_{\text{cls}}^{i})_{j})\Big{)}\] (18) \[+\lambda\ \|\mathbf{H}_{\text{reg}}\mathbf{H}_{\text{cls}}^{\top}\|_ {F}^{2}, \tag{19}\]
where \(\lambda\) is a hyperparameter chosen by cross-validation. The overall meta-learning approach is shown in Algorithm 1.
```
/* OFFLINE PHASE: BUILD THE METADATA AND TRAIN THE LEARNING MODEL */
Input: Dataset of \(N\) time series \(\{\mathbf{s}^{1},\ldots,\mathbf{s}^{N}\}\), set of \(M\) methods, forecasting horizon \(H\).
Output: A function \(f_{\text{meta}}\) from a time series to a set of \(M\) weights, one for each method.
for \(i=1:N\) do
    1. Split \(\mathbf{s}^{i}\) into training \(\mathbf{x}^{i}=[y^{i}_{1},\ldots,y^{i}_{T}]^{\top}\) and test \(\mathbf{y}^{i}=[y^{i}_{T+1},\ldots,y^{i}_{T+H}]^{\top}\) periods.
    2. Fit each base forecasting method over the training period \(\mathbf{x}^{i}\) and generate the matrix of forecasts \(\hat{\mathbf{F}}^{i}\in\mathbb{R}^{H\times M}\) over the test period \(\mathbf{y}^{i}\).
    3. Extract the diversity matrix \(\mathbf{Q}^{i}\in\mathbb{R}^{M\times M}\) and the relevance vector \(\mathbf{c}^{i}\in\mathbb{R}^{M}\) over the test period \(\mathbf{y}^{i}\) from the matrix of forecasting errors \(\mathbf{E}^{i}\).
    4. Given \(\mathbf{Q}^{i}\) and \(\mathbf{c}^{i}\), solve Problem (QP-LAB) and construct the label vector \(\mathbf{o}^{i}_{\text{cls}}\) indicating the most accurate and diverse base forecasters.
end for
Train a learning model \(f_{\text{meta}}\colon\mathbb{R}^{T}\to\mathbb{R}^{M}\) on the meta-data by solving
\[\min_{\boldsymbol{\Theta}}\ \mathcal{L}_{\text{comb}}(\mathbf{x}^{i},\mathbf{y}^{i},\hat{\mathbf{F}}^{i};\boldsymbol{\Theta})+\mathcal{L}_{\text{cls}}(\mathbf{x}^{i},\mathbf{o}^{i}_{\text{cls}};\boldsymbol{\Theta})+\lambda\ \mathcal{L}_{\text{ort}}(\mathbf{x}^{i};\boldsymbol{\Theta})\]

/* ONLINE PHASE: FORECAST NEW TIME SERIES */
Input: Trained meta-learner \(f_{\text{meta}}\), dataset of \(K\) time series \(\{\tilde{\mathbf{s}}^{1},\ldots,\tilde{\mathbf{s}}^{K}\}\).
Output: Combined forecast for each time series.
for \(i=1:K\) do
    1. Use the meta-learner to produce the vector of weights \(\tilde{\mathbf{w}}^{i}=f_{\text{meta}}(\tilde{\mathbf{s}}^{i})\).
    2. Compute the individual forecasts of the \(M\) methods in the pool and store them in the forecasting matrix \(\tilde{\mathbf{F}}^{i}\in\mathbb{R}^{H\times M}\).
    3. Compute the convex combination as \(\tilde{\mathbf{y}}^{i}=\tilde{\mathbf{F}}^{i}\tilde{\mathbf{w}}^{i}\) and output the forecasts \(\tilde{\mathbf{y}}^{i}\).
end for
```
**Algorithm 1** Forecast combination based on multi-task learning (DNN-MTL)
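A minimal sketch of the training objective minimized in Algorithm 1 is given below, written for TensorFlow/Keras (the framework used for the implementation described in Section 5). Tensor shapes and names are assumptions, and the orthogonality term is approximated at the mini-batch level rather than over all \(N\) series.

```python
import tensorflow as tf

def mtl_loss(y_true, F_hat, w_hat, o_true, o_hat, h_reg, h_cls, lam=0.01):
    """Sketch of the objective in Eqs. (16)-(19) on a mini-batch.
    y_true: (B, H) actuals, F_hat: (B, H, M) base forecasts, w_hat: (B, M) weights,
    o_true / o_hat: (B, M) target labels / predicted probabilities,
    h_reg / h_cls: (B, D) branch features (batch-level stand-ins for H_reg, H_cls)."""
    y_comb = tf.einsum("bhm,bm->bh", F_hat, w_hat)           # combined forecast
    y_avg = tf.reduce_mean(F_hat, axis=-1)                   # simple-average benchmark
    l_comb = tf.reduce_mean(
        tf.reduce_sum(tf.square(y_comb - y_true), axis=-1) /
        tf.reduce_sum(tf.square(y_avg - y_true), axis=-1))   # scaled MSE, Eq. (9)
    eps = 1e-7                                               # numerical safety for log
    l_cls = -tf.reduce_mean(tf.reduce_sum(
        o_true * tf.math.log(o_hat + eps) +
        (1.0 - o_true) * tf.math.log(1.0 - o_hat + eps), axis=-1))  # BCE, Eq. (14)
    l_ort = tf.reduce_sum(tf.square(
        tf.matmul(h_reg, h_cls, transpose_b=True)))          # orthogonality, Eq. (15)
    return l_comb + l_cls + lam * l_ort
```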
### Neural network architecture
In this section, we describe the architectural components of the deep neural network. Our network is built on CNNs, which employ convolution operations to automatically extract features. They are comprised of a series of convolution layers and nonlinear activation functions, which are trained to identify valuable and complex features within the input data (Kraus et al., 2020). As a result, CNNs are commonly used in various computer vision tasks, including object detection, image segmentation, and image classification (Li et al., 2021). However, CNNs' advantages are
not limited to visual data alone, as they are also used in applications that handle one-dimensional sequential data, such as time-series forecasting (Liu et al., 2019; Semenoglou et al., 2023).
Since the CNN architecture itself is not the main focus of our proposal, we employ for both subnetworks the same CNN architecture used by Ma and Fildes (2021). To extract features, each subnetwork consists of three stacked temporal convolutional blocks. Each convolutional block comprises a convolutional layer and a ReLU activation function. Furthermore, convolutional blocks incorporate a squeeze and excite layer (Hu et al., 2018), which uses global average pooling to generate summary statistics over the learned feature map, capturing contextual information outside each filter's focused feature of the input series. The excite operation in the squeeze and excite blocks introduces dynamic dependencies among the learned features, allowing the network to assign more importance to relevant features when needed. The filters of the three convolutional layers in each subnetwork are set to 64, 128, and 64 respectively, whereas the kernel sizes are set to 2, 4, and 8. As we can see in Figure 2, our architecture consists of two identical and independent feature extractors with task-specific output branches. In both subnetworks, the last temporal convolutional block is followed by a global average pooling layer to reduce network parameters. In the classification subnetwork, a dense layer with a sigmoid activation function is used to convert the learned features into a set of labels, where the dimension matches the number of base forecasters. In contrast, the regression subnetwork outputs a set of unnormalized weights, also through a dense layer with linear activation function, with a dimension equal to the number of base forecasters. To effectively leverage information between tasks, the unnormalized weights of the convex combination are then element-wise multiplied with the labels learned by the auxiliary task. The classification task allows the network to emphasize forecasting methods that are more accurate and diverse for the corresponding task, and downplay the effect of inaccurate and highly correlated ones. Both branches are then followed by a softmax layer that is trained to predict the final weighted combination of the forecasting methods.
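The following Keras sketch illustrates the two-branch architecture described above. The filter and kernel sizes follow the text, while the squeeze-and-excite reduction ratio, the input handling, and all function names are assumptions rather than the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=8):
    """Squeeze-and-excite: reweight channels using globally pooled statistics."""
    ch = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)
    s = layers.Dense(ch // ratio, activation="relu")(s)
    s = layers.Dense(ch, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, ch))(s)])

def branch(x):
    """Three temporal convolutional blocks (filters 64/128/64, kernels 2/4/8)."""
    for f, k in [(64, 2), (128, 4), (64, 8)]:
        x = layers.Conv1D(f, k, padding="same", activation="relu")(x)
        x = se_block(x)
    return layers.GlobalAveragePooling1D()(x)     # learned feature vector

def build_model(T, M):
    inp = layers.Input(shape=(T, 1))              # a standardized series of length T
    h_reg = branch(inp)                           # regression feature extractor
    h_cls = branch(inp)                           # classification feature extractor
    o_reg = layers.Dense(M, activation="linear")(h_reg)    # un-normalized weights
    o_cls = layers.Dense(M, activation="sigmoid")(h_cls)   # method-selection probabilities
    w = layers.Softmax()(layers.Multiply()([o_reg, o_cls]))  # convex combination weights
    return tf.keras.Model(inp, [w, o_cls, h_reg, h_cls])

model = build_model(T=64, M=9)    # e.g. quarterly series, nine base forecasters
model.summary()
```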
## 5 Experimental setup
In line with the existing research, we use the M4 competition dataset to evaluate the forecasting accuracy of the proposed methodology. This dataset includes 100,000 time series of varying frequencies and is publicly available in the "M4comp2018" R package (Montero-Manso et al., 2019). We focus on the yearly, quarterly, and monthly series, which represent 95% of the competition's series. Following the conclusion of the competition, each time series in the M4 dataset is available already split into two parts: a training period (historical data) and a testing period (future data). The observations from the testing period are exclusively used to evaluate the forecasts generated by the trained meta-learner. Therefore, the only information known by the meta-learner during training time is the training part of the M4 dataset. The yearly subset includes 23,000 series with lengths ranging from 13 to 835 observations and a forecast horizon of 6 periods. The quarterly subset consists of 24,000 series with a forecast horizon of 8 periods and lengths ranging from 16 to 866 observations. Finally, the monthly subset contains 48,000 time series with a forecasting horizon of 18 periods and lengths ranging from 42 to 2,794 observations. For each frequency, we use the corresponding M4 series to create the input data and we optimize the combination weights by training a distinct meta-learner for each group of series. Before feeding data to the neural network, all the input time series are standardized so that the mean of observed values is 0 and the standard deviation is 1. To ensure that all the input time series of a given frequency have the same length, we pad and truncate the data when needed. To determine this target length, a statistical analysis of lengths is carried out for each type of time series. In more detail, for series with a length shorter than the target length we apply a pre-padding operation, which involves adding the value 0 at the beginning of the series until the defined length is reached. Conversely, for time series with a longer length, a truncation operation is performed, removing observations from the beginning to achieve the fixed length. As a result, for yearly, quarterly, and monthly time series we consider lengths of 32, 64, and 128 observations, respectively. The Adam optimizer (Kingma & Ba, 2014) with
Figure 2: The network architecture designed to output the weights of the convex combination of base forecasting models for an input time series. In each convolutional layer, “F” and “K” denote the number of filters and the kernel size, respectively.
an initial learning rate set to \(0.001\) and a batch size of \(32\) series is used to minimize the custom loss function. Furthermore, a grid search cross-validation strategy has been set up to validate the hyperparameter \(\lambda\in\{1\times 10^{-3},5\times 10^{-3},\ldots,1\times 10^{-1},5\times 10^{-1}\}\) in the loss function. As a result, the best \(\lambda\) values are \(1\times 10^{-1}\), \(5\times 10^{-3}\), and \(1\times 10^{-2}\) for yearly, quarterly, and monthly time series, respectively. The neural network is written in Python 3.9 and implemented with TensorFlow 2.12. QP programs are solved using the "quadprog" package in R. All the experiments are performed on a laptop with an Intel(R) Core(TM) i7-8565U processor clocked at 1.80GHz with 4 cores, 16 GB of RAM, and Ubuntu 20.04 LTS. Finally, to enhance the reproducibility of our proposed meta-learner, we have made the corresponding source code available through the following link: [https://github.com/antoniosudoso/mtl-comb](https://github.com/antoniosudoso/mtl-comb).
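As a concrete illustration of this preprocessing, a minimal sketch is given below; the function and dictionary names are illustrative assumptions rather than the released code.

```python
import numpy as np

TARGET_LENGTH = {"yearly": 32, "quarterly": 64, "monthly": 128}

def preprocess(series, freq):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()              # zero mean, unit variance
    target = TARGET_LENGTH[freq]
    if len(x) < target:                       # pre-padding with zeros at the start
        x = np.concatenate([np.zeros(target - len(x)), x])
    else:                                     # truncate the oldest observations
        x = x[-target:]
    return x
```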
## 6 Experimental results
In this section, we first provide an analysis of the distribution of the base learners within the classification datasets. Then, we compare our approach against state-of-the-art algorithms based on meta-learning. Finally, we provide visual explanations showing the discriminative regions in the input data that contribute to the network's selection of a specific forecasting method in the pool.
### Distribution of methods within the training dataset
To evaluate the level of class imbalance of the training data for the classification task, we examine the distribution of base forecasting methods across the labels. These labels are generated for the training series in the offline phase by solving (QP-LAB) and then rounding the solution with a threshold \(\tau\). We empirically find that setting \(\tau=\frac{1}{M}\) yields satisfactory results in terms of the balance among the methods of the pool. To illustrate this, we compute the relative presence of each base learner and show these statistics in Figure 3. Each chart represents a set of time series grouped by frequency. The horizontal axis lists the individual methods, while the vertical bars depict the ratio between the number of series having the corresponding forecasting method in the label and the total number of series in that particular group. For each group of series, a consistent and nearly uniform distribution of labels is observed, with each method being almost equally represented. Such evidence indicates that there is no discernible class imbalance issue. Consequently, our classification models will not exhibit a bias toward any particular method during training, thereby enhancing their ability to generalize effectively across the spectrum of base forecasters. In the general case, if some base forecasters are not sufficiently represented, one can adjust the threshold \(\tau\) to reach a sufficiently balanced distribution of labels, or employ various balancing techniques, such as resampling methods or cost-sensitive loss functions (Charte et al., 2015; Tarekegn et al., 2021).
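For illustration, the following sketch shows how the binary labels and the presence statistics of Figure 3 can be obtained from the (QP-LAB) solutions; the variable names and shapes are assumptions.

```python
import numpy as np

def make_labels(qp_weights, tau=None):
    """qp_weights: array of shape (n_series, M) with the optimal convex combination weights."""
    M = qp_weights.shape[1]
    tau = 1.0 / M if tau is None else tau
    return (qp_weights >= tau).astype(int)    # one 0-1 label per base forecaster

def relative_presence(labels):
    # Fraction of series whose label set contains each forecasting method
    return labels.mean(axis=0)
```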
### Comparison with benchmark methods
We compare the point forecast performance of the proposed multi-task forecast combination approach (DNN-MTL) against the following benchmark methods:
* The simple average approach, where the forecasts from all nine methods in the forecasting pool are combined with equal weights (AVERAGE).
* The meta-learner introduced by Montero-Manso et al. (2020), which uses XGBoost to link hand-crafted statistical time series features with forecasting errors (FFORMA).
* The recent method proposed by Kang et al. (2022) that employs XGBoost to connect diversity-based time series features with forecasting errors (FFORMA-DIV).
Additionally, we perform ablation studies on DNN-MTL to evaluate the individual contributions of each introduced component. In the first ablation experiment, to assess the impact of the classification task on promoting diversity among the base learners, we remove the classification branch from the neural network. This transforms our meta-learner into a single-task regression model with a convolutional feature extractor, aligning with the approach presented by Ma & Fildes (2021). We denote this modified meta-learner as DNN-REG. In the second ablation experiment, we focus on exploring the impact of the hyperparameter \(\lambda\) on the loss function. This hyperparameter modulates the contribution to the objective function of the orthogonality between the features extracted by the regression subnetwork and those extracted by the classification subnetwork. For this study, we set \(\lambda\) to 0 so that its contribution to the loss function is excluded. This meta-learner is denoted as DNN-MTL\({}_{(\lambda=0)}\) and is trained and evaluated under the same experimental conditions as the previous experiments.
We assess the forecasting performance on the M4 test dataset by using two error metrics: the overall weighted average (OWA) and the mean series-level overall weighted average (MsOWA).
Figure 3: Relative presence of each forecasting method across the generated classification datasets. Base forecasters are listed horizontally.
Lower values for both OWA and MsOWA indicate better forecasting accuracy. Moreover, we include the standard deviation (SD) to provide insight into the variability of accuracy across the series, offering a perspective on accuracy risk (Lichtendahl & Winkler, 2020). We present numerical results for each group of series in Table 3. Our meta-learner DNN-MTL consistently outperforms other approaches in terms of OWA and MsOWA, surpassing the state-of-the-art meta-learner FFORMA-DIV, which also incorporates diversity information. In contrast, the simple average combination consistently falls short when compared to all the meta-learners. Furthermore, our first ablation experiment reveals valuable observations. While the meta-learner with the regression branch alone (DNN-REG) can produce reasonably accurate forecasts, being competitive with FFORMA, the removal of the classification branch strongly worsens OWA and MsOWA metrics. This decrease in forecasting accuracy can be attributed to the absence of diversity information among the base learners. The second ablation experiment underscores the importance of the orthogonality term in the loss function. Notably, setting \(\lambda\) to zero negatively impacts the model's performance. This observation indicates that incorporating orthogonality empirically leads to improved results, enabling the subnetworks to better exploit task-specific time series features. Another noteworthy finding is that our meta-learner consistently provides forecasts with lower accuracy risk, as measured by SD, when compared to other methods.
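For reference, the snippet below sketches the aggregate OWA metric, assuming the standard M4 definition in which the method's sMAPE and MASE are each expressed relative to the Naive2 benchmark and averaged; sOWA applies an analogous ratio at the level of a single series (its exact normalization is an assumption here), and MsOWA and SD are its mean and standard deviation across series.

```python
import numpy as np

def owa(smape, mase, smape_naive2, mase_naive2):
    # Aggregate OWA: relative sMAPE and relative MASE, averaged with equal weight
    return 0.5 * (np.mean(smape) / np.mean(smape_naive2)
                  + np.mean(mase) / np.mean(mase_naive2))
```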
To formally test whether the performances among the considered methods are statistically different, we employ the non-parametric Friedman test and post-hoc Multiple Comparisons with the Best (MCB) Nemenyi test (Koning et al., 2005), as implemented in the R package "tsutils" (Kourentzes, 2022). As outlined by Kourentzes & Athanasopoulos (2019), the Friedman test initially assesses whether at least one of the methods significantly differs from the others. If such a difference exists, the Nemenyi test is applied to identify groups of forecasting methods that do not exhibit statistically significant differences. This testing approach offers the advantage of not imposing any assumptions about the distribution of data and avoids the need for multiple pairwise comparisons between forecasts, which could bias the test results. We apply these tests on each data frequency based on the sOWA errors as shown in Figure 4. One can interpret the results in the following manner: lower average ranks indicate better performance, but there are no significant performance differences between any two methods if their confidence intervals overlap. According to Figure 4 we observe that: (i) although FFORMA-DIV outperforms FFORMA on average, their differences are not statistically significant. These findings align with those documented in the recent study by Kang et al. (2022); (ii) our method, DNN-MTL, takes the top position and generates forecasts that exhibit statistically significant differences when compared to those produced by other meta-learners, except in the case of monthly series, where the prediction intervals of DNN-MTL and DNN-MTL\({}_{(\lambda=0)}\) slightly overlap.
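The paper runs these tests with the R package "tsutils"; the Python sketch below is only a rough equivalent, assuming a matrix of per-series sOWA errors, SciPy's Friedman test, and the usual Nemenyi critical distance \(q_{\alpha}\sqrt{k(k+1)/(6N)}\) with a user-supplied \(q_{\alpha}\).

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def mcb_summary(errors, q_alpha):
    """errors: array of shape (n_series, n_methods) with per-series sOWA values."""
    n, k = errors.shape
    stat, pvalue = friedmanchisquare(*[errors[:, j] for j in range(k)])
    ranks = np.apply_along_axis(rankdata, 1, errors)   # rank methods within each series
    mean_ranks = ranks.mean(axis=0)
    cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * n))    # Nemenyi critical distance
    # Two methods differ significantly if their mean ranks differ by more than cd
    return pvalue, mean_ranks, cd
```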
Figure 4: MCB Nemenyi test results, average ranks, and 95% confidence intervals for yearly, quarterly, and monthly time series. Forecast combination methods are sorted vertically according to the sOWA mean rank. The mean rank of each approach is shown to the right of its name. Statistical differences in performance are observed if the intervals of two forecast combination procedures do not overlap.
\begin{table}
\begin{tabular}{l c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Yearly**} & \multicolumn{3}{c}{**Quarterly**} & \multicolumn{3}{c}{**Monthly**} \\ \hline
**Method** & **OWA** & **MsOWA** & **(SD)** & **OWA** & **MsOWA** & **(SD)** & **OWA** & **MsOWA** & **(SD)** \\ \hline AVERAGE & 0.949 & 1.204 & (0.729) & 0.916 & 0.955 & (0.476) & 0.911 & 0.952 & (0.374) \\ FFORMA & 0.799 & 0.996 & (0.866) & 0.847 & 0.910 & (0.542) & 0.858 & 0.905 & (0.384) \\ FFORMA-DIV & 0.798 & 0.979 & (0.738) & 0.842 & 0.899 & (0.465) & 0.851 & 0.904 & (0.377) \\ DNN-REG & 0.801 & 0.896 & (0.565) & 0.845 & 0.902 & (0.497) & 0.862 & 0.909 & (0.476) \\ DNN-MTL\({}_{(\lambda=0)}\) & 0.798 & 0.879 & (0.401) & 0.839 & 0.891 & (0.482) & 0.849 & 0.898 & (0.338) \\ DNN-MTL & **0.792** & **0.866** & **(0.261)** & **0.830** & **0.873** & **(0.343)** & **0.841** & **0.886** & **(0.229)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test set evaluation metrics for yearly, quarterly, and monthly time series. OWA, mean (M) of sOWA, and standard deviation (SD) of sOWA for each combination method. Lower OWA and MsOWA correspond to better forecasting accuracy. The best-performing method is highlighted in bold.
### Visual explanations
After training, the classification subnetwork can be disconnected and used to predict, for an input time series, the probability that a forecasting method aligns with the principles of accuracy and diversity. This network extracts relevant features from the series by means of convolutional layers and subsequently maps these features to a vector of labels, where each label corresponds to a forecasting method. However, the interpretability of the features derived from CNNs frequently presents difficulties. To address this limitation and provide insights into the decision-making process of the meta-learner, we employ Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al., 2017). Grad-CAM is a method that enhances the interpretability of deep neural networks by producing visual heatmaps highlighting the regions of the input that significantly influence the network's decision for a particular class. This technique leverages the gradients of the predicted class score by the model with respect to the feature maps of the last convolutional layer. Specific to our problem, by computing the weighted sum of these gradients, we can identify the most discriminative regions in the input data that contributed to the network's selection of a specific forecasting method in the pool. Moreover, by overlaying the Grad-CAM heatmaps onto the original time series, we can visually understand the network's reasoning, aiding domain experts in comprehending the model's decision-making process. We consider a threshold of 0.5 to convert each predicted probability into a 0-1 binary label from which the gradient of the predicted class score is computed. In Figures 5, 6, and 7, for each base forecasting method selected by the network, we analyze the heatmap produced by applying Grad-CAM on a sample of yearly, quarterly, and monthly time series, respectively. At each timestep, an importance score ranging from 0 to 1 is assigned. A value close to 1 means high significance of the corresponding timestep, while values near 0 indicate timesteps with low importance for the classification outcome. Looking at the figures, the heatmaps exhibit diversity in terms of the temporal regions they focus on within the input time series. This finding implies that the neural network is not merely relying on a single common pattern present in the time series but is considering multiple relevant segments. Furthermore, these distinctive areas of interest indicate that the network is leveraging different features for different methods, likely reflecting the unique characteristics and strengths of each base forecaster. Finally, it is important to note that while the network is trained to learn both accurate and diverse methods, certain regions with high heatmap values appear to be shared among multiple methods. These common regions indicate the particular temporal segments that hold significance in the context of more than one forecasting technique. This phenomenon indicates the presence of features in the input time series that possess inherent characteristics that are beneficial for multiple forecasting methods.
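A minimal sketch of how such heatmaps can be produced for the 1D classification subnetwork is shown below, assuming a single-output Keras model; the layer name, input shaping, and normalization are illustrative assumptions.

```python
import tensorflow as tf

def grad_cam_1d(model, series, last_conv_name, method_index):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(series[None, ..., None])
        score = preds[0, method_index]            # predicted class score for one method
    grads = tape.gradient(score, conv_out)        # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=1)       # average gradients over the time axis
    cam = tf.nn.relu(tf.reduce_sum(weights[:, None, :] * conv_out, axis=-1))[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)       # importance scores in [0, 1]
    return cam.numpy()
```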
Figure 5: Grad-CAM visual explanations of the predicted base learners for a sample of 4 yearly test series. Time series have been normalized to zero mean and unit variance. For an input series, there is a heatmap for each forecasting method selected by the classification subnetwork.
Figure 6: Grad-CAM visual explanations of the predicted base learners for a sample of 4 quarterly test series. Time series have been normalized to zero mean and unit variance. For an input series, there is a heatmap for each forecasting method selected by the classification subnetwork.
Figure 7: Grad-CAM visual explanations of the predicted base learners for a sample of 4 monthly test series. Time series have been normalized to zero mean and unit variance. For an input series, there is a heatmap for each forecasting method selected by the classification subnetwork.
## 7 Conclusion
We present a multi-task learning methodology to improve the forecasting performance of convex combinations of forecasting models. Building on the literature, we use meta-learning to link the features of time series with the forecasts provided by a pool of base learners. Our meta-learner simultaneously addresses two related tasks: combining forecasts based on their accuracy and selecting models based on their diversity. To accomplish this, we employ a deep neural network to jointly solve the associated regression and classification problems. To provide labels for the classification task, we introduce an optimization-driven approach to identify the most appropriate method for a given time series, considering accuracy and diversity among the methods in the pool. Experimental results on the M4 competition dataset demonstrate that this approach enhances the accuracy of point forecasts compared to state-of-the-art meta-learning methods. Moreover, gradient-based visual explanations provide interesting insights into discriminative regions in the input series that contribute to the network's selection of a specific forecasting method. Our approach presents an automated and adaptable tool for optimizing forecasting procedures, featuring the following key advantages. First, it relieves forecasters from the burden of filtering the methods according to diversity and accuracy criteria when conducting feature-based forecasting, as it effectively learns diversity information during training. Second, it can accommodate forecasts from a variety of methods, including statistical techniques, nonlinear approaches, and judgment-based forecasting. As a drawback, the proposed method is limited to producing only point forecasts; this motivates future research in the direction of adapting it to output interval forecasts. Another potential direction for future research is the extension to probability density forecasting, as described in the work by Hall & Mitchell (2007). In this context, meta-learning could play a role in generating a weighted combination of forecast distributions from various models or in creating a weighted average of these distributions.
## Acknowledgments
The work presented in this paper has been supported by PNRR MUR project PE0000013-FAIR and CNR DIT.AD106.097 project UISH - Urban Intelligence Science Hub.
## Declaration of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Forecast combination consists of using multiple forecasts to produce a single, more accurate one. In recent years, feature-based forecasting has been employed either to select the most appropriate forecasting model or to revise the weights of their combination. In this paper, we present a multi-task optimization paradigm focused on solving both problems simultaneously, leading to an evolution of current operations-research-based forecasting methods. In essence, in addition to the standard feature-based forecasting approach, this paradigm incorporates an additional learning and optimization task for identifying the optimal forecasting methods. During the training phase, an optimization model with linear constraints and a quadratic objective is used to identify accurate and diverse methods for each time series. Moreover, during the training phase, a neural network is used to learn the behaviour of that optimization model. Once training is complete, the network is used to identify the candidate set of methods |
2309.04590 | Robotic Defect Inspection with Visual and Tactile Perception for
Large-scale Components | In manufacturing processes, surface inspection is a key requirement for
quality assessment and damage localization. Due to this, automated surface
anomaly detection has become a promising area of research in various industrial
inspection systems. A particular challenge in industries with large-scale
components, like aircraft and heavy machinery, is inspecting large parts with
very small defect dimensions. Moreover, these parts can be of curved shapes. To
address this challenge, we present a 2-stage multi-modal inspection pipeline
with visual and tactile sensing. Our approach combines the best of both visual
and tactile sensing by identifying and localizing defects using a global view
(vision) and using the localized area for tactile scanning for identifying
remaining defects. To benchmark our approach, we propose a novel real-world
dataset with multiple metallic defect types per image, collected in the
production environments on real aerospace manufacturing parts, as well as
online robot experiments in two environments. Our approach is able to identify
85% defects using Stage I and identify 100% defects after Stage II. The dataset
is publicly available at https://zenodo.org/record/8327713 | Arpit Agarwal, Abhiroop Ajith, Chengtao Wen, Veniamin Stryzheus, Brian Miller, Matthew Chen, Micah K. Johnson, Jose Luis Susa Rincon, Justinian Rosca, Wenzhen Yuan | 2023-09-08T20:36:56 | http://arxiv.org/abs/2309.04590v1 | # Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components
###### Abstract
In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. Due to this, automated surface anomaly detection has become a promising area of research in various industrial inspection systems. A particular challenge in industries with large-scale components, like aircraft and heavy machinery, is inspecting large parts with very small defect dimensions. Moreover, these parts can be of curved shapes. To address this challenge, we present a 2-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the best of both visual and tactile sensing by identifying and localizing defects using a global view (vision) and using the localized area for tactile scanning for identifying remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in the production environments on real aerospace manufacturing parts, as well as online robot experiments in two environments. Our approach is able to identify 85% defects using Stage I and identify 100% defects after Stage II. The dataset is publicly available at [https://zenodo.org/record/8327713](https://zenodo.org/record/8327713).
## I Introduction
Large-scale manufacturing machinery and industries with large metal parts, such as aircraft components, experience various internal and external factors such as vibration, foreign object debris, high temperature, friction, and corrosion. This can lead to fatigue or even part failure. Hence, to ensure safe operation, each industry requires surface inspection. For example, in the aircraft industry, airplanes are inspected every 100 hours[1], according to Federal Aviation Administration (FAA) rules. The periodic inspection could extend the lifetime of the parts. However, human visual and touch inspection still accounts for more than 90% of inspection checks[2].
There is a significant interest in automating the surface defect detection process, as it allows for fast, repeatable, and cost-effective detection, as compared to the human expert inspection process. Surface defect detection on industrial parts is a fast-growing market [3]. Nowadays, more and more inspection systems use vision-based techniques combined with Deep Learning for defect detection[4][5]. However, aerospace and spacecraft industries have different inspection requirements - they have large metal parts which need to be scanned and the dimensions of the defects can be as small as 0.01mm.
Instead of relying on a vision-only system, we propose a visuotactile 2-stage pipeline for surface defect detection. Our method combines the advantages of both vision and tactile sensing and avoids their limitations: vision has high prediction speed and can cover a large surface area, but typically attains low accuracy since the visual appearance of defects can be influenced by many sources of noise; conversely, high-resolution tactile sensing gives high accuracy but has low speed because of the small coverage area in a single scan. The first stage of our pipeline uses an RGB camera to collect an image of a segment of the specimen and uses deep learning to identify potential defect regions. The regions with low defect confidence are passed on to the second stage of the pipeline, which leverages a high-resolution vision-based tactile sensor, the GelSight Mobile, for taking a tactile scan. This tactile data is used to identify and classify the surface defect. This approach allows the scanning of large surfaces for small anomalies efficiently. We implemented the whole system on a robot arm to allow for inspection in a production environment. Using our method, we are able to identify defects 100% of the time, in a fraction of the time required by the tactile-only approach and more accurately than the vision-only approach.
We make 3 specific contributions in this work
* We introduce the first aerospace defect detection dataset containing metallic surfaces with multiple defects in a single image
* We propose a 2-stage defect detection approach using visuotactile sensing
* We integrate our detection approach into a prototype system on an industrial robot arm
We introduce the dataset and dataset collection details in Section (III), the visuotactile detection approach in Section (IV), and the integrated robot system for runtime defect detection in Section (V). Using our approach, we are able to achieve perfect recall with 70x less inspection time compared to the tactile-only approach. We successfully integrate our detection system in 2 separate environments (different arms, different illumination conditions, and different panels). The proposed techniques are widely applicable to various industries with large-scale components, such as ship hull inspection and heavy machinery.
## II Related Work
This section surveys works that present novel defect detection techniques as well as works that propose datasets with industrial defects.
_Defect detection methods_: This section covers various surface inspection techniques using different sensing techniques. In [5], authors used a depth camera to create a 3D reconstruction of the part under inspection, computer vision techniques for segmenting cracks, and machine learning for classifying them into defect vs non-defect patterns. In the aerospace industry, the most common type of part is metallic and very reflective. As noted in [6], commercial depth sensors exhibit poor accuracy when sensing metallic parts. In [7], authors train a custom deep CNN for automatic crack detection on concrete surfaces. Their approach gives a 0.87 F-measure value on the CrackLS315 dataset. In [8], authors similarly used a CNN and a vision-based tactile sensor for crack profile estimation. However, it is unclear how to extend the approach to images containing multiple kinds of defects that are not scratches. [9] is the closest to our work. They propose a 2-stage visuotactile pipeline targeted only to crack detection. They used 3500 images to train an object detector and used an optical fiber-based tactile-proximity sensor for assessing cracks. However, their method is tested on a toy dataset using 3D-printed parts containing cracks in a lab setting. Their dataset contains a single large crack across the image on a non-metallic surface. We have integrated our detection pipeline in a production setting and show results on real aerospace parts. Moreover, we require an order of magnitude less data than their work.
_Metal defect datasets_: In this section, we cover datasets that target defect detection in industrial parts and manufacturing processes. The MVTec dataset [10] is a challenging dataset for unsupervised anomaly detection. The dataset contains RGB images of various small to medium manufactured parts such as carpets, leather, metal nuts, cables, and hazelnuts. However, each image contains only a single type of anomaly. In comparison, our dataset contains multiple defects in a single image, and the defects can be very small (less than 2% of the pixels in the image). The magnetic tile dataset[11] contains 1344 grayscale images along with a segmentation mask for each image. The dataset is targeted towards industrial parts (flat metallic sheets), which are challenging to image, similar to our case. However, the parts considered in the dataset are flat and have consistent illumination across the tile plane. This illumination setting is hard to replicate for aerospace parts, which can be curved and have a significant variation in color across the metallic part.
## III Boeing-CMU multi-defect dataset
We introduce a novel dataset for surface defect detection on aerospace metal parts. This dataset is used to test our defect detection algorithm in an offline setting. Our dataset contains 184 RGB images with bounding box annotations in Pascal VOC format[12] for each image. Each RGB image contains multiple defects. The defects are manually made on the parts by experts from Boeing with a process similar to the real defects in production, and they constitute more challenging inspection cases since the defect density is higher than in real production parts. Each bounding box contains the location and class of the defect. This dataset contains 3 kinds of defects - _scratches_, _drill runs_, and _gouges_. Figure 2 illustrates the defects by showing the corresponding RGB image, GelSight tactile image, and depth profile along the defect, respectively. The standard definition of the defects is given in terms of the depth and width of the surface geometry, as marked in the _Heightfield_ in Figure 2. Table I shows the breakdown of the number of defects in our dataset.
The dataset was collected at the Boeing lab with an Intel RealSense D455 camera at a resolution of 1280 \(\times\) 800. The full setup is shown in Figure 3. We placed soft boxes (bulb with a diffuser cloth in front) at an angle of 45\({}^{\circ}\) along the vertical axis on either side of the camera. This illumination setting allows us to capture images of metallic curved panels without over-saturation or under-exposure in any part of the image. For the dataset, we used 18 curved metal (approximate radius of curvature 26.5 inch) panels - 2 panels of dimension 40 inch \(\times\) 40 inch, 15 panels of dimensions 56 inch \(\times\) 38 inch, and 1 panel of dimension 94in \(\times\) 20in. We collect 9 images at different locations per panel to cover the whole panel. Each panel is a piece of an aircraft with fasteners, a support structure underneath, and a green temporary protective coating. All the images were manually labeled by Boeing personnel using LabelImg1, a graphical image annotation tool. Figure 1 shows some illustrative images in the datasets. One noticeable feature is the presence of significant variation in the surface color. This is due to the surface being curved and metallic in appearance.
Footnote 1: [https://github.com/heartexlabs/labelImg](https://github.com/heartexlabs/labelImg)
**Tactile dataset**: We collected tactile data using a GelSight Mobile 0.5X[13], a high-resolution vision-based tactile sensor with 6.9\(\mu\)m resolution in x-y direction and 4\(\mu\)m in the z-direction. We manually pressed the sensor on the probable defect location. We collected 59 scans from 1 Boeing panel, containing - 17 scratches, 14 gouges, and 18 drill runs. We also collect 10 no-defect cases. Each tactile scan is manually labeled with a class label.
## IV Multi-modal defect detection method and setup
Figure 4 shows the proposed pipeline for surface defect detection and classification based on visual and tactile sensing. Our 2-stage pipeline uses RGB images for identifying
defect regions with a confidence value. We delegate bounding boxes with low confidence scores to the second stage and use high-resolution tactile images for identifying the defect. In the following section, we provide details about each stage.
### _Stage I: Vision-based defect detection_
The first stage uses an RGB camera to scan the surface and predict defects. We used a Faster Region-based Convolutional Neural Network(Faster R-CNN)[14] with MobileNet-v3 FPN backbone [15]. The neural network architecture was chosen based on empirical observation. The model was pretrained on Common Objects in Context (COCO) dataset [16]. We fine-tune the last 3 backbone layers, regression, and classification models after feature prediction. Note, the model can be used with images of any size without resizing, at both train and test time, as it is fully convolutional. The neural network model predicts multiple bounding boxes per image. Each bounding box contains the coordinates of the rectangle region in the camera coordinate frame, defect class, and confidence score for that class. At test time, we predict bounding boxes with a confidence score higher than 0.7 as surface defects with certainty and shortlist those with scores between 0.1 and 0.7 to delegate to the next stage of the pipeline. These threshold choices provide a good trade-off between detection in stage I and proposing candidates with minimal false positives for stage II.
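A hedged sketch of how such a detector can be instantiated with torchvision is given below; the class mapping, the choice of trainable backbone layers, and the overall recipe are illustrative assumptions rather than the authors' code.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background + scratch, gouge, drill run

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
    weights="DEFAULT",                 # COCO-pretrained weights
    trainable_backbone_layers=3)       # fine-tune only the last backbone layers
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# At test time: boxes with score > 0.7 are reported as defects, and boxes with
# 0.1 < score < 0.7 are delegated to the tactile stage.
```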
While training, we use 3 data augmentation techniques - photometric (varying brightness, contrast, hue, and saturation), CutAndPaste [17], and translation. These augmentation techniques make our model robust to illumination changes and the presence of distracting features (like bolts and big cracks) at runtime when the inspection parts could be placed in a totally different environment and could be of different shapes. Figure 5 illustrates the augmentation techniques applied individually. At the training time, we apply all of them at the same time. The photometric data augmentation is specifically helpful to make the model robust to lighting variation which might occur in the production environment.
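For illustration, the photometric part of such an augmentation pipeline might look as follows (the jitter ranges are placeholders, not the values used in the paper; translation and CutAndPaste additionally require shifting or compositing the bounding boxes).

```python
import torchvision.transforms as T

# Random brightness/contrast/saturation/hue jitter applied to the input image
photometric = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)
```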
### _Stage II: Tactile-based defect detection_
We use GelSight Mobile [13] from GelSight Inc. for obtaining high-resolution tactile information. The tactile sensor provides a high-quality heightfield, as shown in the GelSight image of Figure 2. Due to the high-quality heightfield, we can directly inspect anomalous regions and use the defect description to identify them. To identify the anomalous regions in the heightfield, we use the Canny edge detector without non-maximal suppression, followed by probabilistic Hough line detection for scratches & drill runs and Hough circle detection for gouges, respectively. We hand-tuned the parameters of the Canny edge detector and the feature detection algorithms. This step is required to identify potential regions containing defects. After identifying the anomalous region, we extract the depth profile by generating a line segment perpendicular to the scratch or drill run, or passing through the center of the gouge, as shown in Figure 6C. After obtaining the depth profile, we detrend the depth by presuming that the depth in the
Fig. 1: **Dataset Illustration**: It contains RGB images of aircraft parts from Boeing. Each panel is curved with 3 sizes 40in \(\times\) 40in, 56in \(\times\) 18in, and 94in \(\times\) 20in. For each image, we have bounding box annotations made by industry inspectors.
Fig. 3: **Dataset capture setup**: Left image contains - (1) Position for metal panel placement; (2) Newer 24 in \(\times\) 24 in soft boxes lights with 700 Watt, 5500K CFL Light Bulbs; (3) RealSense D435 camera. On the right, we show the real setup used to collect images for our dataset.
Fig. 2: **Dataset defect description**: The top image shows an RGB image and 3 types of defect. The bottom 3 rows show(left-to-right) zoomed-in RGB image, heightfield of the anomalous region, and detrended depth profile.
neighborhood of the defect is zero-level. The detrending is crucial to correctly identify the depth of the defect and use the defect definitions for identification. We use the depth and width defect descriptions, as mentioned in Table I, for identifying the defect in the extracted profile, as shown in Figure 6. For drill run detection, we require the number of minima peaks with depth \(>10\mu m\) to be greater than 3. This heuristic is motivated by the fact that the drill run forms a repeated pattern of bumps in the specimen.
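A minimal sketch of this Stage II processing is given below, assuming OpenCV; the Canny thresholds and Hough parameters, which the paper states were hand-tuned, are placeholders.

```python
import cv2
import numpy as np

def find_anomalous_profiles(heightfield):
    # Normalize the heightfield to 8-bit for edge and feature detection
    h8 = cv2.normalize(heightfield, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(h8, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)       # scratches / drill runs
    circles = cv2.HoughCircles(h8, cv2.HOUGH_GRADIENT, dp=1, minDist=30)  # gouges
    return lines, circles

def detrended_profile(heightfield, p0, p1, n=200):
    # Sample the depth along the segment p0 -> p1 and remove the local trend,
    # taking the neighbourhood of the defect as the zero level
    xs = np.linspace(p0[0], p1[0], n).astype(int)
    ys = np.linspace(p0[1], p1[1], n).astype(int)
    profile = heightfield[ys, xs]
    trend = np.linspace(profile[0], profile[-1], n)
    return profile - trend
```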
## V Robot system integration
We integrated our defect detection pipeline with a robot system that is very similar to a system that can be applied for online detection in factories, as shown in Figure 7.
The robot system consists of a UR3 robot arm, a RealSense 435F RGBD camera mounted at the robot end-effector, a GelSight Mobile 0.5x mounted using a custom-designed mount at the robot end-effector, and a Neewer \(24\)in \(\times\)\(24\)in softbox. Note that the depth information is not used for defect detection purposes. The robot planner and defect prediction algorithms run on a computer with an Intel i7-10850H CPU @ 2.7 GHz (6 cores), an NVIDIA Quadro T200 GPU, and the Windows 10 operating system. The GelSight tactile sensor mount is specifically designed in order to allow
Fig. 4: **Detection Overview**: Our approach consists of 2 stages A) Vision stage uses Deep Learning based bounding box detector for identifying defects in the RGB image from the global view. B) Based on the confidence threshold we identify defects or send them to stage 2. C) Tactile stage uses the high-resolution heightfield extracted from GelSight and inspects the depth profile of anomalous regions to identify the type of defect.
Fig. 5: **Data augmentation strategies**: This visual illustrates the original image and images after a single augmentation applied to the original image. We found that these augmentations make our detection robust to illumination changes, translation variations, and clutter(bolts).
Fig. 6: **Tactile detection pipeline**: The outline of our tactile sensor-based detection system A) Raw data capture by GelSight Mobile B) Output of Canny edge detection on heightfield image C) Automated anomalous profile selection D) Depth profile along the anomalous profile with width and depth annotations.
Fig. 7: **Runtime System**: The robot system contains (A) UR3 robot arm (B) RealSense RGBD 435F camera (C)Neewer Illumination source (D) Custom tactile sensor mount E) GelSight Mobile 0.5x (F) Specim under inspection. Our algorithm is run on a PC not shown in the figure.
compliance when indenting the metal specimen. Figure 8B shows the CAD drawing of the sensor mount. The camera to robot calibration is done using MoveIt hand-eye calibration. GelSight to end-effector transform is manually computed based on manufactured gripper mount.
In the first stage, the robot arm collects RGB images, using the algorithm defined in Section V-A and feeds them to phase I of the defect detection system described in Section IV-A. Phase I outputs defect regions and uncertain regions. Then, the robot control uses an algorithm mentioned in Section V-B, to collect the tactile image of each uncertain region. This tactile image is, then passed to phase II, tactile detection described in Section IV-B, for processing.
### _RGB Data Collection with the Robot_
In this section, we will describe the robot control technique used for capturing RGB images for surface defect detection. In our current testing setup, the capture locations are pre-defined manually in the robot's task-space coordinates (3D Cartesian locations). We request that the robot collect RGB images at multiple locations to ensure the entire surface of the panel is covered. In our initial experiment, those locations are manually chosen based on the fixed position of the parts. The robot calculates the joint angle configuration for a task-space location using inverse kinematics [18]. The robot then generates joint angle trajectories toward the target joint locations using linear interpolation. We leverage the robot simulation to check for collisions and singularities, after which the trajectory is forwarded to the robot's controller.
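A minimal sketch of the joint-space interpolation step is shown below; the number of interpolation steps is an illustrative choice.

```python
import numpy as np

def linear_joint_trajectory(q_start, q_goal, n_steps=100):
    q_start, q_goal = np.asarray(q_start), np.asarray(q_goal)
    s = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - s) * q_start + s * q_goal    # shape (n_steps, n_joints)
```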
### _Tactile Data Collection with the Robot_
In this section, we will describe the robot control strategy used to obtain tactile images using the GelSight sensor. To capture a focused tactile image, the robot needs to make the GelSight Mobile indent the surface in the perpendicular direction at the defect location. Therefore, to achieve normal indentation, we estimate coarse normal direction by obtaining a coarse depth measurement from the RGBD camera and fitting a polynomial function in \((x,y,z)\) to the specimen surface. Given the fitted surface function, we obtain the coarse surface normal at the target data capture location by differentiating the polynomial function w.r.t. \(x\) and \(y\), followed by a cross-product. We, then, use inverse kinematics and interpolation, as mentioned in the previous section, to move closer to the object. After that, we use tactile servoing until we obtain a focused tactile scan. We use background subtraction thresholding to estimate if the tactile scan is in focus.
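The coarse normal estimation can be sketched as follows; a quadratic surface is assumed here, since the paper only specifies a polynomial fit in \((x,y,z)\).

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    # Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surface_normal(coeffs, x0, y0):
    c0, c1, c2, c3, c4, c5 = coeffs
    dzdx = c1 + 2*c3*x0 + c4*y0
    dzdy = c2 + c4*x0 + 2*c5*y0
    # Cross product of the tangents (1, 0, dz/dx) and (0, 1, dz/dy)
    n = np.cross([1.0, 0.0, dzdx], [0.0, 1.0, dzdy])
    return n / np.linalg.norm(n)
```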
## VI Experiments
To evaluate our proposed pipeline for defect detection, we perform analysis of each stage - vision only in Section VI-A and tactile only in Section VI-B. We, then, perform an analysis of our two-stage inspection system integrated with a robot in Section VI-C. For our on-site robot experiments, we record the detection runtime and the accuracy of defect detection.
### _Offline Vision-based surface defect detection_
We first evaluate the performance of our vision-based algorithm for defect detection using the offline dataset introduced in Section III. We fine-tuned the Neural Network using 150 training images of resolution 1280\(\times\)800. We investigate the effect of using data augmentation techniques for defect detection by comparing the performance of the trained model with various augmentations. Each model was trained on 150 images for 100 iterations using SGD with a learning rate of 0.005 and weight decay of 5e-4 in PyTorch.
During testing, we only consider bounding boxes that have a high confidence score (\(0.5\) in our experiments). For calculating the recall, we set the _maximum detections_ allowed per image to 100. This parameter intuitively corresponds to the maximum number of bounding box predictions allowed in each image. Table II shows the evaluation metrics using the trained Neural Network with and without augmentations. For all the metrics, we used Intersection over Union = 0.4 (the metric for finding the overlap between bounding boxes) as the threshold for finding correspondence between the ground truth bounding box and the predicted bounding box. Figure 9 shows the test results. We found that the common misclassification cases are: (i) confusion between scratch and drill run (Figure 9, case A); (ii) regions that look like scratches but do not have depth (Figure 9, case B); (iii) very few visual features for classification (Figure 9, cases C, D, E and F). These issues would be solved by our tactile stage, as it accounts for indentation depth and captures an orthographic view of the defect.
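For completeness, a small sketch of the IoU computation used to match predicted and ground-truth boxes (with the 0.4 threshold above) is shown here; boxes are assumed to be in \((x_1,y_1,x_2,y_2)\) format.

```python
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```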
In on-site robot experiments, we obtained images containing many challenging artifacts, as shown in Figure 10. Specifically, large bolt regions and bright light spots caused issues in the detection. Without augmentation, the probability of those areas being classified as a defect is high, as shown in Figure 10 left. However, with our augmentation techniques, the neural network is correctly able to identify those regions as normal regions.
defect classification result. We obtain an average classification accuracy of 95.75%. Note that the tactile-only approach allows defects to be identified with a 100% success rate if class identification is not of concern. We notice some misclassifications due to the high variability in the defects and dirt on the sensor surface during tactile data collection. We showcase the misclassified cases in Figure 11. For the drill run cases, we found that the depth profile is significantly different from the ideal profile according to the industrial partners, and the misclassified cases have fewer drill features. Therefore, all the misclassifications are reasonable.
### _Online Robot system evaluation_
In this section, we run our integrated robotic detection system to inspect an aerospace part for potential defect regions. We capture multiple RGB images at different locations to cover the entire surface of the part. Then the tactile exploration procedure is performed on each RGB-image-covered area.
We compare the performance of our system at runtime with vision-only and tactile-only approaches. We choose accuracy and runtime as the metrics for comparison. Since tactile data capture (mean time = 22.26 seconds) takes 4x more time than visual data capture (mean time = 6.52 seconds), we use these values to estimate the time for all experiments instead of the actual runtime. We use 1 panel for our robotic experiment containing 15 defects - 7 scratches, 7 gouges, and 1 drill run. We use 2 RGB images to cover the panel used in our experiment. A Siemens engineer manually
Fig. 11: **Tactile detection failures**: This visual shows the illustrative failure cases in our tactile dataset with ground truth and predicted defect labels. We found 2 _Drill Run_ cases misclassified because the number of repeated features was very few.
Fig. 12: **Tactile confusion matrix**: We plotted the predicted label using our tactile detection algorithm on the x-axis and true labels on the y-axis. This visual highlights that our tactile detection algorithm can classify defects very well.
Fig. 10: **Comparison of RGB-based defect detection with/without data augmentation at robot experiment time**: In this figure, the ground truth boxes are marked with solid lines, and predicted areas are marked with dashed lines. The colors of the bounding box represent _drill run_, _sogue_, and _scratch_ in red, green, and blue color respectively. The left side shows the model performance without data augmentation on 2 test images. It identifies large bolt regions as scratch defects and empty bolt regions as gouges which is incorrect. The model trained with data augmentation is able to correctly identify those regions as background as shown on the right and obtains 94.58% recall rate without defect classification as compared to 63.56% without augmentations.
Fig. 9: **RGB-only detection results in offline dataset**: We highlight the prediction of our algorithm on reserved images in our offline dataset. In the bottom row, we highlight the failure cases in detection. The common causes of failure are insufficient visual features(drill run looking like a scratch in (A)) and no depth information at the defect location(B is a paint bump instead of a scratch in the surface. The depth profile between the paint bump and scratch is significantly different).
labeled the test data for this experiment. Table III compares the baselines with our approach quantitatively for a new aerospace panel at Boeing's facility. Our approach achieves a perfect recall rate (@IoU=0.4 and _max detections_=100) of 1.0, which is 26.5% higher than the vision-only method, and takes 0.01x the runtime of the tactile-only approach. The defect detection system has been integrated with multiple robotic systems at 2 different locations - the Siemens research lab and Boeing production labs. These environments had 2 different robotic systems - a UR3 in the Siemens lab and a UR10 in the Boeing labs. These environments had different illumination settings and panels with different curvatures for testing. This highlights that our detection system is easy to adapt to various environments.
## VII Conclusion
This work introduces a robotic aerospace defect dataset and a 2-stage pipeline for defect detection on large-scale parts. Stage I uses an RGB camera to identify defect areas with a preliminary estimation, followed by a stage in which the robot uses the high-resolution tactile sensor GelSight Mobile for precise inspection of the potential defect area. Our approach is shown to be beneficial in terms of accuracy (perfect recall) and speed of inspection (70x faster than the tactile-only approach). We were also successfully able to integrate the detection system in 2 different environments, containing different robot arms, different illumination, and different metal panels. A comprehensive evaluation in a production environment is out of the scope of this research work.
We did not have the capacity to test the robustness of the pipeline after repeated use. Touch sensor measurements become less accurate over time due to repeated interaction. Therefore, accuracy evaluations of the pipeline at repeated intervals may help the system to become robust. Transfer learning under significant changes in illumination or inspection material is another avenue of research. Using multiple viewpoints in a single detection might be an interesting research direction to improve the accuracy of the vision stage. Another interesting extension would be to incorporate human feedback for the online update of the prediction model.
## VIII Acknowledgment
The research is partially sponsored by Advanced Robotics for Manufacturing Institute by the Office of the Secretary of Defense and was accomplished under Agreement Number W911NF-17-3-0004. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Office of the Secretary of Defense or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. For this reason, automated surface anomaly detection has become an important area of research for inspection systems in various industries. In particular, industries with large-scale components, such as aircraft and heavy machinery, face the challenge of inspecting large parts with very small defect sizes. Moreover, these parts are often curved. To address this challenge, we propose a 2-stage multi-modal inspection pipeline using visual and tactile sensing. This approach combines the advantages of both visual and tactile sensing by identifying and localizing defects using a global view and by using the localized area for tactile scanning to identify the remaining defects. To evaluate this approach, we propose a novel real-world dataset with multiple metallic defect types, collected in real production environments on aerospace manufacturing parts |
2309.14783 | Manifestly covariant variational principle for gauge theories of gravity | A variational principle for gauge theories of gravity is presented, which
maintains manifest covariance under the symmetries to which the action is
invariant, throughout the calculation of the equations of motion and
conservation laws. This is performed by deriving explicit manifestly covariant
expressions for the Euler--Lagrange variational derivatives and Noether's
theorems for a generic action of the form typically assumed in gauge theories
of gravity. The approach is illustrated by application to two scale-invariant
gravitational gauge theories, namely Weyl gauge theory (WGT) and the recently
proposed `extended' Weyl gauge theory (eWGT), where the latter may be
considered as a novel gauging of the conformal group, but the method can be
straightforwardly applied to other theories with smaller or larger symmetry
groups. The approach also enables one easily to establish the relationship
between manifestly covariant forms of variational derivatives obtained when one
or more of the gauge field strengths is set to zero either before or after the
variation is performed. This is illustrated explicitly for both WGT and eWGT in
the case where the translational gauge field strength (or torsion) is set to
zero before and after performing the variation, respectively. | Michael Hobson, Anthony Lasenby, Will Barker | 2023-09-26T09:31:29 | http://arxiv.org/abs/2309.14783v1 | # Manifestly covariant variational principle for gauge theories of gravity
###### Abstract
A variational principle for gauge theories of gravity is presented, which maintains manifest covariance under the symmetries to which the action is invariant, throughout the calculation of the equations of motion and conservation laws. This is performed by deriving explicit manifestly covariant expressions for the Euler-Lagrange variational derivatives and Noether's theorems for a generic action of the form typically assumed in gauge theories of gravity. The approach is illustrated by application to two scale-invariant gravitational gauge theories, namely Weyl gauge theory (WGT) and the recently proposed 'extended' Weyl gauge theory (eWGT), where the latter may be considered as a novel gauging of the conformal group, but the method can be straightforwardly applied to other theories with smaller or larger symmetry groups. The approach also enables one easily to establish the relationship between manifestly covariant forms of variational derivatives obtained when one or more of the gauge field strengths is set to zero either before or after the variation is performed. This is illustrated explicitly for both WGT and eWGT in the case where the translational gauge field strength (or torsion) is set to zero before and after performing the variation, respectively.
## I Introduction
For any given action, the process of deriving the manifestly covariant equations of motion for the fields on which it depends can be very time-consuming. A key reason is that for an action that is invariant under some set of symmetries, either global or local, the individual terms making up the Euler-Lagrange equations are typically not covariant under those symmetries. One therefore usually obtains equations of motion that, although inevitably covariant, are not manifestly so. One then faces the task of combining terms in various ways to achieve manifest covariance before continuing with further analysis, and this process can require considerable trial and error, often relying on inspired guesswork. Similar difficulties are also encountered when deriving conservation laws, which must again be covariant under the symmetries of the action, but are typically not obtained in a manifestly covariant form when they are derived using the standard forms of Noether's theorems.
Here we present an alternative approach whereby one maintains manifest covariance throughout the calculation of the equations of motion and conservation laws, thereby circumventing the above difficulties. Methods for achieving this, at least for the equations of motion, have been considered previously in the context of gravitational theories that are interpreted in the usual geometrical manner, where the action depends typically on the spacetime metric \(g_{\mu\nu}\), together perhaps with some non-metric connection \({\Gamma^{\sigma}}_{\mu\nu}\)[1; 2; 3; 4; 5; 6]. Here we instead focus on developing a manifestly covariant variational principle for gauge theories of gravity [7; 8; 9; 10; 11]. In particular, we illustrate the method by application to the scale-invariant Weyl gauge theory (WGT) [12; 13; 14; 15; 16; 17; 18] and its recently proposed 'extended' version (eWGT) [19; 20], but the approach presented can be straightforwardly applied to other theories with smaller or larger symmetry groups, such as Poincare gauge theory (PGT) [21; 22; 23; 10] or conformal gauge theory (CGT) [24; 25; 26; 27; 28; 29]. In addressing WGT and eWGT, we assume the action to depend on a translational gauge field \({h_{a}}^{\mu}\), a rotational gauge field \({A^{ab}}_{\mu}\) and a dilational gauge field \(B_{\mu}\), together with some set of matter fields \(\varphi_{A}\), which may include a scalar compensator field (which we occasionally also denote by \(\phi\)). It is worth noting that gauge theories of gravitation are most naturally interpreted as field theories in Minkowski spacetime [30; 31], in the same way as the gauge field theories describing the other fundamental interactions, and this is the viewpoint that we shall adopt here. It is common, however, to reinterpret the mathematical structure of gravitational gauge theories geometrically, where in particular the translational gauge field \({h_{a}}^{\mu}\) is considered as forming the components of a vierbein (or tetrad) system in a more general Weyl-Cartan spacetime, in which \({A^{ab}}_{\mu}\) and \(B_{\mu}\) then correspond to the spin-connection and Weyl vector, respectively [10]. These issues are discussed in more detail elsewhere [11; 19].
The manifestly covariant approach presented here also enables one easily to establish the relationship between the forms of variational derivatives, and hence the field equations, obtained by applying first- and second-order variational principles, respectively. A particularly interesting case is provided by comparing the variational derivatives obtained by setting the translational gauge field strength (or torsion) to zero after the variation is performed (first-order
approach) with those obtained by setting the torsion to zero in the action before carrying out the variation (second-order approach). In the latter case, the rotational gauge field is no longer an independent field. In WGT (and also PGT and CGT), it may be written explicitly in terms the other gauge fields, whereas in eWGT there exists an implicit constraint relating all the gauge fields. In both cases, one may arrive at simple expressions for the variational derivatives in the second-order approach in terms of those from the first-order approach.
The outline of this paper is as follows. In Section II we briefly review the concepts of local symmetries and dynamics in classical field theory. We present our manifestly covariant variational principle in Section III, which is applied to WGT and eWGT in Sections IV and V, respectively. We conclude in Section VI. In addition, in Appendix A, we include a brief account of the Bessel-Hagen method [32] for expressing the variation of the vector potential in electromagnetism in a manifestly gauge-invariant form; it is this approach that we generalise to gauge theories of gravity in order to assist in directly obtaining manifestly covariant conservation laws.
## II Local symmetries and dynamics in classical field theory
We begin by presenting a brief outline of the consequences of local symmetries for classical field theories, focusing in particular on Noether's first and second theorems, the latter being discussed surprisingly rarely in the literature. These considerations allow one also to determine the dynamics of the fields.
Consider a spacetime manifold \(\mathscr{M}\), labelled using some arbitrary coordinates \(x^{\mu}\), in which the dynamics of some set of (tensor and/or spinor) fields \(\chi(x)=\{\chi_{A}(x)\}\) (\(A=1,2,\ldots\)) is described by the action1
Footnote 1: In our subsequent discussion, we will typically assume that \(\mathscr{M}\) is Minkowski spacetime and \(x^{\mu}\) are Cartesian inertial coordinates, but this is unnecessary for the analysis in this section.
\[S=\int\mathscr{L}(\chi,\partial_{\mu}\chi,\partial_{\mu}\partial_{\nu}\chi)\, d^{4}x. \tag{1}\]
It should be understood here that the label \(A\) merely enumerates the different fields, although (with some overloading of the notation) can also be considered to represent one or more coordinate and/or local Lorentz frame indices (either as subscripts or superscripts), which we denote by lower case Greek and Roman letters, respectively. It is worth noting that, in general, each field \(\chi_{A}(x)\) may be either a matter field \(\varphi_{A}(x)\) or gauge field \(g_{A}(x)\). Allowing the Lagrangian density \(\mathscr{L}\) in the action (1) to depend on field derivatives up to second order is sufficient to accommodate all the gravitational gauge theories that we will consider (and also general relativity).
Invariance of the action (1) under the infinitesimal coordinate transformation \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\) and form variations \(\delta_{0}\chi_{A}(x)\) in the fields (where, importantly, the latter need not result solely from the coordinate transformation)2, implies that
Footnote 2: Adopting Kibble’s original notation, for an infinitesimal coordinate transformation \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\), the ‘form’ variation \(\delta_{0}\chi(x)\equiv\chi^{\prime}(x)-\chi(x)\) is related to the ‘total’ variation \(\delta\chi(x)\equiv\chi^{\prime}(x^{\prime})-\chi(x)\) by \(\delta_{0}\chi(x)=\delta\chi(x)-\xi^{\mu}\partial_{\mu}\chi(x)\).
\[\delta S=\int\left[\delta_{0}\mathscr{L}+\partial_{\mu}(\xi^{\mu}\mathscr{L}) \right]\,d^{4}x=0, \tag{2}\]
in which the form variation of the Lagrangian density is given by
\[\delta_{0}\mathscr{L} = \frac{\partial\mathscr{L}}{\partial\chi_{A}}\delta_{0}\chi_{A}+ \frac{\partial\mathscr{L}}{\partial(\partial_{\mu}\chi_{A})}\delta_{0}( \partial_{\mu}\chi_{A})+\frac{\partial\mathscr{L}}{\partial(\partial_{\mu} \partial_{\nu}\chi_{A})}\delta_{0}(\partial_{\mu}\partial_{\nu}\chi_{A}). \tag{3}\]
One should note that \(\delta_{0}\) commutes with partial derivatives and, according to the usual summation convention, there is an implied sum on the label \(A\). The integrand in the invariance condition (2) can be rewritten directly using the product rule to yield
\[\delta S=\int\left(\frac{\delta\mathscr{L}}{\delta\chi_{A}}\delta_{0}\chi_{A} +\partial_{\mu}J^{\mu}\right)\,d^{4}x=0, \tag{4}\]
where the Euler-Lagrange variational derivative \(\delta\mathscr{L}/\delta\chi_{A}\) and the Noether current \(J^{\mu}\) are given, respectively, by
\[\frac{\delta\mathscr{L}}{\delta\chi_{A}} = \frac{\partial\mathscr{L}}{\partial\chi_{A}}-\partial_{\mu}\left( \frac{\partial\mathscr{L}}{\partial(\partial_{\mu}\chi_{A})}\right)+\partial _{\mu}\partial_{\nu}\left(\frac{\partial\mathscr{L}}{\partial(\partial_{\mu} \partial_{\nu}\chi_{A})}\right), \tag{5a}\] \[J^{\mu} = \left[\frac{\partial\mathscr{L}}{\partial(\partial_{\mu}\chi_{A} )}-\partial_{\nu}\left(\frac{\partial\mathscr{L}}{\partial(\partial_{\mu} \partial_{\nu}\chi_{A})}\right)\right]\delta_{0}\chi_{A}+\frac{\partial \mathscr{L}}{\partial(\partial_{\mu}\partial_{\nu}\chi_{A})}\partial_{\nu}( \delta_{0}\chi_{A})+\xi^{\mu}\mathscr{L}. \tag{5b}\]
It is worth noting that the equations of motion for the fields \(\chi_{A}(x)\) are also obtained by considering the behaviour of the action under variations of the fields, but with the coordinate system kept fixed, so that \(\xi^{\mu}(x)=0\). One further assumes that the variations \(\delta_{0}\chi_{A}(x)\) vanish on the boundary of the integration region of the action, and also that their first derivatives \(\partial_{\mu}(\delta_{0}\chi_{A}(x))\) vanish in the case where \(\mathscr{L}\) contains second derivatives of the fields. In order for the action to be stationary \(\delta S=0\) with respect to arbitrary such variations \(\delta_{0}\chi_{A}(x)\) of the fields, one thus requires (4) to hold in these circumstances, which immediately yields the equations of motion \(\delta\mathscr{L}/\delta\chi_{A}=0\).
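As a familiar illustration of (5a) (included purely for orientation, and not specific to the gravitational gauge theories considered below), for a single real scalar field with \(\mathscr{L}=\tfrac{1}{2}\partial_{\mu}\chi\,\partial^{\mu}\chi-\tfrac{1}{2}m^{2}\chi^{2}\), which contains no second derivatives, one finds

\[\frac{\delta\mathscr{L}}{\delta\chi}=-m^{2}\chi-\partial_{\mu}\partial^{\mu}\chi,\]

so that the stationarity condition \(\delta\mathscr{L}/\delta\chi=0\) reproduces the Klein-Gordon equation.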
Returning to considering (4) as denoting the invariance of the action (1) under some general infinitesimal coordinate transformation \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\) and form variations \(\delta_{0}\chi_{A}(x)\) in the fields (which need not vanish on the boundary of the integration region), one sees that if the field equations \(\delta\mathscr{L}/\delta\chi_{A}=0\) are satisfied for all the fields, then (4) reduces to the (on-shell)3 'conservation law' \(\partial_{\mu}J^{\mu}\backsimeq 0\), which holds up to a total divergence of any quantity that vanishes on the boundary of the integration region of the action (1). This is the content of Noether's first theorem, which applies both to global and local symmetries.
Footnote 3: We use Dirac's notation \(F\simeq 0\) for local functions \(F\) that vanish on-shell (or _weakly_ vanish), i.e. when the equations of motion \(\delta\mathscr{L}/\delta\chi_{A}=0\) are satisfied for _all_ the fields. We further denote by \(F\stackrel{{\rm m}}{{\rightharpoonup}}0\) and \(F\stackrel{{\rm g}}{{\rightharpoonup}}0\) when functions vanish if only the equations of motion of the matter or gauge fields, respectively, need be satisfied.
We will focus on the invariance of the action (1) under a local symmetry. In particular, we consider the (usual) case in which the form variations of the fields can be written as
\[\delta_{0}\chi_{A}=\lambda^{C}f_{AC}(\chi,\partial\chi)+(\partial_{\mu}\lambda^{C})f^{\mu}_{AC}(\chi,\partial\chi), \tag{6}\]
where \(\lambda^{C}=\lambda^{C}(x)\) are a collection of independent arbitrary functions of spacetime position, enumerated by the label \(C\), and \(f_{AC}(\chi,\partial\chi)\) and \(f^{\mu}_{AC}(\chi,\partial\chi)\) are two collections of given functions that, in general, may depend on all the fields and their first derivatives. The general form (6) usually applies only when \(\chi_{A}=g_{A}\) is a gauge field, whereas typically \(f^{\mu}_{AC}(\chi,\partial\chi)=0\) if \(\chi_{A}=\varphi_{A}\) is a matter field. For each value of \(C\), the function \(\lambda^{C}(x)\) represents a set of infinitesimal functions carrying one or more coordinate or local Lorentz frame indices. It is worth noting that on substituting (6) into (5b), one obtains an expression for the current \(J^{\mu}\) where the first term is proportional to (6) and, in the event that \(\mathscr{L}\) depends on second derivatives of the fields, the second term is proportional to the first derivative of (6), which itself contains second derivatives of the functions \(\lambda^{C}(x)\).
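As a simple example of the form (6) (with illustrative sign conventions that need not match those used elsewhere), consider electromagnetism coupled to a complex scalar field \(\varphi\) of charge \(e\): under a local \(\mathrm{U}(1)\) transformation with parameter \(\lambda(x)\) one has

\[\delta_{0}\varphi=ie\lambda\varphi,\qquad\delta_{0}A_{\mu}=\partial_{\mu}\lambda,\]

so that for the matter field \(f_{\varphi}=ie\varphi\) and \(f^{\mu}_{\varphi}=0\), whereas for the gauge field \(f_{A_{\nu}}=0\) and \(f^{\mu}_{A_{\nu}}=\delta^{\mu}_{\nu}\), in accordance with the statement above that \(f^{\mu}_{AC}\) typically vanishes for matter fields.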
Using the expression (6), and again employing the product rule, the corresponding variation of the action (4) is given by (suppressing functional dependencies for brevity)
\[\delta S\!=\!\int\lambda^{C}\left[f_{AC}\frac{\delta\mathscr{L}}{\delta\chi_ {A}}-\partial_{\mu}\left(f^{\mu}_{AC}\frac{\delta\mathscr{L}}{\delta\chi_{A} }\right)\right]+\partial_{\mu}(J^{\mu}-S^{\mu})\,d^{4}x\!=\!0, \tag{7}\]
where we define the new current \(S^{\mu}\equiv-\lambda^{C}f^{\mu}_{AC}\delta\mathscr{L}/\delta\chi_{A}\). It is worth noting that \(S^{\mu}\) depends much more simply than \(J^{\mu}\) on the functions \(\lambda^{C}\). Since the \(\lambda^{C}\) are arbitrary functions, for the action to be invariant one requires the separate conditions
\[f_{AC}\frac{\delta\mathscr{L}}{\delta\chi_{A}}-\partial_{\mu} \left(f^{\mu}_{AC}\frac{\delta\mathscr{L}}{\delta\chi_{A}}\right) = 0, \tag{8a}\] \[\partial_{\mu}(J^{\mu}-S^{\mu}) = 0, \tag{8b}\]
where the former hold for each value of \(C\) separately and the latter holds up to a total divergence of a quantity that vanishes on the boundary of the integration region.
The first set of conditions (8a) are usually interpreted as conservation laws, which are covariant under the local symmetry, although not manifestly so in the form given above. The condition (8b) implies that \(J^{\mu}=S^{\mu}+\partial_{\nu}Q^{\nu\mu}\), where \(Q^{\nu\mu}=-Q^{\mu\nu}\), so the two currents coincide up to a total divergence, which is notable given their very different dependencies on the functions \(\lambda^{C}\), \(f_{AC}\) and \(f^{\mu}_{AC}\), as described above. By contrast with the case of a global symmetry,4 if the field equations \(\delta\mathscr{L}/\delta\chi_{A}=0\) are satisfied for all fields, then the conservation laws (8a) hold identically and the new current vanishes \(S^{\mu}\simeq 0\), so that \(J^{\mu}\backsimeq\partial_{\nu}Q^{\nu\mu}\). Thus, the conditions (8a-8b) effectively contain no information on-shell, which is essentially the content of Noether's second theorem [33].
Footnote 4: For a global symmetry, the \(\lambda^{C}\) are constants and so the second term on the RHS of (6) vanishes. The Noether current (5b) can be then written as
\[J^{\mu}=\lambda^{C}\left\{\left[\frac{\partial\mathscr{L}}{\partial(\partial_ {\mu}\chi_{A})}-\partial_{\nu}\left(\frac{\partial\mathscr{L}}{\partial( \partial_{\mu}\partial_{\nu}\chi_{A})}\right)\right]f_{AC}+\frac{\partial \mathscr{L}}{\partial(\partial_{\mu}\partial_{\nu}\chi_{A})}\partial_{\nu}f_{AC}+ \xi^{\mu}_{C}\mathscr{L}\right\}\equiv\lambda^{C}J^{\mu}_{C},\]
where \(\xi^{\mu}_{C}\) are a given set of functions such that \(\xi^{\mu}=\lambda^{C}\xi^{\mu}_{C}\), and we have also defined the further set of functions \(J^{\mu}_{C}\). One can then replace the two conditions (8) with the following single condition that is not satisfied identically on-shell: \[f_{AC}\frac{\delta\mathscr{L}}{\delta\chi_{A}}-\partial_{\mu}J^{\mu}_{C}=0.\]
The conditions (8a-8b) cease to be empty, however, if one applies them to only part of the total Lagrangian density, so that not all of the field equations may be imposed (the corresponding action should still be invariant under the local symmetry). In particular, suppose one is considering a field theory for which the total Lagrangian density \(\mathscr{L}_{\rm T}=\mathscr{L}_{\rm M}+\mathscr{L}_{\rm G}\), where \(\mathscr{L}_{\rm G}\) contains every term that depends _only_ on the gauge fields \(g_{A}\) and/or their derivatives, and \(\mathscr{L}_{\rm M}\) contains all the remaining terms. Thus, if \(\mathscr{L}=\mathscr{L}_{\rm M}\), then only the matter field equations \(\delta\mathscr{L}/\delta\varphi_{A}=0\) can be imposed, whereas if \(\mathscr{L}=\mathscr{L}_{\rm G}\) none of the field equations can be imposed. In either case, the surviving terms in (8a-8b) do contain information [34].
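As a simple illustration (with the same illustrative \(\mathrm{U}(1)\) conventions as above), take \(\mathscr{L}_{\rm M}\) to be the Lagrangian density of a charged matter field minimally coupled to the electromagnetic potential \(A_{\mu}\), which is itself invariant under the local \(\mathrm{U}(1)\) symmetry. The condition (8a) then reads

\[ie\varphi\frac{\delta\mathscr{L}_{\rm M}}{\delta\varphi}-ie\varphi^{*}\frac{\delta\mathscr{L}_{\rm M}}{\delta\varphi^{*}}-\partial_{\mu}\left(\frac{\delta\mathscr{L}_{\rm M}}{\delta A_{\mu}}\right)=0,\]

so that imposing only the matter field equations yields the non-trivial conservation law \(\partial_{\mu}j^{\mu}=0\) for the electromagnetic current \(j^{\mu}\equiv\delta\mathscr{L}_{\rm M}/\delta A_{\mu}\); electromagnetism is also the setting of the Bessel-Hagen method recalled in Appendix A.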
## III Manifestly covariant variational principle
In the standard variational approach outlined above, one sees immediately from the plethora of partial derivatives throughout the analysis that the various expressions obtained are not, in general, manifestly covariant under the symmetry group to which the action is invariant. In particular, although the equations of motion \(\delta\mathscr{L}/\delta\chi_{A}=0\) for each field must be covariant under this symmetry group, it is clear that those derived from (5a) are not manifestly so. Moreover, the conservation laws (8a) suffer from the same shortcoming, but must also be expressible in a manifestly covariant form. By contrast, the currents \(J^{\mu}\) and \(S^{\mu}\) are not covariant (manifestly or otherwise), in general, since they both contain the arbitrary functions \(\lambda^{C}(x)\), and \(J^{\mu}\) also contains their partial derivatives. To obtain manifestly covariant variational derivatives and conservation laws directly, it is expedient to take a different approach that begins afresh by reconsidering the variation of the action in (2).
We are primarily concerned here with gauge theories of gravity. In constructing such theories, one typically begins with an action dependent only on some set of matter fields \(\varphi_{A}\), which is defined on Minkowski spacetime \(\mathscr{M}\) in Cartesian inertial coordinates \(x^{\mu}\) (which we will assume henceforth), and is invariant under some global spacetime symmetry group \(\mathcal{G}\), where the coefficients \(\lambda^{C}\) in (6) are constants. One then gauges the group \(\mathcal{G}\) by demanding that the action be invariant with respect to (infinitesimal, passively interpreted) general coordinate transformations (GCTs) and the local action of the subgroup \(\mathcal{H}\) (say), obtained by setting the translation parameters of \(\mathcal{G}\) to zero (which leaves the origin invariant), and allowing the remaining group parameters to become independent arbitrary functions of position. For example, if one considers global Weyl invariance, then \(\{\lambda^{1},\lambda^{2},\lambda^{3}\}=\{a^{\alpha},\omega^{\alpha\beta},\rho\}\), which denote a global spacetime translation, rotation and dilation, respectively. The symmetry is then 'promoted' to a local one by allowing \(\lambda^{C}(x)\) to become arbitrary functions of spacetime position \(x\). For local Weyl invariance, one thus has \(\{\lambda^{1}(x),\lambda^{2}(x),\lambda^{3}(x)\}=\{a^{\alpha}(x),\omega^{ab}( x),\rho(x)\}\), where \(a^{\alpha}(x)\) is interpreted as an infinitesimal general coordinate transformation and is usually denoted instead by \(\xi^{\alpha}(x)\), and \(\omega^{ab}(x)\) and \(\rho(x)\) denote a position-dependent rotation of the local Lorentz frames and a position-dependent dilation, respectively. For the action to remain invariant under the localised symmetry necessitates the introduction of gravitational gauge fields \(g_{A}\) with prescribed transformation properties under the action of the localised symmetry. We will also maintain the somewhat unorthodox viewpoint, albeit hinted at in Kibble's original paper, of considering the gravitational gauge fields as fields in Minkowski spacetime, without attaching any geometric interpretation to them. Consequently, we will adopt a global Cartesian inertial coordinate system \(x^{\mu}\) in our Minkowski spacetime, which greatly simplifies calculations, but more general coordinate systems may be straightforwardly accommodated, if required [19].
For an action (1) containing both matter fields \(\chi_{A}=\psi_{A}\) and gauge fields \(\chi_{A}=g_{A}\) to be invariant under a local symmetry of the form (6), one requires the Lagrangian density \(\mathscr{L}\) to be covariant under this symmetry. One typically always requires invariance of the action under at least (infinitesimal) general coordinate transformations (GCTs), which can be considered as promoting the set of constants \(\lambda^{C}\) representing global translations to arbitrary functions of position; this necessitates the introduction of the corresponding translational gravitational gauge field, which we will denote by \({h_{a}}^{\mu}\) and its inverse by \({b^{a}}_{\mu}\) (such that \({h_{a}}^{\mu}{b^{a}}_{\nu}=\delta^{\mu}_{\nu}\) and \({h_{a}}^{\mu}{b^{c}}_{\mu}=\delta^{c}_{a}\)). It is therefore convenient to write the Lagrangian density as the product \(\mathscr{L}=h^{-1}L\), where \(h=\det({h_{a}}^{\mu})\) is a scalar density, since \(h^{-1}\,d^{4}x\) is an invariant volume element under GCTs.5 The remaining factor \(L\), which we term the Lagrangian, is also a scalar density constructed from covariant quantities.6 These typically include the matter fields \(\varphi_{A}\) themselves and their covariant derivatives, together with the field strength tensors \(\mathscr{F}_{B}\) of the gauge fields \(g_{B}\), which typically depend both on the gauge fields themselves and their partial derivatives (where we have adopted a'symbolic' form that suppresses coordinate and local Lorentz frame indices). In this section, we will denote the generic covariant derivative by \(\mathcal{D}_{a}\equiv{h_{a}}^{\mu}\mathcal{D}_{\mu}={h_{a}}^{\mu}(\partial_{\mu }+\Gamma_{\mu})\), where \(\Gamma_{\mu}\) is a linear combination of the generators of the subgroup \(\mathcal{H}\) that may depend, in general, on the gauge fields \(g_{A}\) and their first derivatives \(\partial g_{A}\) (note that we will occasionally retain the indices on covariant derivatives, when convenient to do so). In any case, one can thus denote the functional dependence of the Lagrangian symbolically by \(L=L(\varphi_{A},\mathcal{D}_{a}\varphi_{A},\mathscr{F}_{B})\).
Footnote 6: We will also denote \(h^{-1}\) by \(b\), where \(b\equiv\det({b^{a}}_{\mu})\).
### Manifestly covariant variational derivatives
We begin by rewriting the variation of the action (2) so that one can directly identify manifestly covariant forms for the variational derivatives \(\delta\mathscr{L}/\delta\chi_{A}\). One must first obtain a covariant form for the divergence in (2) by constructing a further covariant derivative operator \(\mathfrak{D}_{a}\) such that, for any coordinate vector \(V^{\mu}\) (of the same Weyl weight as the Lagrangian density \(\mathscr{L}\)), one has \(\partial_{\mu}V^{\mu}=h^{-1}\mathfrak{D}_{a}(h\mathscr{V}^{a})\), where we define the local Lorentz frame vector7 \(\mathscr{V}^{a}=b^{a}{}_{\mu}V^{\mu}\). The construction of such an operator requires one first to define the field strength tensor \(\mathcal{T}^{a}{}_{bc}=2h_{b}{}^{\mu}h_{c}{}^{\nu}\mathsf{D}_{[\mu}b^{a}{}_{\nu]}\) of the translational gauge field, which has the unique (up to a sign) non-trivial contraction \(\mathcal{T}_{b}\equiv\mathcal{T}^{a}{}_{ba}=h\mathsf{D}_{\mu}(h^{-1}h_{b}{}^{\mu})\). It is then straightforward to show that the required derivative operator is given by \(\mathfrak{D}_{a}=\mathcal{D}_{a}+\mathcal{T}_{a}\).
Footnote 7: We will typically denote a quantity possessing only Roman indices (and its contractions over such indices) as the calligraphic font version of the kernel letter of the corresponding quantity possessing only Greek indices (following [19]), with the exception of quantities having Greek or lower-case kernel letters.
One may then rewrite the variation of the action (2) in the alternative form
\[\delta S=\int\left[\delta_{0}\mathscr{L}+h^{-1}(\mathcal{D}_{a}+\mathcal{T}_{ a})(\xi^{a}L)\right]\,d^{4}x=0, \tag{9}\]
in which \(\xi^{a}=b^{a}{}_{\mu}\xi^{\mu}\) and the form variation of the Lagrangian density (3) can be rewritten symbolically as
\[\delta_{0}\mathscr{L}=h^{-1}\left[\frac{\bar{\partial}L}{\partial\varphi_{A}} \,\delta_{0}\varphi_{A}+\frac{\partial L}{\partial(\mathcal{D}_{a}\varphi_{A} )}\,\delta_{0}(\mathcal{D}_{a}\varphi_{A})+\frac{\partial L}{\partial\overline {\mathscr{F}}_{B}}\delta_{0}\mathscr{F}_{B}\right]+L\,\delta_{0}h^{-1}, \tag{10}\]
where \(\bar{\partial}L/\partial\varphi\equiv[\partial L(\varphi,\mathcal{D}_{a}u,\ldots)/\partial\varphi]_{u=\varphi}\), so that \(\varphi\) and \(\mathcal{D}_{a}\varphi\) are treated as independent variables, rather than \(\varphi\) and \(\partial_{\mu}\varphi\). In order to progress further, the variations \(\delta_{0}(\mathcal{D}_{a}\varphi_{A})\), \(\delta_{0}\mathscr{F}_{B}\) and \(\delta_{0}h^{-1}\) in (10) must be expressed in terms of the variations \(\delta_{0}\varphi_{A}\) and \(\delta_{0}g_{B}\), respectively, of the matter and gauge fields themselves. In so doing, one typically encounters terms of the (symbolic) form \(\mathcal{D}(\delta_{0}\varphi_{A})\,\partial L/\partial(\mathcal{D}\varphi_{A})\) and \(\mathcal{D}(\delta_{0}g_{B})\,\partial L/\partial\mathscr{F}_{B}\), which can be accommodated by considering the quantity \((\mathcal{D}_{a}+\mathcal{T}_{a})(h\mathscr{Y}^{a})\), where (again in symbolic form) \(h\mathscr{Y}\sim\delta_{0}\varphi_{A}\,\partial L/\partial(\mathcal{D}\varphi_{A})+\delta_{0}g_{B}\,\partial L/\partial\mathscr{F}_{B}\), and then using the product rule. Following such a procedure, one may rewrite (10) in the general form
\[\delta_{0}\mathscr{L}=h^{-1}\left[\alpha^{A}\,\delta_{0}\varphi_{A}+\beta^{B}\,\delta_{0}g_{B}+(\mathcal{D}_{a}+\mathcal{T}_{a})(h\mathscr{Y}^{a})\right], \tag{11}\]
where \(\alpha^{A}\) and \(\beta^{B}\) are manifestly covariant expressions that typically depend on \(\varphi_{A}\), \(\partial L/\partial\varphi_{A}\) and \(\mathscr{F}_{B}\), together with \(\partial L/\partial(\mathcal{D}\varphi_{A})\) and \(\partial L/\partial\mathscr{F}_{B}\) and their covariant derivatives. Inserting (11) into (9), Noether's first theorem (4) becomes
\[\delta S=\int\left[\alpha^{A}\,\delta_{0}\varphi_{A}+\beta^{B}\,\delta_{0}g_{ B}+(\mathcal{D}_{a}+\mathcal{T}_{a})(h\mathscr{J}^{a})\right]\,h^{-1}\,d^{4}x=0, \tag{12}\]
where the current \(h\mathscr{J}^{a}=h\mathscr{Y}^{a}+\xi^{a}L\) has the symbolic form
\[h\mathscr{J}\sim\frac{\partial L}{\partial(\mathcal{D}\varphi_{A})}\,\delta_ {0}\varphi_{A}+\frac{\partial L}{\partial\overline{\mathscr{F}}_{B}}\,\delta_ {0}g_{B}+\xi L. \tag{13}\]
By comparing (4) and (12), and noting that \(h^{-1}(\mathcal{D}_{a}+\mathcal{T}_{a})(h\mathscr{J}^{a})=\partial_{\mu}J^{\mu}\), one may then immediately identify manifestly covariant expressions for the variational derivatives with respect to the matter and gauge fields, respectively, as
\[\frac{\delta\mathscr{L}}{\delta\varphi_{A}}=b\alpha^{A},\qquad\frac{\delta \mathscr{L}}{\delta g_{B}}=b\beta^{B}. \tag{14}\]
If one does not wish to distinguish between matter and gauge fields, one can instead denote the above relations generically by \(\delta\mathscr{L}/\delta\chi_{A}=b\gamma^{A}\), where \(\gamma^{A}\) is a manifestly covariant expression.
### Manifestly covariant conservation laws
We now turn to the direct construction of manifestly covariant expressions for the conservation laws (8a). Clearly, the manifestly covariant expressions (14) may now be used for the variational derivatives, but one encounters two remaining issues, namely the presence of the explicit partial derivative in the second term in (8a), and the fact that the functions \(f_{AC}\) and \(f^{\mu}_{AC}\) may not be covariant quantities. Indeed, the latter problem always occurs when the functions
\(\lambda^{C}(x)\) (say for \(C=1\)) correspond to GCTs, such that \(\lambda^{1}(x)=\{\xi^{\alpha}(x)\}\); this arises because \(\delta_{0}\chi_{A}=\delta\chi_{A}-\xi^{\alpha}\partial_{\alpha}\chi_{A}\) for any field and so \(f_{A1}\) always contains the non-covariant term \(-\partial_{\alpha}\chi_{A}\). Other functions from the sets \(f_{AC}\) and \(f_{AC}^{\mu}\) may also be non-covariant, depending on the gauge theory under consideration.
Nonetheless, it is important to recall that the conservation law (8a) holds for _any_ set of form variations of the fields (6) that leave the action invariant. In particular, by generalising the approach first proposed by Bessel-Hagen for electromagnetism (see Appendix A), one can choose specific forms for the functions \(\lambda^{C}(x)\) for \(C\neq 1\) in terms of \(\lambda^{1}(x)\) and the non-translational gauge fields \(g_{B}\), such that all the functions \(f_{AC}^{\mu}\) become (manifestly) covariant (as typically do many of the functions \(f_{AC}\)). In this case, one may then write the second term in (8a) by extending the definition of the covariant derivative \((\mathcal{D}_{a}+\mathcal{T}_{a})\) to accommodate any additional free indices represented by the subscript \(C\). In particular, it is convenient to require that for any quantity \(V_{C}{}^{\mu}\) with this index structure (and the same Weyl weight as the Lagrangian density \(\mathscr{L}\)), one has \(h^{-1}(\mathcal{D}_{a}+\mathcal{T}_{a})(hb^{a}{}_{\mu}V_{C}{}^{\mu})=\mathsf{ D}_{\mu}V_{C}{}^{\mu}=(\partial_{\mu}+\Gamma_{\mu})V_{C}{}^{\mu}\), so that in the case where \(C\) does not represent any additional indices one recovers the original requirement that \(h^{-1}(\mathcal{D}_{a}+\mathcal{T}_{a})(hb^{a}{}_{\mu}V^{\mu})=\partial_{\mu }V^{\mu}\). One may then write the conservation law (8a) as
\[(\mathcal{D}_{a}+\mathcal{T}_{a})(b^{a}{}_{\mu}f_{AC}^{\mu}\gamma^{A})-(f_{AC} +\Gamma_{\mu}f_{AC}^{\mu})\gamma^{A}=0. \tag{15}\]
The first term on the LHS of (15) is now manifestly covariant. Consequently, although the second term on the LHS is not manifestly covariant, it must also be expressible in such a form; indeed, one typically finds that this second term immediately assembles as such, as we will demonstrate in Sections IV and V where we apply this approach to WGT and eWGT, respectively.
### Relationship between currents in Noether's second theorem
Finally, we consider the relationship (8b) between the two currents \(J^{\mu}\) and \(S^{\mu}\). As noted above, both currents depend on the functions \(\lambda^{C}\) and so neither is covariant. Nonetheless, from the above discussion, one may rewrite (8b) as \((\mathcal{D}_{a}+\mathcal{T}_{a})[h(\mathscr{J}^{a}-\mathscr{S}^{a})]=0\), in which
\[h\mathscr{S}^{a}=-\lambda^{C}hb^{a}{}_{\mu}f_{AC}^{\mu}\frac{\delta\mathscr{L }}{\delta\chi_{A}}=-\lambda^{C}b^{a}{}_{\mu}f_{AC}^{\mu}\gamma^{A}=-\lambda^{ C}b^{a}{}_{\mu}(f_{AC}^{\mu}\alpha^{A}+f_{BC}^{\mu}\beta^{B}), \tag{16}\]
where we have used the relations (14) to write the final expression in terms of the matter fields and gauge fields separately, in keeping with the (symbolic) expression (13) for \(h\mathscr{J}^{a}\). Thus, \(h\mathscr{S}^{a}\) has the form of linear combination of terms that are manifestly covariant (or can be made so using a generalisation of the Bessel-Hagen method) with coefficients \(\lambda^{C}\). Turning to \(h\mathscr{J}^{a}\), if one substitutes (6) into (13), and recalls that \(f_{AC}^{\mu}\) typically vanishes for matter fields, one obtains the (symbolic) expression
\[h\mathscr{J}\sim\lambda^{C}\left(\frac{\partial L}{\partial(\mathcal{D}\varphi_{A})}f_{AC}+\delta_{C}^{1}L\right)+\frac{\partial L}{\partial\mathscr{F}_{B}}(f_{BC}\lambda^{C}+f_{BC}^{\mu}\partial_{\mu}\lambda^{C}), \tag{17}\]
where we have again assumed that \(C=1\) corresponds to GCTs. One may show, in general, that the forms of the manifestly covariant expressions \(\alpha^{A}\) and \(\beta^{B}\) obtained in (11) guarantee that the relationship \((\mathcal{D}_{a}+\mathcal{T}_{a})[h(\mathscr{J}^{a}-\mathscr{S}^{a})]=0\) is satisfied, and so it contains no further information. It is worth noting, however, that for the special case in which \(L\) does not depend on the gauge field strengths, such that \(\partial L/\partial\mathscr{F}_{B}=0\), the relationship takes the form
\[(\mathcal{D}_{a}+\mathcal{T}_{a})\left[\lambda^{C}\left(\frac{\partial L}{ \partial(\mathcal{D}_{a}\varphi_{A})}f_{AC}+\delta_{C}^{1}L+b^{a}{}_{\mu}f_{ AC}^{\mu}\alpha^{A}\right)\right]=0, \tag{18}\]
which may be satisfied by requiring the term in parentheses to vanish for each value of \(C\). In so doing, one obtains a straightforward expression for \(\alpha^{A}\), which one can show agrees with that obtained in (11).
The procedures presented in this section are best illustrated by example and we apply them to WGT and eWGT in Sections IV and V, respectively. As we will also show in these examples, the general approach outlined above further lends itself to elucidating the relationship between first- and second-order variational derivatives.
## IV Weyl gauge theory
For WGT, the Lagrangian density has the usual form \(\mathscr{L}=h^{-1}L\), where the translational gauge field \(h_{a}{}^{\mu}\) is assigned a Weyl weight \(w=-1\), so that \(h=\det(h_{a}{}^{\mu})\) and \(L\) are scalar densities both of Weyl weight \(w=-4\), and hence the action \(S\) is invariant under local scale transformations. The Lagrangian has the functional dependencies
\[L=L(\varphi_{A},\mathscr{D}_{a}^{*}\varphi_{A},\mathscr{R}_{abcd},\mathscr{T}^{*}_{abc},\mathscr{H}_{ab}), \tag{19}\]
where \(\varphi_{A}\) are the matter fields, which typically include a scalar compensator field of Weyl weight \(w=-1\) (that we sometimes denote also by \(\phi\)), and their covariant derivatives are denoted in this section by [11; 19; 20]
\[\mathscr{D}^{*}_{a}\varphi_{A}=h_{a}{}^{\mu}D^{*}_{\mu}\varphi_{A}=h_{a}{}^{\mu }(\partial_{\mu}+\Gamma^{*}_{\mu})\varphi_{A}=h_{a}{}^{\mu}(\partial_{\mu}+ \tfrac{1}{2}A^{cd}{}_{\mu}\Sigma_{cd}+w_{A}B_{\mu})\varphi_{A}, \tag{20}\]
in which \(h_{a}{}^{\mu}\) (with inverse \(b^{a}{}_{\mu}\)), \(A^{ab}{}_{\mu}=-A^{ba}{}_{\mu}\) and \(B_{\mu}\) are the translational, rotational and dilational gravitational gauge fields, respectively, and \(\Sigma_{ab}=-\Sigma_{ba}\) are the generator matrices of the \(\mathrm{SL}(2,C)\) representation to which the field \(\varphi_{A}\) belongs.8 In the expression (20), each field is assumed to have weight \(w_{A}\) (note that this appearance of the index \(A\) is purely a label and hence is understood never to be summed over). It is also convenient to define the further derivative operator \(\partial^{*}_{\mu}\varphi_{A}=(\partial_{\mu}+w_{A}B_{\mu})\varphi_{A}\), of which we will make occasional use.
Footnote 8: The asterisks in the definition of the derivative operator are intended simply to distinguish it from the usual notation used [11; 19; 20] for the covariant derivative \(\mathscr{D}_{a}\varphi_{A}=h_{a}{}^{\mu}D_{\mu}\varphi_{A}=h_{a}{}^{\mu}( \partial_{\mu}+\Gamma_{\mu})\varphi_{A}=h_{a}{}^{\mu}(\partial_{\mu}+\tfrac{1 }{2}A^{cd}{}_{\mu}\Sigma_{cd})\varphi_{A}\) of Poincaré gauge theory (PGT), and should not be confused with the operation of complex conjugation.
Under infinitesimal local Weyl transformations consisting of GCTs, rotations of the local Lorentz frames and dilations, which are parameterised by \(\xi^{\mu}(x)\), \(\omega^{ab}(x)\) and \(\rho(x)\), respectively, a matter field \(\varphi\) of weight \(w\) and the gauge fields transform as [11; 20]
\[\delta_{0}\varphi = -\xi^{\nu}\partial_{\nu}\varphi+(\tfrac{1}{2}\omega^{ab}\Sigma_{ ab}+w\rho)\varphi, \tag{21a}\] \[\delta_{0}h_{a}{}^{\mu} = -\xi^{\nu}\partial_{\nu}h_{a}{}^{\mu}+h_{a}{}^{\nu}\partial_{\nu} \xi^{\mu}-(\omega^{b}{}_{a}+\rho\,\delta^{b}_{a})h_{b}{}^{\mu},\] (21b) \[\delta_{0}A^{ab}{}_{\mu} = -\xi^{\nu}\partial_{\nu}A^{ab}{}_{\mu}-A^{ab}{}_{\nu}\partial_{\mu }\xi^{\nu}-2\omega^{[a}{}_{c}A^{b]c}{}_{\mu}-\partial_{\mu}\omega^{ab},\] (21c) \[\delta_{0}B_{\mu} = -\xi^{\nu}\partial_{\nu}B_{\mu}-B_{\nu}\partial_{\mu}\xi^{\nu}- \partial_{\mu}\rho, \tag{21d}\]
from which one may verify that \(\mathscr{D}^{*}_{a}\varphi_{A}\) does indeed transform covariantly under (infinitesimal) local Weyl transformations with weight \(w-1\)[19; 20].
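As a quick check of the dilation part of this statement (a check of ours, using only (20) and (21)), consider a pure local dilation, for which \(\xi^{\mu}=0=\omega^{ab}\). Then \(\delta_{0}A^{ab}{}_{\mu}=0\), \(\delta_{0}B_{\mu}=-\partial_{\mu}\rho\) and \(\delta_{0}h_{a}{}^{\mu}=-\rho\,h_{a}{}^{\mu}\), so for a field \(\varphi\) of weight \(w\) one finds

\[\delta_{0}(D^{*}_{\mu}\varphi)=\partial_{\mu}(w\rho\varphi)+\tfrac{1}{2}A^{cd}{}_{\mu}\Sigma_{cd}(w\rho\varphi)-w(\partial_{\mu}\rho)\varphi+wB_{\mu}(w\rho\varphi)=w\rho\,D^{*}_{\mu}\varphi,\]

and hence \(\delta_{0}(\mathscr{D}^{*}_{a}\varphi)=(w-1)\rho\,\mathscr{D}^{*}_{a}\varphi\). In particular, for the scalar compensator (\(\Sigma_{ab}\phi=0\), \(w=-1\)) one has simply \(\mathscr{D}^{*}_{a}\phi=h_{a}{}^{\mu}(\partial_{\mu}-B_{\mu})\phi\), which carries weight \(-2\).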
The remaining quantities \(\mathscr{R}_{abcd}\), \(\mathscr{T}^{*}_{abc}\), \(\mathscr{H}_{ab}\) in (19) are the field strength tensors of the rotational, translational and dilational gauge fields, respectively, which are defined through the action of the commutator of two covariant derivatives on some field \(\varphi\) of weight \(w\) by
\[[\mathscr{D}^{*}_{c},\mathscr{D}^{*}_{d}]\varphi=(\tfrac{1}{2}\mathscr{R}^{ab }{}_{cd}\Sigma_{ab}+w\mathscr{H}_{cd}-\mathscr{T}^{*a}{}_{cd}\mathscr{D}^{*} _{a})\varphi. \tag{22}\]
The field strengths have the forms \(\mathscr{R}^{ab}{}_{cd}=h_{c}{}^{\mu}h_{d}{}^{\nu}R^{ab}{}_{\mu\nu}\), \(\mathscr{H}_{cd}=h_{c}{}^{\mu}h_{d}{}^{\nu}H_{\mu\nu}\) and \(\mathscr{T}^{*a}{}_{bc}=h_{b}{}^{\mu}h_{c}{}^{\nu}T^{*a}{}_{\mu\nu}\), where
\[R^{ab}{}_{\mu\nu} = 2(\partial_{[\mu}A^{ab}{}_{\nu]}+\eta_{cd}A^{ac}{}_{[\mu}A^{db}{ }_{\nu]}), \tag{23a}\] \[H_{\mu\nu} = 2\partial_{[\mu}B_{\nu]},\] (23b) \[T^{*a}{}_{\mu\nu} = 2D^{*}_{[\mu}b^{a}{}_{\nu]}. \tag{23c}\]
From the transformation laws (21), it is straightforward to verify that, in accordance with their index structures, the gauge field strength tensors \(\mathscr{R}^{ab}{}_{cd}\), \(\mathscr{H}_{cd}\) and \(\mathscr{T}^{*a}{}_{bc}\) are invariant under GCTs, and transform covariantly under local Lorentz transformations and dilations with Weyl weights \(w=-2\), \(w=-2\) and \(w=-1\), respectively [19; 20].
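These weights may be checked by simple bookkeeping (ours): since \(A^{ab}{}_{\mu}\) and \(B_{\mu}\) carry Weyl weight zero while \(h_{a}{}^{\mu}\) and \(b^{a}{}_{\mu}\) have weights \(-1\) and \(+1\) respectively, one finds

\[w(\mathscr{R}^{ab}{}_{cd})=2\times(-1)=-2,\qquad w(\mathscr{H}_{cd})=2\times(-1)=-2,\qquad w(\mathscr{T}^{*a}{}_{bc})=2\times(-1)+1=-1,\]

where the contributions arise solely from the explicit factors of the translational gauge field and its inverse, the inhomogeneous \(\partial_{\mu}\rho\) pieces of (21) having cancelled in the field strengths.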
It is worth noting that \(\mathscr{R}^{ab}{}_{cd}\) has the same functional form as the rotational field strength in PGT, but that \(\mathscr{T}^{*a}{}_{bc}=\mathscr{T}^{a}{}_{bc}+\delta^{a}_{c}\mathscr{B}_{b}-\delta^{a}_{b}\mathscr{B}_{c}\), where \(\mathscr{T}^{a}{}_{bc}\) is the translational field strength in PGT; we also define \(\mathscr{B}_{a}=h_{a}{}^{\mu}B_{\mu}\). Moreover, using the expression (23c) and defining the quantities \(c^{*a}{}_{bc}\equiv 2h_{b}{}^{\mu}h_{c}{}^{\nu}\partial^{*}_{[\mu}b^{a}{}_{\nu]}\), one may show that the fully anholonomic rotational gauge field \(\mathscr{A}^{ab}{}_{c}\equiv h_{c}{}^{\mu}A^{ab}{}_{\mu}\) can be written as [11; 19]
\[\mathscr{A}_{abc}=\tfrac{1}{2}(c^{*}_{abc}+c^{*}_{bca}-c^{*}_{cab})-\tfrac{1}{ 2}(\mathscr{T}^{*}_{abc}+\mathscr{T}^{*}_{bca}-\mathscr{T}^{*}_{cab}). \tag{24}\]
It is also convenient for our later development to obtain the Bianchi identities satisfied by the gravitational gauge field strengths \(\mathscr{R}^{ab}{}_{cd}\), \(\mathscr{T}^{*a}{}_{bc}\) and \(\mathscr{R}_{ab}\) in WGT. These may be straightforwardly derived from the Jacobi identity applied to the generalised covariant derivative, namely \([\mathscr{D}^{*}_{a},[\mathscr{D}^{*}_{b},\mathscr{D}^{*}_{c}]]\varphi+[ \mathscr{D}^{*}_{c},[\mathscr{D}^{*}_{a},\mathscr{D}^{*}_{b}]]\varphi+[ \mathscr{D}^{*}_{b},[\mathscr{D}^{*}_{c},\mathscr{D}^{*}_{a}]]\varphi=0\). Inserting the form (20) for the WGT generalised covariant derivative, one quickly finds the three Bianchi identities [19]9
Footnote 9: Note that these expressions correct a typographical error in [19] by reversing the sign of each term containing \(\mathscr{R}_{ab}\).
\[\mathscr{D}^{*}_{[a}\mathscr{R}^{de}{}_{bc]}-\mathscr{T}^{*f}{}_{[ab}\mathscr{R}^{de}{}_{c]f} = 0, \tag{25a}\] \[\mathscr{D}^{*}_{[a}\mathscr{T}^{*d}{}_{bc]}-\mathscr{T}^{*e}{}_{[ab}\mathscr{T}^{*d}{}_{c]e} = 0,\] (25b) \[\mathscr{D}^{*}_{[a}\mathscr{H}_{bc]}-\mathscr{T}^{*e}{}_{[ab}\mathscr{H}_{c]e} = 0. \tag{25c}\]
By contracting over various indices, one also obtains the following non-trivial contracted Bianchi identities:
\[\mathscr{D}^{*}_{a}\mathscr{R}^{ae}{}_{bc}-2\mathscr{D}^{*}_{[b}\mathscr{R}^{e}{}_{c]}-2\mathscr{T}^{*f}{}_{a[b}\mathscr{R}^{ae}{}_{c]f}-\mathscr{T}^{*f}{}_{bc}\mathscr{R}^{e}{}_{f} = 0, \tag{26a}\] \[\mathscr{D}^{*}_{a}(\mathscr{R}^{a}{}_{c}-\tfrac{1}{2}\delta^{a}_{c}\mathscr{R})+\mathscr{T}^{*f}{}_{bc}\mathscr{R}^{b}{}_{f}+\tfrac{1}{2}\mathscr{T}^{*f}{}_{ab}\mathscr{R}^{ab}{}_{cf} = 0,\] (26b) \[\mathscr{D}^{*}_{a}\mathscr{T}^{*a}{}_{bc}+2\mathscr{D}^{*}_{[b}\mathscr{T}^{*}_{c]}+\mathscr{T}^{*e}{}_{bc}\mathscr{T}^{*}_{e}+2\mathscr{R}_{[bc]}-2\mathscr{H}_{bc} = 0. \tag{26c}\]
### Manifestly covariant variational derivatives in WGT
We now apply the manifestly covariant variational principle described in Section III to WGT. We begin by deriving the variational derivatives, and hence the EL equations, for the matter fields \(\varphi_{A}\) and the gravitational gauge fields \({h_{a}}^{\mu}\), \({A^{ab}}_{\mu}\) and \(B_{\mu}\). Using the fact that \(\delta_{0}h^{-1}=-h^{-1}{b^{a}}_{\mu}\,\delta_{0}{h_{a}}^{\mu}\), one may write (10) as
\[h\,\delta_{0}\mathscr{L} = \delta_{0}L-{b^{a}}_{\mu}L\,\delta_{0}{h_{a}}^{\mu}, \tag{27}\] \[= \frac{\partial L}{\partial\varphi_{A}}\,\delta_{0}\varphi_{A}+\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\,\delta_{0}(\mathscr{D}_{a}^{*}\varphi_{A})+\frac{\partial L}{\partial\mathscr{R}_{abcd}}\,\delta_{0}\mathscr{R}_{abcd}+\frac{\partial L}{\partial\mathscr{T}_{abc}^{*}}\,\delta_{0}\mathscr{T}_{abc}^{*}+\frac{\partial L}{\partial\mathscr{H}_{ab}}\,\delta_{0}\mathscr{H}_{ab}-{b^{a}}_{\mu}L\,\delta_{0}{h_{a}}^{\mu}.\]
In order to progress further, one must determine how the variations in (27) depend on the variations of the matter and gravitational gauge fields themselves. This is easily achieved using the definition of the WGT covariant derivative and the expressions (23a-23c) for the field strengths. One must also make use of the fact that for any coordinate vector \(V^{\mu}\) of weight \(w=0\) (i.e. invariant under local scale transformations, like the Lagrangian density \(\mathscr{L}\)), one may show that \(\partial_{\mu}V^{\mu}=h^{-1}(\mathscr{D}_{a}^{*}+\mathscr{T}_{a}^{*})({h{b^{a}}_{\mu}}V^{\mu})\) or, equivalently, for any local Lorentz vector \(\mathscr{V}^{a}\) having Weyl weight \(w=-3\) one has [19]
\[(\mathscr{D}_{a}^{*}+\mathscr{T}_{a}^{*})\mathscr{V}^{a}=h\partial_{\mu}(h^{-1}{h_{a}}^{\mu}\mathscr{V}^{a}). \tag{28}\]
Such expressions on the RHS of (27) therefore contribute only surface terms to the variation of the action in (9), but we will retain them nonetheless, as they are required for our later discussion.
We begin by considering together the first two terms on the RHS of (27), for which one obtains
\[\frac{\partial L}{\partial\varphi_{A}}\,\delta_{0}\varphi_{A}+\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\,\delta_{0}(\mathscr{D}_{a}^{*}\varphi_{A})\] \[=\frac{\partial L}{\partial\varphi_{A}}\,\delta_{0}\varphi_{A}+\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\left[\mathscr{D}_{a}^{*}(\delta_{0}\varphi_{A})+\delta_{0}{h_{a}}^{\mu}D_{\mu}^{*}\varphi_{A}+{h_{a}}^{\mu}(w_{A}\,\delta_{0}B_{\mu}+\tfrac{1}{2}\delta_{0}{A^{bc}}_{\mu}\Sigma_{bc})\varphi_{A}\right],\] \[=\left[\frac{\partial L}{\partial\varphi_{A}}-(\mathscr{D}_{a}^{*}+\mathscr{T}_{a}^{*})\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\right]\delta_{0}\varphi_{A}+\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\left[\delta_{0}{h_{a}}^{\mu}D_{\mu}^{*}\varphi_{A}+{h_{a}}^{\mu}(w_{A}\,\delta_{0}B_{\mu}+\tfrac{1}{2}\delta_{0}{A^{bc}}_{\mu}\Sigma_{bc})\varphi_{A}\right]+(\mathscr{D}_{a}^{*}+\mathscr{T}_{a}^{*})\left[\frac{\partial L}{\partial(\mathscr{D}_{a}^{*}\varphi_{A})}\,\delta_{0}\varphi_{A}\right]. \tag{29}\]
In the above expressions it is assumed that the appropriate antisymmetrisations, arising from the symmetries of the field strength tensors, are performed when the RHS are evaluated. It is also easily shown that the quantity in square brackets in each of the last terms in (30-32) has Weyl weight \(w=-3\), so according to (28) each such term contributes a surface term to the variation of the action (9).
One may then substitute the expressions (29-32) into (27), which may itself subsequently be substituted into (9) to obtain an expression of the general form (12) for Noether's first theorem, which may be written as
\[\delta S=\int\left[\upsilon^{A}\,\delta_{0}\varphi_{A}+\tau^{a}{}_{\mu}\,\delta_{0}h_{a}{}^{\mu}+\sigma_{ab}{}^{\mu}\,\delta_{0}A^{ab}{}_{\mu}+\zeta^{\mu}\,\delta_{0}B_{\mu}+h^{-1}(\mathscr{D}^{*}_{p}+\mathscr{T}^{*}_{p})(h\mathscr{J}^{p})\right]\,d^{4}x=0, \tag{33}\]
where the current \(h\mathscr{J}^{p}\) is given by
\[h\mathscr{J}^{p}=\frac{\partial L}{\partial(\mathscr{D}^{*}_{p}\varphi_{A})}\delta_{0}\varphi_{A}+2\left(\frac{\partial L}{\partial\mathscr{T}^{*}_{abp}}b_{a\mu}\,\delta_{0}h_{b}{}^{\mu}-\frac{\partial L}{\partial\mathscr{R}_{abcp}}h_{c}{}^{\mu}\,\delta_{0}A_{ab\mu}-\frac{\partial L}{\partial\mathscr{H}_{ap}}h_{a}{}^{\mu}\,\delta_{0}B_{\mu}\right)+b^{p}{}_{\mu}\xi^{\mu}L, \tag{34}\]
and we have defined the variational derivative \(\upsilon^{A}\equiv\delta\mathscr{L}/\delta\varphi_{A}\) with respect to the matter field \(\varphi_{A}\), and the total dynamical energy-momentum \(\tau^{a}{}_{\mu}\equiv\delta\mathscr{L}/\delta h_{a}{}^{\mu}\), spin-angular-momentum \(\sigma_{ab}{}^{\mu}\equiv\delta\mathscr{L}/\delta A^{ab}{}_{\mu}\) and dilation current \(\zeta^{\mu}\equiv\delta\mathscr{L}/\delta B_{\mu}\) of both the matter and gravitational gauge fields. Manifestly covariant forms for these variational derivatives may be read off from the expressions (29-32). Converting all Greek indices to Roman and defining the quantities \(\tau^{a}{}_{b}\equiv\tau^{a}{}_{\mu}h_{b}{}^{\mu}\), \(\sigma_{ab}{}^{c}\equiv\sigma_{ab}{}^{\mu}b^{c}{}_{\mu}\) and \(\zeta^{a}\equiv\zeta^{\mu}b^{a}{}_{\mu}\), one then makes the following identifications
\[h\upsilon^{A} = \frac{\partial L}{\partial\varphi_{A}}-(\mathscr{D}^{*}_{a}+\mathscr{T}^{*}_{a})\frac{\partial L}{\partial(\mathscr{D}^{*}_{a}\varphi_{A})}, \tag{35a}\] \[h\tau^{a}{}_{b} = \frac{\partial L}{\partial(\mathscr{D}^{*}_{a}\varphi_{A})}\mathscr{D}^{*}_{b}\varphi_{A}+2\frac{\partial L}{\partial\mathscr{R}_{pqra}}\mathscr{R}_{pqrb}+2\frac{\partial L}{\partial\mathscr{H}_{pa}}\mathscr{H}_{pb}+2\frac{\partial L}{\partial\mathscr{T}^{*}_{pqa}}\mathscr{T}^{*}_{pqb}-[\mathscr{T}^{*a}{}_{qr}+2\delta^{a}_{q}(\mathscr{D}^{*}_{r}+\mathscr{T}^{*}_{r})]\frac{\partial L}{\partial\mathscr{T}^{*b}{}_{qr}}-\delta^{b}_{a}L,\] (35b) \[h\sigma_{ab}{}^{c} = \frac{1}{2}\frac{\partial L}{\partial(\mathscr{D}^{*}_{c}\varphi_{A})}\Sigma_{ab}\varphi_{A}+\left[\mathscr{T}^{*c}{}_{pq}+2\delta^{c}_{p}(\mathscr{D}^{*}_{q}+\mathscr{T}^{*}_{q})\right]\frac{\partial L}{\partial\mathscr{R}^{ab}{}_{pq}}-2\frac{\partial L}{\partial\mathscr{T}^{*[ab]}{}_{c}},\] (35c) \[h\zeta^{a} = \frac{\partial L}{\partial(\mathscr{D}^{*}_{a}\varphi_{A})}w_{A}\varphi_{A}+\left[\mathscr{T}^{*a}{}_{pq}+2\delta^{a}_{p}(\mathscr{D}^{*}_{q}+\mathscr{T}^{*}_{q})\right]\frac{\partial L}{\partial\mathscr{H}_{pq}}+2\frac{\partial L}{\partial\mathscr{T}^{*r}{}_{pq}}\delta^{a}_{q}\delta^{p}_{r}, \tag{35d}\]
where, once again, it is assumed that the appropriate antisymmetrisations, arising from the symmetries of the field strength tensors, are performed when the RHS are evaluated. The expressions (35) constitute the completion of our first goal. One sees immediately that, unlike (5a), the above forms for the variational derivative of each field (and hence the equations of motion obtained by setting each RHS to zero) are manifestly covariant. Moreover, they are straightforward to evaluate, since they require one only to differentiate the Lagrangian \(L\) with respect to the matter fields, their covariant derivatives and the field strengths, respectively. One may easily confirm that the above expressions lead to precisely the same variational derivatives as those obtained by using the standard (but much longer) approach of evaluating (5a) for each field and then reassembling the many resulting terms into manifestly covariant forms.
The expressions (35) not only provide a significant calculational saving in obtaining the variational derivatives in WGT, but also yield a useful insight into their general form. In particular, one notes that for a Lagrangian \(L\) that does not contain the gauge field strength tensors, but depends only on the matter fields and their covariant derivatives, the variational derivatives with respect to the gauge fields reduce to the _covariant canonical currents_[11; 20] of the matter fields. For Lagrangians that do depend on the gauge field strengths, also of interest are the analogous forms of the penultimate terms on the RHS of (35b-35d), which are the only terms capable of producing a dependence on the covariant derivatives of the field strength tensors; in each case, the corresponding term depends on the covariant derivative of the field strength tensor for the gauge field with respect to which the variational derivative is taken. It is also worth pointing out that we have not assumed the equations of motion to be satisfied in deriving (35a-35d). Thus, one may calculate the corresponding variational derivatives for _any subset_ of terms in \(L\) that is a scalar density of weight \(w=-4\). Individually, however, such quantities do _not_ vanish, in general. Rather, each equation of motion requires only the vanishing of the sum of such quantities, when derived from disjoint subsets that exhaust the total Lagrangian \(L\).
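To illustrate how directly the expressions (35) may be applied, consider (purely as an example of ours, with a conventional normalisation constant \(\nu\)) the usual Weyl-covariant kinetic term for the scalar compensator \(\phi\) of weight \(w=-1\),

\[L_{\phi}=\tfrac{1}{2}\nu\,\mathscr{D}^{*}_{a}\phi\,\mathscr{D}^{*a}\phi,\]

which has weight \(w=-4\) as required. Since \(L_{\phi}\) contains no field strengths and \(\Sigma_{ab}\phi=0\), the expressions (35) give immediately

\[h\upsilon^{\phi}=-(\mathscr{D}^{*}_{a}+\mathscr{T}^{*}_{a})(\nu\,\mathscr{D}^{*a}\phi),\qquad h\tau^{a}{}_{b}=\nu\,\mathscr{D}^{*a}\phi\,\mathscr{D}^{*}_{b}\phi-\delta^{a}_{b}L_{\phi},\qquad h\sigma_{ab}{}^{c}=0,\qquad h\zeta^{a}=-\nu\,\phi\,\mathscr{D}^{*a}\phi.\]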
Finally, we note that the above approach is easily adapted to other gravitational gauge theories. For example, to apply it to PGT one needs simply to 'remove the asterisks', thereby replacing the WGT covariant derivative and torsion by their PGT counterparts, and set \(B_{\mu}\equiv 0\), so that \(\zeta^{a}\) and \(\mathscr{H}_{ab}\) also vanish identically. Indeed, the above
approach is of even greater use in PGT than WGT, since the functional dependence of the PGT Lagrangian on the matter fields, their covariant derivatives and the field strengths can be more complicated than in WGT, as in PGT one does not require \(L\) to have Weyl weight \(w=-4\)[11; 19].
### Relationship between first- and second-order variational derivatives in WGT
Before turning our attention to the direct derivation of manifestly covariant conservation laws for WGT, we first briefly demonstrate how the analysis in the previous section is well suited to comparing first- and second-order variational derivatives. In particular, we will focus on the example of comparing the variational derivatives obtained by setting the WGT torsion to zero _after_ the variation is performed (first-order approach) with those obtained by setting the torsion to zero in the action _before_ carrying out the variation (second-order approach).
Let us begin by considering the simpler case of the first-order approach, where one merely sets \(\mathcal{T}^{\ast a}{}_{bc}=0\) (which is a properly WGT-covariant condition) in the expressions (35a-35d). The condition \(\mathcal{T}^{\ast a}{}_{bc}=0\) results in the rotational gauge field \(A^{ab}{}_{\mu}\) no longer being an independent field, but one determined explicitly by the other gauge fields \(h_{a}{}^{\mu}\) and \(B_{\mu}\), which we thus denote by \({}^{0}\!A^{\ast ab}{}_{\mu}\) and term the 'reduced' \(A\)-field [19; 20]. From (24), these quantities are given explicitly by \({}^{0}\!A^{\ast}_{ab\mu}=b^{c}{}_{\mu}{}^{0}\!\mathcal{A}^{\ast}_{abc}\), where
\[{}^{0}\!\mathcal{A}^{\ast}_{abc}=\tfrac{1}{2}(c_{abc}+c_{bca}-c_{cab})+\eta_{ ac}\mathcal{B}_{b}-\eta_{bc}\mathcal{B}_{a}, \tag{36}\]
in which \(c^{a}{}_{bc}\equiv h_{b}{}^{\mu}h_{c}{}^{\nu}(\partial_{\mu}b^{a}{}_{\nu}-\partial_{\nu}b^{a}{}_{\mu})\). Under a local Weyl transformation, the quantities \({}^{0}\!A^{\ast ab}{}_{\mu}\) transform in the same way as \(A^{ab}{}_{\mu}\), so one may construct the 'reduced' WGT covariant derivative \({}^{0}\!\mathcal{D}^{\ast}_{a}\varphi=h_{a}{}^{\mu}{}^{0}\!D^{\ast}_{\mu}\varphi=h_{a}{}^{\mu}(\partial_{\mu}+\tfrac{1}{2}{}^{0}\!A^{\ast cd}{}_{\mu}\Sigma_{cd}+wB_{\mu})\varphi\), which transforms in the same way as \(\mathcal{D}^{\ast}_{a}\varphi\), but depends only on the \(h\) field, its first derivatives, and the \(B\)-field. Thus, the corresponding quantities to (35a-35d) are obtained simply by evaluating the RHS with \(\mathcal{T}^{\ast a}{}_{bc}\) (and its contractions) set to zero, which also implies \(\mathcal{D}^{\ast}_{a}\to{}^{0}\!\mathcal{D}^{\ast}_{a}\). This yields
\[h\,{}^{0}\upsilon^{A} = \left.\frac{\partial L}{\partial\varphi_{A}}\right|_{0}-{}^{0}\!\mathcal{D}^{\ast}_{a}\left.\frac{\partial L}{\partial(\mathcal{D}^{\ast}_{a}\varphi_{A})}\right|_{0}, \tag{37a}\] \[h\,{}^{0}\tau^{a}{}_{b} = \left.\frac{\partial L}{\partial(\mathcal{D}^{\ast}_{a}\varphi_{A})}\right|_{0}{}^{0}\!\mathcal{D}^{\ast}_{b}\varphi_{A}+2\left.\frac{\partial L}{\partial\mathcal{R}_{pqra}}\right|_{0}{}^{0}\!\mathcal{R}_{pqrb}+2\left.\frac{\partial L}{\partial\mathcal{H}_{pa}}\right|_{0}{}^{0}\!\mathcal{H}_{pb}+2\,{}^{0}\!\mathcal{D}^{\ast}_{r}\left.\frac{\partial L}{\partial\mathcal{T}^{\ast b}{}_{ar}}\right|_{0}-\delta^{b}_{a}\left.L\right|_{0},\] (37b) \[h\,{}^{0}\sigma_{ab}{}^{c} = \tfrac{1}{2}\left.\frac{\partial L}{\partial(\mathcal{D}^{\ast}_{c}\varphi_{A})}\right|_{0}\Sigma_{ab}\varphi_{A}+2\delta^{c}_{r}\,{}^{0}\!\mathcal{D}^{\ast}_{s}\left.\frac{\partial L}{\partial\mathcal{R}^{ab}{}_{rs}}\right|_{0}-2\left.\frac{\partial L}{\partial\mathcal{T}^{\ast[ab]}{}_{c}}\right|_{0},\] (37c) \[h\,{}^{0}\zeta^{a} = \left.\frac{\partial L}{\partial(\mathcal{D}^{\ast}_{a}\varphi_{A})}\right|_{0}w_{A}\varphi_{A}+2\delta^{a}_{p}\,{}^{0}\!\mathcal{D}^{\ast}_{q}\left.\frac{\partial L}{\partial\mathcal{H}_{pq}}\right|_{0}+2\left.\frac{\partial L}{\partial\mathcal{T}^{\ast r}{}_{pq}}\right|_{0}\delta^{a}_{q}\delta^{p}_{r}, \tag{37d}\]
where \(|_{0}\) denotes that the quantity to its immediate left is evaluated assuming \(\mathcal{T}^{\ast}_{abc}=0\). The equations of motion from the first-order approach are then given simply by equating each of (37a-37d) to zero. Once again, it is worth noting that we have not assumed any equations of motion to be satisfied in deriving the quantities (37a-37d). Thus, one may derive corresponding quantities for _any subset_ of terms in \(L\) that are a scalar density with weight \(w=-4\), and these quantities do not vanish, in general.
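As a quick consistency check of (36) (ours, using only the definitions above): since \(b^{a}{}_{\nu}\) has Weyl weight \(w=+1\), the quantities \(c^{\ast a}{}_{bc}\) and \(c^{a}{}_{bc}\) defined above satisfy

\[c^{\ast a}{}_{bc}=2h_{b}{}^{\mu}h_{c}{}^{\nu}\,\partial^{\ast}_{[\mu}b^{a}{}_{\nu]}=c^{a}{}_{bc}+\delta^{a}_{c}\mathscr{B}_{b}-\delta^{a}_{b}\mathscr{B}_{c},\]

and on substituting this into (24) with \(\mathscr{T}^{\ast}_{abc}=0\) the \(\mathscr{B}\)-terms assemble into \(\eta_{ac}\mathscr{B}_{b}-\eta_{bc}\mathscr{B}_{a}\), reproducing (36).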
We now consider the second-order approach, where one imposes \(\mathcal{T}^{\ast}_{abc}=0\) at the level of the action, prior to evaluating the variational derivatives. In this case, the rotational gauge field \(A^{ab}{}_{\mu}\) is again determined explicitly by \(h_{a}{}^{\mu}\) and \(B_{\mu}\) according to (36), and so now the action depends only on these other gauge fields. From (36), one readily finds that
\[\delta_{0}A_{ab\mu}=b^{c}{}_{\mu}\left(h_{[c}{}^{\nu}\,{}^{0}\!\mathcal{D}^{ \ast}_{b]}\delta_{0}b_{a\nu}+h_{[a}{}^{\nu}\,{}^{0}\!\mathcal{D}^{\ast}_{c]} \delta_{0}b_{b\nu}-h_{[b}{}^{\nu}\,{}^{0}\!\mathcal{D}^{\ast}_{a]}\delta_{0}b_{ c\nu}+2\eta_{c[a}h_{b]}{}^{\nu}\delta_{0}B_{\nu}\right), \tag{38}\]
from which one may show that (up to terms that are the divergence of a quantity that vanishes on the boundary of the integration region) the integrand in the expression (2) for the variation of the action is given by
\[\frac{\delta\mathscr{L}}{\delta\chi_{A}}\,\delta_{0}\chi_{A} = {}^{0}\upsilon^{A}\,\delta_{0}\varphi_{A}+{}^{0}\tilde{\tau}^{a}{}_{\mu}\,\delta_{0}h_{a}{}^{\mu}-b\,b^{f}{}_{\mu}\left(\eta_{fa}\delta^{e}_{[b}\,{}^{0}\!\mathcal{D}^{\ast}_{c]}+\eta_{fb}\delta^{e}_{[c}\,{}^{0}\!\mathcal{D}^{\ast}_{a]}-\eta_{fc}\delta^{e}_{[a}\,{}^{0}\!\mathcal{D}^{\ast}_{b]}\right)(h\,{}^{0}\tilde{\sigma}^{abc})\,\delta_{0}h_{e}{}^{\mu} \tag{39}\] \[+2\,{}^{0}\tilde{\sigma}^{abc}\eta_{c[a}h_{b]}{}^{\mu}\,\delta_{0}B_{\mu}+{}^{0}\tilde{\zeta}^{\mu}\,\delta_{0}B_{\mu},\] \[\equiv v^{A}\,\delta_{0}\varphi_{A}+t^{a}{}_{\mu}\,\delta_{0}h_{a}{}^{\mu}+j^{\mu}\,\delta_{0}B_{\mu}, \tag{40}\]
where we have again made use of (28) and \({}^{0}\tilde{\tau}^{a}{}_{\mu}\), \({}^{0}\tilde{\sigma}_{ab}{}^{c}\) and \({}^{0}\tilde{\zeta}^{\mu}\) denote quantities analogous to (37b-37d), respectively, but _without_ the terms containing \(\left.\partial L/\partial\mathcal{T}^{\ast}_{abc}\right|_{0}\). In the last line, we have also defined the total dynamical energy-momentum \(t^{a}{}_{\mu}\) and dilation current \(j^{\mu}\) of both the matter and gravitational gauge fields, and the matter field variational derivatives \(v^{A}\), in the second-order approach. By comparing (39) and (40), and converting all indices to Roman, one finds that the second-order variational derivatives are given in terms of the first-order ones by
\[hv^{A} = h\,{}^{0}\upsilon^{A}, \tag{41}\] \[ht^{e}{}_{f} = h\,{}^{0}\tilde{\tau}^{e}{}_{f}-\left(\eta_{fa}\delta^{e}_{[b}\,{}^{0}\!\mathcal{D}^{\ast}_{c]}+\eta_{fb}\delta^{e}_{[c}\,{}^{0}\!\mathcal{D}^{\ast}_{a]}-\eta_{fc}\delta^{e}_{[a}\,{}^{0}\!\mathcal{D}^{\ast}_{b]}\right)(h\,{}^{0}\tilde{\sigma}^{abc}), \tag{42}\] \[hj^{d} = h\,{}^{0}\tilde{\zeta}^{d}+2h\,{}^{0}\tilde{\sigma}^{ad}{}_{a}. \tag{43}\]

### Manifestly covariant conservation laws in WGT
where it is worth noting that \(h\upsilon^{A}=\delta L/\delta\varphi_{A}\). On multiplying through by \(h_{d}{}^{\nu}\), one may rewrite the conservation law wholly in terms of quantities possessing only Roman indices as
\[(\mathcal{D}_{c}^{*}+\mathcal{T}_{c}^{*})(h\tau^{c}{}_{d})-h(\sigma_{ab}{}^{c}\mathcal{R}^{ab}{}_{cd}+\zeta^{c}\mathcal{H}_{cd}-\tau^{c}{}_{b}\mathcal{T}^{*b}{}_{cd}-\upsilon^{A}\mathcal{D}_{d}^{*}\varphi_{A})=0. \tag{46}\]
We next consider invariance of the action under infinitesimal local Lorentz rotations characterised by \(\omega^{ab}(x)\) (which we take to correspond to \(C=2\)). In this case, the functions \(f^{\mu}_{A2}\) in the original set of transformation laws (21) are already manifestly covariant. One may thus insert the functions \(f^{\mu}_{A2}\) and \(f_{A2}\) read off from (21) directly into the general form (15), without employing the Bessel-Hagen method. On recalling that \(\Gamma^{*}_{\beta}\sigma_{pq}{}^{\beta}=-A^{r}{}_{p\beta}\sigma_{rq}{}^{\beta }-A^{r}{}_{q\beta}\sigma_{pr}{}^{\beta}\) (since \(\sigma_{ab}{}^{\mu}\) has Weyl weight \(w=0\)) one finds that the final set of terms on the LHS of (15) vanish when \(\gamma^{A}\) corresponds to \(h\sigma_{ab}{}^{\mu}\), and one immediately obtains the manifestly covariant conservation law
\[(\mathcal{D}_{c}^{*}+\mathcal{T}_{c}^{*})(h\sigma_{ab}{}^{c})+h\tau_{[ab]}+ \tfrac{1}{2}h\upsilon^{A}\Sigma_{ab}\varphi_{A}=0. \tag{47}\]
Finally, we consider invariance of the action under infinitesimal local dilations characterised by \(\rho(x)\) (which we take to correspond to \(C=3\)). Once again, the relevant functions \(f^{\mu}_{A3}\) in the original set of transformation laws (21) are already manifestly covariant. One may thus insert \(f^{\mu}_{A3}\) and \(f_{A3}\) read off from (21) directly into the general form (15), which immediately yields the manifestly covariant conservation law
\[(\mathcal{D}_{c}^{*}+\mathcal{T}_{c}^{*})(h\zeta^{c})-h\tau^{c}{}_{c}+h \upsilon^{A}w_{A}\varphi_{A}=0. \tag{48}\]
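For the illustrative scalar Lagrangian \(L_{\phi}\) considered after (35) (our example), one may verify (48) explicitly: using the results quoted there,

\[(\mathcal{D}^{*}_{c}+\mathcal{T}^{*}_{c})(h\zeta^{c})=-2L_{\phi}+\phi\,h\upsilon^{\phi},\qquad h\tau^{c}{}_{c}=-2L_{\phi},\qquad h\upsilon^{A}w_{A}\varphi_{A}=-\phi\,h\upsilon^{\phi},\]

so the three terms in (48) cancel identically, without use of any equations of motion, as must be the case for any Weyl-invariant \(L\).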
It is straightforward to verify that the manifestly covariant WGT conservation laws (46-47) have the correct forms [19; 20] and match those derived (albeit at considerably greater length) using the standard form of Noether's second theorem (8a). Before moving on to consider the further condition (8b) arising from Noether's second theorem, in the context of WGT, we note that the conservation law (47) may be used to simplify the expression (42) for the second-order variational derivative with respect to \(h_{a}{}^{\mu}\) in terms of first-order variational derivatives. Imposing the condition \(\mathcal{T}_{abc}^{*}=0\), the conservation law (47) becomes
\[{}^{0}\mathcal{D}_{c}^{*}(h\,{}^{0}\sigma_{ab}{}^{c})+h\,{}^{0}\tau_{[ab]}+\tfrac{1}{2}h\,{}^{0}\upsilon^{A}\Sigma_{ab}\varphi_{A}=0. \tag{49}\]
If one assumes the _matter_ equations of motion \({}^{0}\upsilon^{A}=0\) are satisfied (or, equivalently, that the Lagrangian \(L\) does not depend on matter fields), the expression (42) can thus be written in the simpler and manifestly symmetric form
\[ht_{ab}\overset{\text{\tiny{m}}}{=}h\,{}^{0}\tau_{(ab)}-2\,{}^{0}\mathcal{D}_{c}^{*}(h\,{}^{0}\sigma_{(ab)}{}^{c}). \tag{50}\]
### Relationship between currents in Noether's second theorem in WGT
We conclude this section by considering the relationship in WGT between the two currents that appear in Noether's second theorem (8b). As discussed in Section III.3, this equation may be re-written as \((\mathcal{D}_{a}^{*}+\mathcal{T}_{a}^{*})[h(\mathcal{F}^{a}-\mathcal{S}^{a}) ]=0\), where \(h\mathcal{F}^{a}\) for WGT is given by (34) and the expression for \(h\mathcal{S}^{a}\) may be obtained from the general form (16), which on using the original WGT field variations (21) yields
\[h\mathcal{S}^{p}=h\left[-\xi^{\mu}(\tau^{p}{}_{\mu}-\sigma_{ab}{}^{p}A^{ab}{} _{\mu}-\zeta^{p}B_{\mu})+\omega^{ab}\sigma_{ab}{}^{p}+\rho\zeta^{p}\right]. \tag{51}\]
It is worth noting that this expression does not depend on the variational derivatives \(\upsilon^{A}\equiv\delta\mathscr{L}/\delta\varphi_{A}\) with respect to the matter fields since, as expected, the functions \(f^{\mu}_{AC}\) vanish in this case, as can be read off from the field variations (21). Thus, in order for \(h\mathcal{S}^{p}\) to vanish, it is sufficient that just the equations of motion of the gauge fields are satisfied.
If one substitutes the original form variations (21) into the expression (34) for \(h\mathcal{F}^{p}\), one finds after a long calculation10, which requires careful use of the definition (22) of the field strength tensors, the contracted Bianchi identity (26c) and the manifestly covariant expressions (35b-35d) for the variational derivatives with respect to the gravitational gauge fields, that
Footnote 10: The calculation can be somewhat shortened, better organised and carried out in a largely manifestly covariant manner if one assumes the local Weyl transformation parameters in (21) to have the forms \(\xi^{\mu}(x)\), \(\omega^{ab}(x)=\tilde{\omega}^{ab}(x)-A^{ab}{}_{\nu}\xi^{\nu}\) and \(\rho(x)=\bar{\rho}(x)-B_{\nu}\xi^{\nu}\), where \(\xi^{\mu}(x)\), \(\tilde{\omega}^{ab}(x)\) and \(\bar{\rho}(x)\) are arbitrary functions of position, and considers separately the three cases: (i) \(\tilde{\omega}^{ab}=0=\bar{\rho}\); (ii) \(\xi^{\mu}=0=\bar{\rho}\); and (iii) \(\xi^{\mu}=0=\tilde{\omega}^{ab}\). This is a similar approach to that used in Section IV.3 to derive directly the manifestly covariant forms of the WGT conservation laws and, in particular, allows one in case (i) to make use again of the manifestly covariant form variations (44) derived using the Bessel-Hagen method.
\[(\mathcal{D}_{p}^{*}+\mathcal{T}_{p}^{*})(h\mathcal{F}^{p})=(\mathcal{D}_{p}^{*}+\mathcal{T}_{p}^{*})\left[-\xi^{\mu}h(\tau^{p}{}_{q}b^{q}{}_{\mu}-\sigma_{ab}{}^{p}A^{ab}{}_{\mu}-\zeta^{p}B_{\mu})+\omega^{ab}h\sigma_{ab}{}^{p}+\rho h\zeta^{p}\right]=(\mathcal{D}_{p}^{*}+\mathcal{T}_{p}^{*})(h\mathcal{S}^{p}), \tag{52}\]
thereby verifying explicitly the relationship between the two currents that is implied by Noether's second theorem (8b). Thus, as expected for an action that is invariant under a set of local symmetries, this relationship contains no further information, but nonetheless provides a useful check of the derivation of the expressions (35b-35d). Indeed, the requirement \((\mathcal{D}^{\ast}_{a}+\mathcal{T}^{\ast}_{a})[h(\mathcal{F}^{a}-\mathcal{S}^{a})]=0\) from Noether's second theorem can thus be used as an alternative (albeit rather longer) means of deriving the expressions (35b-35d) for the variational derivatives with respect to the gravitational gauge fields; it has been demonstrated, however, that this equivalence between the Noether and Hilbert (variational) approaches does not hold in general for all modified gravity theories [59].
## V Extended Weyl gauge theory
We now move on to consider eWGT [19], which proposes an 'extended' form for the transformation laws of the rotational and dilational gauge fields under local dilations. In particular, under infinitesimal local Weyl transformations consisting of GCTs, rotations of the local Lorentz frames and dilations, parameterised by \(\xi^{\mu}(x)\), \(\omega^{ab}(x)\) and \(\rho(x)\), respectively, a matter field \(\varphi\) of weight \(w\) and the gauge fields transform as
\[\delta_{0}\varphi = -\xi^{\nu}\partial_{\nu}\varphi+(\tfrac{1}{2}\omega^{ab}\Sigma_{ab}+w\rho)\varphi, \tag{53a}\]
\[\delta_{0}h_{a}{}^{\mu} = -\xi^{\nu}\partial_{\nu}h_{a}{}^{\mu}+h_{a}{}^{\nu}\partial_{\nu}\xi^{\mu}-(\omega^{b}{}_{a}+\rho\,\delta^{b}_{a})h_{b}{}^{\mu}, \tag{53b}\]
\[\delta_{0}A^{ab}{}_{\mu} = -\xi^{\nu}\partial_{\nu}A^{ab}{}_{\mu}-A^{ab}{}_{\nu}\partial_{\mu}\xi^{\nu}-2\omega^{[a}{}_{c}A^{b]c}{}_{\mu}-\partial_{\mu}\omega^{ab}+2\theta\,\eta^{c[a}b^{b]}{}_{\mu}h_{c}{}^{\nu}\partial_{\nu}\rho, \tag{53c}\]
\[\delta_{0}B_{\mu} = -\xi^{\nu}\partial_{\nu}B_{\mu}-B_{\nu}\partial_{\mu}\xi^{\nu}-\theta\partial_{\mu}\rho, \tag{53d}\]
where \(\theta\) is an arbitrary parameter that can take any value. The proposed form for the transformation law (53c) of the rotational gauge field is motivated by the observation that the WGT (and PGT) matter actions for the massless Dirac field and the electromagnetic field (neither of which depends on the dilation gauge field) are invariant under local dilations even if one assumes this 'extended' transformation law for the rotational gauge field. A complementary motivation for introducing the extended transformation law (53c) is that under local dilations it places the transformation properties of the PGT rotational gauge field strength \(\mathcal{R}^{ab}{}_{cd}\) and translational gauge field strength \(\mathcal{F}^{\,a}{}_{bc}\) on a more equal footing: for general values of \(\theta\), neither \(\mathcal{R}^{ab}{}_{cd}\) nor \(\mathcal{F}^{\,a}{}_{bc}\) transforms covariantly, but \(\mathcal{R}^{ab}{}_{cd}\) does transform covariantly and \(\mathcal{F}^{\,a}{}_{bc}\) transforms inhomogeneously for \(\theta=0\), and _vice-versa_ for \(\theta=1\). It is also worth noting that the extended transformation law for the rotational gauge field reduces to that in WGT for \(\theta=0\), whereas the extended transformation law (53d) for the dilational gauge field reduces to the WGT form for \(\theta=1\); thus there is no value of \(\theta\) for which both transformation laws reduce to their WGT forms.
In eWGT, the covariant derivative, denoted by \(\mathcal{D}^{\dagger}_{a}\), has a somewhat different form to that shown in (20) for WGT. In particular, one does not adopt the standard approach of introducing each gauge field as the linear coefficient of the corresponding generator. Rather, in order to accommodate our proposed extended transformation law (53c) under local dilations, one is led to introduce the 'rotational' gauge field \(A^{ab}{}_{\mu}(x)\) and the 'dilational' gauge field \(B_{\mu}(x)\) in a very different way, so that11
Footnote 11: The daggers in the definition of the derivative operator are intended simply to distinguish it from the usual notation used [11; 19; 20] for the covariant derivatives of PGT and WGT, and should not be confused with the operation of Hermitian conjugation.
\[\mathcal{D}^{\dagger}_{a}\varphi_{A}=h_{a}{}^{\mu}D^{\dagger}_{\mu}\varphi_{A} =h_{a}{}^{\mu}(\partial_{\mu}+\Gamma^{\dagger}_{\mu})\varphi_{A}=h_{a}{}^{ \mu}[\partial_{\mu}+\tfrac{1}{2}A^{\dagger ab}{}_{\mu}\Sigma_{ab}+w_{A}(B_{ \mu}-\tfrac{1}{3}T_{\mu})]\varphi_{A}, \tag{54}\]
where we have introduced the modified \(A\)-field
\[A^{\dagger ab}{}_{\mu}\equiv A^{ab}{}_{\mu}+2b^{[a}{}_{\mu}\mathcal{B}^{b]}, \tag{55}\]
in which \(\mathcal{B}_{a}=h_{a}{}^{\mu}B_{\mu}\) and \(T_{\mu}=b^{a}{}_{\mu}\mathcal{T}_{a}\), where \(\mathcal{T}_{a}\equiv\mathcal{F}^{\,b}{}_{ab}\) is the trace of the PGT torsion.12 It is straightforward to show that, if \(\varphi\) has Weyl weight \(w\), then (54) does indeed transform covariantly with Weyl weight \(w-1\), as required. Unlike the transformation laws for \(A^{ab}{}_{\mu}\) and \(B_{\mu}\), the covariant derivative (54) does not explicitly contain the parameter \(\theta\). Consequently, it does _not_ reduce to the standard WGT covariant derivative \(\mathcal{D}^{\ast}_{a}\varphi_{A}\) in either special case \(\theta=0\) or \(\theta=1\), while retaining its covariant transformation law for _any_ value of \(\theta\).
Footnote 12: It is worth noting that \(A^{\dagger ab}{}_{\mu}\) is not considered to be a fundamental field (notwithstanding the variational approach adopted below), but merely a shorthand for the above combination of the gauge fields \(h_{a}{}^{\mu}\) (or its inverse), \(A^{ab}{}_{\mu}\) and \(B_{\mu}\). Similarly, \(T_{\mu}\) is merely a shorthand for the corresponding function of the gauge fields \(h_{a}{}^{\mu}\) (or its inverse) and \(A^{ab}{}_{\mu}\).
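As a simple illustration of (54), for a scalar matter field \(\phi\) of Weyl weight \(w\) (so that \(\Sigma_{ab}\phi=0\)) the rotational gauge field drops out entirely and one has
\[\mathscr{D}^{\dagger}_{a}\phi=h_{a}{}^{\mu}\big[\partial_{\mu}+w\big(B_{\mu}-\tfrac{1}{3}T_{\mu}\big)\big]\phi,\]
so the dilational gauge field enters only through the combination \(B_{\mu}-\tfrac{1}{3}T_{\mu}\); it is this combination that underlies the further gauge symmetry identified below.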
The derivative (54) does in fact transform covariantly under the much _wider_ class of gauge field transformations in which \(\theta\partial_{\mu}\rho(x)\) is replaced in (53c-53d) by an _arbitrary_ vector field \(Y_{\mu}(x)\). Indeed, one finds that the WGT (and PGT) matter actions for the massless Dirac field and the electromagnetic field are still invariant under local dilations after such a replacement, although the discussion above regarding the transformation properties of \(\mathcal{R}^{ab}{}_{cd}\) and \(\mathcal{F}^{\,a}{}_{bc}\)
requires appropriate modification, since neither transforms covariantly if \(\theta\partial_{\mu}\rho(x)\) is replaced by an arbitrary vector \(Y_{\mu}(x)\). The covariance of \(\mathscr{D}^{\dagger}_{a}\varphi_{A}\) under this wider class of transformations allows one to identify a further gauge symmetry of eWGT, namely under the simultaneous transformations
\[A^{ab}{}_{\mu}\to A^{ab}{}_{\mu}+2b^{[a}{}_{\mu}\mathscr{Y}^{b]},\qquad B_{\mu}\to B_{\mu}-Y_{\mu}, \tag{56}\]
where \(\mathscr{Y}_{a}=h_{a}{}^{\mu}Y_{\mu}\) and \(Y_{\mu}\) is an arbitrary vector field. Under this symmetry, both \(A^{\dagger ab}{}_{\mu}\) and \(B_{\mu}-\frac{1}{3}T_{\mu}\) remain unchanged and thus \(\mathscr{D}^{\dagger}_{a}\varphi\) is invariant, as too are the eWGT field strengths and action discussed below. One may make use of this symmetry of eWGT to choose a gauge in which either \(B_{\mu}\) or \(T_{\mu}\) is self-consistently set to zero, which can considerably simplify subsequent calculations.
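As a quick check of the first of these statements (using the antisymmetrisation convention \(X_{[ab]}=\tfrac{1}{2}(X_{ab}-X_{ba})\), and noting that \(h_{a}{}^{\mu}\) is unchanged under (56), so that \(\delta\mathscr{B}_{a}=-\mathscr{Y}_{a}\)), one has
\[\delta A^{\dagger ab}{}_{\mu}=\delta A^{ab}{}_{\mu}+2b^{[a}{}_{\mu}\,\delta\mathscr{B}^{b]}=2b^{[a}{}_{\mu}\mathscr{Y}^{b]}-2b^{[a}{}_{\mu}\mathscr{Y}^{b]}=0.\]
Similarly, the shift in \(A^{ab}{}_{\mu}\) changes the PGT torsion by \(\delta\mathcal{T}^{a}{}_{bc}=2\delta^{a}_{[b}\mathscr{Y}_{c]}\), so that its trace satisfies \(\delta\mathcal{T}_{a}=\delta\mathcal{T}^{b}{}_{ab}=-3\mathscr{Y}_{a}\) and hence
\[\delta\big(B_{\mu}-\tfrac{1}{3}T_{\mu}\big)=-Y_{\mu}-\tfrac{1}{3}b^{a}{}_{\mu}\,\delta\mathcal{T}_{a}=-Y_{\mu}+Y_{\mu}=0.\]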
It was noted in [19] that the extended transformation laws (53c-53d) implement Weyl scaling in a novel way that may be related to gauging of the full conformal group. This is discussed in more detail in [20], where it is shown that eWGT does indeed constitute a valid novel gauge theory of the conformal group. We briefly summarise below the aspects of eWGT that are relevant to our present discussion, and refer the reader to [19; 20] for further details.
By analogy with WGT, the Lagrangian density in eWGT has the usual form \(\mathscr{L}=h^{-1}L\), where the translational gauge field \(h_{a}{}^{\mu}\) is assigned a Weyl weight \(w=-1\), so that \(h=\det(h_{a}{}^{\mu})\) and \(L\) are scalar densities both of Weyl weight \(w=-4\), and hence the action \(S\) is invariant under local scale transformations. The Lagrangian has the functional dependencies
\[L=L(\varphi_{A},\mathscr{D}^{\dagger}_{a}\varphi_{A},\mathscr{R}^{\dagger}_{abcd},\mathscr{T}^{\dagger}_{abc},\mathscr{H}^{\dagger}_{ab}), \tag{57}\]
where the quantities \(\mathscr{R}^{\dagger}_{abcd}\), \(\mathscr{T}^{\dagger}_{abc}\), \(\mathscr{H}^{\dagger}_{ab}\) are the eWGT 'rotational', 'translational' and 'dilational' gauge field strengths, respectively, which are defined through the action of the commutator of two eWGT covariant derivatives on some field \(\varphi\) of weight \(w\) by
\[[\mathscr{D}^{\dagger}_{c},\mathscr{D}^{\dagger}_{d}]\varphi=(\tfrac{1}{2}\mathscr{R}^{\dagger ab}{}_{cd}\Sigma_{ab}+w\mathscr{H}^{\dagger}_{cd}-\mathscr{T}^{\dagger a}{}_{cd}\mathscr{D}^{\dagger}_{a})\varphi. \tag{58}\]
The field strengths have the forms \(\mathscr{R}^{\dagger ab}{}_{cd}=h_{c}{}^{\mu}h_{d}{}^{\nu}R^{\dagger ab}{}_{\mu\nu}\), \(\mathscr{H}^{\dagger}_{cd}=h_{c}{}^{\mu}h_{d}{}^{\nu}H^{\dagger}_{\mu\nu}\) and \(\mathscr{T}^{\dagger a}{}_{bc}=h_{b}{}^{\mu}h_{c}{}^{\nu}T^{\dagger a}{}_{\mu\nu}\), where
\[R^{\dagger ab}{}_{\mu\nu} = 2(\partial_{[\mu}A^{\dagger ab}{}_{\nu]}+\eta_{cd}A^{\dagger ac}{} _{[\mu}A^{\dagger db}{}_{\nu]}), \tag{59}\] \[H^{\dagger}_{\mu\nu} = 2(\partial_{[\mu}B_{\nu]}-\tfrac{1}{3}\partial_{[\mu}T_{\nu]}),\] (60) \[T^{\dagger a}{}_{\mu\nu} = 2D^{\dagger}_{[\mu}b^{\,a}{}_{\nu]}. \tag{61}\]
From the transformation laws (53), it is straightforward to verify that, in accordance with their index structures, the gauge field strength tensors \(\mathscr{R}^{\dagger ab}{}_{cd}\), \(\mathscr{H}^{\dagger}_{cd}\) and \(\mathscr{T}^{\dagger a}{}_{bc}\) are invariant under GCTs, and transform covariantly under local Lorentz transformations and dilations with Weyl weights \(w=-2\), \(w=-2\) and \(w=-1\), respectively [19; 20], similarly to their WGT counterparts.
It is worth noting, however, that \(\mathscr{R}^{\dagger ab}{}_{cd}\) and \(\mathscr{T}^{\dagger a}{}_{bc}\) differ in form substantially from those in WGT, and are given in terms of the PGT field strengths \(\mathscr{R}^{ab}{}_{cd}\) and \(\mathscr{T}^{a}{}_{bc}\) by
\[\mathscr{R}^{\dagger ab}{}_{cd} = \mathscr{R}^{ab}{}_{cd}+4\delta^{[b}_{[c}\mathscr{Q}_{d]} \mathscr{R}^{a]}-4\delta^{[b}_{[c}\mathscr{R}_{d]}\mathscr{R}^{a]}-2\mathscr{R }^{2}\delta^{[a}_{c}\delta^{b]}_{d}-2\mathscr{R}^{[a}\mathscr{S}^{\prime b]}{}_ {cd},\] \[\mathscr{F}^{\prime\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\
By contracting over various indices, one also obtains the following non-trivial contracted Bianchi identities:
\[\mathscr{D}^{\dagger}_{a}\mathscr{R}^{\dagger ae}{}_{bc}-2\mathscr{D}^{ \dagger}_{[b}\mathscr{R}^{\dagger e}{}_{c]}-2\mathscr{T}^{\dagger f}{}_{a[b} \mathscr{R}^{\dagger ae}{}_{c]f}-\mathscr{T}^{\dagger f}{}_{bc}\mathscr{R}^{ \dagger e}{}_{f} = 0, \tag{65a}\] \[\mathscr{D}^{\dagger}_{a}(\mathscr{R}^{\dagger a}{}_{c}-\tfrac{1} {2}\delta^{a}_{c}\mathscr{R}^{\dagger})+\mathscr{T}^{\dagger f}{}_{bc}\mathscr{ R}^{\dagger b}{}_{f}+\tfrac{1}{2}\mathscr{T}^{\dagger f}{}_{ab}\mathscr{R}^{ \dagger ab}{}_{cf} = 0,\] (65b) \[\mathscr{D}^{\dagger}_{a}\mathscr{T}^{\dagger a}{}_{bc}+2 \mathscr{R}^{\dagger}_{[bc]}-2\mathscr{R}^{\dagger}_{bc} = 0, \tag{65c}\]
which are somewhat simpler than their WGT counterparts (26a-26c) on account of the condition \(\mathscr{T}^{\dagger}_{a}=0\).
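Indeed, the trace-free property may be verified directly: a short calculation using (54), (55) and (61) (and writing \(\mathscr{T}^{a}{}_{bc}\) for the PGT torsion, with trace \(\mathscr{T}_{a}=\mathscr{T}^{b}{}_{ab}\)) shows that the two torsions are related by
\[\mathscr{T}^{\dagger a}{}_{bc}=\mathscr{T}^{a}{}_{bc}+\tfrac{2}{3}\,\delta^{a}_{[b}\mathscr{T}_{c]},\]
so that \(\mathscr{T}^{\dagger}_{a}\equiv\mathscr{T}^{\dagger b}{}_{ab}=\mathscr{T}_{a}+\tfrac{1}{3}(\mathscr{T}_{a}-4\mathscr{T}_{a})=0\) identically.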
### Manifestly covariant variational derivatives in eWGT
As in WGT, we begin by considering directly the variation of the action. In particular, by analogy with (27), one may immediately write
\[h\,\delta_{0}\mathscr{L}=\frac{\bar{\partial}L}{\partial\varphi_{A}}\,\delta_{ 0}\varphi_{A}+\frac{\partial L}{\partial(\mathscr{D}^{\dagger}_{a}\varphi_{A}) }\,\delta_{0}(\mathscr{D}^{\dagger}_{a}\varphi_{A})+\frac{\partial L}{\partial \mathscr{R}^{\dagger}_{abcd}}\,\delta_{0}\mathscr{R}^{\dagger}_{abcd}+\frac{ \partial L}{\partial\mathscr{T}^{\dagger}_{abc}}\,\delta_{0}\mathscr{T}^{ \dagger}_{abc}+\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{ab}}\,\delta_ {0}\mathscr{R}^{\dagger}_{ab}-b^{a}{}_{\mu}L\,\delta_{0}h_{a}{}^{\mu}. \tag{66}\]
In eWGT, however, there is an additional subtlety compared with WGT: although the dynamical energy-momentum tensor \(\tau^{a}{}_{\mu}\equiv\delta\mathscr{L}/\delta h_{a}{}^{\mu}\) derived from the _total_ Lagrangian density is covariant, this does _not_ necessarily hold for the corresponding quantities obtained from _subsets_ of the terms in \(L\), even if they transform covariantly with weight \(w=-4\)[19]. This leads one to the construct an alternative quantity for which this more general covariance property does hold. This may be arrived at more directly from an alternative variational principle, in which one makes a change of field variables from the set \(\varphi_{A}\), \(h^{\mu}_{a}\), \(A^{ab}{}_{\mu}\) and \(B_{\mu}\) to the new set \(\varphi_{A}\), \(h^{\mu}_{a}\), \(A^{\dagger ab}{}_{\mu}\) and \(B_{\mu}\). It is worth noting that one is simply making a change of field variables here, rather than considering \(A^{\dagger ab}{}_{\mu}\) to be an independent field variable; in other words, one still considers \(A^{\dagger ab}{}_{\mu}\) to be given in terms of \(h^{\mu}_{a}\), \(A^{ab}{}_{\mu}\), \(B_{\mu}\) by its defining relationship (55), rather than an independent quantity whose relationship to the other variables would be determined from the variational principle. Moreover, as shown in [19], the eWGT covariant derivative can be expressed wholly in terms of the fields \(h^{\mu}_{a}\) (or its inverse) and \(A^{\dagger ab}{}_{\mu}\), and thus so too can the eWGT field strengths. In particular, if one defines the (non-covariant) derivative operator \(\mathscr{D}^{\natural}_{a}\varphi\equiv h_{a}{}^{\mu}D^{\natural}_{\mu} \varphi\equiv h_{a}{}^{\mu}(\partial_{\mu}+\tfrac{1}{2}A^{\dagger bc}{}_{\mu} \Sigma_{bc})\varphi\) and the quantities \(\mathscr{T}^{\dagger ab}{}_{bc}\equiv 2h_{b}{}^{\mu}h_{c}{}^{\nu}D^{\natural}_{[\mu}b^{a}{}_{ \nu]}\), then one may easily show that \(\mathscr{D}^{\dagger}_{a}\varphi=(\mathscr{D}^{\natural}_{a}-\tfrac{1}{3}w \mathscr{T}^{\natural}_{a})\varphi\). Consequently, in the new set of field variables, the Lagrangian \(L\) in (57) has no explicit dependence on \(B_{\mu}\).
Following the general procedure used for WGT, one must now determine how the variations in (66) depend on the variations the new set of fields \(\varphi_{A}\), \(h^{\mu}_{a}\) and \(A^{\dagger ab}{}_{\mu}\) themselves. This is easily achieved using the definition of the eWGT covariant derivative and the expressions (59-61) for the field strengths. By analogy with the approach adopted for WGT, one must also make use of the fact that for any coordinate vector \(V^{\mu}\) of weight \(w=0\) (i.e. invariant under local scale transformations, like the Lagrangian density \(\mathscr{L}\)), one may show that \(\partial_{\mu}V^{\mu}=h^{-1}\mathscr{D}^{\dagger}_{a}(hb^{a}{}_{\mu}V^{\mu})\) or, equivalently, for any local Lorentz vector \(\mathscr{V}^{a}\) having Weyl weight \(w=-3\) one has [19]
\[\mathscr{D}^{\dagger}_{a}\mathscr{V}^{a}=h\partial_{\mu}(h^{-1}h_{a}{}^{\mu} \mathscr{V}^{a}), \tag{67}\]
which is somewhat simpler than its WGT counterpart because of the condition \(\mathscr{T}^{\dagger}_{a}=0\). Expressions of the form (67) on the RHS of (66) therefore contribute only surface terms to the variation of the action in (9), but we will retain them nonetheless, as they are required for our later discussion.
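The weight assignment in (67) is easily checked: since \(h_{a}{}^{\mu}\) has Weyl weight \(w=-1\), the determinant \(h\) has weight \(w=-4\), so for \(\mathscr{V}^{a}\) of weight \(w=-3\) the combination \(h^{-1}h_{a}{}^{\mu}\mathscr{V}^{a}\) has weight \(4-1-3=0\); it is therefore invariant under local dilations, and its ordinary divergence requires no compensating gauge-field terms.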
We begin by considering together the first two terms on the RHS of (66), for which one obtains (after a rather
lengthy calculation)
\[\frac{\bar{\partial}L}{\partial\varphi_{A}}\,\delta_{0}\varphi_{A}+ \frac{\partial L}{\partial(\mathfrak{D}^{\dagger}_{a}\varphi_{A})}\,\delta_{0}( \mathfrak{D}^{\dagger}_{a}\varphi_{A})\] \[=\frac{\bar{\partial}L}{\partial\varphi_{A}}\,\delta_{0}\varphi_{A }+\frac{\partial L}{\partial(\mathfrak{D}^{\dagger}_{a}\varphi_{A})}\left[ \mathfrak{D}^{\dagger}_{a}(\delta_{0}\varphi_{A})+\delta_{0}h_{a}{}^{\mu}D^{ \dagger}_{\mu}\varphi_{A}+(\tfrac{1}{2}h_{a}{}^{\mu}\Sigma_{bc}+\tfrac{1}{3}w_ {A}\eta_{[a}{}_{b]}{}^{\mu})\varphi_{A}\,\delta_{0}A^{\dagger bc}{}_{\mu}\right.\] \[\left.+\tfrac{2}{3}w_{A}\varphi_{A}(h_{[a}{}^{\mu}\mathfrak{D}^{ \dagger}_{b]}+\tfrac{1}{2}h_{c}{}^{\mu}\mathscr{S}{}^{\dagger}{}_{cb})\, \delta_{0}b^{b}{}_{\mu}\right],\] \[=\left[\frac{\bar{\partial}L}{\partial\varphi_{A}}-\mathfrak{D}^{ \dagger}_{a}\frac{\partial L}{\partial(\mathfrak{D}^{\dagger}_{a}\varphi_{A}) }\right]\delta_{0}\varphi_{A}+\left[\frac{\partial L}{\partial(\mathfrak{D}^{ \dagger}_{a}\varphi_{A})}D^{\dagger}_{\mu}\varphi_{A}+\tfrac{2}{3}w_{A}b^{c}{} _{\mu}\delta^{[}_{b]}\mathfrak{D}^{\dagger}_{c]}\left(\frac{\partial L}{ \partial(\mathfrak{D}^{\dagger}_{b}\varphi_{A})}\varphi_{A}\right)\right]\, \delta_{0}h_{a}{}^{\mu}\] \[+\frac{\partial L}{\partial(\mathfrak{D}^{\dagger}_{a}\varphi_{A })}(\tfrac{1}{2}h_{a}{}^{\mu}\Sigma_{bc}+\tfrac{1}{3}w_{A}\eta_{a[c}h_{b]}{}^ {\mu})\varphi_{A}\,\delta_{0}A^{\dagger bc}{}_{\mu},\] \[+\mathfrak{D}^{\dagger}_{a}\left[\frac{\partial L}{\partial( \mathfrak{D}^{\dagger}_{a}\varphi_{A})}\delta_{0}\varphi_{A}+\tfrac{2}{3} \frac{\partial L}{\partial(\mathfrak{D}^{\dagger}_{a}\varphi_{A})}w_{A}\varphi _{A}b^{b]}{}_{\mu}\,\delta_{0}h_{b}{}^{\mu}\right], \tag{68}\]
where both terms in square brackets in the last line are readily shown to have Weyl weight \(w=-3\), with the second one having no analogue in the corresponding expression (29) in WGT. Analysing the further terms containing derivatives on the RHS of (66) in a similar manner, one finds (again after lengthy calculations in each case)
\[\frac{\partial L}{\partial\mathfrak{A}^{\dagger}_{abcd}}\,\delta_ {0}\mathfrak{A}^{\dagger}_{abcd} = 2\frac{\partial L}{\partial\mathfrak{A}^{\dagger}_{abcd}}\left[R^ {\dagger}_{ab\mu d}\,\delta_{0}h_{c}{}^{\mu}+h_{d}{}^{\mu}\mathfrak{D}^{ \dagger}_{c}(\delta_{0}A^{\dagger}_{ab\mu})\right], \tag{69}\] \[= 2\frac{\partial L}{\partial\mathfrak{A}^{\dagger}_{abcd}}R^{ \dagger}_{ab[\mu d]}\,\delta_{0}h_{c}{}^{\mu}+\left(h_{e}{}^{\mu}\mathscr{S}{}^ {\dagger}{}_{ecd}+2h_{c}{}^{\mu}\mathfrak{D}^{\dagger}_{d}\right)\left(\frac{ \partial L}{\partial\mathfrak{A}^{\dagger}_{abcd}}\right)\,\delta_{0}A^{ \dagger}_{ab\mu}-2\mathfrak{D}^{\dagger}_{d}\left[\frac{\partial L}{\partial \mathfrak{A}^{\dagger}_{abcd}}h_{c}{}^{\mu}\,\delta_{0}A^{\dagger}_{ab\mu} \right],\] \[\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{abc}}\,\delta_ {0}\mathscr{T}^{\dagger}_{abc} = 2\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{abc}}\left[T^{ \dagger}_{a\mu\nu}h_{c}{}^{\nu}\,\delta_{0}h_{b}{}^{\mu}+h_{c}{}^{\nu} \mathfrak{D}^{\dagger}_{b}(\delta_{0}b_{a\nu})+h_{b}{}^{\mu}\,\delta_{0}A^{ \dagger}_{ac\mu}\right.\] (70) \[\left.-\tfrac{1}{3}\,\eta_{ac}(\eta_{[b}\eta_{[b}\eta_{[b}\eta_{[b} \,^{\mu}\delta_{0}A^{\dagger}{}^{\text{p}\mu}+2h_{[\eta}{}^{\mu}\mathfrak{D}^{ \dagger}_{b]}(\delta_{0}b^{\theta}{}_{\mu})+h_{p}{}^{\mu}\mathscr{S}{}^{ \dagger}{}_{pq}\delta_{0}b^{\theta}{}_{\mu})\right],\right.\] \[= 2\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{abc}}[(T^{ \dagger}_{a\mu\nu}h_{c}{}^{\nu}\delta^{\text{d}}_{b}-\tfrac{1}{2}\mathscr{S}{}^ {\dagger}{}_{bc}b_{a\mu})\delta_{0}h_{d}{}^{\mu}+h_{b}{}^{\mu}\delta_{0}A^{ \dagger}_{ac\mu}\right]-2\mathfrak{D}^{\dagger}_{c}\left(\frac{\partial L}{ \partial\mathscr{T}^{\dagger}_{abc}}\right)b_{a\mu}\,\delta_{0}h_{b}{}^{\mu}\] \[-\tfrac{2}{3}\eta_{ac}\left[(b^{p}{}_{\mu}\mathfrak{D}^{ \dagger}_{b}-\mathfrak{D}^{p}_{b}b{}^{\mu}\mu\mathfrak{D}^{\dagger}_{q}) \left(\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{abc}}\right)\delta_{0}h_{ p}{}^{\mu}+\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{abc}}\eta_{[b}\eta_{[b}\eta_{[b} \,^{\mu}\delta_{0}A^{\dagger}{}^{\text{p}\mu}{}_{\mu}\right]\right.\] \[\left.+2\mathfrak{D}^{\dagger}_{c}\left[\left(\frac{\partial L}{ \partial\mathscr{T}^{\dagger}_{abc}}b_{a\mu}-\tfrac{2}{3}\eta_{pq}\frac{ \partial L}{\partial\mathscr{T}^{\dagger}_{pq[c}}h^{b]}{}_{\mu}\right)\delta_{0} h_{b}{}^{\mu}\right],\] \[\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{ab}}\,\delta_ {0}\mathscr{T}^{\dagger}_{ab} = 2\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{ab}}\left[H^{ \dagger}_{\mu\nu}h_{b}{}^{\nu}\,\delta_{0}h_{a}{}^{\mu}+h_{b}{}^{\nu} \mathfrak{D}^{\dagger}_{a}(\delta_{0}B_{\nu}-\tfrac{1}{3}\delta_{0}T_{\nu}) \right],\] (71) \[= 2\frac{\partial L}{\partial\mathscr{T}^{\dagger}_{ab}}H^{\dagger}_{ \mu\nu}h_{b}{}^{\nu}\,\delta_{0}h_{a}{}^{\mu}+\tfrac{2}{3}{}^{b}{}_{[\mu} \mathfrak{D}^{\dagger}_{c]}\left[(\mathscr{S}{}^{\dagger}{}_{pq}+2\delta_{p}^{c} \mathfrak{D}^{\dagger}_{q})\left(\frac{\partial L}{\partial\mathscr{T}^{ \dagger}_{pq}}\right)\right]\delta_{0}h_{a}{}^{\mu}\] \[+\tfrac{2}{3}\eta_{c[a}h_{b]}{}^{\mu}(\delta_{p}^{c}\mathfrak{D}^{ \dagger}_{q}+\tfrac{1}{2}\mathscr{S}^{\dagger}{}_{pq})\left(\frac{\partial L}{ \partial\mathscr{T}^{\dagger}_{pq}}\right)\delta_{0}A^{\dagger ab}{}_{\mu}\] \[-\tfrac{2}{3}\mathfrak{D}^{\dagger}_{c}\left\{\left[\frac{\partial L}{ \partial\mathscr{R}^{\dagger}_{pq}}\mathscr{S}{}^{\dagger}{}^{[c}{}_{pq}b^{a]}{}_{ \mu}+2\delta_{p}^{[c}\mathfrak{D}^{\dagger}_{q}\left(\frac{\partial L}{ 
\partial\mathscr{R}^{\dagger}_{pq}}\right)b^{a]}{}_{\mu}\right]\delta_{0}h_{a}{} ^{\mu}\right\}.\]
In the above expressions it is again assumed that the appropriate antisymmetrisations, arising from the symmetries of the field strength tensors, are performed when the RHS are evaluated. It is also easily shown that the quantity in brackets in each of the last terms in (69-71) has Weyl weight \(w=-3\), so according to (67) each such term contributes a surface term to the variation of the action (9).
Following an analogous approach to that adopted for WGT, one may then substitute the expressions (68-71) into (66), which may itself subsequently be substituted into (9) to obtain an expression of the general form (12) for Noether's first theorem. This may be written as
\[\delta S=\int\left[\upsilon^{A}\,\delta_{0}\varphi_{A}+\tau^{\dagger a}{}_{\mu} \,\delta_{0}h_{a}{}^{\mu}+\sigma_{ab}{}^{\mu}\,\delta_{0}A^{\dagger ab}{}_{\mu} +h^{-1}\mathbb{Z}_{p}^{\dagger}(h\not{\cal F}^{p})\right]\,d^{4}x=0, \tag{72}\]
where the current \(h\not{\cal F}^{p}\) is given by
\[h\not{\cal F}^{p} = \frac{\partial L}{\partial(\mathfrak{D}_{p}^{\dagger}\varphi_{A} )}\delta_{0}\varphi_{A}\] (73) \[+2\left[\frac{1}{3}\frac{\partial L}{\partial(\mathfrak{D}_{p}^{ \dagger}\varphi_{A})}w_{A}\varphi_{A}b^{b\parallel}{}_{\mu}+\frac{\partial L} {\partial\mathcal{F}_{abp}^{\dagger}}b_{a\mu}-\frac{2}{3}\eta_{rs}\frac{ \partial L}{\partial\mathcal{F}_{rs\left[p\right.}^{\dagger}}b^{b\parallel}{ }_{\mu}-\frac{1}{3}\frac{\partial L}{\partial\mathcal{F}_{rs}^{\dagger}} \mathcal{F}^{\dagger\left[p}{}_{rs}b^{b\right]}{}_{\mu}-\frac{2}{3}\delta_{r }^{\left[p\right.}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\(L\) that does not contain the gauge field strength tensors, but depends only on the matter fields and their covariant derivatives, the variational derivatives with respect to the gauge fields do _not_ reduce to the _covariant canonical currents_[11; 20] of the matter fields. Indeed, there exist additional terms proportional to the dilational generator \(\Delta=w_{A}I\) for the matter fields \(\varphi_{A}\), so that any matter field with non-zero Weyl weight \(w_{A}\) contributes additionally both to the modified energy-momentum tensor and to the spin-angular-momentum tensor, irrespective of its spin. Second, for Lagrangians that do depend on the gauge field strengths, there are additional terms capable of producing a dependence on the covariant derivatives of the field strength tensors, and in each case these terms depend on the covariant derivatives of field strength tensors for different gauge fields than those with respect to which the variational derivative is taken. Moreover, the final term on the RHS of (74b) contains _second_ covariant derivatives of \(\partial L/\partial\mathscr{R}^{\dagger}_{ab}\).
From (60), it appears at first sight that \(\mathscr{R}^{\dagger}_{ab}\) is linear in second-order derivatives of \(h_{a}{}^{\mu}\) and first-order derivatives of \(h_{a}{}^{\mu}\) and \(A^{\dagger ab}{}_{\mu}\) (and hence of \(A^{ab}{}_{\mu}\) and \(B_{\mu}\)). In that case, if the Lagrangian contains a term proportional to \(\mathscr{R}^{\dagger}_{ab}\mathscr{R}^{\dagger\,ab}\) (which has the required Weyl weight \(w=-4\) to be scale-invariant) it would follow that the final term on the RHS of (74b) is linear in fourth-order derivatives of \(h_{a}{}^{\mu}\) and third-order derivatives of all three gauge fields \(h_{a}{}^{\mu}\), \(A^{ab}{}_{\mu}\) and \(B_{\mu}\). Similarly, the final term in (74c) would be linear in third-order derivatives of \(h_{a}{}^{\mu}\). Moreover, if the Lagrangian contains a term proportional to \(\mathscr{R}^{\dagger}_{[ab]}\mathscr{R}^{\dagger\,ab}\), the final term on the RHS of (74b) would be linear in third-order derivatives of \(h_{a}{}^{\mu}\), \(A^{ab}{}_{\mu}\) and \(B_{\mu}\). These considerations would seem to indicate that eWGTs containing either term in the Lagrangian suffer from Ostrogradsky's instability [37; 38]. As noted in [19], however, this conclusion is not clear cut, since in applying such theories to particular physical systems or in the general linearised case, one finds that the resulting field equations always organise themselves into combinations of coupled second-order equations in the gauge fields [19]. Specifically, one finds the terms containing higher-order derivatives correspond to the derivative of already known expressions, and so contain no new information. Having now identified the gauge symmetry (56) and obtained the general expressions (74b) and (74c) for the variational derivatives, one may indeed show that this always occurs in the general non-linear case. First, one may use the gauge transformation (56) to set \(T_{\mu}=0\), so that \(\mathscr{R}^{\dagger}_{ab}\) is merely linear in first-order derivatives of \(B_{\mu}\). Nonetheless, if the Lagrangian contains a term proportional to \(\mathscr{R}^{\dagger}_{ab}\mathscr{R}^{\dagger\,ab}\), the final term in (74b), specifically the part that arises from the final term in (75), still contains third-order derivatives of \(B_{\mu}\). This is unproblematic, however, since this term is the covariant derivative of an expression that is already known from the field equation \(h\sigma_{ab}{}^{c}=0\). Hence, in the final field equations one encounters field derivatives of only second-order or lower, thereby avoiding Ostrogradsky's instability.
It is also worth pointing out that, as for WGT, we have not assumed the equations of motion to be satisfied in deriving (74a-74c). Thus, one may calculate the corresponding variational derivatives for _any subset_ of terms in \(L\) that is a scalar density of weight \(w=-4\). Individually, however, such quantities do _not_ vanish, in general. Rather, each equation of motion requires only the vanishing of the sum of such quantities, when derived from disjoint subsets that exhaust the total Lagrangian \(L\).
### Relationship between first- and second-order variational principles in eWGT
As we did for WGT, we now demonstrate how the approach outlined above is well suited to comparing first- and second-order variational derivatives. We again focus on the example of the variational derivatives obtained by setting the (eWGT) torsion to zero _after_ the variation is performed (first-order approach) with those obtained by setting the torsion to zero in the action _before_ carrying out the variation (second-order approach). As mentioned in the Introduction, however, in eWGT one faces an additional complication relative to WGT, since setting the torsion to zero does not lead to an explicit expression for the rotational gauge field in terms the other gauge fields, but instead an implicit constraint relating all the gauge fields.
We again begin by considering the simpler case of the first-order approach, where one merely sets \(\mathscr{T}^{\dagger a}{}_{bc}=0\) (which is a properly eWGT-covariant condition) in the expressions (74a-74c). In eWGT, however, the condition \(\mathscr{T}^{\dagger a}{}_{bc}=0\) results in an _implicit_ constraint between the gauge fields \(h_{a}{}^{\mu}\), \(A^{ab}{}_{\mu}\) and \(B_{\mu}\). Once again, it proves useful in eWGT to work in terms of the modified rotational gauge field, or rather its'reduced' form in the case \(\mathscr{T}^{\dagger a}{}_{bc}=0\)[19; 20]. From (63), this is given by \({}^{0}\!A^{\dagger}_{ab\mu}=b^{c}{}_{\mu}{}^{0}\mathscr{A}^{\dagger}_{abc}\), where14
Footnote 14: It is important to note that there is a fundamental difference with WGT here, since \({}^{0}\!A^{\dagger ab}{}_{\mu}\) depends on the rotational gauge field \(A^{ab}{}_{\mu}\) through the terms containing \(\mathscr{T}_{a}\), and hence cannot be written entirely in terms of the other gauge fields \(h_{a}{}^{\mu}\) and \(B_{\mu}\).
\[{}^{0}\!\mathscr{A}^{\dagger}_{abc}=\tfrac{1}{2}(c_{abc}+c_{bca}-c_{cab})+\eta_ {ac}(\mathscr{B}_{b}-\tfrac{1}{3}\mathscr{T}_{b})-\eta_{bc}(\mathscr{B}_{a}- \tfrac{1}{3}\mathscr{T}_{a}). \tag{76}\]
In an analogous manner to WGT, under a local extended Weyl transformation, the quantities \({}^{0}\!A^{\dagger ab}{}_{\mu}\) transform in the same way as \(A^{\dagger ab}{}_{\mu}\), and so one may construct the 'reduced' eWGT covariant derivative \({}^{0}\!\mathscr{D}^{\dagger}_{a}\varphi=h_{a}{}^{\mu}\,{}^{0}D^{\dagger}_{\mu}\varphi=\)
\(h_{a}{}^{\mu}(\partial_{\mu}+\frac{1}{2}{}^{0}A^{\dagger ab}{}_{\mu}\Sigma_{ab}+wB _{\mu})\varphi\), which transforms in the same way as \(\mathscr{D}_{a}^{\dagger}\varphi\). Thus, the corresponding quantities to (74a-74c) are obtained simply by evaluating the RHS with \(\mathcal{F}^{\dagger a}{}_{bc}\) set to zero, which also implies \(\mathscr{Q}_{a}^{\dagger}\to{}^{0}\mathscr{Q}_{a}^{\dagger}\). This yields
\[h\,{}^{0}\!{}^{A} = \frac{\bar{\partial}L}{\partial\varphi_{A}}\bigg{|}_{0}-{}^{0} \!\mathscr{Q}_{a}^{\dagger}\,\frac{\partial L}{\partial(\mathscr{Q}_{a}^{ \dagger}\varphi_{A})}\bigg{|}_{0}\,, \tag{77a}\] \[h\,{}^{0}\!{}^{\tau\!|a}{}_{b} = \frac{\partial L}{\partial(\mathscr{Q}_{a}^{\dagger}\varphi_{A} )}\bigg{|}_{0}{}^{0}\!\mathscr{Q}_{b}^{\dagger}\varphi_{A}+2\,\frac{\partial L }{\partial\mathscr{Q}_{pqra}^{\dagger}}\bigg{|}_{0}{}^{0}\!\mathscr{R}_{pq}^{ \dagger}+2\,\frac{\partial L}{\partial\mathscr{R}_{pa}^{\dagger}}\bigg{|}_{0 }{}^{0}\!\!\mathscr{R}_{pq}^{\dagger}+2\,{}^{0}\!\mathscr{Q}_{r}^{\dagger}\, \frac{\partial L}{\partial\mathscr{S}^{\dagger b}{}_{ar}}\bigg{|}_{0}-\delta_ {a}^{b}\,L\,|_{0}-2\,{}^{0}\!\mathscr{Q}_{c}^{\dagger}(h\,{}^{0}\!\mathscr{Q }_{c}{}_{b}),\] (77b) \[h\,{}^{0}\!\mathscr{G}_{ab}{}^{c} = \frac{1}{2}\,\left.\frac{\partial L}{\partial(\mathscr{Q}_{c}^{ \dagger}\varphi_{A})}\right|_{0}\Sigma_{ab}\varphi_{A}+2\delta_{r}^{c}\,{}^{0} \!\mathscr{Q}_{s}^{\dagger}\,\frac{\partial L}{\partial\mathscr{R}^{\dagger ab }{}_{rs}}\bigg{|}_{0}-2\,\frac{\partial L}{\partial\mathcal{S}^{\dagger[ab]}{} _{c}}\bigg{|}_{0}+h\,{}^{0}\!\hat{\sigma}_{ab}{}^{c}, \tag{77c}\]
where by analogy with (75) we have defined the quantity
\[h\,{}^{0}\!\hat{\sigma}_{ab}{}^{c}=\frac{1}{3}\delta_{[a}^{c}\eta_{b]r}\,\left. \frac{\partial L}{\partial(\mathscr{Q}_{r}^{\dagger}\varphi_{A})}\right|_{0}w _{A}\varphi_{A}+\frac{2}{3}\delta_{[a}^{c}\eta_{b]q}\eta_{pr}\,\left.\frac{ \partial L}{\partial\mathcal{S}_{pq}^{\dagger}}\right|_{0}-\frac{2}{3}\delta_ {[a}^{c}\eta_{b]p}\,{}^{0}\!\mathscr{Q}_{q}^{\dagger}\,\left.\frac{\partial L} {\partial\mathscr{P}_{pq}^{\dagger}}\right|_{0}. \tag{78}\]
Once again, it is worth noting that we have not assumed any equations of motion to be satisfied in deriving the quantities (77a- 77c). Thus, one may derive corresponding quantities for _any subset_ of terms in \(L\) that are a scalar density with weight \(w=-4\), and these quantities do not vanish, in general.
We now consider the second-order approach, where one imposes \(\mathcal{S}_{abc}^{\dagger}=0\) at the level of the action, prior to evaluating the variational derivatives. In this case, \(A^{\dagger ab}{}_{\mu}\) is again given by (76), in which case one may show that the following constraint must be satisfied while performing the variation:
\[C_{ab\mu}\equiv A^{\dagger}_{ab\mu}-\frac{2}{3}h_{d}{}^{\nu}b_{[a|\mu}A^{ \dagger d}{}_{|b|\nu}-{}^{0}\!A_{ab\mu}+\frac{2}{3}h_{d}{}^{\nu}b_{[a|\mu}{}^{0 }A^{d}{}_{|b|\nu}=0, \tag{79}\]
where \({}^{0}\!A_{ab\mu}=\frac{1}{2}b^{c}{}_{\mu}(c_{abc}+c_{bca}-c_{cab})\). It is worth noting that \(C_{ab\mu}\) depends on all the gauge fields; moreover, since \({}^{0}\!A_{ab\mu}\) depends both on the \(h\)-field and its derivatives, the expression (79) constitutes a non-holonomic constraint. We therefore consider the _augmented_ total Lagrangian density \(\hat{\mathscr{L}}\equiv\mathscr{L}+\lambda^{ab\mu}C_{ab\mu}\), where \(\lambda^{ab\mu}\) is a field of weight \(w=0\) with the same symmetries as \(C_{ab\mu}\) that acts as a Lagrange multiplier. Thus, up to terms that are the divergence of a quantity that vanishes on the boundary of the integration region, the integrand in the expression (2) for the variation of the action is given by
\[\left(\frac{\delta\hat{\mathscr{L}}}{\delta\chi_{A}}\right)_{\dagger}\delta_{0} \chi_{A}=v^{A}\,\delta_{0}\varphi_{A}+\tau^{\dagger a}{}_{\mu}\,\delta_{0}h_{a}{} ^{\mu}+\sigma_{ab}{}^{\mu}\,\delta_{0}A^{\dagger ab}{}_{\mu}+\lambda^{ab\mu}\, \delta_{0}C_{ab\mu}+C_{ab\mu}\,\delta_{0}\lambda^{ab\mu}, \tag{80}\]
From (79), one finds after some calculation that
\[\delta_{0}C_{ab\mu} = \delta_{0}A^{\dagger}_{ab\mu}-\frac{2}{3}b_{[a|\mu}h_{q}{}^{\sigma }\delta_{0}A^{\dagger q}{}_{|b|\sigma}-b^{c}{}_{\mu}\left(h_{[c}{}^{\nu}\,{}^{0} \!\mathscr{Q}_{b]}^{\dagger}\delta_{0}b_{a\nu}+h_{[a}{}^{\nu}\,{}^{0}\! \mathscr{Q}_{c]}^{\dagger}\delta_{0}b_{b\nu}-h_{[b}{}^{\nu}\,{}^{0}\!\mathscr{Q }_{a]}^{\dagger}\delta_{0}b_{c\nu}\right) \tag{81}\] \[+\frac{2}{3}b^{c}{}_{\mu}\left(\eta_{cba}h_{[q}{}^{\sigma}\,{}^{0} \!\mathscr{Q}_{b]}^{\dagger}-\eta_{cb}h_{[q}{}^{\sigma}\,{}^{0}\!\mathscr{Q}_{a ]}^{\dagger}\right)\delta_{0}b^{q}{}_{\sigma},\]
from which one may show that (80) becomes (up to a total divergence)
\[\left(\frac{\delta\hat{\mathscr{L}}}{\delta\chi_{A}}\right)_{\dagger}\delta_{0}\chi_{A} = h\,{}^{0}\upsilon^{A}\,\delta_{0}\varphi_{A}+\left[h\,{}^{0}\tilde{\tau}^{\dagger a}{}_{b}+{}^{0}\mathscr{D}^{\dagger}_{c}\big(h\lambda^{ca}{}_{b}+h\lambda^{c}{}_{b}{}^{a}-h\lambda^{a}{}_{b}{}^{c}\big)-\tfrac{2}{3}\delta^{a}_{b}\,{}^{0}\mathscr{D}^{\dagger}_{c}(h\lambda^{cd}{}_{d})+\tfrac{2}{3}\,{}^{0}\mathscr{D}^{\dagger}_{b}(h\lambda^{ad}{}_{d})\right]b^{b}{}_{\mu}\,\delta_{0}h_{a}{}^{\mu}+\left(h\,{}^{0}\tilde{\sigma}_{ab}{}^{c}+h\lambda_{ab}{}^{c}+\tfrac{2}{3}h\,\delta^{c}_{[a}\lambda_{b]d}{}^{d}\right)h_{c}{}^{\mu}\,\delta_{0}A^{\dagger ab}{}_{\mu}+C_{ab\mu}\,\delta_{0}\lambda^{ab\mu}, \tag{82}\]
while, in terms of the second-order variational derivatives of \(\hat{\mathscr{L}}\), the same quantity reads
\[\left(\frac{\delta\hat{\mathscr{L}}}{\delta\chi_{A}}\right)_{\dagger}\delta_{0}\chi_{A} = hv^{A}\,\delta_{0}\varphi_{A}+ht^{\dagger a}{}_{\mu}\,\delta_{0}h_{a}{}^{\mu}+hs_{ab}{}^{\mu}\,\delta_{0}A^{\dagger ab}{}_{\mu}+C_{ab\mu}\,\delta_{0}\lambda^{ab\mu}. \tag{83}\]
From (83), one sees immediately that the equation of motion for the Lagrange multiplier field \(\lambda^{ab\mu}\) is simply \(C_{ab\mu}=0\), which enforces the original constraint (79), as required. By comparing (82) and (83), and converting all indices to Roman, one further finds that the second-order variational derivatives are related to the first-order ones by
\[hv^{A} = h\,{}^{0}\upsilon^{A}, \tag{84}\]
\[ht^{\dagger}_{ab} = h\,{}^{0}\tilde{\tau}^{\dagger}_{ab}+{}^{0}\mathscr{D}^{\dagger}_{c}\,(h\lambda^{c}{}_{ab}+h\lambda^{c}{}_{ba}-h\lambda_{ab}{}^{c})-\tfrac{2}{3}\eta_{ab}\,{}^{0}\mathscr{D}^{\dagger}_{c}(h\lambda^{cd}{}_{d})+\tfrac{2}{3}\,{}^{0}\mathscr{D}^{\dagger}_{b}(h\lambda_{ad}{}^{d}), \tag{85}\]
\[hs_{abc} = h\,\big({}^{0}\tilde{\sigma}_{abc}+\lambda_{abc}+\tfrac{2}{3}\eta_{c[a}\lambda_{b]d}{}^{d}\big). \tag{86}\]
To proceed further, one must eliminate the dependence of (85-86) on the Lagrange multiplier field \(\lambda_{abc}\). This is achieved by enforcing the \(A\)-field equation of motion, so that \(hs_{abc}=0\), which now merely determines \(\lambda_{abc}\) under the constraint \(C_{ab\mu}=0\). Using the resulting condition \(\,{}^{0}\tilde{\sigma}_{abc}+\lambda_{abc}+\tfrac{2}{3}\eta_{c[a}\lambda_{b]d}{}^{d}=0\), one may now eliminate the Lagrange multiplier field from (85), and one finally obtains
\[hv^{A} = h\,{}^{0}\upsilon^{A}, \tag{87}\]
\[ht^{\dagger}_{ab} = h\,{}^{0}\tilde{\tau}^{\dagger}_{ab}+{}^{0}\mathscr{D}^{\dagger}_{c}\,\big(h\,{}^{0}\tilde{\sigma}_{ab}{}^{c}-h\,{}^{0}\tilde{\sigma}^{c}{}_{ab}-h\,{}^{0}\tilde{\sigma}^{c}{}_{ba}\big). \tag{88}\]
As was the case for WGT, the forms of the matter variational derivatives are identical in the first- and second-order approaches, and the form for the modified energy-momentum tensor in the second-order approach is reminiscent of the Belinfante tensor. Since one has not used the equations of motion for the matter fields and the gauge field \(h_{a}{}^{\mu}\) in deriving the expressions (87-88), they remain valid for any subset of the terms in \(\mathscr{L}\) that are a scalar density of weight \(w=-4\). If one does consider the total Lagrangian \(L\), however, then the second-order equations of motion for the matter and gauge fields are obtained simply by setting the expressions (87-88) to zero. In this case, provided the terms of the form \(\partial L/\partial\mathscr{T}^{\dagger}_{abc}\big|_{0}\) vanish in the first-order equations of motion obtained by setting (77a-77c) to zero, then this implies that the second-order equations of motion obtained by setting (87-88) to zero are also satisfied, but the contrary does not necessarily hold.
### Manifestly covariant conservation laws in eWGT
We now derive the conservation laws for eWGT in a manner that maintains manifest covariance throughout, by applying the general method outlined in Section III in a similar way to that performed in Section IV.3 for WGT. Once again, we begin by considering the general form of the conservations laws given in (15). As in the previous section, we work in the new set of variables \(\varphi_{A}\), \(h_{a}^{\mu}\), \(A^{\dagger ab}{}_{\mu}\), in which the Lagrangian does not depend explicitly on the gauge field \(B_{\mu}\). In this case, under infinitesimal local Weyl transformations consisting of GCTs, rotations of the local Lorentz frames and dilations, parameterised by \(\xi^{\mu}(x)\), \(\omega^{ab}(x)\) and \(\rho(x)\), the form variations (53) are replaced by
\[\delta_{0}\varphi = -\xi^{\nu}\partial_{\nu}\varphi+(\tfrac{1}{2}\omega^{ab}\Sigma_{ ab}+w\rho)\varphi, \tag{89a}\] \[\delta_{0}h_{a}{}^{\mu} = -\xi^{\nu}\partial_{\nu}h_{a}{}^{\mu}+h_{a}{}^{\nu}\partial_{\nu} \xi^{\mu}-(\omega^{b}{}_{a}+\rho\,\delta^{b}_{a})h_{b}{}^{\mu},\] (89b) \[\delta_{0}A^{\dagger\,ab}{}_{\mu} = -\xi^{\nu}\partial_{\nu}A^{\dagger\,ab}{}_{\mu}-A^{\dagger ab}{}_{ \nu}\partial_{\mu}\xi^{\nu}-2\omega^{[a}{}_{c}A^{\dagger\,b]c}{}_{\mu}- \partial_{\mu}\omega^{ab}, \tag{89c}\]
By comparing these transformation laws with the generic form (6), one may read off the functions \(f_{AC}\) and \(f_{AC}^{\mu}\) in the latter from the coefficients of \(\{\lambda^{C}\}=\{\lambda^{1},\lambda^{2},\lambda^{3}\}=\{\xi^{\alpha},\omega^{ ab},\rho\}\) and their partial derivatives, respectively. As anticipated, one immediately finds that many of the functions \(f_{AC}\) and \(f_{AC}^{\mu}\) are not covariant quantities. One therefore again employs the Bessel-Hagen method to obtain new form variations of the fields in which the functions \(f_{AC}^{\mu}\) are manifestly covariant, as required, although many of the functions \(f_{AC}\) may also be made so. Following the general methodology outlined in Appendix A, we consider separately the conservation laws that result from the invariance of the eWGT action under infinitesimal GCTs, local rotations and local dilations, respectively.
Considering first the infinitesimal GCTs characterised by \(\xi^{\alpha}(x)\) (which we take to correspond to \(C=1\)), one may make use of the invariance of the action under the transformations (89) for arbitrary functions \(\omega^{ab}(x)\) and \(\rho(x)\) by choosing them in a way that yields covariant forms for the new functions \(f_{A1}^{\mu}\) (and also \(f_{A1}\) in this case) in the resulting form variations. This is achieved by setting \(\omega^{ab}=-A^{\dagger\,ab}{}_{\nu}\xi^{\nu}\) and \(\rho=-(B_{\nu}-\tfrac{1}{3}T_{\nu})\xi^{\nu}\) (where the minus signs are included for later convenience), which yields transformation laws of a much simpler form than in (89), given by
\[\delta_{0}\varphi = -\xi^{\nu}D_{\nu}^{\dagger}\varphi, \tag{90a}\] \[\delta_{0}h_{a}{}^{\mu} = -\xi^{\nu}D_{\nu}^{\dagger}h_{a}{}^{\mu}+h_{a}{}^{\nu}\partial_{ \nu}\xi^{\mu},\] (90b) \[\delta_{0}A^{\dagger\,ab}{}_{\mu} = \xi^{\nu}R^{\dagger\,ab}{}_{\mu\nu}, \tag{90c}\]
From these form variations, one may immediately read off the new forms of the functions \(f_{A1}\) and \(f_{A1}^{\mu}\), all of which are now manifestly covariant. Inserting these expressions into the general form (15), one directly obtains the manifestly covariant conservation law
\[\mathscr{D}_{c}^{\dagger}(h\tau^{\dagger\,c}{}_{\nu})-h(\sigma_{ab}{}^{\mu}R^{ \dagger\,ab}{}_{\mu\nu}-\tau^{\dagger\,a}{}_{\mu}D_{\nu}^{\dagger}h_{a}^{\mu}- \upsilon^{A}D_{\nu}^{\dagger}\varphi_{A})=0, \tag{91}\]
where \(h\upsilon^{A}=(\delta L/\delta\varphi_{A})_{\dagger}=\delta L/\delta\varphi_{A}\). On multiplying through by \(h_{d}{}^{\nu}\), one may rewrite the conservation law wholly in terms of quantities possessing only Roman indices as
\[\mathscr{D}_{c}^{\dagger}(h\tau^{\dagger\,c}{}_{d})-h(\sigma_{ab}{}^{c} \mathscr{R}^{\dagger\,ab}{}_{cd}-\tau^{\dagger\,c}{}_{b}\mathcal{F}^{\dagger \,b}{}_{cd}-\upsilon^{A}\mathscr{D}_{d}^{\dagger}\varphi_{A})=0. \tag{92}\]
We next consider invariance of the action under infinitesimal local Lorentz rotations characterised by \(\omega^{ab}(x)\) (which we take to correspond to \(C=2\)). In this case, the functions \(f_{A2}^{\mu}\) in the set of transformation laws (89) are already manifestly covariant. One may thus insert the functions \(f_{A2}^{\mu}\) and \(f_{A2}\) read off from (89) directly into the general form (15), without employing the Bessel-Hagen method. On recalling that \(\Gamma_{\beta}^{\dagger}\sigma_{pq}{}^{\beta}=-A^{\dagger\,r}{}_{p\beta} \sigma_{rq}{}^{\beta}-A^{\dagger\,r}{}_{q\beta}\sigma_{pr}{}^{\beta}\) (since \(\sigma_{ab}{}^{\mu}\) has Weyl weight \(w=0\)) one finds that the final set of terms on the LHS of (15) vanish when \(\gamma^{A}\) corresponds to \(h\sigma_{ab}{}^{\mu}\), and one immediately obtains the manifestly covariant conservation law
\[\mathscr{D}_{c}^{\dagger}(h\sigma_{ab}{}^{c})+h\tau_{[ab]}^{\dagger}+\tfrac{1} {2}h\upsilon^{A}\Sigma_{ab}\varphi_{A}=0. \tag{93}\]
Finally, we consider invariance of the action under infinitesimal local dilations characterised by \(\rho(x)\) (which we take to correspond to \(C=3\)). Once again, the relevant functions \(f_{A3}^{\mu}\) in the set of transformation laws (89) are already manifestly covariant. One may thus insert \(f_{A3}^{\mu}\) and \(f_{A3}\) read off from (89) directly into the general form (15), which immediately yields the manifestly covariant algebraic conservation law
\[h\tau^{\dagger\,c}{}_{c}-h\upsilon^{A}w_{A}\varphi_{A}=0. \tag{94}\]
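Note that, in contrast to its differential WGT counterpart (48), the dilational conservation law (94) is purely algebraic: for matter fields of vanishing Weyl weight it simply requires the trace \(h\tau^{\dagger\,c}{}_{c}\) to vanish, while for \(w_{A}\neq 0\) the trace is fixed directly by the matter variational derivatives, with no divergence of a dilation current appearing; this reflects the fact that, in the new set of variables, the Lagrangian has no explicit dependence on \(B_{\mu}\).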
It is straightforward to verify that the manifestly covariant eWGT conservation laws (92-94) have the correct forms [19; 20] and match those derived (albeit at considerably greater length) using the standard form of Noether's second theorem (8a).
Before moving on to consider the further condition (8b) arising from Noether's second theorem, in the context of eWGT, we note that the conservation law (93) may be used to simplify the expression (88) for the second-order variational derivative with respect to \(h_{a}{}^{\mu}\) in terms of first-order variational derivatives. Imposing the condition \(\mathscr{G}_{abc}^{\dagger}=0\), the conservation law (93) becomes
\[{}^{0}\mathscr{D}_{c}^{\dagger}(h\,{}^{0}\tilde{\sigma}_{ab}{}^{c})+h\,{}^{0} \tilde{\tau}_{[ab]}^{\dagger}+\tfrac{1}{2}h\,{}^{0}\tilde{\upsilon}^{A}\Sigma_ {ab}\varphi_{A}=0. \tag{95}\]
If one assumes the _matter_ equations of motion \({}^{0}\tilde{\upsilon}^{A}=0\) are satisfied (or, equivalently, that the Lagrangian \(L\) does not depend on matter fields), the expression (88) can thus be written in the simpler and manifestly symmetric form
\[ht_{ab}^{\dagger}\overset{\text{\tiny{m}}}{=}h\,{}^{0}\tilde{\tau}_{(ab)}^{\dagger}-2\,{}^{0}\mathscr{D}_{c}^{\dagger}(h\,{}^{0}\tilde{\sigma}^{c}{}_{(ab)}). \tag{96}\]
### Relationship between currents in Noether's second theorem in eWGT
We conclude this section by considering the relationship in eWGT between the two currents that appear in Noether's second theorem (8b). As discussed in Section III.3, this equation may be re-written as \(\mathscr{D}_{a}^{\dagger}[h(\mathcal{J}^{a}-\mathcal{S}^{a})]=0\), where \(h\mathcal{J}^{a}\) for eWGT is given by (73) and the expression for \(h\mathcal{S}^{a}\) may be obtained from the general form (16), which on using the eWGT field variations (89) yields
\[h\mathcal{S}^{p}=h\left[-\xi^{\mu}(\tau^{\dagger\,p}{}_{\mu}-\sigma_{ab}{}^{p }A^{\dagger\,ab}{}_{\mu})+\omega^{ab}\sigma_{ab}{}^{p}\right]. \tag{97}\]
As was the case for WGT, this expression does not depend on the variational derivatives \(\upsilon^{A}\equiv\delta\mathscr{L}/\delta\varphi_{A}\) with respect to the matter fields since, as expected, the functions \(f_{AC}^{\mu}\) vanish in this case, as can be read off from the form variations (89) of the new set of fields. Thus, in order for \(h\mathcal{S}^{p}\) to vanish, it is sufficient that just the equations of motion of the gauge fields are satisfied. Moreover, in eWGT, the current (97) also does not depend on the dilation parameter \(\rho(x)\).
If one substitutes the form variations (89) of the new set of fields into the expression (73) for \(h\mathcal{J}^{p}\), one finds after a long calculation of a similar nature to that required in WGT, which makes careful use of the definition (58) of the
field strength tensors, the contracted Bianchi identity (65c) and the manifestly covariant expressions (74b-74c) for the variational derivatives with respect to the gravitational gauge fields, that
\[\mathscr{D}_{p}^{\dagger}(h\mathscr{J}^{p})=\mathscr{D}_{p}^{\dagger}\left[-\xi^{\mu}h(\tau^{\dagger\,p}{}_{q}b^{q}{}_{\mu}-\sigma_{ab}{}^{p}A^{\dagger\,ab}{}_{\mu})+\omega^{ab}h\sigma_{ab}{}^{p}\right]=\mathscr{D}_{p}^{\dagger}(h\mathscr{S}^{p}), \tag{98}\]
thereby verifying explicitly the relationship between the two currents that is implied by Noether's second theorem (8b), as was the case in WGT. Thus, as expected for an action that is invariant under a set of local symmetries, this relationship contains no further information, but nonetheless provides a useful check of the derivation of the expressions (74b-74c). Indeed, in a similar way to WGT, the requirement \(\mathscr{D}_{a}^{\dagger}[h(\mathscr{J}^{a}-\mathscr{S}^{a})]=0\) from Noether's second theorem can thus be used as an alternative (albeit rather longer) means of deriving the expressions (74b-74c) for the variational derivatives with respect to the gravitational gauge fields.
## VI Conclusions
We have presented a variational principle that maintains manifest covariance throughout when applied to the actions of gauge theories of gravity. In particular, it directly yields field equations and conservation laws that are manifestly covariant under the symmetries to which the action is invariant. This is achieved by deriving explicit manifestly covariant forms for the Euler-Lagrange variational derivatives and Noether's theorems for a generic action of the form typically assumed in gauge theories of gravity.
The manifestly covariant form of Noether's first theorem and the expressions for the variational derivatives derived therefrom not only provide a significant calculational saving relative to the traditional method of evaluation, but also yield useful insights into their general forms. In particular, these expressions enable one easily to establish the relationship between the forms of variational derivatives, and hence the field equations, obtained by applying first- and second-order variational principles, respectively. An interesting case is provided by comparing the variational derivatives obtained by setting the torsion to zero after the variation is performed (first-order approach) with those obtained by setting the torsion to zero in the action before carrying out the variation (second-order approach).
The re-expression of Noether's second theorem in terms of manifestly covariant quantities provides further utility and insights. In particular, one may use it to derive the conservation laws obeyed by the matter and gravitational gauge fields in a manifestly covariant manner. This also relies on being able to express the form variations of these fields such that at least the coefficient functions of the derivatives of the parameters of the symmetry transformations are manifestly covariant. This may be achieved by generalising the approach introduced by Bessel-Hagen for electromagnetism, which is discussed in Appendix A. The re-expression of Noether's second theorem further allows one straightforwardly to verify the relationship between the two currents on which it depends. Indeed, one may use Noether's second theorem as an alternative (albeit somewhat longer) means of deriving manifestly covariant forms for the variational derivatives.
The manifestly covariant variational principle is illustrated by application to the scale-invariant Weyl gauge theory (WGT) and its recently proposed 'extended' version (eWGT), but can be straightforwardly applied to other gravitational gauge theories with smaller or larger symmetry groups. For WGT and eWGT, the fields in the theory consist of a translational gauge field \(h_{a}{}^{\mu}\) (with inverse \(b^{a}{}_{\mu}\)), a rotational gauge field \(A^{ab}{}_{\mu}\) and a dilational gauge field \(B_{\mu}\), together with some set of matter fields \(\varphi_{A}\), which may include a scalar compensator field. In eWGT, however, it is more natural to work in terms of the alternative set of variables \(\varphi_{A}\), \(h_{a}^{\mu}\), \(A^{\dagger ab}{}_{\mu}\) and \(B_{\mu}\), where the modified rotational gauge field \(A^{\dagger ab}{}_{\mu}\equiv A^{ab}{}_{\mu}+2b^{[a}{}_{\mu}\mathscr{B}^{b]}\) and \(\mathscr{B}_{a}=h_{a}{}^{\mu}B_{\mu}\). Moreover, eWGT may be shown to be invariant under the simultaneous 'torsion-scale' gauge transformations \(A^{ab}{}_{\mu}\to A^{ab}{}_{\mu}+2b^{[a}{}_{\mu}\mathscr{Y}^{b]}\) and \(B_{\mu}\to B_{\mu}-Y_{\mu}\), where \(\mathscr{Y}_{a}=h_{a}{}^{\mu}Y_{\mu}\) and \(Y_{\mu}\) is an arbitrary vector field; this may be used to set either \(B_{\mu}\) or \(T_{\mu}\) to zero, which can considerably simplify subsequent calculations. The scale-invariant actions for WGT and eWGT are further assumed to depend only on the matter fields, their covariant derivatives and the field strength tensors of the gravitational gauge fields. In this case, the eWGT action in the alternative set of variables does not depend explicitly on \(B_{\mu}\), hence reducing by one the number of independent variational derivatives. As might be expected from the above considerations, one finds a number of similarities between WGT and eWGT, and also some important and novel differences.
Considering first the manifestly covariant expressions for the variational derivatives in WGT, one finds that these reduce to the corresponding covariant canonical currents of the matter fields if the Lagrangian does not depend on the gravitational gauge field strengths. For Lagrangians that do depend on the gauge field strengths, one finds that the only terms that contain the covariant derivative of a field strength tensor depend on the field strength tensor of the gauge field with respect to which the variational derivative is taken. By contrast, in eWGT one finds that the variational derivatives with respect to the translational and modified rotational gauge fields contain additional terms beyond those obtained by'replacing asterisks with daggers' in their WGT counterparts. Moreover, the additional terms in the translational variational derivative are given by the covariant derivative of the additional terms (with
permuted indices) in the expression for the rotational variational derivative; this has some novel consequences. First, for a Lagrangian that depends only on the matter fields and their covariant derivatives, the variational derivatives with respect to the gauge fields do not reduce to the covariant canonical currents of the matter fields, but contain additional terms proportional to the dilational generator \(\Delta=w_{A}I\) for the matter fields \(\varphi_{A}\). Thus, any matter field with non-zero Weyl weight \(w_{A}\) contributes additionally both to the modified energy-momentum tensor and to the spin-angular-momentum tensor, irrespective of its spin. Second, for Lagrangians \(L\) that depend on the gauge field strengths, there are additional terms capable of producing a dependence on the covariant derivatives of the field strength tensors, and in each case these terms depend on the covariant derivatives of field strength tensors for different gauge fields than those with respect to which the variational derivative is taken. Moreover, there exist terms containing covariant derivatives of \(\partial L/\partial\mathscr{H}^{\dagger}_{ab}\). By using the 'torsion-scale' gauge symmetry and the manifestly covariant forms of the variational derivatives, however, one may show that the final eWGT field equations contain field derivatives of only second-order or lower, thereby avoiding Ostrogradsky's instability.
On comparing the variational derivatives obtained by setting the torsion to zero after the variation is performed (first-order approach) with those obtained by setting the torsion to zero in the action before carrying out the variation (second-order approach), one finds important differences between WGT and eWGT. In both cases, the rotational gauge field is no longer an independent field, but in WGT it may be written explicitly in terms of the other gauge fields, whereas in eWGT there exists an implicit constraint relating all the gauge fields. In both cases, however, one may arrive at simple expressions for the variational derivatives in the second-order approach in terms of those from the first-order approach. In particular, the translational variational derivative in the second-order approach for WGT and eWGT is the gauge theory equivalent of the Belinfante tensor. Moreover, in WGT the second-order dilational variational derivative may be considered to define an associated Belinfante dilation current, which is clearly related to the 'field virial' that is relevant to the invariance of an action under special conformal transformations.
Turning to the re-expression of Noether's second theorem, the resulting derivations of manifestly covariant forms of the conservation laws satisfied by the fields in WGT and eWGT, yield similar forms in both cases for the laws corresponding to invariance under local translations and rotations, respectively. For invariance under local dilations, however, one finds the resulting conservation law is differential in WGT, but algebraic in eWGT. In both WGT and eWGT, one may also use the re-expression of Noether's second theorem to verify the relationship between the two currents on which it depends, although in both cases this verification requires a calculation of considerable length. Alternatively, in each case, one may use Noether's second theorem as an alternative (albeit considerably longer) means of deriving manifestly covariant forms for the variational derivatives.
Whilst this paper has focussed heavily on the _Lagrangian_ prescription of field theory, and the associated field equations and conservation laws, we note that the techniques developed here may impart even stronger benefits in the _Hamiltonian_ formulation. Hamiltonian gauge field theory is characterised by the presence of field-valued _constraints_, which encode not only the gauge symmetries but also the whole nonlinear dynamics, as elucidated by the consistency algorithm of Dirac and Bergmann [39; 40; 41]. The fundamental currency of the consistency algorithm is the Poisson bracket15, which is a bilinear in functional variations with respect to dynamical fields. In the context of gravitational gauge fields, the Hamiltonian formulation is typically realised using the so-called 3+1 or Arnowitt-Deser-Misner (ADM) technique, whereby manifest diffeomorphism covariance is preserved despite the imposition of a spacelike foliation. Accordingly, the ADM Poisson bracket presents a clear opportunity for manifestly covariant variational methods, such as those expressed in equations (35) and (74). The Hamiltonian demand is, if anything, more pronounced than the Lagrangian demand. In the latter case, a countably small collection of field equations (not including indices) must be obtained (e.g. _one_ set of Einstein equations). In the former case and for a gravitational gauge theory, all Poisson brackets between all constraints must be evaluated in order to classify the gauge symmetries: this can in practice correspond to tens or hundreds of brackets [43; 44; 45; 46; 47]. Separately, the variations of a constraint can be more challenging than those of an action because: (i) the constraints are typically indexed and always (quasi-) local, necessitating the use of smearing functions; (ii) they may contain more terms in ADM form than the original Lagrangian; and crucially (iii) they are of _unlimited_16 order in spatial gradients [48] even when the Lagrangian is second order as assumed in (1). The extension of the techniques discussed here to the higher-order, ADM variational derivative, is left to future work.
Footnote 15: More sophisticated _Dirac_ brackets [42] also arise: these are equally relevant to our discussion.
Footnote 16: This is due to cumulative derivatives arising in the course of the Dirac algorithm.
###### Acknowledgements.
WEVB is supported by a Research Fellowship at Girton College, Cambridge.
## Appendix A Bessel-Hagen method for electromagnetism
For classical electromagnetism (EM) in Minkowski spacetime \(\mathscr{M}\) labelled using Cartesian inertial coordinates \(x^{\mu}\), the action is given by \(S=\int\mathscr{L}\,d^{4}x\), where the Lagrangian density \(\mathscr{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\) and the (Faraday) field strength tensor \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\), in which \(A_{\mu}\) is the electromagnetic 4-potential (which is not to be confused with the rotational gravitational gauge field \(A^{ab}{}_{\mu}\) appearing throughout the main text of the paper). As is well known, the most general infinitesimal global coordinate transformations under which the EM action is invariant are the conformal transformations17; these have the form \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\), where
Footnote 17: The action is also invariant under finite global conformal coordinate transformations [49; 50; 51]; these include conformal inversions \(x^{\prime\mu}=x^{\mu}/x^{2}\) for \(x^{2}\neq 0\), which are not connected to the identity and so are not considered here.
\[\xi^{\mu}(x)=a^{\mu}+\omega^{\mu}{}_{\nu}x^{\nu}+\rho x^{\mu}+c^{\mu}x^{2}-2c \cdot x\,x^{\mu}, \tag{100}\]
in which the 15 infinitesimal parameters \(a^{\mu}\), \(\omega^{\mu\nu}=-\omega^{\nu\mu}\), \(\rho\) and \(c^{\mu}\) are constants, and we use the shorthand notation \(x^{2}\equiv\eta_{\mu\nu}x^{\mu}x^{\nu}\) and \(c\cdot x\equiv\eta_{\mu\nu}c^{\mu}x^{\nu}\). If the four parameters \(c^{\mu}\) defining the so-called special conformal transformation (SCT) vanish, then (100) reduces to an infinitesimal global Weyl transformation. Moreover, if the parameter \(\rho\) defining the dilation (or scale transformation) also vanishes, then (100) further reduces to an infinitesimal global Poincare transformation, consisting of a restricted Lorentz rotation defined by the six parameters \(\omega^{\mu\nu}\) and a spacetime translation defined by the four parameters \(a^{\mu}\).
Under the action of any infinitesimal coordinate transformation \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}(x)\), the 4-potential has the form variation
\[\delta^{(\xi)}_{0}A_{\mu}=\delta^{(\xi)}A_{\mu}-\xi^{\nu}\partial_{\nu}A_{\mu }=-A_{\nu}\partial_{\mu}\xi^{\nu}-\xi^{\nu}\partial_{\nu}A_{\mu}, \tag{101}\]
where we have explicitly denoted the form and total variations as being induced by the infinitesimal coordinate transformation. Thus, the corresponding Noether current (5b) has the form
\[J^{\mu}=\frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})}\delta^ {(\xi)}_{0}A_{\sigma}+\xi^{\mu}\mathscr{L}=F^{\mu\sigma}(A_{\nu}\partial_{ \sigma}\xi^{\nu}+\xi^{\nu}\partial_{\nu}A_{\sigma})-\tfrac{1}{4}\xi^{\mu}F^{ \rho\sigma}F_{\rho\sigma}. \tag{102}\]
Using the expression (100) for an infinitesimal global conformal coordinate transformation, one finds that (102) may be written as
\[J^{\mu}=-a^{\alpha}t^{\mu}{}_{\alpha}+\tfrac{1}{2}\omega^{\alpha\beta}M^{\mu}{ }_{\alpha\beta}+\rho D^{\mu}+c^{\alpha}K^{\mu}{}_{\alpha}, \tag{103}\]
where the coefficients of the parameters of the conformal transformation are defined by
\[t^{\mu}{}_{\alpha} \equiv \frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})} \partial_{\alpha}A_{\sigma}-\delta^{\mu}_{\alpha}\mathscr{L}=-F^{\mu\sigma} \partial_{\alpha}A_{\sigma}+\tfrac{1}{4}\delta^{\mu}_{\alpha}F^{\rho\sigma}F_ {\rho\sigma}, \tag{104a}\] \[M^{\mu}{}_{\alpha\beta} \equiv x_{\alpha}t^{\mu}{}_{\beta}-x_{\beta}t^{\mu}{}_{\alpha}+s^{\mu}{ }_{\alpha\beta},\] (104b) \[D^{\mu} \equiv -x^{\alpha}t^{\mu}{}_{\alpha}+j^{\mu},\] (104c) \[K^{\mu}{}_{\alpha} \equiv (2x_{\alpha}x^{\beta}-\delta^{\beta}_{\alpha}x^{2})t^{\mu}{}_{ \beta}+2x^{\beta}(s^{\mu}{}_{\alpha\beta}-\eta_{\alpha\beta}j^{\mu}), \tag{104d}\]
which are the canonical energy-momentum, angular momentum, dilation current and special conformal current, respectively, of the 4-potential \(A_{\mu}\). We have also defined the quantities
\[s^{\mu}{}_{\alpha\beta} \equiv \frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})}( \Sigma_{\alpha\beta})_{\sigma}{}^{\rho}A_{\rho}=-2F^{\mu}{}_{[\alpha}A_{\beta]}, \tag{105a}\] \[j^{\mu} \equiv \frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})}wA_ {\sigma}=F^{\mu\sigma}A_{\sigma}, \tag{105b}\]
which are the canonical spin angular momentum and intrinsic dilation current of the 4-potential; here \((\Sigma_{\alpha\beta})_{\sigma}{}^{\rho}=2\eta_{\sigma[\alpha}\delta^{\rho}_{ \beta]}\) are the generators of the vector representation of the Lorentz group and \(w=-1\) is the Weyl weight of \(A_{\mu}\).
If the field equations \(\delta\mathscr{L}/\delta A_{\nu}=\partial_{\mu}F^{\mu\nu}=0\) are satisfied, then invariance of the action implies the conservation law \(\partial_{\mu}J^{\mu}\simeq 0\). Since the parameters of the global conformal coordinate transformation in (103) are constants, one thus
obtains separate conservation laws given by
\[\partial_{\mu}t^{\mu}{}_{\alpha} \simeq 0, \tag{101a}\] \[\partial_{\mu}s^{\mu}{}_{\alpha\beta}+2t_{[\alpha\beta]} \simeq 0,\] (101b) \[\partial_{\mu}j^{\mu}-t^{\mu}{}_{\mu} \simeq 0,\] (101c) \[s^{\mu}{}_{\alpha\mu}-j_{\alpha} \simeq 0, \tag{101d}\]
which hold up to a total divergence of any quantity that vanishes on the boundary of the integration region of the action. It is worth noting that the first condition has been used to derive the second and third conditions, and the first three conditions have all been used to derive the final condition. The conservation laws (101) may be easily verified directly using the expressions (104a), (105a) and (105b) for \(t^{\mu}{}_{\alpha}\), \(s^{\mu}{}_{\alpha\beta}\) and \(j^{\mu}\), respectively, and the EM equations of motion. It is worth noting that the conservation law (101d), which results from invariance of the action under special conformal transformations, requires the 'field virial' to vanish [36].
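For instance, the dilation law may be checked directly: using the field equations \(\partial_{\mu}F^{\mu\nu}\simeq 0\) and the antisymmetry of \(F^{\mu\sigma}\) (which gives \(F^{\mu\sigma}\partial_{\mu}A_{\sigma}=\tfrac{1}{2}F^{\mu\sigma}F_{\mu\sigma}\)), one finds

\[\partial_{\mu}j^{\mu}=(\partial_{\mu}F^{\mu\sigma})A_{\sigma}+F^{\mu\sigma}\partial_{\mu}A_{\sigma}\simeq\tfrac{1}{2}F^{\mu\sigma}F_{\mu\sigma},\qquad t^{\mu}{}_{\mu}=-F^{\mu\sigma}\partial_{\mu}A_{\sigma}+F^{\rho\sigma}F_{\rho\sigma}=\tfrac{1}{2}F^{\rho\sigma}F_{\rho\sigma},\]

so that \(\partial_{\mu}j^{\mu}-t^{\mu}{}_{\mu}\simeq 0\), as required by (101c).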
In addition to being invariant under infinitesimal global conformal coordinate transformations of the form (100), however, the EM action is also well known to be invariant under the gauge transformation \(A_{\mu}\to A^{\prime}_{\mu}=A_{\mu}+\partial_{\mu}\alpha\), where \(\alpha(x)\) is an arbitrary function of spacetime position. Since our considerations thus far have not taken this into account, it is perhaps unsurprising that the canonical quantities \(t^{\mu}{}_{\alpha}\), \(s^{\mu}{}_{\alpha\beta}\) and \(j^{\mu}\) are not invariant under the gauge transformation, as is easily demonstrated. Moreover, it is immediately apparent that the overall Noether current \(J^{\mu}\) in (100) is also not gauge invariant. All these problems originate from the form variation \(\delta^{(\xi)}_{0}A_{\sigma}\) in (100) itself not being gauge invariant. The lack of gauge invariance of the canonical expressions is a severe shortcoming, which means that these quantities must be unphysical. The situation is usually remedied, at least for the energy-momentum tensor in electromagnetism, by using the Belinfante method [35] of adding ad-hoc terms, which do not follow from Noether's theorem, to the canonical energy-momentum in order to construct a 'modified' energy-momentum tensor, which is gauge invariant (and symmetric) and can be further 'improved' to be traceless also [52]. One should note, however, that these methods are not guaranteed to yield a gauge-invariant energy-momentum tensor for general gauge field theories when matter fields are coupled to a gauge field [53], although this deficiency is addressed in [54].
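For example, under \(A_{\mu}\to A_{\mu}+\partial_{\mu}\alpha\) the field strength \(F_{\mu\nu}\) is unchanged, but the canonical quantities defined above shift according to

\[j^{\mu}\;\to\;j^{\mu}+F^{\mu\sigma}\partial_{\sigma}\alpha,\qquad t^{\mu}{}_{\alpha}\;\to\;t^{\mu}{}_{\alpha}-F^{\mu\sigma}\partial_{\alpha}\partial_{\sigma}\alpha,\]

neither of which vanishes for generic \(\alpha(x)\), which makes their gauge dependence explicit.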
An alternative approach, which makes direct use of the gauge invariance of the EM action and Noether's theorem, was first proposed in 1921 by Bessel-Hagen (who acknowledges Noether for suggesting the idea) [32]. This work is not widely known, however, and similar approaches have since been proposed by other authors [55; 56; 57; 58], although Bessel-Hagen's original method arguably remains the most straightforward and intuitive [59]. The key to the method is to recognise that the form variations \(\delta_{0}\chi_{A}\) of the fields appearing in the general expression (5b) for the Noether current \(J^{\mu}\) may correspond to _any_ transformation that leaves the action invariant. Indeed, it is advantageous to consider the _most general_ such transformation. Applying this notion to EM, one should thus replace the form variation (101) induced solely by the infinitesimal global conformal coordinate transformation by the general form
\[\delta_{0}A_{\mu}=\delta^{(\xi)}A_{\mu}+\partial_{\mu}\alpha-\xi^{\nu} \partial_{\nu}A_{\mu}=-A_{\nu}\partial_{\mu}\xi^{\nu}+\partial_{\mu}\alpha- \xi^{\nu}\partial_{\nu}A_{\mu}, \tag{102}\]
which also includes the contribution induced by the EM gauge transformation. Since the form variation (102) leaves the EM action invariant for \(\xi^{\mu}(x)\) given by (100) and for arbitrary \(\alpha(x)\), one may choose the latter to be as convenient as possible. Given that our goal is to arrive at a gauge-invariant form for the Noether current \(J^{\mu}\), one should therefore choose \(\alpha(x)\) such that the form variation (102) is itself gauge-invariant; this is the central idea underlying the Bessel-Hagen method.
One may easily obtain a gauge-invariant form variation by setting \(\alpha=A_{\nu}\xi^{\nu}\), which immediately yields \(\delta_{0}A_{\mu}=\xi^{\nu}F_{\mu\nu}\). Consequently, the Noether current (100) is replaced by the new form
\[J^{\mu}=\frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})}\delta_ {0}A_{\sigma}+\xi^{\mu}\mathscr{L}=\xi^{\nu}(F^{\mu\sigma}F_{\nu\sigma}- \tfrac{1}{4}\delta^{\mu}_{\nu}F^{\rho\sigma}F_{\rho\sigma})=-\xi^{\nu}\tau^{ \mu}{}_{\nu}, \tag{103}\]
where in the final equality we have identified the standard physical energy-momentum tensor \(\tau^{\mu}{}_{\nu}=-(F^{\mu\sigma}F_{\nu\sigma}-\tfrac{1}{4}\delta^{\mu}_{\nu}F ^{\rho\sigma}F_{\rho\sigma})\) of the EM field, which is immediately seen to be gauge invariant, symmetric and traceless. Substituting the form (100) for \(\xi^{\mu}\) into (103), one finds that the expression (100) for the Noether current is replaced by the much simpler form
\[J^{\mu}=-a^{\alpha}\tau^{\mu}{}_{\alpha}+\tfrac{1}{2}\omega^{\alpha\beta}(x_{ \alpha}\tau^{\mu}{}_{\beta}-x_{\beta}\tau^{\mu}{}_{\alpha})-\rho x^{\alpha} \tau^{\mu}{}_{\alpha}+c^{\alpha}(2x_{\alpha}x^{\beta}-\delta^{\beta}_{\alpha}x ^{2})\tau^{\mu}{}_{\beta}, \tag{104}\]
from which one can further identify new forms for the angular momentum, dilation current and special conformal current of the EM field, all of which are gauge invariant. If one again assumes the EM field equations to hold and uses the fact that the parameters of the global conformal coordinate transformation are constants, one obtains separate
conservation laws that replace those in (101) and are given by the succinct forms
\[\partial_{\mu}{\tau^{\mu}}_{\alpha} \simeq 0, \tag{112a}\] \[\tau_{[\alpha\beta]} \simeq 0,\] (112b) \[{\tau^{\mu}}_{\mu} \simeq 0, \tag{112c}\]
where, in this case, the conservation law derived from the coefficient of the SCT parameters \(c^{\mu}\) is satisfied automatically given the other three conservation laws above, all of which may be easily verified directly.
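Indeed, the last two of these laws are purely algebraic consequences of the expression for \(\tau^{\mu}{}_{\nu}\) given above:

\[\tau_{\mu\nu}=-\left(F_{\mu}{}^{\sigma}F_{\nu\sigma}-\tfrac{1}{4}\eta_{\mu\nu}F^{\rho\sigma}F_{\rho\sigma}\right)=\tau_{\nu\mu},\qquad\tau^{\mu}{}_{\mu}=-\left(F^{\mu\sigma}F_{\mu\sigma}-F^{\rho\sigma}F_{\rho\sigma}\right)=0,\]

while \(\partial_{\mu}\tau^{\mu}{}_{\alpha}\simeq 0\) follows from the field equations together with the identity \(\partial_{[\mu}F_{\nu\sigma]}=0\).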
Finally, one should also determine the further conservation law that results solely from invariance of the action under EM gauge transformations. This is easily achieved by setting \(\xi^{\mu}=0\), which is equivalent to all of the constant parameters in (109) vanishing. In this case, (110) becomes simply \(\delta_{0}A_{\mu}=\partial_{\mu}\alpha\) and the Noether current is immediately given by
\[J^{\mu}=\frac{\partial\mathscr{L}}{\partial(\partial_{\mu}A_{\sigma})}\delta_ {0}A_{\sigma}=-F^{\mu\sigma}\partial_{\sigma}\alpha. \tag{113}\]
Assuming the EM field equations to hold, the resulting conservation law \(\partial_{\mu}J^{\mu}\simeq 0\) may be written as \(F^{\mu\sigma}\partial_{\mu}\partial_{\sigma}\alpha\simeq 0\), which is satisfied identically because of the antisymmetry of \(F^{\mu\sigma}\).
| ```
重力ゲージ teoríasの変分原理を提示し、運動方程式と保存則の計算を通して、作用に関連する対称性に対する明示的な対称性表現を維持します。これは、オイラー-ラグランジュ変分微分と一般的な作用のノイマー-の定理を導出することにより行われます。このアプローチは、重力ゲージ理論に典型的に想定されるような形をした一般的な作用に基づいて、スケールInvariantの重力ゲージ理論であるウェイルゲージ理論(WGT)と最近提案された拡張ウェイルゲージ理論(eWGT)の適用を通して示されます。後者は、対称群の新しい表現であると見なすことができますが、この方法は、より小さなまたはより大きい対称群を持つ他の理論に対して直感的に適用できます。このアプローチにより、変分が実行される前にまたは後に、ゲージ場強度にゼロを設定した場合、変分微分が |
2310.20551 | On $σ$-classes of modules with applications | In this paper we introduce some lattices of classes of left R-modules relative
to a preradical sigma. These lattices are generalizations of the lattices
R-TORS, R-tors, R-nat, R-conat, of torsion theories, hereditary torsion
theories, natural classes and conatural classes, respectively. We define the
lattices $\sigma$-(R-TORS), $\sigma$-(R-tors), $\sigma$-(R-nat),
$\sigma$-(R-conat), which reduce to the lattices mentioned above, when one
takes sigma as the identity. We characterize the equality between these
lattices by means of the ($\sigma$-HH) condition, which we introduce. We also
present some results about $\sigma$-retractable rings, $\sigma$-Max rings
extending results about Mod-retractable rings and Max rings. | Oscar A. Garrido-Jiménez, Hugo A. Rincón-Mejía | 2023-10-31T15:32:55 | http://arxiv.org/abs/2310.20551v1 | # On \(\sigma\)-classes of modules with applications
###### Abstract.
In this paper we introduce some lattices of classes of left R-module relative to a preradical sigma. These lattices are generalizations of the lattices R-TORS, R-tors, R-nat, R-conat, of torsion theories, hereditary torsion theories, natural classes and conatural classes, respectively. We define the lattices \(\sigma\)-(R-TORS), \(\sigma\)-(R-tors), \(\sigma\)-(R-nat), \(\sigma\)-(R-conat), which reduce to the lattices mentioned above, when one takes sigma as the identity. We characterize the equality between these lattices by means of the (\(\sigma\)-HH) condition, which we introduce. We also present some results about \(\sigma\)-retractable rings, \(\sigma\)-Max rings extending results about Mod-retractable rings and Max rings.
\({}^{*}\)Corresponding Author
**Keywords:** Preradical, Lattices of module classes, Left Mod-retractable ring, Left semiartinian ring, Left max ring.
**2020 Mathematics subject classification:** 16D80, 16S90, 16S99, 16W99.
## 1. Introduction
\(R\) will denote an associative ring with \(1\) and \(R\)-Mod will denote the category of left \(R\)-modules and \(R\)-morphisms. Several results of module theory are latticial in nature. The behavior of the lattices associated with a ring determines several properties of the ring. The comparison between several lattices associated to a ring often provides interesting information. We will use this method with some lattices of module classes relative to a preradical \(\sigma\).
Recall that a preradical in \(R\)-Mod is a functor \(\sigma:R\)-Mod \(\to R\)-Mod such that for each \(R\)-module \(M\) one has \(\sigma(M)\leq M\) and for each \(R\)-morphism \(f:M\to N\) one has \(f\left(\sigma(M)\right)\leq\sigma(N).\) Note that a preradical \(\sigma\) is a subfunctor of the identity functor in \(R\)-Mod. Recall that the preradicals in \(R\)-Mod form the big complete lattice \(R\)-pr, where the order is given by \(\sigma\leq\tau\Leftrightarrow\sigma(M)\subseteq\tau(M),\text{ for each }_{R}M.\) A family of preradicals \(\{r_{i}\}_{I}\) has an infimum \(\wedge r_{i}\), which evaluated at a module \(M\) gives \(\bigcap_{i\in I}r_{i}(M),\) and \(\{r_{i}\}_{I}\) also has a supremum \(\lor r_{i}\), which evaluated at a module \(M\) gives \(\sum_{i\in I}r_{i}(M)\). The identity functor in \(R\)-Mod is the greatest preradical and the preradical \(\underline{0}\), which assigns to each module its zero submodule, is the least preradical. Recall that there are in \(R\)-pr two binary operations: composition, denoted by \(\sigma\tau\) (defined by \(\sigma\tau(M)=\sigma\left(\tau(M)\right)\)), and cocomposition, denoted by \(\sigma:\tau\) (defined by \((\sigma:\tau)(M)/\sigma(M)=\tau\left(M/\sigma(M)\right)\)).
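As a standard illustration of the two operations (this example is not taken from the references cited here), consider the socle preradical \(\mathrm{Soc}\). Since \(\mathrm{Soc}(M)\) is semisimple, the composition satisfies \(\mathrm{Soc}\,\mathrm{Soc}=\mathrm{Soc}\), whereas the cocomposition produces the second term of the socle series,

\[(\mathrm{Soc}:\mathrm{Soc})(M)/\mathrm{Soc}(M)=\mathrm{Soc}\left(M/\mathrm{Soc}(M)\right);\]

for instance, for \(R=\mathbb{Z}\) and \(M=\mathbb{Z}/4\mathbb{Z}\) one has \(\mathrm{Soc}(M)=2M\) while \((\mathrm{Soc}:\mathrm{Soc})(M)=M\), so in general \(\sigma\sigma\neq(\sigma:\sigma)\).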
We say that a preradical \(\sigma\) is a left exact preradical if it is left exact as a functor. \(\sigma\) is idempotent if \(\sigma\sigma=\sigma\). \(\sigma\) is stable (costable) if for each injective \(R\)-module \(E\) (for each projective \(R\)-module \(P\)) one has that \(E=\sigma(E)\oplus E^{\prime}\) (one has that \(P=\sigma(P)\oplus P^{\prime}\)) for some \(E^{\prime}\leq E\) (for some \(P^{\prime}\leq P\)).
There are two important classes of modules associated with a preradical \(\sigma\): its torsion class \(\mathbb{T}_{\sigma}=\{M\mid\sigma(M)=M\}\) and its torsion free class \(\mathbb{F}_{\sigma}=\{M\mid\sigma(M)=0\}.\) \(\mathbb{T}_{\sigma}\) is a class closed under taking coproducts and epimorphic images (i.e. it is a pretorsion class) and \(\mathbb{F}_{\sigma}\) is a class closed under taking submodules and products (i.e. it is a pretorsion free class).
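A familiar example, included only for orientation: for \(R=\mathbb{Z}\) and the usual torsion preradical \(t(M)=\{m\in M\mid nm=0\text{ for some }n>0\}\), one has

\[\mathbb{T}_{t}=\{\text{torsion abelian groups}\},\qquad\mathbb{F}_{t}=\{\text{torsion-free abelian groups}\},\]

and \(t\) is a left exact idempotent radical, since \(t(N)=N\cap t(M)\) for every \(N\leq M\) and \(t\left(M/t(M)\right)=0\).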
We say that an ordered pair \((\mathbb{T},\mathbb{F})\) of module classes is a torsion theory if \(\mathbb{T}\) is closed under taking epimorphic images, coproducts and extensions, and \(\mathbb{F}=\{N\mid Hom(M,N)=0,\;\forall M\in\mathbb{T}\}\) or equivalently, if \(\mathbb{F}\) is closed under taking submodules, products, extensions and \(\mathbb{T}=\{M\mid Hom(M,N)=0,\;\forall N\in\mathbb{F}\}\). A torsion theory \((\mathbb{T},\mathbb{F})\) having \(\mathbb{T}\) closed under taking submodules, or equivalently, having \(\mathbb{F}\) closed under taking injective hulls is called a hereditary torsion theory.
The class of hereditary torsion theories is a complete lattice that has been extensively studied (see, for example [9] or [11]).
For further information about lattices of classes of modules see [7].
Comparisons between lattices of classes of modules may characterize types of rings. For example, in [10], the authors prove that every torsion theory is hereditary if and only if all modules are retractable. They call a ring with this property an \(R\)-Mod-retractable ring. A module \(M\) is retractable when \(Hom(M,N)\neq 0\) for each nonzero submodule \(N\) of \(M\).
In [6], the lattice \(\sigma\)-(R-TORS) of \(\sigma\)-torsion classes and the lattice \(\sigma\)-(R-tors) of hereditary \(\sigma\)-torsion classes are defined for a preradical \(\sigma\). We refer the reader to [11, Chapter VI] and [4] for more information about preradicals on \(R\)-Mod.
In [5], for a left exact and stable preradical \(\sigma\), the lattice \(\sigma\)-(R-nat) of \(\sigma\)-natural classes is defined and, for an exact and costable preradical \(\sigma\), the lattice \(\sigma\)-(R-conat) of \(\sigma\)-conatural classes is defined.
We introduce, for a ring \(R\) and a preradical \(\sigma\), the condition (\(\sigma\)-HH).
We say that \(R\) satisfies the condition (\(\sigma\)-HH) if for any pair of \(R\)-modules \(M,N\) with \(\sigma(M)\neq 0\) and \(\sigma(N)\neq 0\), one has that \(Hom(\sigma(N),M)\neq 0\) if and only if \(Hom(M,\sigma(N))\neq 0\). This condition is related to the parainjective \(\sigma\)-torsion \(R\)-modules and to the paraprojective \(\sigma\)-torsion \(R\)-modules. It is also important with respect to left \(\sigma\)-semiartinian rings, left \(\sigma\)-max rings and the \(\sigma\)-retractable rings introduced in [6]. Finally, it turns out that the rings satisfying the (\(\sigma\)-HH)-condition are the rings for which the lattices of \(\sigma\)-torsion classes, of hereditary \(\sigma\)-torsion classes, of \(\sigma\)-natural classes and of \(\sigma\)-conatural classes are all the same.
## 2. Some kinds of preradicals
Recall the following definitions.
**Definition 2.1**.: A preradical \(\sigma\) is
1. left exact if and only if \(\sigma(A)=A\cap\sigma(B)\) whenever \(A\) is a submodule of \(B\); equivalently, if and only if \(\sigma\) is idempotent and its pretorsion class \(\mathbb{T}_{\sigma}\) is hereditary.
2. cohereditary if \(\sigma\) preserves epimorphisms, this is equivalent to the pretorsion-free class of \(\sigma\), \(\mathbb{F}_{\sigma}\) being closed under quotients and it is also equivalent to \(\sigma(M)=\sigma(R)M\), for each \(M\).
3. splitting in \(M\) if \(\sigma(M)\leq_{\oplus}M\).
4. splitting if it splits in each module.
5. cosplitting if it is hereditary and cohereditary.
6. stable if it splits in each injective module.
7. costable if it splits in each projective module.
8. centrally splitting if it has a complement \(\sigma^{\prime}\) in \(R\)-pr, or equivalently if there exists a central idempotent \(e\in R\) such that \(\sigma(M)=eM\), for each module \(M\).
We include some basic properties of preradicals for the reader's convenience.
**Lemma 2.2**.: _The following statements about a preradical \(\sigma\) are equivalent._
1. \(\sigma\) _is cohereditary._
2. \(\sigma=\sigma(R)\cdot(-)\)_, i.e._ \(\sigma(M)=\sigma(R)\cdot(M)\)_, for each module_ \(M\)_._
3. \(\sigma\) _is a radical and it pretorsion free class_ \(\mathbb{F}_{\sigma}\) _is closed under quotients._
Proof.: \((1)\Rightarrow(2)\) Let \(M\) be a module and let \(f:R\to M\) be a morphism, \(f=(-)\cdot x\) for some \(x\in M\). Then we have that the epimorphism \(R\to Rx\) restricts to an epimorphism \(\sigma(R)\rightarrow\sigma(Rx)\), thus \(\sigma(R)x=\sigma(Rx)\). Now, take the epimorphism \(R^{(M)}\twoheadrightarrow M\) induced by the family of morphisms
\(\{(-)\cdot x:R\to M\}_{x\in M}\); then we have the epimorphism \(\sigma(R^{(M)})\twoheadrightarrow\sigma(M)\). Thus \(\sigma(M)=\sum_{x\in M}\sigma(R)x=\sigma(R)M\).
(2) \(\Rightarrow\) (1) In general, if \(I\) is an ideal of \(R\) then \(I\cdot(-)\) is a preradical preserving epimorphisms, as is immediately verified. Moreover, \(I\left(M/IM\right)=0\) shows that \(I\cdot(-)\) is a radical.
(2) \(\Rightarrow\) (3) Assume that \(f:M\twoheadrightarrow N\) is an epimorphism with \(M\in\mathbb{F}_{\sigma}\) then we have an epimorphism \(0=\sigma(M)\twoheadrightarrow\sigma(N)\). \(\sigma\) is a radical as it was noted in the above argument.
(3) \(\Rightarrow\) (1) Let \(f:M\twoheadrightarrow N\) be an epimorphism. \(f\) induces an epimorphism \(\overline{f}:M/\sigma(M)\twoheadrightarrow N/f\left(\sigma(M)\right).\) Thus we have the situation of the following diagram.
As \(\sigma\) is a radical, it follows that \(\sigma\left(M/\sigma(M)\right)=0\). Now, as \(\mathbb{F}_{\sigma}\) is a class closed under taking quotients, it follows that \(\sigma\left(N/f\left(\sigma(M)\right)\right)=0\). Then, as \(f\left(\sigma(M)\right)\leq N\)and as \(\sigma\) is a radical, it follows that \(\sigma\left(N/f\left(\sigma(M)\right)\right)=\sigma(N)/f\left(\sigma(M)\right)\) (see, [11, Lemma 1.1, Chap. VI]). So, \(\sigma(N)/f\left(\sigma(M)\right)=0\) and hence \(\sigma(M)=f\left(\sigma(M)\right).\) Therefore \(\sigma\) is cohereditary.
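To see how the equivalent conditions of Lemma 2.2 can fail, take \(R=\mathbb{Z}\) and the usual torsion preradical \(t\) (a standard example, recorded here only as an illustration):

\[t(\mathbb{Z})\cdot(\mathbb{Z}/2\mathbb{Z})=0\neq\mathbb{Z}/2\mathbb{Z}=t(\mathbb{Z}/2\mathbb{Z}),\]

so \(t\neq t(\mathbb{Z})\cdot(-)\) and \(t\) is not cohereditary; correspondingly, \(\mathbb{F}_{t}\) (the torsion-free groups) is not closed under quotients, as the epimorphism \(\mathbb{Z}\twoheadrightarrow\mathbb{Z}/2\mathbb{Z}\) shows.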
**Lemma 2.3**.: _A preradical \(\sigma\) is hereditary and cohereditary if and only if \(\sigma=\sigma(R)\cdot(-)\) and the ideal \(\sigma(R)\) satisfies that \(x\in\sigma(R)x\), for each \(x\in\sigma(R)\)._
Proof.: \(\Rightarrow\)] We have seen that \(\sigma\) cohereditary implies \(\sigma=\sigma(R)\cdot(-)\). On the other hand, as \(\sigma\) is hereditary, it is idempotent, thus \(\sigma(R)\) is idempotent and \(\mathbb{T}_{\sigma}\) is a hereditary class. So if \(x\in\sigma(R)\) then \(Rx\leq\sigma(R)\in\mathbb{T}_{\sigma}\); applying \(\sigma\), \(x\in Rx=\sigma(Rx)=\sigma(R)Rx=\sigma(R)x\).
\(\Leftarrow\)] Conversely, if \(\sigma\) is a cohereditary radical and \(x\in\sigma(R)x\), for each \(x\in\sigma(R)\), then \(\sigma(R)\leq\sigma(R)\sigma(R)\leq\sigma(R).\) Thus \(\sigma\) is an idempotent radical. We only have to show that \(\mathbb{T}_{\sigma}\) is a hereditary class, which is equivalent to \(\mathbb{F}_{\sigma}\) being closed under injective hulls. Suppose that \(M\in\mathbb{F}_{\sigma}\) but \(E(M)\notin\mathbb{F}_{\sigma}\). Then \(\sigma(E(M))\neq 0\) and, since \(M\) is essential in \(E(M)\), \(M\cap\sigma(R)(E(M))\neq 0\). Then there exists \(0\neq m\in M\cap\sigma(R)(E(M))\), so \(m=a_{1}x_{1}+\cdots+a_{n}x_{n}\), with \(a_{i}\in\sigma(R)\) and \(x_{i}\in E(M)\); note that \(\sigma(R)m\leq\sigma(R)M=\sigma(M)=0\). Let us take such an \(m\) with \(n\) least possible. Notice that \(n\neq 1\), because if \(0\neq m=a_{1}x_{1}\) then \(a_{1}=b_{1}a_{1}\) for some \(b_{1}\in\sigma(R)\), by hypothesis. So, \(0\neq m=b_{1}a_{1}x_{1}=b_{1}m=0\), a contradiction. Now, in \(m=a_{1}x_{1}+\cdots+a_{n}x_{n}\), we can write \(a_{i}=b_{i}a_{i}\) for some \(b_{i}\in\sigma(R)\). Note that \(m=b_{1}a_{1}x_{1}+\cdots+b_{n}a_{n}x_{n}\) and that \(0=b_{1}m=b_{1}a_{1}x_{1}+\cdots+b_{1}a_{n}x_{n}\); thus, subtracting, we have that \(m=m-b_{1}m=(b_{2}-b_{1})a_{2}x_{2}+\cdots+(b_{n}-b_{1})a_{n}x_{n}\) with fewer than \(n\) summands, contradicting the choice of \(m\) and \(n\). This proves that \(\mathbb{F}_{\sigma}\) is closed under taking injective hulls, and this completes the proof.
**Lemma 2.4**.: _A preradical \(\sigma\) is exact, i.e. \(\sigma\) preserves exact sequences of modules if and only if it is hereditary and cohereditary._
Proof.: If \(\sigma\) is exact, it is clearly left exact and right exact, thus it is hereditary and cohereditary. If \(\sigma\) is cohereditary and hereditary then it preserves epimorphisms and it is left exact. Thus \(\sigma\) preserves short exact sequences and it is well known that this implies that \(\sigma\) is an exact functor (see [11, Chap. I, §5, Lemma 5.1]).
**Lemma 2.5**.: _For a preradical \(\sigma\) the following conditions are equivalent:_
1. \(\sigma\) _is exact and_ \(\sigma(R)\) _is a direct summand of_ \(R\)_._
2. \(\sigma\) _centrally splits._
Proof.: 1) \(\Rightarrow\) 2) As \(\sigma\) is exact then \(\sigma=\sigma(R)\cdot(-)\) and \(\sigma(R)\) is an idempotent ideal having the property that \(x\in\sigma(R)x,\forall x\in\sigma(R)\). Let us call \(\rho\) the preradical which assigns to a module \(M\) the submodule \(\{x\in M\mid\sigma(R)x=0\}\). \(\rho\) is a left exact radical whose torsion class \(\mathbb{T}_{\rho}\) is closed under taking products. \(\rho(R)\) is a two sided ideal. As \(R=\sigma(R)\oplus J\) for some left ideal \(J\), by hypothesis, and as \(\sigma(R)J\subseteq\sigma(R)\cap J=0\) then, \(J\subseteq\rho(R)\). Thus \(R=\sigma(R)\oplus\rho(R)\). Then \(\sigma(R)=Re\) with \(e\) a central idempotent. Thus \(\sigma=\sigma(R)\cdot(-)=e\cdot(-)\). Hence \(\sigma\) centrally splits.
2) \(\Rightarrow\) 1) This follows from Lemma 2.3.
**Lemma 2.6**.: _The following statements are equivalent_
1. \(\sigma\) _is exact and stable._
2. \(\sigma\) _centrally splits._
3. \(\sigma\) _is exact and costable._
Proof.: It is clear that if \(\sigma\) centrally splits, then it is exact. Now, as \(\sigma(M)\) is a direct summand of \(M\) for each module \(M\), \(\sigma\) is stable and costable.
Conversely, if \(\sigma\) is exact and stable, then \(\sigma(E(R))\hookrightarrow E(R)\) splits, thus \(\sigma(R)\hookrightarrow R\) also splits. And if \(\sigma\) is exact and costable, then \(\sigma(R)\hookrightarrow R\) splits, because \(R\) is a projective module. We conclude using Lemma 2.5.
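A minimal example of the situation described in Lemmas 2.5 and 2.6 (a standard construction, recorded here only as an illustration): let \(R=R_{1}\times R_{2}\) and let \(e=(1,0)\), a central idempotent. Then

\[\sigma(M)=eM,\qquad\sigma^{\prime}(M)=(1-e)M,\qquad M=\sigma(M)\oplus\sigma^{\prime}(M)\ \text{ for every }M\in R\text{-Mod},\]

so \(\sigma=e\cdot(-)\) is exact, stable and costable, and it centrally splits with complement \(\sigma^{\prime}=(1-e)\cdot(-)\).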
It is a known fact that the class of left exact preradicals, which we will denote \(R\)-lep (for left exact preradicals), is in bijective correspondence with the set of linear filters for the ring and also with the class of Wisbauer classes. A Wisbauer class is a class of modules closed under taking submodules, quotients and direct sums. Thus \(R\)-lep is a complete lattice. We denote by \(R\)-pr the big lattice of preradicals in \(R\)-Mod.
## 3. \(\sigma\)-classes of modules
In [5], the authors introduced lattices of module classes induced by a preradical \(\sigma\). Namely, they introduced \(\sigma\)-hereditary classes, \(\sigma\)-cohereditary classes, \(\sigma\)-natural classes and \(\sigma\)-conatural classes and showed that they form (big) lattices, which were denoted \(\sigma\)-(\(R\)-her), \(\sigma\)-(\(R\)-coher), \(\sigma\)-(\(R\)-nat) and \(\sigma\)-(\(R\)-conat), respectively. We recall the definitions and some relevant results for the sequel.
**Definition 3.1**.: Let \(\sigma\in R\)-pr. A class \(\mathcal{C}\) of \(R\)-modules is a \(\boldsymbol{\sigma}\)**-hereditary class** if it satisfies the following conditions:
1. \(\mathbb{F}_{\sigma}\subseteq\mathcal{C}\).
2. For each \(M\in\mathcal{C}\) and each \(N\leq M\) it follows that \(\sigma(N)\in\mathcal{C}\).
We denote by \(\boldsymbol{\sigma}\)**-(R-her)** the collection of all \(\sigma\)-hereditary classes.
Following Golan, we call the set of pseudocomplements of a lattice \(\mathcal{L}\) with a smallest element the skeleton of \(\mathcal{L}\), and denote it \(\mathrm{Skel}(\mathcal{L})\).
**Lemma 3.2**.: _[_5_, Lemma 1]_ _Let \(\sigma\in R\)-pr. The collection \(\sigma\)-(\(\mathrm{R}\)-her) is a pseudocomplemented big lattice. Moreover, if \(\sigma\) is idempotent, then \(\sigma\)-(\(\mathrm{R}\)-her) is strongly pseudocomplemented and for each \(\mathcal{C}\in\sigma\)-(\(\mathrm{R}\)-her) its pseudocomplement, denoted by \(\mathcal{C}^{\perp_{\leq\sigma}},\) is given by_
\[\mathcal{C}^{\perp_{\leq\sigma}}=\left\{M\in R\text{-Mod }|\ \forall\ N\leq M,\ \sigma(N)\in\mathcal{C}\Rightarrow N\in\mathbb{F}_{\sigma}\right\}.\]
_In addition, \(Skel\left(\sigma\text{-}(\mathrm{R}\text{-her})\right)\) is a boolean lattice._
Given the above lemma, we can make the following definition.
**Definition 3.3**.: Let \(R\) be a ring and let \(\sigma\in R\)-pr be left exact and stable. We define the lattice of \(\sigma\)-natural classes, denoted by \(\boldsymbol{\sigma}\)**-(R-nat)**, as
\[\sigma\text{-}\left(R\text{-}\text{nat}\right)=Skel(\sigma\text{-}(\mathrm{R }\text{-her})).\]
**Definition 3.4**.: Let \(\sigma\in R\)-pr. A class \(\mathcal{C}\) of \(R\)-modules is a \(\boldsymbol{\sigma}\)**-cohereditary class** if it satisfies the following conditions:
1. \(\mathbb{F}_{\sigma}\subseteq\mathcal{C}\).
2. For each \(M\in\mathcal{C}\) and each epimorphism \(M\twoheadrightarrow L\) we have that \(\sigma(L)\in\mathcal{C}\).
We denote the collection of all \(\sigma\)-cohereditary classes by \(\boldsymbol{\sigma}\)**-(R-coher)**.
The following proposition is a generalization of [2, Proposition 3.6].
**Proposition 3.5**.: _[_5_, Proposition 11]_ _Let \(\sigma\in R\)-pr. If \(\sigma\) is idempotent and cohereditary, then \(\sigma\)-(\(\mathrm{R}\)-coher) is a strongly pseudocomplemented big lattice and for each \(\mathcal{C}\in\sigma\)-(\(\mathrm{R}\)-coher) the pseudocomplement, of \(\mathcal{C}\), denoted by \(\mathcal{C}^{\perp_{/\sigma}},\) is given by_
\[\mathcal{C}^{\perp_{/\sigma}}=\left\{M\in R\text{-Mod }|\ \forall\ M \twoheadrightarrow L,\ \sigma(L)\neq 0\Rightarrow\sigma(L)\notin\mathcal{C}\right\}\cup \mathbb{F}_{\sigma}.\]
_In addition, \(Skel\left(\sigma\text{-}(\mathrm{R}\text{-coher})\right)\) is a boolean lattice._
**Definition 3.6**.: Let \(R\) be a ring and let \(\sigma\in R\)-pr be exact and costable. We define the lattice of \(\sigma\)-conatural classes, denoted by \(\boldsymbol{\sigma}\)**-(R-conat)** as
\[\sigma\text{-(R-conat)}=Skel(\sigma\text{-(R-coher)}).\]
Later, in [6], the authors introduced the lattices \(\sigma\)-(R-TORS) and \(\sigma\)-(R-tors) as follows:
**Definition 3.7**.: Let \(\sigma\in R\)-pr. A class \(\mathcal{C}\) of \(R\)-modules is a \(\boldsymbol{\sigma}\)**-torsion class** if \(\mathcal{C}\) is a \(\sigma\)-cohereditary class closed under taking arbitrary direct sums and under taking extensions; if it is also a \(\sigma\)-hereditary class, then we will say that \(\mathcal{C}\) is a **hereditary \(\sigma\)-torsion class**.
**Definition 3.8**.: Let \(\sigma\in R\)-pr. We denote the collection of all \(\sigma\)-torsion classes by \(\boldsymbol{\sigma}\)**-(R-TORS)** and by \(\boldsymbol{\sigma}\)**-(R-tors)** the collection of all hereditary \(\sigma\)-torsion classes.
In [6], the authors define the following assignations relative to a preradical \(\sigma\):
1. \(\sigma^{*}:\mathcal{P}(R\text{-Mod})\rightarrow\mathcal{P}(R\text{-Mod})\) given by \(\sigma^{*}(\mathcal{C})=\{\sigma(M)\ |\ M\in\mathcal{C}\}\) and
2. \(\overleftarrow{\sigma}:\mathcal{P}(R\text{-Mod})\rightarrow\mathcal{P}(R \text{-Mod})\) given by \(\overleftarrow{\sigma}(\mathcal{C})=\{M\ |\ \sigma(M)\in\mathcal{C}\}\).
In addition, it holds that \(\sigma^{*}\left(\overleftarrow{\sigma}\left(\sigma^{*}\left(\mathcal{C}\right) \right)\right)=\sigma^{*}(\mathcal{C})\) and \(\overleftarrow{\sigma}\left(\sigma^{*}\left(\overleftarrow{\sigma}\left( \mathcal{C}\right)\right)\right)=\overleftarrow{\sigma}(\mathcal{C})\), for each \(\mathcal{C}\in\mathcal{P}(R\text{-Mod})\).
The authors showed the following results.
**Proposition 3.9**.: _[_6_, Corollary 3.10]_ _If \(\sigma\) is a radical, then_
\[\sigma\text{-(R-\emph{tors})}=\{\mathcal{C}\in R\text{-\emph{tors}}\ |\ \mathbb{F}_{\sigma}\subseteq\mathcal{C}\}.\]
**Theorem 3.10**.: _[_6_, Theorem 3.13]_ _If \(\sigma\) is an exact preradical, then_
\[\sigma\text{-(R-\emph{TORS})}=\{\overleftarrow{\sigma}\left(\mathcal{C} \right)\ |\ \mathcal{C}\in R\text{-\emph{TORS}}\ \}.\]
\(\sigma\)-retractable modules and \(\sigma\)-(R-Mod)-retractable rings were also introduced in [6].
**Definition 3.11**.: Let \(R\) be a ring and \(\sigma\in R\text{-}pr\). We say that an \(R\)-module \(M\) is \(\boldsymbol{\sigma}\)**-retractable** if for every submodule \(N\leq M\) with \(\sigma(N)\neq 0\) we have that \(Hom(M,\sigma(N))\neq 0\) and we will say that \(R\) is a \(\boldsymbol{\sigma}\)**-(R-Mod)-retractable ring** if every \(R\)-module is \(\sigma\)-retractable.
In [8], the authors introduced the condition \((HH)\) and characterized the rings satisfying it, via the retractability and properties of lattices of module classes, namely, the lattice of natural classes, the lattice of conatural classes, and the lattice of hereditary torsion theories. In the following, we extend some of these concepts and some of these results.
**Definition 3.12**.: Let \(R\) be a ring and \(\sigma\in R\text{-}pr\). We will say that \(R\) satisfies the **condition (\(\boldsymbol{\sigma}\)-HH)** if for any modules \(M,N\) with \(\sigma(M)\neq 0,\ \sigma(N)\neq 0,\) it holds that \(Hom(\sigma(N),M)\neq 0\Leftrightarrow Hom(M,\sigma(N))\neq 0.\)
_Remark 3.13_.: Note that if \(\sigma=1_{R\text{-}Mod}\), then we have condition \((HH)\) (see [8]), i.e. for any \(R\)-modules \(M\) and \(N\) we have that
\[Hom(N,M)\neq 0\Leftrightarrow Hom(M,N)\neq 0.\]
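A simple family of examples satisfying the (\(\sigma\)-HH) condition (an illustration under the stated choices, not taken from [8]): let \(R=k\times S\), with \(k\) a field and \(S\) any ring, and let \(\sigma=e\cdot(-)\) for the central idempotent \(e=(1,0)\). Every morphism between an \(e\)-torsion module and a \((1-e)\)-torsion module is zero, so for any \(M,N\) with \(\sigma(M)\neq 0\neq\sigma(N)\) one has

\[Hom\left(\sigma(N),M\right)\cong Hom_{k}\left(\sigma(N),\sigma(M)\right)\neq 0\qquad\text{and}\qquad Hom\left(M,\sigma(N)\right)\cong Hom_{k}\left(\sigma(M),\sigma(N)\right)\neq 0,\]

so both conditions hold simultaneously and \(R\) satisfies the (\(\sigma\)-HH) condition.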
Recall that a module \(M\) is a **parainjective module** if for each module \(N\), whenever there is some monomorphism \(f\in\operatorname{Hom}(M,N)\) there exists an epimorphism \(g\in\operatorname{Hom}(N,M)\). A module \(M\) is a **paraprojective module** if for each module \(N\), whenever there is some epimorphism \(f\in\operatorname{Hom}(N,M)\) there exists a monomorphism \(g\in\operatorname{Hom}(M,N)\).
\(R\)-\(Simp\) will denote a family of representatives of the isomorphism classes of simple left \(R\)-modules.
_Remark 3.14_.: Let \(R\) be a ring and \(\sigma\in R\)-pr. If \(R\) satisfies the (\(\sigma\)-HH) condition, then:
1. \(R\) is a \(\sigma\)- (R-Mod)-retractable ring.
2. For any \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\), it follows that \(S\) is parainjective.
3. For any \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\), it follows that \(S\) is paraprojective.
Proof.:
1. It is clear.
2. Let \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) and let \(M\) be an \(R\)-module such that \(S\) embeds in \(M\). Since \(S\in\mathbb{T}_{\sigma}\), it follows that \(\sigma(S)=S\), so \(Hom(\sigma(S),M)\neq 0\). Then \(Hom(M,\sigma(S))\neq 0\), since \(R\) satisfies the (\(\sigma\)-HH) condition. Thus \(Hom(M,S)\neq 0\), i.e., \(S\) is a quotient of \(M\), whence \(S\) is parainjective.
3. Let \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) and let \(M\) be an \(R\)-module such that \(S\) is a quotient of \(M\). It follows that \(Hom(M,\sigma(S))\neq 0\), since \(S=\sigma(S)\) because \(S\in\mathbb{T}_{\sigma}\). Then, from the (\(\sigma\)-HH) condition, it follows that \(Hom(\sigma(S),M)\neq 0\), i.e. \(Hom(S,M)\neq 0\), whence \(S\) embeds in \(M\). Therefore, \(S\) is paraprojective.
**Definition 3.15**.: Let \(R\) be a ring and \(\sigma\in R\text{-}lep.\) We will say that an \(R\)-module \(M\neq 0\) is \(\sigma\)-atomic if every nonzero submodule \(N\) of \(M\) with \(\sigma(N)\neq 0\) contains a submodule \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\), and we will say that \(R\) is a left \(\sigma\)-semiartinian ring if every nonzero \(R\)-module is \(\sigma\)-atomic.
Recall the following definition, which was introduced in [6].
**Definition 3.16**.: Let \(R\) be a ring and take an idempotent preradical \(\sigma\). We say that a nonzero \(R\)-module \(M\) is \(\sigma\)-coatomic if each nonzero quotient \(L\) of \(M\) with \(\sigma(L)\neq 0\) has a quotient \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\). We will say that \(R\) is a left \(\sigma\)-max ring if each nonzero \(R\)-module is \(\sigma\)-coatomic.
**Theorem 3.17**.: _Let \(R\) be a ring and let \(\sigma\) be a left exact preradical. If each \(S\in R\)-\(Simp\cap\mathbb{T}_{\sigma}\) is parainjective, then \(R\) is a left \(\sigma\)-max ring._
Proof.: Let \(M\neq 0\) be an \(R\)-module and \(L\) be a non zero quotient of \(M,\) with \(\sigma(L)\neq 0.\) Let us consider a nonzero cyclic submodule \(Rx\) of \(\sigma(L)\). As \(\sigma\) is left exact, then \(\sigma\) is idempotent and \(\mathbb{T}_{\sigma}\) is a hereditary class. So \(\sigma(L)\in\mathbb{T}_{\sigma}\), and thus \(Rx\in\mathbb{T}_{\sigma}.\) Now, if we consider a simple quotient of \(Rx\), \(S\), say, we have that \(S\in\mathbb{T}_{\sigma}\), thus \(S\in R\)-\(Simp\cap\mathbb{T}_{\sigma}\). Taking the inclusion of \(S\) in its injective hull \(E(S),\) we obtain a non zero morphism from \(Rx\) to \(E(S).\) This morphism has a non zero extension \(f:L\to E(S).\) Note that \(S\leq f(L)\) because \(S\) is essential in \(E(S).\) As \(S\) is parainjective, it is a quotient of \(f(L)\) and thus \(S\) is a quotient of \(L.\) See the diagram below.
**Theorem 3.18**.: _Let \(R\) be a ring and let \(\sigma\) be a left exact preradical. If each simple module \(S\) in \(\mathbb{T}_{\sigma}\) is paraprojective, then \(R\) is a left \(\sigma\)-semiartinian ring._
Proof.: Let \(M\) be an \(R\)-module and let \(N\) be a non zero submodule of \(M\) with \(\sigma(N)\neq 0\). Let us take a non zero cyclic submodule \(Rx\) of \(\sigma(N).\) As \(\sigma\) is left exact, then \(\sigma(N)\in\mathbb{T}_{\sigma}\), thus \(Rx\in\mathbb{T}_{\sigma}\).
Now, if \(S\) is a simple quotient of \(Rx\), then we have that \(S\in\mathbb{T}_{\sigma}\). By hypothesis, \(S\) is paraprojective, thus \(S\) embeds in \(Rx\) and from this, \(S\) embeds in \(M.\)
**Corollary 3.19**.: _Let \(R\) be a ring and let \(\sigma\in R\)-pr be a left exact preradical. If \(R\) is a ring satisfying the \((\sigma\)-HH\()\)-condition, then \(R\) is a left \(\sigma\)-max and a left \(\sigma\)-semiartinian ring._
Proof.: It follows from (2) and (3) of Remark 3.14 and from Theorems 3.17 and 3.18.
**Theorem 3.20**.: _Let \(R\) be a ring and let \(\sigma\) be a left exact preradical. The following statements are equivalent:_
1. \(R\) _has the_ \(\left(\sigma\text{-HH}\right)\)_-condition._
2. \(R\) _is a_ \(\sigma\)_-_ \(\left(\text{R-Mod}\right)\)_-retractable ring and each_ \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) _is paraprojective._
3. _Any_ \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) _is paraprojective and parainjective._
Proof.: \(\left(1\right)\Rightarrow\left(2\right)\) It follows from \(\left(1\right)\) and \(\left(3\right)\) of Remark 3.14.
\(\left(2\right)\Rightarrow\left(3\right)\) This is clear.
\(\left(3\right)\Rightarrow\left(1\right)\) Let \(M\) and \(N\) be two \(R\)-modules such that \(\sigma(M),\sigma(N)\neq 0\) and let us suppose that \(Hom\left(\sigma(N),M\right)\neq 0\); take a nonzero \(f\in Hom\left(\sigma(N),M\right)\). We have the following diagram
As \(\sigma\) is left exact, we have that \(\sigma(N)\in\mathbb{T}_{\sigma}\), and thus \(0\neq f\left(\sigma(N)\right)\in\mathbb{T}_{\sigma}\). Now, from Theorem 3.18, there exists \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\), a submodule of \(f\left(\sigma(N)\right)\), which is parainjective. Thus \(S\) is a quotient of \(M\). Besides, \(S\) is a quotient of \(f^{-1}(S)\), hence \(S\) embeds in \(f^{-1}(S)\), because \(S\) is paraprojective. Thus we have that \(Hom\left(M,\sigma(N)\right)\neq 0\).
Let us now see that if \(Hom\left(M,\sigma(N)\right)\neq 0\), then \(Hom\left(\sigma(N),M\right)\neq 0.\) To do so, let us consider \(0\neq f\in Hom\left(M,\sigma(N)\right).\) Given that \(\sigma\in R\)-lep and \(0\neq f(M)\leq\sigma(N)\), then \(f(M)\in\mathbb{T}_{\sigma}\), i.e., \(\sigma\left(f(M)\right)=f(M)\neq 0.\) Thus, by Theorem 3.18, there exists \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) such that \(S\) embeds in \(\sigma\left(f(M)\right).\)
Then, taking into account that every \(\sigma\)-torsion simple module is parainjective, we have that \(S\) is a quotient of \(\sigma(N).\) Finally, using the fact that every \(\sigma\)-torsion simple module is paraprojective, we have that \(S\) embeds in \(f^{-1}(S)\leq M\); thus we obtain a nonzero morphism from \(\sigma(N)\) to \(M.\)
A ring \(R\) is called a \(\boldsymbol{BKN}\)**-ring** if between any two nonzero modules there exists a nonzero morphism. Such rings are introduced and characterized in [4]. In the following, we generalize these rings.
**Definition 3.21**.: Let \(R\) be a ring and let \(\sigma\in R\)-pr. We say that \(R\) is a \(\boldsymbol{\sigma}\)**- (BKN)-ring** if for any \(R\)-modules \(M\) and \(N\) with \(\sigma(M),\sigma(N)\neq 0,\) it holds that
\[Hom\left(\sigma(N),\sigma(M)\right)\neq 0.\]
Recall that a ring \(R\) is called a **left local ring** if there is only one isomorphism type of simple left \(R\)-modules. In the following definition we generalize this concept.
**Definition 3.22**.: Let \(R\) be a ring and \(\sigma\in R\)-pr. We say that \(R\) is a **left \(\sigma\)-local ring** if
\[|R\text{-}\mathrm{Simp}\cap\mathbb{T}_{\sigma}|=1.\]
**Theorem 3.23**.: _Let \(R\) be a ring and let \(\sigma\) be an exact and costable preradical. The following statements are equivalent:_
1. \(R\) _is a_ \(\sigma\)_-_ (BKN)_-ring._
2. \(R\) _satisfies the_ \((\sigma\text{-}\mathrm{HH})\)_-condition and it is a left_ \(\sigma\)_-local ring._
Proof.: \((1)\Rightarrow(2)\) Let \(M\) and \(N\) be two \(R\)-modules such that \(\sigma(M),\sigma(N)\neq 0\) and with \(Hom\left(\sigma(N),M\right)\neq 0.\) As \(R\) is a \(\sigma\)- (BKN)-ring, we have that \(Hom(\sigma(M),\sigma(N))\neq 0\). As \(\sigma\) is exact and costable, then \(\sigma(M)\) is a quotient of \(M.\) Hence \(Hom(M,\sigma(N))\neq 0.\)
Now, if \(S_{1},S_{2}\in R\text{-}\mathrm{Simp}\cap\mathbb{T}_{\sigma},\) we have that \(Hom\left(\sigma(S_{1}),\sigma(S_{2})\right)\neq 0,\) thus \(Hom(S_{1},S_{2})\neq 0\). Then \(S_{1}\cong S_{2}.\) Hence \(R\) is a left \(\sigma\)-local ring.
\((2)\Rightarrow(1)\) Let \(M\) and \(N\) be two \(R\)-modules such that \(\sigma(M),\sigma(N)\neq 0.\) From Corollary 3.19, we have that \(R\) is \(\sigma\)-semiartinian, so there exist \(S_{1},S_{2}\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) such that \(S_{1}\) embeds in \(\sigma(M)\) and \(S_{2}\) embeds in \(\sigma(N).\) As \(R\) satisfies the (\(\sigma\)-HH) condition, it follows that \(Hom\left(\sigma(M),S_{1}\right)\neq 0.\) Finally, as \(R\) is a left \(\sigma\)-local ring, we see that \(S_{1}\cong S_{2}\) and from this, we get \(Hom\left(\sigma(M),\sigma(N)\right)\neq 0.\)
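As an illustration of Theorem 3.23, consider again the illustrative example \(R=k\times S\) with \(\sigma=e\cdot(-)\), \(e=(1,0)\), sketched after Remark 3.13: the simple \(\sigma\)-torsion modules are exactly the simple modules annihilated by \(1-e\), i.e. the simple \(k\)-modules viewed as \(R\)-modules, so

\[R\text{-}\mathrm{Simp}\cap\mathbb{T}_{\sigma}=\{k\}\]

up to isomorphism. Hence \(R\) is a left \(\sigma\)-local ring and, since it also satisfies the (\(\sigma\)-HH) condition, Theorem 3.23 shows that it is a \(\sigma\)-(BKN)-ring.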
The following definition appears in [3, Definition 22].
**Definition 3.24**.: A module class \(\mathcal{C}\) satisfies the condition _(\(CN\))_ if whenever each nonzero quotient of an arbitrary module \(M\) shares a nonzero quotient with some element of \(\mathcal{C}\), it happens that \(M\) belongs to \(\mathcal{C}\).
In the following definition we generalize the \((CN)\) condition.
**Definition 3.25**.: Assume \(\mathcal{C}\subseteq R\text{-}\mathrm{Mod}\). \(\mathcal{C}\) satisfies the **(\(\sigma\)-\(CN\))-condition** if
\[\left(\begin{array}{c}\text{for each epimorphism}\ \ M\twoheadrightarrow L\text{ with }\sigma(L)\neq 0,\\ \text{there exist }C\in\mathcal{C},\ N\in R\text{-}\mathrm{Mod},\text{ with }\sigma(N)\neq 0,\\ \text{and epimorphisms}\ \ L\twoheadrightarrow N\twoheadleftarrow C\end{array} \right)\Longrightarrow M\in\mathcal{C}.\]
**Lemma 3.26**.: _Let \(\mathcal{C}\) be the class of \(R\)-modules satisfying the \((\sigma\text{-}CN)\)-condition and let \(C\in\mathcal{C}.\) If \(L\) is a quotient of \(C\) with \(\sigma(L)\neq 0\), then \(L\in\mathcal{C}.\)_
Proof.: Let \(H\) be a quotient of \(L\) with \(\sigma(H)\neq 0.\) Note that \(H\) is also a quotient of \(C\). As \(\mathcal{C}\) satisfies (\(\sigma\text{-}CN\)), it follows that \(L\in\mathcal{C}.\)
The following theorem is a generalization of [3, Theorem 22].
**Theorem 3.27**.: _Let \(\sigma\in R\)-pr be an exact and costable preradical and \(\mathcal{C}\subseteq R\)-Mod. The following statements are equivalent for \(\mathcal{C}\):_
1. \(\mathcal{C}\in\sigma\)_-(_R_-conat)._
2. \(\mathcal{C}\) _satisfies_ \((\sigma\)_-_\(CN)._
3. \(\mathcal{C}\in\mathcal{L}_{/_{\sigma}}\) _and_ \(\mathcal{C}=\left(\mathcal{C}^{\perp_{/_{\sigma}}}\right)^{\perp_{/_{\sigma}}}.\)__
Proof.: \((1)\Rightarrow(2)\) Let \(\mathcal{C}\in\sigma\)-\(\,(R\)-conat) and \(M\) be an \(R\)-module. As \(\mathcal{C}\in\sigma\)-\(\,(R\)-conat), then \(\mathcal{C}=\mathcal{A}^{\perp_{/_{\sigma}}},\) for some \(\mathcal{A}\in\mathcal{L}_{/_{\sigma}},\) thus by [5, Proposition 11], we have
\[\mathcal{A}^{\perp_{/_{\sigma}}}=\{M\in R\text{-Mod}\ |\ \forall\ M\twoheadrightarrow L,\ \sigma(L)\neq 0\Rightarrow\sigma(L)\notin\mathcal{A}\}\cup \mathbb{F}_{\sigma}.\]
Note that if for each nonzero quotient \(L\) of \(M\) with \(\sigma(L)\neq 0\) it holds that \(\sigma(L)\notin\mathcal{A},\) then \(M\in\mathcal{A}^{\perp_{/_{\sigma}}},\) i.e., \(M\in\mathcal{C}.\) Let us assume that \(L\) is a nonzero quotient of \(M\), with \(\sigma(L)\neq 0,\) such that \(\sigma(L)\in\mathcal{A}\) and that there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod, with \(\sigma(N)\neq 0,\) with epimorphisms \(L\twoheadrightarrow N\twoheadleftarrow C.\) As \(C\in\mathcal{C}\) and \(C\twoheadrightarrow N\) is an epimorphism with \(\sigma(N)\neq 0,\) we get that \(\sigma(N)\notin\mathcal{A}.\)
Besides for the epimorphism \(f:L\twoheadrightarrow N,\) as \(\sigma\) is exact, then \(\sigma\) is idempotent and cohereditary, thus \(\sigma(N)=\sigma\left(\sigma(N)\right)\) and \(f_{|}:\sigma(L)\twoheadrightarrow\sigma(N)\) is also an epimorphism. As \(\mathcal{A}\in\mathcal{L}_{/_{\sigma}}\) it follows that \(\sigma(N)\in\mathcal{A},\) a contradiction. Thus \(\sigma(L)\notin\mathcal{A}\) and hence \(M\in\mathcal{C}.\)
\((2)\Rightarrow(3)\) Let \(\mathcal{C}\) be a class of \(R\)-modules satisfying \((\sigma\)-\(CN).\) Firstly, we show that \(\mathcal{C}\in\mathcal{L}_{/_{\sigma}}.\) For this, let us consider \(C\in\mathcal{C}\) and let \(L\in R\)-Mod be a quotient of \(C\) with \(\sigma(L)\neq 0.\) As \(\sigma\) is an exact and costable preradical, we have that
\[C=\sigma(R)C\oplus C^{\prime}=\sigma(C)\oplus C^{\prime}, \tag{3.1}\]
where \(C^{\prime}=\{c\in C\ |\ \sigma(R)c=0\}\), and the analogous decomposition holds for \(L\). Thus \(\sigma(L)\) is a direct summand, hence a quotient, of \(L\), and therefore a quotient of \(C\). Then, by Lemma 3.26, \(\sigma(L)\in\mathcal{C}\).
Now, to show that \(\mathcal{C}=\left(\mathcal{C}^{\perp_{/_{\sigma}}}\right)^{\perp_{/_{\sigma}}},\) let us recall the following description of the double pseudocomplement of \(\mathcal{C},\)
\[\begin{split}\left(\mathcal{C}^{\perp_{/_{\sigma}}}\right)^{ \perp_{/_{\sigma}}}=\{M\in R\text{-Mod}\ |\ \forall M\twoheadrightarrow K,\ \sigma(K)\neq 0\\ \Rightarrow\exists\sigma(K)\twoheadrightarrow L,\ \sigma(L)\neq 0 \text{ and }\sigma(L)\in\mathcal{C}\}\cup\mathbb{F}_{\sigma},\end{split} \tag{3.2}\]
see [5, Definition 4]. So, if \(C\in\mathcal{C}\) and \(K\) is a quotient of \(C\) with \(\sigma(K)\neq 0,\) then as \(\sigma\) is idempotent and cohereditary, we get an epimorphism
\[\sigma(C)\twoheadrightarrow\sigma(K).\]
In the same way as in (3.1), \(\sigma(C)\) is a quotient of \(C,\) thus from Lemma 3.26, \(\sigma(C)\in\mathcal{C}.\) Applying Lemma 3.26 one more time, we obtain that \(\sigma(K)\in\mathcal{C}.\) From (3.2) we get that \(C\in\left(\mathcal{C}^{\perp_{/_{\sigma}}}\right)^{\perp_{/_{\sigma}}}.\) This shows that \(\mathcal{C}\subseteq\left(\mathcal{C}^{\perp_{/_{\sigma}}}\right)^{\perp_{/_{\sigma}}}.\)
For the other inclusion, let us consider \(M\in\left(\mathcal{C}^{\perp_{/\sigma}}\right)^{\perp_{/\sigma}}\) and a quotient \(L\) of \(M\) with \(\sigma(L)\neq 0.\) As \(\sigma(L)\) is a quotient of \(L\), then \(\sigma(L)\) is a quotient of \(M.\) Also \(\sigma\left(\sigma(L)\right)=\sigma(L)\neq 0\). So as \(M\in\left(\mathcal{C}^{\perp_{/\sigma}}\right)^{\perp_{/\sigma}},\) there exists a quotient \(T\) of \(\sigma(L)\) with \(\sigma(T)\neq 0\) and \(\sigma(T)\in\mathcal{C}.\) Finally, as \(\mathcal{C}\) satisfies (\(\sigma\)-\(CN\)), we get that \(M\in\mathcal{C}.\) So, \(\left(\mathcal{C}^{\perp_{/\sigma}}\right)^{\perp_{/\sigma}}\subseteq\mathcal{C},\) hence \(\mathcal{C}=\left(\mathcal{C}^{\perp_{/\sigma}}\right)^{\perp_{/\sigma}}.\)
\((3)\Rightarrow(1)\) This is clear.
_Remark 3.28_.: Let \(\sigma\) be an exact and costable preradical and let \(\mathcal{C}\subseteq R\)-Mod. If \(\xi_{\sigma\text{-}\text{\rm cont}}\left(\mathcal{C}\right)\) denotes the least \(\sigma\)-conatural class containing \(\mathcal{C},\) then
\[\xi_{\sigma\text{-}\text{\rm cont}}\left(\mathcal{C}\right)= \{M\in R\text{-}\text{\rm Mod}\ |\ \forall M\twoheadrightarrow L,\ \sigma(L)\neq 0\text{ there exist }\ C\in\mathcal{C},\] \[N\in R\text{-}\text{\rm Mod},\text{with }\sigma(N)\neq 0,\text{ such that }\ L \twoheadrightarrow N\twoheadrightarrow C\}\cup\mathbb{F}_{\sigma}\]
In [3, Theorem 42] left max rings are characterized via conatural classes. The following theorem generalizes this result.
**Proposition 3.29**.: _Let \(\sigma\) be an exact and costable preradical. The following statements are equivalent:_
1. \(R\) _is a left_ \(\sigma\)_-max ring._
2. _Each_ \(\sigma\)_-conatural class is generated by a family of simple_ \(\sigma\)_-torsion modules._
Proof.: \((1)\Rightarrow(2)\) Let \(\mathcal{C}\) be a non trivial \(\sigma\)-conatural class and let \(\mathcal{S}\subseteq\mathcal{C}\) be the class of all simple \(\sigma\)-torsion modules in \(\mathcal{C}.\) Notice that \(\mathcal{S}\) is non empty because \(R\) is a \(\sigma\)-max ring. Now, if \(0\neq M\in\mathcal{C}\) and \(L\) is a non zero quotient of \(M\) with \(\sigma(L)\neq 0,\) as \(R\) is \(\sigma\)-max, then \(L\) has a simple quotient \(S\in R\text{-}\text{\rm Simp}\cap\mathbb{T}_{\sigma}.\) By Lemma 3.26, \(S\in\mathcal{C}\); hence, by Remark 3.28, we have that \(M\in\xi_{\sigma\text{-}\text{\rm cont}}(\mathcal{S}).\) It follows that \(\mathcal{C}=\xi_{\sigma\text{-}\text{\rm cont}}(\mathcal{S}).\)
\((2)\Rightarrow(1)\) Let \(M\in R\)-Mod and \(L\) be a non zero quotient of \(M\) with \(\sigma(L)\neq 0.\) By hypothesis, there exists \(\mathcal{S}\subseteq R\text{-}\text{\rm Simp}\cap\mathbb{T}_{\sigma}\) such that \(\xi_{\sigma\text{-}\text{\rm cont}}(M)=\xi_{\sigma\text{-}\text{\rm cont}}(\mathcal{S}).\) As \(L\in\xi_{\sigma\text{-}\text{\rm cont}}(M),\) \(L\) has a quotient \(S\in\mathcal{S}.\) Thus \(R\) is a left \(\sigma\)-max ring.
The following theorem generalizes [1, Proposition 2.11].
**Theorem 3.30**.: _Let \(\sigma\) be an exact and costable preradical. The following statements are equivalent:_
1. _Each_ \(S\in R\text{-}\text{\rm Simp}\cap\mathbb{T}_{\sigma}\) _is parainjective._
2. \(\sigma\text{-}(\text{\rm R-}\text{\rm cont})\subseteq\sigma\text{-}(\text{\rm R -}\text{\rm tors}).\)__
3. \(\sigma\text{-}(\text{\rm R-}\text{\rm cont})\subseteq\mathcal{L}_{\leq_{ \sigma}}.\)__
Proof.: \((1)\Rightarrow(2)\) In [6, Proposition 6.5] it is shown that for an exact and costable preradical \(\sigma,\) and for a left \(\sigma\)-max ring, each \(\sigma\)-conatural class is closed under taking direct sums. Now, by hypothesis, each \(S\in R\text{-}\text{\rm Simp}\cap\mathbb{T}_{\sigma}\) is
parainjective, so from Theorem 3.17 we obtain that \(R\) is a left \(\sigma\)-max ring. Thus it suffices to show that any \(\sigma\)-conatural class is a \(\sigma\)-hereditary class.
For this, let us consider \(\mathcal{C}\in\sigma\)-(R-conat), \(M\in\mathcal{C}\) and \(N\leq M\) with \(\sigma(N)\neq 0.\) Let \(S\in R\text{-}\mathrm{Simp}\cap\mathbb{T}_{\sigma}\) be a quotient of \(\sigma(N),\) which exists because \(R\) is a left \(\sigma\)-max ring. Let \(T\) be the push-out of \(\sigma(N)\twoheadrightarrow S\) and \(\sigma(N)\hookrightarrow M,\) as in the following diagram
then \(S\) embeds in \(T.\) Thus, as \(S\) is paranipective by hypothesis, then \(S\) is a simple \(\sigma\)-torsion quotient of \(T.\) Then \(S\) is also a simple \(\sigma\)-torsion quotient of \(M.\) Then any simple \(\sigma\)-torsion quotient of \(\sigma(N)\) belongs to \(\mathcal{C}\). Thus, by Proposition 3.29, \(\sigma(N)\in\mathcal{C}.\) Hence \(\mathcal{C}\) is a \(\sigma\)-hereditary class.
\((2)\Rightarrow(3)\) This is clear.
\((3)\Rightarrow(1)\) Let \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) and let \(M\) be an \(R\)-module such that \(S\) embeds in \(M.\) By hypothesis, we have that \(S\in\xi_{\sigma\text{-conat}}(M).\) From Remark 3.28 it follows that \(S\) is a quotient of \(M,\) so \(S\) is parainjective.
_Remark 3.31_.: If \(\sigma\) is an exact and costable preradical, then for each \(R\)-module \(M\), we have a decomposition \(M=\sigma(M)\oplus M^{\prime},\) where \(M^{\prime}=\{m\in M\mid\sigma(R)m=0\}.\) Notice that \(\sigma\) has a complement \(\sigma^{\prime}\) in \(R\)-pr satisfying
\[\sigma^{\prime}(M)=M^{\prime},\]
for each \(M\in R\)-Mod.
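For a concrete illustration of this decomposition (an example added here for the reader; it is not part of the original argument), take \(R=\mathbb{Z}/6\mathbb{Z},\) the central idempotent \(e=3,\) and \(\sigma(\_)=e\cdot\_\) (a centrally splitting preradical; cf. Remark 3.55 below). For \(M=R\) one gets
\[\sigma(R)=3R=\{0,3\}\cong\mathbb{Z}/2\mathbb{Z},\qquad\sigma^{\prime}(R)=\{m\in R\mid 3m=0\}=\{0,2,4\}\cong\mathbb{Z}/3\mathbb{Z},\]
and indeed \(R=\sigma(R)\oplus\sigma^{\prime}(R).\)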
**Theorem 3.32**.: _Let \(\sigma\) be an exact and costable preradical. If \(R\) satisfies \(\left(\sigma\text{-HH}\right)\) and \(\left(\sigma^{\prime}\text{-HH}\right),\) then \(R\) satisfies \(\left(HH\right)\)._
Proof.: Let \(M\) and \(N\) be two \(R\)-modules and suppose that \(Hom(N,M)\neq 0.\) Let us take a nonzero \(f\in Hom(N,M)\). If \(\nu:M=\sigma(M)\oplus\sigma^{\prime}(M)\rightarrow\sigma(M)\) denotes the natural projection on \(\sigma(M)\) and \(\nu f\neq 0,\) then \(Hom(N,\sigma(M))\neq 0.\) So, by \(\left(\sigma\text{-HH}\right)\), there exists a nonzero morphism \(g:\sigma(M)\to N,\) and thus \(g\nu:M\to N\) is a nonzero morphism, so \(Hom(M,N)\neq 0.\)
Now, if \(\nu f=0,\) then, taking the corestriction of \(f\) to its image, we have that \(f\restriction:N\rightarrow\sigma^{\prime}(M)\) is a nonzero morphism. From condition \(\left(\sigma^{\prime}\text{-HH}\right),\) there exists a nonzero morphism \(h:\sigma^{\prime}(M)\to N.\) Finally, \(h\nu^{\prime}:M\to N\) is a nonzero morphism, where \(\nu^{\prime}:M\rightarrow\sigma^{\prime}(M)\) is the natural projection onto \(\sigma^{\prime}(M).\) Thus, \(Hom(M,N)\neq 0.\)
In [8, Theorem 1.13] it is shown that if \(R\) satisfies condition \(\left(HH\right)\), then each torsion class is a conatural class. The following corollary is a consequence.
**Corollary 3.33**.: _Let \(\sigma\in R\)-pr be exact and costable. If \(R\) satisfies condition \(\left(\sigma\text{-HH}\right)\) and condition \(\left(\sigma^{\prime}\text{-HH}\right),\) then each torsion class is a conatural class._
**Theorem 3.34**.: _Let \(\sigma\in R\)-pr be exact and costable. If \(R\) satisfies condition \(\left(\sigma\text{-}\text{\rm HH}\right)\) and condition \(\left(\sigma^{\prime}\text{-}HH\right)\), then_
\[\sigma\text{-}\text{\rm(R-TORS)}\subseteq\sigma\text{-}\text{\rm(R-conat)}\,.\]
Proof.: Let \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-TORS)}\,.\) As \(\sigma\) is exact, there exists \(\mathcal{D}\in R\text{-}\text{\rm TORS}\) such that \(\mathcal{C}=\overleftarrow{\sigma}(\mathcal{D}),\) see [6, Theorem 3.13]. Now, from Corollary 3.33, we have that \(\mathcal{D}\in R\text{-}\text{\rm conat}\). Then, as \(\sigma\) is an exact and costable preradical, we have that \(\overleftarrow{\sigma}(\mathcal{D})\in R\text{-}\text{\rm(}\sigma\text{-} \text{\rm conat}\text{)},\) see [5, Proposition 15], i.e., \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-conat)}.\)
**Theorem 3.35**.: _Let \(\sigma\in R\text{-}\text{pr}\) be exact and costable. If \(R\) satisfies condition \(\left(\sigma\text{-}\text{\rm HH}\right)\) and \(\left(\sigma^{\prime}\text{-}HH\right)\), then_
\[\sigma\text{-}\text{\rm(R-TORS)}=\sigma\text{-}\text{\rm(R-conat)}=\sigma \text{-}\text{\rm(R-tors)}\,.\]
Proof.: It follows from Theorem 3.34, from Remark 3.14, from Theorem 3.30 and from the fact that \(\sigma\text{-}\text{\rm(R-tors)}\subseteq\sigma\text{-}\text{\rm(R-TORS)}\,.\)
**Lemma 3.36**.: _Let \(\sigma\) be an exact and costable preradical, let \(R\) be a ring with \(\left(\sigma\text{-}HH\right)\), \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-TORS)}\) and \(M\in R\text{-Mod}\). If \(M\) has a quotient \(L\) with \(L\in\mathcal{C}\) and \(\sigma(L)\neq 0,\) then \(M\) has a nonzero submodule \(N\) such that \(N\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\) Moreover, there exists a largest submodule \(K\) of \(M\) with the property that \(K\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\)_
Proof.: As \(\sigma\) is an exact and costable preradical, then \(\sigma(L)\) is a quotient of \(L.\) Now, as \(\mathcal{C}\) is a \(\sigma\)-torsion class and \(\sigma(L)\neq 0,\) it follows that \(\sigma(L)\in\mathcal{C}.\)
As \(\sigma(L)\) is a nonzero quotient of \(M,\) then \(Hom(M,\sigma(L))\neq 0.\) Thus, from the \(\left(\sigma\text{-}HH\right)\)-condition, there exists a morphism \(0\neq h:\sigma(L)\to M.\) Let \(N=h\left(\sigma(L)\right),\) and note that \(N\) is a nonzero submodule of \(M\). Moreover, as \(N\) is a quotient of \(\sigma(L)\) and \(\sigma\) is cohereditary, \(\sigma(N)\) is a quotient of \(\sigma\left(\sigma(L)\right)=\sigma(L);\) hence \(\sigma(N)=N,\) and since \(N\) is a quotient of \(\sigma(L)\in\mathcal{C},\) we get \(N\in\mathcal{C}.\) Hence, \(N\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\)
Now, as \(\mathcal{C}\) is a class closed under taking direct sums, then \(\mathcal{C}_{M}=\{N\leq M\ |\ 0\neq N\in\mathcal{C}\cap\mathbb{T}_{\sigma}\}\neq\emptyset\) and as \(\sigma\) is left exact, we get that
\[\bigoplus_{C\in\mathcal{C}_{M}}C\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\]
As \(\sum_{C\in\mathcal{C}_{M}}C\) is a quotient of \(\bigoplus_{C\in\mathcal{C}_{M}}C\) and as \(\sigma\) is a cohereditary preradical, it follows that \(\sum_{C\in\mathcal{C}_{M}}C\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\) Hence, \(K=\sum_{C\in\mathcal{C}_{M}}C\) is the largest \(\sigma\)-torsion submodule of \(M\) belonging to \(\mathcal{C}.\)
**Lemma 3.37**.: _Let \(\sigma\) be an exact and costable preradical, let \(R\) be a ring with the \(\left(\sigma\text{-}\text{\rm HH}\right)\)-condition and let \(\mathcal{C}\subseteq R\text{-Mod}\). If \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-TORS)}\), then \(\sigma^{*}(\mathcal{C})\in R\text{-conat}.\)_
Proof.: Let us show that \(\sigma^{*}(\mathcal{C})\) has \((CN).\) For this, let us take an \(R\)-module \(M\) such that for any nonzero quotient \(N\) of \(M\) there exist \(C\in\mathcal{C}\) and a nonzero \(L\in R\text{-Mod}\) with \(N\twoheadrightarrow L\twoheadleftarrow\sigma(C),\) and let us see that \(M\in\sigma^{*}(\mathcal{C})\).
In this situation, \(\sigma(C)\in\mathcal{C}\), because \(\sigma(C)\) is a quotient of \(C\), as \(\sigma\) is exact and costable, and \(\mathcal{C}\) is a cohereditary class. Now, note that \(L=\sigma(L),\) because \(\sigma\) is cohereditary and \(\sigma(C)=\sigma\left(\sigma(C)\right),\) thus \(L\in\mathcal{C}.\) So, by Lemma 3.36, we consider the largest submodule \(K\) of \(M\) such that \(K\in\mathcal{C}\cap\mathbb{T}_{\sigma}.\) Note that \(K\neq 0\) and that if \(K=M,\) then we are done, because \(M=\sigma(K)\) with \(K\in\mathcal{C},\) that is, \(M\in\sigma^{*}(\mathcal{C}).\) Assume to the contrary that \(K\lneq M,\) so \(M/K\neq 0.\) By hypothesis and Lemma 3.36, \(M/K\) also has a nonzero submodule \(U\in\mathcal{C}\cap\mathbb{T}_{\sigma};\) let \(\pi:M\twoheadrightarrow M/K\) denote the canonical projection.
But as \(\mathcal{C}\cap\mathbb{T}_{\sigma}\) is a class closed under extensions, \(\pi^{-1}\left(U\right)\) is a submodule of \(M\) in \(\mathcal{C}\cap\mathbb{T}_{\sigma}\) properly containing \(K,\) a contradiction. Then \(K=M.\)
Let us recall that for any class \(\mathcal{D}\subseteq R\)-Mod, we have that \(\overleftarrow{\sigma}(\mathcal{D})=\overleftarrow{\sigma}\sigma^{*} \overleftarrow{\sigma}(\mathcal{D});\) and that if \(\sigma\) is an exact preradical, then a class \(\mathcal{C}\) is \(\sigma\)-torsion if and only if \(\mathcal{C}=\overleftarrow{\sigma}(\mathcal{D}),\) for a torsion class \(\mathcal{D}\). Besides, if \(\sigma\) is an exact and costable preradical and \(\mathcal{D}\) is a conatural class, then \(\overleftarrow{\sigma}(\mathcal{D})\) is a \(\sigma\)-conatural class, see [5] and [6].
**Theorem 3.38**.: _Let \(\sigma\) be exact and costable. If \(R\) satisfies the \(\left(\sigma\text{-}\mathrm{HH}\right)\) condition, then_
\[\sigma\text{-}\text{\emph{(R-TORS)}}\subseteq\sigma\text{-}\text{\emph{(R- conat)}}\,.\]
Proof.: Let \(\mathcal{C}\in\sigma\text{-}\text{\emph{(R-TORS)}}\,.\) There exists a torsion class \(\mathcal{D}\) such that \(\mathcal{C}=\overleftarrow{\sigma}(\mathcal{D}).\) Thus, we have,
\[\mathcal{C}=\overleftarrow{\sigma}(\mathcal{D})=\overleftarrow{\sigma}\sigma^ {*}\overleftarrow{\sigma}(\mathcal{D}).\]
It suffices to show that \(\sigma^{*}\overleftarrow{\sigma}(\mathcal{D})\) is a conatural class. To see this, by Lemma 3.37 it suffices to show that \(\overleftarrow{\sigma}(\mathcal{D})\) is a \(\sigma\)-torsion class. But this holds by hypothesis.
**Theorem 3.39**.: _Let \(\sigma\) be an exact and costable preradical. If \(R\) satisfies the \(\left(\sigma\text{-}\mathrm{HH}\right)\) condition, then_
\[\sigma\text{-}\text{\emph{(R-TORS)}}=\sigma\text{-}\text{\emph{(R-conat)}}= \sigma\text{-}\text{\emph{(R-tors)}}\,.\]
Proof.: It follows from Theorem 3.30, from Theorem 3.38 and the fact that \(\sigma\text{-}\text{\emph{(R-tors)}}\subseteq\sigma\text{-}\text{\emph{(R-TORS)}}\,.\)
The following definition is dual to Definition 3.25.
**Definition 3.40**.: Let \(\mathcal{C}\subseteq R\)-Mod. We will say that \(\mathcal{C}\) satisfies **condition**\((\boldsymbol{\sigma}\)-\(\boldsymbol{N})\) if
\[\left(\begin{array}{c}\text{for each monomorphism }L\rightarrowtail M\text{ with }\sigma(L)\neq 0,\\ \text{there exist }C\in\mathcal{C},\ N\in R\text{-Mod},\text{ with }\sigma(N)\neq 0,\\ \text{and monomorphisms }L\leftarrowtail N\rightarrowtail C\end{array}\right)\Longrightarrow M\in\mathcal{C}.\]
**Lemma 3.41**.: _Let \(\mathcal{C}\) be a class of \(R\)-modules satisfying the \((\sigma\)-\(N)\) condition and let \(C\in\mathcal{C}.\) If \(L\) embeds in \(C\), with \(\sigma(L)\neq 0\), then \(L\in\mathcal{C}.\)_
Proof.: Let \(K\) be an \(R\)-module embedded in \(L\), with \(\sigma(K)\neq 0.\) Then \(K\) embeds in \(C\) with \(C\in\mathcal{C},\) so the premise of the \((\sigma\text{-}N)\) condition holds for \(L\) (taking \(N=K\)); we conclude that \(L\in\mathcal{C}.\)
_Remark 3.42_.: In [5, Corollary 2] it is shown that
\[\sigma\text{-}\left(R\text{-nat}\right)=\mathcal{L}_{\{\leq_{\sigma},\oplus, \sigma(E()),ext\}}.\]
**Theorem 3.43**.: _Let \(\mathcal{C}\subseteq R\)-Mod and let \(\sigma\in R\)-pr be a left exact and stable preradical. Then \(\mathcal{C}\in\sigma\text{-}\left(R\text{-nat}\right)\) if and only if \(\mathcal{C}\) satisfies \((\sigma\text{-}N).\)_
Proof.: Suppose that \(\mathcal{C}\subseteq R\)-Mod is a \(\sigma\)-natural class. By definition of \(\sigma\)- (\(R\)-_nat_), one has that \(\mathcal{C}=\mathcal{A}^{\perp_{\leq_{\sigma}}},\) for some class \(\mathcal{A}\in\mathcal{L}_{\leq_{\sigma}}.\) Then, by [5, Lemma 1], we have that
\[\mathcal{A}^{\perp_{\leq_{\sigma}}}=\left\{M\in R\text{-Mod }|\ \forall\ L\rightarrowtail M,\ \sigma(L)\in\mathcal{A}\Rightarrow L\in\mathbb{F}_{\sigma}\right\}.\]
Let \(M\) be an \(R\)-module. Note that, if for any \(L\rightarrowtail M\) it holds that \(\sigma(L)\in\mathcal{A}\Rightarrow L\in\mathbb{F}_{\sigma},\) then \(M\in\mathcal{A}^{\perp_{\leq_{\sigma}}}=\mathcal{C}.\) Suppose then that there exists \(L\rightarrowtail M\) with \(\sigma(L)\in\mathcal{A},\) but \(L\notin\mathbb{F}_{\sigma}.\) Additionally, suppose that there exist \(N\in R\)-Mod, with \(\sigma(N)\neq 0,\) and \(C\in\mathcal{C}\) such that \(C\leftarrowtail N\rightarrowtail L.\)
Note that \(\sigma(N)\in\mathcal{C},\) since \(\mathcal{C}\in\mathcal{L}_{\leq_{\sigma}}\) and \(C\in\mathcal{C}.\) It follows that \(\sigma(N)\notin\mathcal{A},\) since otherwise \(N\in\mathbb{F}_{\sigma}.\)
On the other hand, since \(\sigma\) is left exact, and hence idempotent, it follows that \(\sigma(N)=\sigma\left(\sigma(N)\right)\rightarrowtail\sigma\left(\sigma(L)\right)=\sigma(L).\) Finally, since \(\mathcal{A}\in\mathcal{L}_{\leq_{\sigma}}\) and \(\sigma(L)\in\mathcal{A},\) it follows that \(\sigma(N)\in\mathcal{A},\) which is a contradiction. Thus, there does not exist \(L\rightarrowtail M\) with \(\sigma(L)\in\mathcal{A}\) and \(L\notin\mathbb{F}_{\sigma}.\) Therefore \(M\in\mathcal{C}.\)
Suppose now that \(\mathcal{C}\subseteq R\)-Mod is a class satisfying condition \((\sigma\text{-}N)\) and let us see that, indeed, \(\mathcal{C}\) is a \(\sigma\)-natural class.
Let us begin by showing that \(\mathcal{C}\in\mathcal{L}_{\leq_{\sigma}}.\) Let \(M\in\mathcal{C}\) and \(N\leq M.\) Note that if \(\sigma(N)=0,\) then \(N\in\mathcal{C},\) since \(\mathbb{F}_{\sigma}\subseteq\mathcal{C}.\) Suppose then that \(\sigma(N)\neq 0\) and let \(L\rightarrowtail\sigma(N),\) with \(\sigma(L)\neq 0.\) Since \(L\rightarrowtail M\) and \(M\in\mathcal{C},\) there exist \(H\in R\)-Mod, with \(\sigma(H)\neq 0,\) and \(C\in\mathcal{C}\) such that \(L\leftarrowtail H\rightarrowtail C\) (indeed, one may take \(H=L\) and \(C=M\)). Since \(\mathcal{C}\) satisfies \((\sigma\text{-}N),\) it follows that \(\sigma(N)\in\mathcal{C}.\)
Now let us show that \(\mathcal{C}\in\mathcal{L}_{\oplus}.\) Let \(\{M_{i}\}_{i\in I}\subseteq\mathcal{C}\) and \(L\rightarrowtail\oplus_{i\in I}M_{i},\) with \(\sigma(L)\neq 0.\) Since \(\sigma\) is left exact, one has that \(\sigma(L)\rightarrowtail\sigma\left(\oplus_{i\in I}M_{i}\right)=\oplus_{i\in I}\sigma(M_{i}).\)
Now, by the projection argument, there exist \(l\in\sigma(L)\) and \(0\neq m_{i}\in\sigma(M_{i})\), for some \(i\in I\), such that \(Rl\cong Rm_{i}\). Note also that \(\sigma\) is idempotent and \(\mathbb{T}_{\sigma}\) is a hereditary class, since \(\sigma\) is left exact. It follows that \(\sigma(Rl)=Rl\neq 0\), since \(Rl\cong Rm_{i}\leq\sigma(M_{i})\) and \(\sigma(M_{i})\in\mathbb{T}_{\sigma}\). Now, since \(M_{i}\in\mathcal{C}\) and \(Rm_{i}\rightarrowtail M_{i}\), with \(\sigma(Rm_{i})\neq 0\), there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod, with \(\sigma(N)\neq 0\), such that \(N\rightarrowtail L\) and \(N\rightarrowtail C\) (one may take \(N=Rm_{i}\cong Rl\leq\sigma(L)\leq L\) and \(C=M_{i}\)). That is, \(\oplus_{i\in I}M_{i}\in\mathcal{C}\).
Now, let us see that \(\mathcal{C}\in\mathcal{L}_{\sigma(E())}\). Let \(M\in\mathcal{C}\) and \(L\rightarrowtail\sigma\left(E(M)\right)\) with \(\sigma(L)\neq 0.\) Since \(\sigma\) is left exact and stable, one has that \(\sigma\left(E(M)\right)=E\left(\sigma(M)\right).\) Now, given that \(\sigma(M)\leq_{e}E\left(\sigma(M)\right)\) and \(\sigma(L)\neq 0\), it follows that \(\sigma(L)\cap\sigma(M)\neq 0\). But note that \(\sigma\left(\sigma(L)\cap\sigma(M)\right)=\sigma\left(E(M)\right)\cap\sigma(L)\cap\sigma(M)=\sigma(L)\cap\sigma(M)\neq 0\), since \(\sigma\) is left exact. Therefore, there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod, with \(\sigma(N)\neq 0\), such that \(N\rightarrowtail L\) and \(N\rightarrowtail C\) (one may take \(N=\sigma(L)\cap\sigma(M)\) and \(C=M\)). Whereupon, \(\sigma\left(E(M)\right)\in\mathcal{C}\).
Finally, let us see that \(\mathcal{C}\in\mathcal{L}_{ext}\). Let
\[0\longrightarrow M^{\prime}\xrightarrow{\ f\ }M\xrightarrow{\ g\ }M^{\prime\prime}\longrightarrow 0\]
be an exact sequence, with \(M^{\prime},M^{\prime\prime}\in\mathcal{C}\), and let \(L\rightarrowtail M\) with \(\sigma(L)\neq 0\). Without loss of generality, suppose that \(f\) is the inclusion and that \(L\leq M\). Note that, if \(\sigma(L)\cap M^{\prime}\neq 0\), then \(\sigma\left(\sigma(L)\cap M^{\prime}\right)=\sigma(L)\cap\sigma(L)\cap M^{\prime}=\sigma(L)\cap M^{\prime}\neq 0\), since \(\sigma\) is left exact. In this case there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod with \(\sigma(N)\neq 0\) such that \(N\rightarrowtail L\) and \(N\rightarrowtail C\); indeed, one may take \(N=\sigma(L)\cap M^{\prime}\) and \(C=M^{\prime}\in\mathcal{C}\).
Now, if \(\sigma(L)\cap M^{\prime}=0\), then the restriction \(g\restriction_{\sigma(L)}:\sigma(L)\rightarrow M^{\prime\prime}\) is a monomorphism, whence one may take \(N=\sigma(L)\cong g\left(\sigma(L)\right)\rightarrowtail M^{\prime\prime}=C\in\mathcal{C}.\) In either case the \((\sigma\text{-}N)\) premise holds for \(M,\) so \(M\in\mathcal{C}.\) Thus \(\mathcal{C}\in\mathcal{L}_{ext},\) and by Remark 3.42, \(\mathcal{C}\) is a \(\sigma\)-natural class.
_Remark 3.44_.: Let \(\sigma\in R\)-pr be left exact and stable, and \(\mathcal{C}\subseteq R\)-Mod. If \(\xi_{\sigma\text{-nat}}\left(\mathcal{C}\right)\) denotes the smallest \(\sigma\)-natural class containing \(\mathcal{C}\), then
\[\xi_{\sigma\text{-nat}}\left(\mathcal{C}\right)=\left\{M\in R\text{-Mod}\ |\ \forall L\rightarrowtail M,\ \sigma(L)\neq 0\text{ there exist }C\in\mathcal{C},\ N\in R\text{-Mod},\text{ with }\sigma(N)\neq 0,\text{ such that }L\leftarrowtail N\rightarrowtail C\right\}\cup\mathbb{F}_{\sigma}\]
**Proposition 3.45**.: _Let \(\sigma\in R\)-pr be left exact and stable. The following statements are equivalent:_
1. \(R\) _is a left_ \(\sigma\)_-semiartinian ring._
2. _Each_ \(\sigma\)_-natural class is generated by a family of_ \(\sigma\)_-torsion simple modules._
Proof.: \((1)\Rightarrow(2)\) Let \(\mathcal{C}\) be a nontrivial \(\sigma\)-natural class and let \(\mathcal{S}\subseteq\mathcal{C}\) be the class of all simple \(\sigma\)-torsion modules in \(\mathcal{C}\). The class \(\mathcal{S}\) is nonempty since \(R\) is a left \(\sigma\)-semiartinian ring. Consider \(0\neq M\in\mathcal{C}\) and \({}_{R}L\neq 0\) embedding in \(M\), with \(\sigma(L)\neq 0\). Since \(R\) is a left \(\sigma\)-semiartinian ring, there exists \(S\in R\)-Simp \(\cap\mathbb{T}_{\sigma}\) a
submodule of \(L.\) By Lemma 3.41, the submodules of modules in \(\mathcal{C}\) belong to \(\mathcal{C},\) so \(S\in\mathcal{C}.\) Then, by Remark 3.44, one has that \(M\in\xi_{\sigma\text{-nat}}(\mathcal{S}).\) It follows that \(\mathcal{C}=\xi_{\sigma\text{-nat}}(\mathcal{S}).\)
\((2)\Rightarrow(1)\) Let \(M\in R\)-Mod and \(L\) be a nonzero submodule of \(M\) with \(\sigma(L)\neq 0.\) By hypothesis one has that there exists \(\mathcal{S}\subseteq R\text{-Simp}\cap\mathbb{T}_{\sigma}\) such that \(\xi_{\sigma\text{-nat}}(M)=\xi_{\sigma\text{-nat}}(\mathcal{S}).\) Now, since \(L\in\xi_{\sigma\text{-nat}}(M),\) one has that there exists \(S\in\mathcal{S}\) such that \(S\) embeds in \(L.\) That is, \(R\) is a left \(\sigma-\)semiartinian ring.
The following theorem generalizes [1, Proposition 2.12].
**Theorem 3.46**.: _Let \(\sigma\in R\)-pr be exact and costable. The following statements are equivalent:_
1. _Each_ \(S\in R\)_-Simp_ \(\cap\mathbb{T}_{\sigma}\) _is paraprojective._
2. \(\sigma\text{-}(\text{R-nat})\subseteq\sigma\text{-}(\text{R-TORS}).\)__
3. \(\sigma\text{-}(\text{R-nat})\subseteq\mathcal{L}_{/_{\sigma}}.\)__
Proof.: \((1)\Rightarrow(2)\) Let \(\mathcal{C}\in\sigma\text{-}(\text{R-nat})\). Since \(\sigma\text{-}(\text{R-nat})=\mathcal{L}_{\{\leq_{\sigma},\oplus,\sigma(E()),ext\}}\) and \(\sigma\text{-}(\text{R-TORS})=\mathcal{L}_{\{/_{\sigma},\oplus,ext\}},\) it only remains to show that \(\mathcal{C}\in\mathcal{L}_{/_{\sigma}}.\) To do so, let us consider \(M\in\mathcal{C}\) and a quotient \(N\) of \(M\) with \(\sigma(N)\neq 0\) (note that \(\sigma(N)\) is then also a quotient of \(M,\) since \(\sigma\) is exact and costable). Let \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) be a submodule of \(\sigma(N),\) which exists since \(R\) is a left \(\sigma\)-semiartinian ring, and let \(P\) be the pull-back of \(M\twoheadrightarrow\sigma(N)\) and \(S\rightarrowtail\sigma(N).\)
Then \(S\) is a quotient of \(P,\) but by hypothesis \(S\) is paraprojective, so \(S\) is a simple \(\sigma\)-torsion module which embeds in \(P,\) and hence embeds in \(M.\) The above shows that any \(\sigma\)-torsion simple submodule of \(\sigma(N)\) belongs to \(\mathcal{C}\). Thus, by Proposition 3.45, \(\sigma(N)\in\mathcal{C}.\) Therefore \(\mathcal{C}\in\mathcal{L}_{/_{\sigma}}.\)
\((2)\Rightarrow(3)\) It is clear.
\((3)\Rightarrow(1)\) Let \(M\) be an \(R\)-module and \(S\in\text{Simp}\cap\mathbb{T}_{\sigma}\) be a quotient of \(M.\) By hypothesis, one has that \(S\in\xi_{\sigma\text{-nat}}(M).\) It follows from Remark 3.44 that \(S\) embeds in \(M.\) That is, \(S\) is paraprojective.
**Corollary 3.47**.: _Let \(\sigma\in R\)-pr be exact and costable. If \(R\) satisfies condition \((\sigma\text{-HH}),\) then_
\[\sigma\text{-}(\text{R-nat})\subseteq\sigma\text{-}(\text{R-TORS})\,.\]
Proof.: It follows from statement 3 of Remark 3.14, and from Theorem 3.46.
**Definition 3.48**.: Let \(\sigma\in R\)-pr and let \({}_{R}M\) be an \(R\)-module. We will say that a submodule \(N\) of \(M\) is \(\sigma\)-essential in \(M\) if \(N\cap L\neq 0\) for any submodule \(L\) of \(M\) with \(\sigma(L)\neq 0.\)
_Remark 3.49_.: Let \(\sigma\in R\)-pr and let \({}_{R}M\) be an \(R\)-module. Let us denote \(Soc_{\sigma}(M)\) the sum of the simple \(\sigma\)-torsion submodules of \(M\). If \(R\) is a left \(\sigma\)-semiartinian ring then \(Soc_{\sigma}(M)\) is essential in \(\sigma(M)\), thus it is \(\sigma\)-essential in \({}_{R}M.\) Thus we have for a left \(\sigma\)-semiartinian ring that
\[\sigma(M)\neq 0\Leftrightarrow Soc_{\sigma}(M)\neq 0.\]
**Theorem 3.50**.: _Let \(\sigma\in R\)-pr be exact and costable. If \(R\) satisfies condition \(\left(\sigma\text{-}\text{\rm HH}\right)\), then \(\sigma\text{-}\text{\rm(R-tors)}\subseteq\sigma\text{-}\text{\rm(R-nat)}\,.\)_
Proof.: Let \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-tors)}\,.\) To see that \(\mathcal{C}\in\sigma\text{-}\text{\rm(R-nat)}\,,\) recall that \(\sigma\text{-}\text{\rm(R-nat)}=\mathcal{L}_{\{\leq_{\sigma},\oplus,\sigma(E()),ext\}}\) and that \(\sigma\text{-}\text{\rm(R-tors)}=\mathcal{L}_{\{\leq_{\sigma},\oplus,/_{\sigma},ext\}}\). Thus, it suffices to show that if \(M\in\mathcal{C}\), then \(\sigma\left(E(M)\right)\in\mathcal{C}.\) Let us first note that for any module \({}_{R}N\) one has that
\[Soc_{\sigma}(N)=Soc_{\sigma}(E(N)),\]
since \(N\) is an essential submodule of \(E(N)\). On the other hand, as \(\sigma\) is an exact and stable preradical, and in particular it is left exact and stable, we have that
\[\sigma(E(M))=E(\sigma(M)).\]
From Lemma 1.42 and the equations one has that
\[M\in\mathcal{C} \Longleftrightarrow Soc_{\sigma}(M)\in\mathcal{C}\] \[\Longleftrightarrow Soc_{\sigma}(E\left(M\right))\in\mathcal{C}\] \[\Longleftrightarrow Soc_{\sigma}(\sigma\left(E\left(M\right) \right))\in C\] \[\Longleftrightarrow\sigma\left(E\left(M\right)\right)\in \mathcal{C}.\]
Thus, if \(M\in\mathcal{C}\), then \(\sigma\left(E(M)\right)\in\mathcal{C}\), as required.
**Definition 3.51**.: Let \(\sigma\in R\)-pr be exact and stable and assume that \(R\) has \(\left(\sigma\text{-}\text{\rm HH}\right).\) We are going to define, for each ordinal \(\alpha\), a preradical \(Soc_{\sigma}^{\alpha}\). Let us define \(Soc_{\sigma}^{0}\) as the zero preradical \(\underline{0}\). Now, for successor ordinals, let us define \(Soc_{\sigma}^{\alpha+1}\) by
\[Soc_{\sigma}^{\alpha+1}\left(M\right)/Soc_{\sigma}^{\alpha}\left(M\right)=Soc_{\sigma}\left(M/Soc_{\sigma}^{\alpha}\left(M\right)\right). \tag{3.3}\]
Let us define \(Soc_{\sigma}^{\alpha}\left(M\right)=\sum_{\beta<\alpha}Soc_{\sigma}^{\beta} \left(M\right)\) if \(\alpha\) is a limit ordinal.
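As a small illustration of this transfinite construction (an example supplied here, not taken from the text), let \(\sigma\) be the identity preradical, so that \(Soc_{\sigma}=Soc\), and let \(R=\mathbb{Z}/p^{2}\mathbb{Z}\) for a prime \(p\) (a local artinian ring, which satisfies \((HH)\)). Then, for \(M=R,\)
\[Soc_{\sigma}^{0}(R)=0,\qquad Soc_{\sigma}^{1}(R)=Soc(R)=pR\cong\mathbb{Z}/p\mathbb{Z},\qquad Soc_{\sigma}^{2}(R)=R,\]
and the chain stabilizes at the ordinal \(\gamma=2,\) as in the proof of Lemma 3.52 below.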
**Lemma 3.52**.: _Let \(\sigma\in\)\(R\)-pr be exact and stable and let \(\mathcal{C}\) be a \(\sigma\)-hereditary torsion class. If \(R\) has the \(\left(\sigma\text{-}\text{\rm HH}\right)\) condition, then \(M\in\mathcal{C}\) if and only if each of its simple \(\sigma\)-torsion subquotients belong to \(\mathcal{C}.\)_
Proof.: Let us first assume that \(M\in\mathcal{C}.\) If \(\sigma(M)\neq 0\) and \(S\in R\text{-Simp}\cap\mathbb{T}_{\sigma}\) is a subquotient of \(\sigma\left(M\right)\), then, by Theorem 1.10, \(S\) embeds in \(\sigma\left(M\right).\) As \(\mathcal{C}\) is a \(\sigma\)-hereditary class, one has that \(S\in\mathcal{C}.\)
On the other hand, if \(\sigma(M)=0\), then \(M\in\mathcal{C}\), because \(\mathbb{F}_{\sigma}\subseteq\mathcal{C}\), for any \(\sigma\)-hereditary torsion class \(\mathcal{C}\).
Now suppose that any simple \(\sigma\)-torsion subquotient of \(\sigma(M)\) belongs to \(\mathcal{C}\) and let us show that \(M\) in \(\mathcal{C}\).
We can assume that \(\sigma(M)\neq 0.\) Since \(\sigma\) is exact and stable, it centrally splits, so that \(M=\sigma(M)\oplus M^{\prime}\), with \(M^{\prime}\in\mathbb{F}_{\sigma}.\) Thus, it suffices to show that \(\sigma(M)\in\mathcal{C}.\) As \(R\) has (\(\sigma\)-HH), by Corollary 3.19 we have that \(R\) is left \(\sigma\)-semiartinian. Then, by Remark 3.49, \(0\neq Soc_{\sigma}(M)\leqslant_{ess}\sigma(M).\) Note also that \(Soc_{\sigma}(M)\in\mathcal{C}\), since \(\mathcal{C}\) is a \(\sigma\)-hereditary torsion class. Therefore, if it happens that \(\sigma(M)=Soc_{\sigma}(M)\), then \(\sigma(M)\) belongs to \(\mathcal{C},\) and in this case we are done. Suppose then that \(\sigma(M)/Soc_{\sigma}(M)\neq 0.\) Note that we can consider two cases: \(\sigma(M)/Soc_{\sigma}(M)\in\mathbb{F}_{\sigma}\) and \(\sigma(M)/Soc_{\sigma}(M)\notin\mathbb{F}_{\sigma}\).
In the first case, \(\sigma(M)/Soc_{\sigma}(M)\in\mathcal{C}\) because \(\mathbb{F}_{\sigma}\subseteq\mathcal{C}.\) Since \(\mathcal{C}\) is closed under extensions, the exact sequence \(0\to Soc_{\sigma}(M)\rightarrow\sigma(M)\rightarrow\sigma(M)/Soc_{\sigma}(M)\to 0\) implies that \(\sigma(M)\in\mathcal{C}.\)
In the second case, we are going to show by induction that for any ordinal \(\alpha\), the module \(Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)\) belongs to \(\mathcal{C}\).
It is clear that
\[\underline{0}\left(\sigma\left(M\right)\right)=Soc_{\sigma}^{0}\left(\sigma \left(M\right)\right)=_{R}0\]
belongs to \(\mathcal{C}.\) Now suppose that \(Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)\) belongs to \(\mathcal{C}\), then the exact sequence
\[0\to Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)\to Soc _{\sigma}^{\alpha+1}\left(\sigma\left(M\right)\right)\to Soc_{ \sigma}\left(\sigma\left(M\right)/Soc_{\sigma}^{\alpha}\left(\sigma\left(M \right)\right)\right)\to 0\]
has the ends in \(C,\) because any simple submodule of \(Soc_{\sigma}\left(\sigma\left(M\right)/Soc_{\sigma}^{\alpha}\left(\sigma\left( M\right)\right)\right)\) is a \(\sigma\)-torsion simple subquotient of \(\sigma\left(M\right)\), which is in \(\mathcal{C}\), by hypothesis.
So, \(Soc_{\sigma}\left(\sigma\left(M\right)/Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)\right)\) belongs to \(\mathcal{C},\) because \(\mathcal{C}\) is closed under direct sums and quotients of \(\sigma\)-torsion modules, and hence \(Soc_{\sigma}^{\alpha+1}\left(\sigma\left(M\right)\right)\in\mathcal{C},\) since \(\mathcal{C}\) is closed under extensions. If \(\alpha\) is a limit ordinal, suppose that \(Soc_{\sigma}^{\beta}\left(\sigma\left(M\right)\right)\in\mathcal{C}\) for each \(\beta<\alpha;\) then \(Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)=\sum_{\beta<\alpha}Soc_{\sigma}^{\beta}\left(\sigma\left(M\right)\right)\) is also in \(\mathcal{C},\) because this sum is a quotient of \(\oplus_{\beta<\alpha}Soc_{\sigma}^{\beta}\left(\sigma\left(M\right)\right).\)
As \(\left\{Soc_{\sigma}^{\alpha}\left(\sigma\left(M\right)\right)\right\}\) is an ascending chain of submodules of \(\sigma\left(M\right)\) which cannot all be distinct, there exists a first ordinal \(\gamma\) such that \(Soc_{\sigma}^{\gamma}\left(\sigma\left(M\right)\right)=Soc_{\sigma}^{\gamma+1}\left(\sigma\left(M\right)\right).\) This means that
\[0=Soc_{\sigma}^{\gamma+1}\left(\sigma\left(M\right)\right)/Soc_{\sigma}^{\gamma}\left(\sigma\left(M\right)\right)=Soc_{\sigma}\left(\sigma\left(M\right)/Soc_{\sigma}^{\gamma}\left(\sigma\left(M\right)\right)\right).\]
Since \(R\) is left \(\sigma\)-semiartinian, this can only occur if \(\sigma\left(M\right)/Soc_{\sigma}^{\gamma}\left(\sigma\left(M\right)\right)=0.\) Therefore \(\sigma\left(M\right)=Soc_{\sigma}^{\gamma}\left(\sigma\left(M\right)\right),\) which belongs to \(\mathcal{C},\) and hence \(M\in\mathcal{C}.\)
**Theorem 3.53**.: _Let \(\sigma\in R\)-pr be an exact costable preradical. If \(R\) satisfies condition \(\left(\sigma\text{-HH}\right)\), then_
\[\sigma\text{-}\left(R\text{-nat}\right)=\sigma\text{-}\left(R\text{-conat} \right).\]
Proof.: \(\subseteq]\) Let us take a \(\sigma\)-natural class \(\mathcal{C},\) i.e. a module class satisfying \(\left(\sigma\text{-}N\right).\) We will show that \(\mathcal{C}\) satisfies \(\left(\sigma\text{-}CN\right),\) that is, we are going to show that if \({}_{R}M\) is an \(R\)-module such that for each nonzero epimorphism \(M\twoheadrightarrow L,\) with \(\sigma(L)\neq 0,\) there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod, with \(\sigma(N)\neq 0,\) and epimorphisms \(L\twoheadrightarrow N\twoheadleftarrow C,\) then \(M\in\mathcal{C}.\)
Let \(f:K\rightarrowtail M\) be a monomorphism with \(\sigma(K)\neq 0.\) Since \(R\) satisfies condition \(\left(\sigma\text{-HH}\right)\), then by Corollary 3.19, one has that \(R\) is a left \(\sigma\)-semiartinian ring, so that there exists \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) such that \(S\) embeds in \(K\) and hence embeds in \(M.\) By Remark 3.14, one has that \(S\) is parainjective, so \(S\) is a quotient of \(M.\) Now, by hypothesis, we have that there exists \(C\in\mathcal{C}\) such that \(S\) is a quotient of \(C.\) By Remark 3.14, we have that \(S\) is paraprojective, so \(S\) embeds in \(C.\) Note that we find ourselves in the following situation:
\[S\rightarrowtail K\rightarrowtail M,\qquad S\rightarrowtail C\in\mathcal{C},\qquad\sigma(S)=S\neq 0. \tag{3.4}\]
Since \(\mathcal{C}\) satisfies the condition \(\left(\sigma\text{-}N\right),\) from (3.4), it follows that \(M\in\mathcal{C}.\) That is to say, \(\mathcal{C}\in\sigma\text{-}\left(R\text{-conat}\right).\)
\(\supseteq]\) Let \(\mathcal{C}\) be a \(\sigma\)-conatural class. We will show that \(\mathcal{C}\) satisfies the \(\left(\sigma\text{-}N\right)\) condition, i.e., we will show that if \(M\) is an \(R\)-module such that for any monomorphism \(L\rightarrowtail M\) with \(\sigma(L)\neq 0\) there exist \(C\in\mathcal{C}\) and \(N\in R\)-Mod, with \(\sigma(N)\neq 0,\) together with monomorphisms \(L\leftarrowtail N\rightarrowtail C,\) then \(M\in\mathcal{C}.\)
Consider then an epimorphism \(f:M\twoheadrightarrow K\), with \(\sigma(K)\neq 0.\) Since \(R\) satisfies condition \(\left(\sigma\text{-HH}\right)\), by Corollary 3.19, \(R\) is a left \(\sigma\)-max ring, so there exists \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) such that \(S\) is a quotient of \(K\) and hence a quotient of \(M.\) Now, by Remark 3.14, one has that \(S\) is paraprojective, so \(S\) embeds in \(M.\) Then, by hypothesis, we have that there exists \(C\in\mathcal{C}\) such that \(S\) embeds in \(C.\) Note that, from Remark 3.14, one has that \(S\) is parainjective, so \(S\) is a quotient of \(C.\) Thus, we have the following situation:
\[M\twoheadrightarrow K\twoheadrightarrow S,\qquad C\twoheadrightarrow S\text{ with }C\in\mathcal{C},\qquad\sigma(S)=S\neq 0. \tag{3.5}\]
Now, since \(\mathcal{C}\in\sigma\)-\(\left(R\text{-conat}\right),\) \(\mathcal{C}\) satisfies condition \(\left(\sigma\text{-}CN\right),\) so, from (3.5), one has that \(M\in\mathcal{C}.\) Thus, \(\mathcal{C}\in\sigma\)-\(\left(R\text{-nat}\right).\)
**Theorem 3.54**.: _Let \(\sigma\in R\)-pr be exact and costable. If \(\sigma\)-\(\left(R\text{-nat}\right)=\sigma\)-\(\left(R\text{-conat}\right),\) then any \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\) is parainjective and paraprojective._
Proof.: Let \(M\) be an \(R\)-module and \(S\in R\text{-}Simp\cap\mathbb{T}_{\sigma}\). If \(S\) embeds in \(M,\) since \(S=\sigma(S),\) then \(S\in\xi_{\sigma\text{-nat}}(M).\) Now, since \(\sigma\)-\(\left(R\text{-nat}\right)=\sigma\)-\(\left(R\text{-conat}\right),\) it follows that \(\xi_{\sigma\text{-nat}}(M)=\xi_{\sigma\text{-conat}}(M).\) Thus \(S\in\xi_{\sigma\text{-conat}}(M)\) and, by Remark 3.28, \(S\) is a quotient of \(M,\) i.e., \(S\) is parainjective.
On the other hand, if \(S\) is a quotient of \(M,\) then \(S\in\xi_{\sigma\text{-conat}}(M),\) since \(S=\sigma(S).\) It follows from the fact that \(\sigma\)-\(\left(R\text{-nat}\right)=\sigma\)-\(\left(R\text{-conat}\right)\) that \(S\in\xi_{\sigma\text{-nat}}(M).\) By Remark 3.44, one has that \(S\) embeds in \(M,\) i.e., \(S\) is paraprojective.
_Remark 3.55_.: Let \(\sigma\in R\)-pr. If \(\sigma\) centrally splits, then there exists a central idempotent \(e\in R\) such that \(\sigma(\_)=e\cdot\_\,.\) Now, as \(R=Re\oplus R(1-e),\) it follows that \(\sigma(R)\cong R/(1-e)R,\) so a \(\sigma(R)\)-module is an \(R\)-module annihilated by \(1-e.\) As well, if \(M\) is a \(\sigma(R)\)-module, then \(M=\sigma(M).\)
**Lemma 3.56**.: _Let \(\sigma\in R\)-pr. If \(\sigma\) centrally splits, then the following statements are equivalent:_
1. \(R\) _satisfies the condition_ \(\left(\sigma\text{-}\text{\rm{HH}}\right)\)_._
2. \(\sigma(R)\) _satisfies the condition_ \(\left(HH\right)\)_._
Proof.: \(\left(1\right)\Rightarrow\left(2\right)\) Let \(M,N\) be two \(\sigma(R)\)-modules and let \(0\neq f\in Hom_{\sigma(R)}(M,N).\) Since \(M\) and \(N\) are also \(R\)-modules, it follows that \(0\neq f\in Hom_{R}(M,N).\) Now, by Remark 3.55, it follows that \(M=\sigma(M),\) so that \(0\neq f\in Hom_{R}(\sigma(M),N).\) Since \(R\) satisfies the condition \(\left(\sigma\text{-}\text{\rm HH}\right)\), it follows that there exists \(0\neq g\in Hom_{R}(N,\sigma(M)),\) whence \(0\neq g\in Hom_{\sigma(R)}(N,M).\) That is, \(\sigma(R)\) satisfies the condition \(\left(HH\right)\).
\(\left(2\right)\Rightarrow\left(1\right)\) Let \(M,N\) be two \(R\)-modules such that \(\sigma(M),\sigma(N)\neq 0\) and \(0\neq f\in Hom_{R}(\sigma(M),N).\) Note that \(\sigma(M)\) and \(\sigma(N)\) are two \(\sigma(R)\)-modules. Now, since \(\sigma\) is a preradical, \(f\) can be corestricted to \(\sigma(N),\) i.e., \(f\lceil:\sigma(M)\rightarrow\sigma(N)\) is a nonzero morphism of \(\sigma(R)\)-modules. It follows that there exists \(0\neq g\in Hom_{\sigma(R)}(\sigma(N),\sigma(M)),\) since \(\sigma(R)\) satisfies the condition \(\left(HH\right)\).
Writing \(N=\sigma(N)\oplus\sigma^{\prime}(N)\) and letting \(\overline{0}\) denote the zero morphism, we have that \(g\oplus\overline{0}\) is a nonzero morphism from \(N\) to \(\sigma(M)\). Therefore, \(R\) satisfies the condition (\(\sigma\)-HH).
The next theorem generalizes [8, Theorem 2.18].
**Theorem 3.57**.: _Let \(R\) be a ring and \(\sigma\in R\)-\(pr\) be exact and costable. The following statements are equivalent:_
1. \(R\) _satisfies condition_ \(\left(\sigma\text{-}\text{HH}\right).\)__
2. \(R\) _is a_ \(\sigma\)_-_ (R_-Mod)-retractable ring and each_ \(S\in R\)_-_\(Simp\cap\mathbb{T}_{\sigma}\) _is paraprojective._
3. _Each_ \(S\in R\)_-_\(Simp\cap\mathbb{T}_{\sigma}\) _is paraprojective and parainjective._
4. \(\sigma\text{-(R-TORS)}=\sigma\text{-(R-conat)}=\sigma\text{-(R-tors)}=\sigma\text{-(R-nat)}.\)
5. \(\sigma\text{-(R-nat)}=\sigma\text{-(R-conat)}.\)
6. \(R=\sigma(R)\times R_{2},\) _where_ \(\sigma(R)\) _is a ring that satisfies the_ \((HH)\) _condition._
7. \(R=\sigma(R)\times R_{2},\) _where_ \(\sigma(R)\) _is isomorphic to a full matrix ring over a local left and right perfect ring._
Proof.: \((1)\Leftrightarrow(2)\Leftrightarrow(3)\) is Theorem 3.20. \((1)\Rightarrow(5)\) is Theorem 3.53. \((5)\Rightarrow(1)\) It follows from Theorem 3.20. \((1)\Rightarrow(4)\) is Theorem 3.39. \((4)\Rightarrow(5)\) This is clear. \((1)\Leftrightarrow(6)\) This follows from Lemma 3.56 and because \(\sigma\) centrally splits. \((6)\Leftrightarrow(7)\) It follows from [8, Theorem 2.18].
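For instance (an illustration added here, not contained in the original text), take \(R=M_{n}(\mathbb{Z}/p^{2}\mathbb{Z})\times\mathbb{Z}\) and \(\sigma(\_)=e\cdot\_\) for the central idempotent \(e=(1,0)\) (cf. Remark 3.55). Then \(\sigma(R)\cong M_{n}(\mathbb{Z}/p^{2}\mathbb{Z})\) is a full matrix ring over the local left and right perfect ring \(\mathbb{Z}/p^{2}\mathbb{Z},\) so condition (7) holds and \(R\) satisfies \(\left(\sigma\text{-HH}\right),\) even though \(R\) itself does not satisfy \(\left(HH\right)\): viewing \(\mathbb{Z}\) and \(\mathbb{Q}\) as modules over the second factor, \(Hom_{R}(\mathbb{Z},\mathbb{Q})\neq 0\) while \(Hom_{R}(\mathbb{Q},\mathbb{Z})=0.\)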
| この論文では、左Rモジュールのクラスのラティスをいくつか導入します。これらのラティスは、R-TORS、R-tors、R-nat、R-conatのラティスに概説され、それぞれ Torsの理論、代数的な torsion理論、自然なクラス、conaturalなクラスです。σ(R-TORS)、σ(R-tors)、σ(R-nat)、σ(R-conat)のラティスを定義します。σを恒等関数とした場合、これらのラティスは、上部にあるラティスに帰着します。これらのラティス間の等式を$\sigma$-HH()条件によって特徴付けます。また、$\sigma$-retractable ringsについて、$\sigma$-Max ringの拡張結果について、Mod-retractable ringsとMax ringに関する結果も提示します。 |
2309.15489 | Temperature fluctuations in mesoscopic systems | We study temperature fluctuations in mesoscopic $N$-body systems undergoing
non-equilibrium processes from the perspective of stochastic thermodynamics. By
introducing a stochastic differential equation, we describe the evolution of
the system's temperature during an isothermal process, with the noise term
accounting for finite-size effects arising from random energy transfer between
the system and the reservoir. Our analysis reveals that these fluctuations make
the extensive quantities (in the thermodynamic limit) deviate from being
extensive for consistency with the theory of equilibrium fluctuation. Moreover,
we derive finite-size corrections to the Jarzynski equality, providing insights
into how heat capacity influences such corrections. Also, our results indicate
a possible violation of the principle of maximum work by an amount proportional
to $N^{-1}$. Additionally, we examine the impact of temperature fluctuations in
a finite-size quasi-static Carnot engine. We show that irreversible entropy
production resulting from the temperature fluctuations of the working substance
diminishes the average efficiency of the cycle as $\eta_{\rm{C}}-\left\langle
\eta\right\rangle \sim N^{-1}$, highlighting the unattainability of the Carnot
efficiency $\eta_{\rm{C}}$ for mesoscopic-scale heat engines even under the
quasi-static limit | Zhaoyu Fei, Yu-Han Ma | 2023-09-27T08:38:52 | http://arxiv.org/abs/2309.15489v1 | # Temperature fluctuations in mesoscopic systems
###### Abstract
We study temperature fluctuations in mesoscopic \(N\)-body systems undergoing non-equilibrium processes from the perspective of stochastic thermodynamics. By introducing a stochastic differential equation, we describe the evolution of the system's temperature during an isothermal process, with the noise term accounting for finite-size effects arising from random energy transfer between the system and the reservoir. Our analysis reveals that these fluctuations make the extensive quantities (in the thermodynamic limit) deviate from being extensive for consistency with the theory of equilibrium fluctuation. Moreover, we derive finite-size corrections to the Jarzynski equality, providing insights into how heat capacity influences such corrections. Also, our results indicate a possible violation of the principle of maximum work by an amount proportional to \(N^{-1}\). Additionally, we examine the impact of temperature fluctuations in a finite-size quasi-static Carnot engine. We show that irreversible entropy production resulting from the temperature fluctuations of the working substance diminishes the average efficiency of the cycle as \(\eta_{\rm C}-\langle\eta\rangle\sim N^{-1}\), highlighting the unattainability of the Carnot efficiency \(\eta_{\rm C}\) for mesoscopic-scale heat engines even under the quasi-static limit
## I Introduction
In the field of nonequilibrium thermodynamics, stochastic thermodynamics have attracted much attention recently. Notable among its accomplishments are the fluctuation theorems, which provide a quantitative framework for understanding the statistical behavior of nonequilibrium processes and offer a generalization of the second law. They are widely found and proved in time-dependent driving processes [1; 2; 3; 4; 5], nonequilibrium steady states [6; 7; 8; 9; 10], and quantum systems [11; 12; 13; 14; 15; 16; 17; 18].
In addition to the aforementioned nonequilibrium processes, the departure of a many-body system from thermal equilibrium can be attributed to finite-size effects. In such cases, the system fluctuates around the maximum-entropy (minimum-free-energy) state, with probabilities governed by the exponential of entropy (free energy) as per Einstein's interpretation of the reverse form of the Boltzmann entropy. Consequently, the system resides not in a full equilibrium state but rather in a quasi-equilibrium state, as postulated by the theory of equilibrium fluctuations [19; 20; 21].
These studies primarily focused on equilibrium fluctuations, providing limited insights into characterizing the fluctuations and thermodynamic behaviors of mesoscopic systems (whose finite size effects is significant) undergoing nonequilibrium processes. However, the lack of clarity regarding mesoscopic nonequilibrium thermodynamics has created a gap in connecting microscopic dynamics and macroscopic thermodynamics. This motivates us to establish a general framework from the perspective of stochastic thermodynamics to bridge this gap.
Given that fluctuations in a finite-size system exist at thermal equilibrium, they are also supposed to manifest in a nonequilibrium process involving such a system. In this study, we delve into the finite-size effects of a many-body system undergoing an isothermal process, viewing it through the lens of stochastic thermodynamics. For a system in a canonical ensemble, we propose that the finite-size effect manifests in temperature fluctuations. We formulate a stochastic differential equation (SDE) to describe the evolution of a generic system's temperature during the isothermal process, where the noise term accounts for finite-size effects stemming from random energy exchanges between the system and its reservoir (also see Refs. [22; 23] for specific systems). The temperature fluctuation, characterized by the temperature distribution, will lead to modifications of the Jarzynski equation and the average efficiency of a quasi-static Carnot cycle at the mesoscopic scale. These modifications result in deviations from the well-established results typically applicable to systems in the thermodynamic limit.
This paper is organized as follows: In Sec. II, the stochastic differential equation for the system's temperature is derived. We further obtain the Fokker-Planck equation for the system's temperature in Sec. III. In Sec. IV, the stochastic thermodynamics in terms of the fluctuating temperature is developed and the finite-size correction to the Jarzynski equality is obtained. As a demonstration of our theory, we study the efficiency of a finite-size heat engine in a quasi-static Carnot cycle in Sec. V. The conclusions and discussion of the study are given in Sec. VI.
## II Temperature fluctuations in isothermal processes at mesoscopic scale
In this section, we begin by providing a brief introduction to the stochastic Fokker-Planck equation [24], where the noise term characterizes the fluctuation of the flux density arising from the random collisions between the system and the reservoir at the mesoscopic level. We regard the equation as the generalization of the theory of equilibrium fluctuations in nonequilibrium processes.
Subsequently, we present the corresponding stochastic differential equation governing the system's temperature under the ergodic approximation. As a result, the stationary solution represents a quasi-equilibrium state in a canonical system, in alignment with the theory of equilibrium fluctuation (see Sec. III). It is noteworthy that although the derivation of the stochastic differential equation for the system's temperature relies on the stochastic Fokker-Planck equation, we contend that our findings remain independent of the specific details concerning the system's evolution in nonequilibrium processes. These results can be applied to study discrete-state systems or other systems not described by the stochastic Fokker-Planck equation.
### Stochastic Fokker-Planck equation
Let \(\rho(\mathbf{z}),\mathbf{z}=(\mathbf{x},\mathbf{p})\), \(\mathbf{x}=(x_{1},\cdots,x_{d})\), \(\mathbf{p}=(p_{1},\cdots,p_{d})\) denote the one-particle phase-space distribution of the system (\(d\) denotes the dimension of the system). The stochastic Fokker-Planck equation is given by [24] (also see [25; 26])
\[\frac{\partial\rho}{\partial t}=L_{\rm st}\rho+\frac{\partial}{\partial\mathbf{p }}\cdot\mathbf{j}, \tag{1}\]
with a flux density in phase space \(\mathbf{j}\) originating from collisions between the system and the reservoir. It reads
\[\mathbf{j}=\mathbf{\gamma}\mathbf{p}\rho(1+\epsilon\rho)+\gamma mk_{\rm B}T_{\rm r}\frac{ \partial\rho}{\partial\mathbf{p}}+\mathbf{\zeta}, \tag{2}\]
where \(L_{\rm st}=-\frac{\mathbf{p}}{m}\cdot\frac{\partial}{\partial\mathbf{x}}+\frac{ \partial U}{\partial\mathbf{x}}\cdot\frac{\partial}{\partial\mathbf{p}}\) denotes the streaming operator, \(m\) the mass of the particle, \(\gamma\) the damping coefficient, \(U\) the potential energy, \(k_{\rm B}\) the Boltzmann constant, and \(T_{\rm r}\) the temperature of the reservoir. And, \(\epsilon=1,-1,0\) for (non-condensed) bosons, fermions and distinguishable particles, respectively. Here and in the following paper, we do not show the time dependence of the functions explicitly without ambiguity.
Due to the discreteness of particle number and the randomness of collisions between the system and the reservoir, the noise term characterizes the finite-size effects of the dynamics. Here, \(\mathbf{\zeta}\) is a \(d\)-dimensional Gaussian white noise satisfying \(\langle\zeta_{i}(\mathbf{z},t)\rangle=0\), \(\langle\zeta_{i}(\mathbf{z},t)\zeta_{j}(\mathbf{z}^{\prime},t^{\prime})\rangle=2h^{d} m\gamma k_{\rm B}T_{\rm r}\rho(\mathbf{z},t)[1+\epsilon\rho(\mathbf{z},t)]\delta_{ij} \delta(\mathbf{z}-\mathbf{z}^{\prime})\delta(t-t^{\prime})\) (\(h\) denotes the Planck constant). In the thermodynamic limit, the suppression of the noise \(\zeta\) is shown in Refs. [24; 25].
Equation (1) is conservative in particle number
\[N=\int\rho\mathrm{d}\mathbf{z}, \tag{3}\]
where \(\mathrm{d}\mathbf{z}=\prod_{i=1}^{d}\mathrm{d}x_{i}\mathrm{d}p_{i}/h^{d}\). The internal energy \(E\) and the Boltzmann entropy \(S\) of the system are respectively given by
\[E=\int\left(\frac{p^{2}}{2m}+U\right)\rho\mathrm{d}\mathbf{z}, \tag{4}\]
and
\[S=k_{\rm B}\int[-\rho\ln\rho+\epsilon^{-1}(1+\epsilon\rho)\ln(1+\epsilon\rho)] \mathrm{d}\mathbf{z}. \tag{5}\]
Also, we define \(F=E-T_{\rm r}S\) as the nonequilibrium free energy of the system.
In the absence of the noise term \(\mathbf{\zeta}\), Eq. (1) determines a steady state (a semiclassical equilibrium state in phase space)
\[\rho_{\rm eq}(\mathbf{z})=\frac{1}{\mathrm{e}^{\beta_{\rm r}[p^{2}/(2m)+U(\mathbf{x})- \mu_{\rm r}]}-\epsilon}, \tag{6}\]
where \(\beta_{\rm r}=1/(k_{\rm B}T_{\rm r})\) is the inverse temperature, and \(\mu_{\rm r}\) the chemical potential, \(p^{2}=\sum_{i=1}^{d}p_{i}^{2}\). In fact, the equilibrium state \(\rho_{\rm eq}\) is the minimum point of the nonequilibrium free energy \(F\) with constant \(N\).
### Ergodic Approximation
Let \(\tau_{\rm p}\) denote the characteristic time of the motion due to the potential (e.g., the oscillation period of the harmonic trap). When \(\tau_{\rm p}\) is much smaller than the relaxation time \(\gamma^{-1}\), the variation of \(\rho\) along the equienergy surface in phase space is relatively small. The distribution function therefore depends on the phase-space variables only through the energy variable \(\varepsilon(z)=p^{2}/(2m)+U(\mathbf{x})\). Such an approximation is called the ergodic approximation, which has been widely used in the literature on kinetic theory [27; 28; 29; 30; 31; 32] (in Refs. [22; 23], it is called the highly underdamped regime).
Moreover, we assume that the potential energy \(U\) explicitly depends on a time-dependent parameter \(\lambda(t)\), called work parameter. The ergodic approximation requires that \(\tau_{\rm p}\ll\tau_{\rm d}\), where \(\tau_{\rm d}\) is the driving time of \(\lambda\). Then, following the similar procedure in Ref. [32] and using Eq. (1), we obtain the evolution equation of the mean occupation number \(\tilde{\rho}(\varepsilon)\) at the single-particle energy \(\varepsilon\):
\[\begin{split}&\frac{\partial}{\partial t}(g\tilde{\rho})+\frac{ \mathrm{d}\lambda}{\mathrm{d}t}\frac{\partial}{\partial\varepsilon}\left(\frac{ \overline{\partial U}}{\partial\lambda}g\tilde{\rho}\right)\\ &=\frac{\partial}{\partial\varepsilon}\left[\frac{\gamma g \overline{p^{2}}}{m}\left(\tilde{\rho}+\epsilon\tilde{\rho^{2}}+k_{\rm B}T_{ \rm r}\frac{\partial\tilde{\rho}}{\partial\varepsilon}\right)+\tilde{\zeta} \right],\end{split} \tag{7}\]
where
\[g(\varepsilon,t)=\int\delta\left[\varepsilon-\frac{p^{2}}{2m}-U(\mathbf{x},\lambda_{t} )\right]\mathrm{d}\mathbf{z} \tag{8}\]
denotes the density of states (\(\lambda_{t}\equiv\lambda(t)\)), and
\[\tilde{\rho}(\varepsilon,t)=g(\varepsilon,t)^{-1}\int\delta\left[\varepsilon- \frac{p^{2}}{2m}-U(\mathbf{x},\lambda_{t})\right]\rho(\mathbf{z},t)\mathrm{d}\mathbf{z}, \tag{9}\]
with \(\tilde{\rho}(\varepsilon(z),t)=\rho(z,t)\) under the ergodic approximation. Here, we have used the abbreviation
\[\overline{O(\mathbf{z})}\equiv\frac{1}{g(\varepsilon,t)}\int\delta\left[ \varepsilon-\frac{p^{2}}{2m}-U(\mathbf{x},\lambda_{t})\right]O(\mathbf{z})\mathrm{d} \mathbf{z}, \tag{10}\]
and
\[\tilde{\zeta}(\varepsilon,t)\equiv\int\delta\left[\varepsilon-\frac{p^{2}}{2 m}-U(\mathbf{x},\lambda_{t})\right]\frac{\mathbf{p}}{m}\cdot\mathbf{\zeta}(\mathbf{z},t) \mathrm{d}\mathbf{z} \tag{11}\]
is a Gaussian white noise satisfying \(\langle\tilde{\zeta}(\varepsilon,t)\rangle=0\), \(\langle\tilde{\zeta}(\varepsilon,t)\tilde{\zeta}(\varepsilon^{\prime},t^{ \prime})\rangle=2m^{-1}\gamma k_{\mathrm{B}}T_{\mathrm{r}}\tilde{\rho}( \varepsilon,t)[1+\epsilon\tilde{\rho}(\varepsilon,t)]g(\varepsilon,t) \overline{p^{2}}(\varepsilon,t)\delta(\varepsilon-\varepsilon^{\prime}) \delta(t-t^{\prime})\). Such a noise characterizes the fluctuation of the density distribution of the system in the single-particle energy space.
### SDE for the System's Temperature
The RHS of Eq. (7) describes random collisions between the particles and the reservoir. Besides, there are also collisions among the particles. Let \(\tau_{\mathrm{a}}\) denote the relaxation time due to the internal collisions. We assume \(\tau_{\mathrm{a}}\ll\gamma^{-1},\tau_{\mathrm{d}}\) so that the system remains approximately in an equilibrium state on the time scales \(\gamma^{-1},\tau_{\mathrm{d}}\). Thus, it is characterized by a time-dependent effective temperature \(T\) and a time-dependent effective chemical potential \(\mu\), which is called endoreversibility [33; 34; 35; 23; 36]. Specifically, the mean occupation number at \(\varepsilon\) takes the equilibrium form of Eq. (6):
\[\tilde{\rho}=\frac{1}{\mathrm{e}^{\beta(\varepsilon-\mu)}-\epsilon}. \tag{12}\]
Substituting Eq. (12) into Eqs. (3, 4, 5), one finds
\[N=k_{\mathrm{B}}T\left(\frac{\partial\ln\mathcal{Z}}{\partial\mu}\right)_{ \beta}, \tag{13}\]
\[E=-\left(\frac{\partial\ln\mathcal{Z}}{\partial\beta}\right)_{\beta\mu}, \tag{14}\]
and
\[S=k_{\mathrm{B}}\left(\ln\mathcal{Z}-\beta\mu N+\beta E\right), \tag{15}\]
with the partition function \(\mathcal{Z}\) given by
\[\ln\mathcal{Z}=-\epsilon\int\ln\left[1-\epsilon\mathrm{e}^{\beta(\mu-\varepsilon)}\right]g\mathrm{d}\varepsilon. \tag{16}\]
Here, Eqs. (13, 14) determine the value of \(\beta=1/(k_{\mathrm{B}}T)\) and \(\mu\), and \(\mathcal{Z}\) is the grand canonical partition function of the system.
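As a concrete illustration of these relations (an example added here; the dilute classical gas in a \(d\)-dimensional harmonic trap of frequency \(\omega\) is our assumption, not the paper's), the density of states is \(g(\varepsilon)=\varepsilon^{d-1}/[(d-1)!(\hbar\omega)^{d}]\), and in the Maxwell-Boltzmann limit Eq. (16) reduces to \(\ln\mathcal{Z}\simeq\int\mathrm{e}^{\beta(\mu-\varepsilon)}g\mathrm{d}\varepsilon\), so that
\[\ln\mathcal{Z}=\mathrm{e}^{\beta\mu}\left(\frac{k_{\mathrm{B}}T}{\hbar\omega}\right)^{d}=N,\qquad E=-\left(\frac{\partial\ln\mathcal{Z}}{\partial\beta}\right)_{\beta\mu}=dNk_{\mathrm{B}}T,\]
and the heat capacity at constant \(\lambda\) is simply \(C=dNk_{\mathrm{B}}\). This case is used for the numerical illustrations below.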
Equations (12-16) connect the dynamical variables \(\tilde{\rho}\) and the thermodynamic variables \(T,\mu,E,S\). Accordingly, the dynamic equation Eq. (1) can be represented by a thermodynamic equation. Taking the time derivative of \(E\) on both sides of Eq. (4) and using Eqs. (7, 12, 16), we obtain a stochastic differential equation for internal energy
\[\frac{\mathrm{d}E}{\mathrm{d}t}=\Lambda\frac{\mathrm{d}\lambda}{\mathrm{d}t}+ \Gamma(T_{\mathrm{r}}-T)+\xi, \tag{17}\]
where
\[\Lambda=-k_{\mathrm{B}}T\frac{\partial\ln\mathcal{Z}}{\partial\lambda} \tag{18}\]
denotes thermodynamic force conjugate to \(\lambda\), \(\Gamma\equiv\gamma dNk_{\mathrm{B}}\), and
\[\xi(t)\equiv-\int\tilde{\zeta}(\varepsilon,t)\mathrm{d}\varepsilon \tag{19}\]
is a Gaussian white noise satisfying \(\langle\xi(t)\rangle=0\), \(\langle\xi(t)\xi(t^{\prime})\rangle=2\Gamma k_{\mathrm{B}}T_{\mathrm{r}}T(t) \delta(t-t^{\prime})\). In the derivation, we have used the identity
\[\int\frac{e^{\beta(\varepsilon-\mu)}\overline{p^{2}}g}{[\mathrm{e}^{\beta( \varepsilon-\mu)}-\epsilon]^{2}}\mathrm{d}\varepsilon=dNmk_{\mathrm{B}}T. \tag{20}\]
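As a quick consistency check on this identity (added here; the Maxwell-Boltzmann case \(\epsilon=0\) is assumed), the integrand reduces to \(\tilde{\rho}\,\overline{p^{2}}g\) with \(\tilde{\rho}=\mathrm{e}^{-\beta(\varepsilon-\mu)}\), so
\[\int\mathrm{e}^{-\beta(\varepsilon-\mu)}\overline{p^{2}}g\,\mathrm{d}\varepsilon=\int p^{2}\rho\,\mathrm{d}\mathbf{z}=N\langle p^{2}\rangle=dNmk_{\mathrm{B}}T,\]
where the last step uses equipartition, \(\langle p_{i}^{2}\rangle=mk_{\mathrm{B}}T\) for each of the \(d\) momentum components.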
The first two terms on the RHS of Eq. (17) correspond to the power and the rate of heat flow, respectively. The rate of heat flow satisfies Newton's law of cooling with \(\Gamma\) as the cooling rate. The noise term accounts for the random energy transfer between the system and the reservoir. For a single particle in a single-well potential, similar results have been reported in Refs. [22; 23]. Eq. (17) can thus be regarded as a generalization of those results to an \(N\)-body system in a general potential.
As a thermodynamic equation, Eq. (17) describes the evolution of the system in a nonequilibrium isothermal process, which should satisfy thermodynamic relations. To proceed, we consider \(\lambda,T,N\) as independent thermodynamic variables and do not show their dependence of functions for simplicity. We introduce \(C\equiv\partial E/\partial T\) as the heat capacity with constant \(\lambda\) and obtain the following thermodynamic relations from Eqs. (14, 15, 18)
\[\frac{\partial S}{\partial T}=\frac{C}{T},\frac{\partial S}{\partial\lambda}=- \frac{\partial\Lambda}{\partial T},\frac{\partial C}{\partial\lambda}=-T\frac {\partial^{2}\Lambda}{\partial T^{2}}. \tag{21}\]
Moreover, taking the time derivative on both sides of Eq. (14), we obtain the first law by using Eqs. (15, 21)
\[\begin{split}\frac{\mathrm{d}E}{\mathrm{d}t}=&\Lambda \frac{\mathrm{d}\lambda}{\mathrm{d}t}+T\circ\frac{\mathrm{d}S}{\mathrm{d}t}\\ =&\left(\Lambda-T\frac{\partial\Lambda}{\partial T }\right)\frac{\mathrm{d}\lambda}{\mathrm{d}t}+C\circ\frac{\mathrm{d}T}{ \mathrm{d}t}\end{split} \tag{22}\]
where \(\circ\) indicates the Stratonovich integral, which enables us to use ordinary calculus. Due to the noise \(\xi\), the Stratonovich integral and the Ito integral are related by
\[C\circ\frac{\mathrm{d}T}{\mathrm{d}t}=C\frac{\mathrm{d}T}{\mathrm{d}t}+\frac{ \Gamma k_{\mathrm{B}}T_{\mathrm{r}}T}{C^{2}}\frac{\partial C}{\partial T}. \tag{23}\]
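For completeness (this intermediate step is spelled out here), Eq. (23) is just the standard Stratonovich-to-Ito correction: from Eq. (17), the temperature increment carries a noise of amplitude \(\sqrt{2\Gamma k_{\mathrm{B}}T_{\mathrm{r}}T}/C\) per unit Wiener increment, so \((\mathrm{d}T)^{2}=(2\Gamma k_{\mathrm{B}}T_{\mathrm{r}}T/C^{2})\mathrm{d}t\) and
\[C\circ\mathrm{d}T=C\,\mathrm{d}T+\frac{1}{2}\frac{\partial C}{\partial T}(\mathrm{d}T)^{2}=C\,\mathrm{d}T+\frac{\Gamma k_{\mathrm{B}}T_{\mathrm{r}}T}{C^{2}}\frac{\partial C}{\partial T}\mathrm{d}t,\]
which reproduces Eq. (23) after dividing by \(\mathrm{d}t\).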
Comparing Eq. (22) with Eq. (17) and transforming the Stratonovich integral into the Ito integral (Eq. 23), we finally obtain the stochastic differential equation for the system's temperature
\[\begin{split} C\frac{\mathrm{d}T}{\mathrm{d}t}=& T\frac{\partial\Lambda}{\partial T}\frac{\mathrm{d}\lambda}{ \mathrm{d}t}+\Gamma(T_{\mathrm{r}}-T)-\frac{\Gamma k_{\mathrm{B}}T_{\mathrm{r }}T}{C^{2}}\frac{\partial C}{\partial T}+\xi.\end{split} \tag{24}\]
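As a rough numerical illustration of the temperature SDE (a sketch added here; the parameter values, the fixed work parameter, and the constant heat capacity \(C=dNk_{\mathrm{B}}\) of the classical harmonic-trap gas are our illustrative assumptions, not the paper's), the following Python snippet integrates Eq. (24) with the Euler-Maruyama scheme and compares the stationary statistics with the equilibrium-fluctuation result \(\langle(T-T_{\mathrm{r}})^{2}\rangle=k_{\mathrm{B}}T_{\mathrm{r}}^{2}/C\) of Eq. (32) below.

```python
import numpy as np

# Euler-Maruyama integration of Eq. (24) with dlambda/dt = 0 and a
# temperature-independent heat capacity C = d*N*kB (classical ideal gas in a
# harmonic trap), so the dC/dT term drops out and the Ito SDE reads
#   dT = (Gamma/C)*(Tr - T) dt + sqrt(2*Gamma*kB*Tr*T)/C dW.
kB = 1.0
d, N = 3, 100               # dimension and particle number (illustrative values)
gamma, Tr = 1.0, 1.0        # damping coefficient and reservoir temperature
C = d * N * kB              # heat capacity at constant work parameter
Gamma = gamma * d * N * kB  # cooling rate, Gamma = gamma*d*N*kB

dt, nsteps = 1e-3, 1_000_000
rng = np.random.default_rng(0)

T = 1.2 * Tr                # start away from the reservoir temperature
traj = np.empty(nsteps)
for i in range(nsteps):
    drift = (Gamma / C) * (Tr - T) * dt
    noise = np.sqrt(2.0 * Gamma * kB * Tr * T * dt) / C * rng.standard_normal()
    T += drift + noise
    traj[i] = T

stat = traj[nsteps // 10:]  # discard the initial relaxation
print("mean temperature    :", stat.mean(), "(expected ~", Tr, ")")
print("temperature variance:", stat.var(), "(expected ~", kB * Tr**2 / C, ")")
```

For \(N=100\) and \(d=3\) the predicted relative temperature fluctuation is \(\sqrt{k_{\mathrm{B}}/C}\approx 6\%\), illustrating how the finite-size effect scales as \(N^{-1/2}\).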
## III Fokker-Planck equation for the system's temperature
The system's temperature fluctuates due to the noise term in Eq. (24). The Fokker-Planck equation for its probability distribution \(P(T,t)=\langle\delta(T-T(t))\rangle\) (\(\langle\cdots\rangle\) denotes the average over the noise \(\xi\)) is
\[\begin{split}\frac{\partial P}{\partial t}=&\frac{ \partial}{\partial T}\left[-\frac{T}{C}\frac{\partial\Lambda}{\partial T}\frac {\mathrm{d}\lambda}{\mathrm{d}t}P+\frac{\Gamma}{C}(T-T_{\mathrm{r}})P+\frac{ \Gamma k_{\mathrm{B}}T_{\mathrm{r}}}{C}\frac{\partial}{\partial T}\left( \frac{TP}{C}\right)\right]\\ =&\frac{\partial}{\partial T}\left[-\frac{T}{C} \frac{\partial\Lambda}{\partial T}\frac{\mathrm{d}\lambda}{\mathrm{d}t}P+ \frac{\Gamma T}{C^{2}}P\frac{\partial}{\partial T}\left(F+k_{\mathrm{B}}T_{ \mathrm{r}}\ln\frac{k_{\mathrm{B}}TP}{C}\right)\right].\end{split} \tag{25}\]
Here \(T\in[0,\infty)\), and we assume that \(P\) quickly goes to zero when \(T\to\infty\) or \(T\to 0\). The second equality in Eq. (25) makes the underlying thermodynamic structure explicit. If \(\lambda\) approaches a constant as \(t\to\infty\), then Eq. (25) determines a stationary solution
\[P_{\mathrm{s}}(T,\lambda)=\frac{C}{\tilde{\mathcal{Z}}k_{\mathrm{B}}T}\mathrm{ e}^{-\beta_{r}F}, \tag{26}\]
where \(\tilde{\mathcal{Z}}\equiv\int C(k_{\mathrm{B}}T)^{-1}\mathrm{e}^{\mathrm{S}/k _{\mathrm{B}}-\beta_{r}E}\mathrm{d}T\) denotes the generalized partition function of the system [24]. In the absence of the factor \(C/(\tilde{\mathcal{Z}}k_{\mathrm{B}}T)\), \(P_{\mathrm{s}}\) is actually the quasi-equilibrium state in a canonical system according to the theory of equilibrium fluctuation [19; 20; 21] and satisfies the large deviation principle.
Similar to the equilibrium free energy in statistical mechanics, we define
\[\mathcal{F}\equiv-k_{\mathrm{B}}T_{\mathrm{r}}\ln\tilde{\mathcal{Z}} \tag{27}\]
as the generalized free energy of the system. Then, we have the relation \(\mathcal{F}=\mathcal{E}-T_{\mathrm{r}}\mathcal{S}\), where
\[\mathcal{E}\equiv\int EP_{\mathrm{s}}\mathrm{d}T=-\frac{\partial\ln\tilde{ \mathcal{Z}}}{\partial\beta_{\mathrm{r}}} \tag{28}\]
is the mean internal energy of the system at the quasi-equilibrium state, and
\[\mathcal{S}\equiv\int\left(S-k_{\mathrm{B}}\ln\frac{k_{\mathrm{B}}TP_{ \mathrm{s}}}{C}\right)P_{\mathrm{s}}\mathrm{d}T=k_{\mathrm{B}}\beta_{ \mathrm{r}}(\mathcal{E}-\mathcal{F}) \tag{29}\]
is the mean entropy of the system at the quasi-equilibrium state. Here, the mean entropy is a sum of the mean Boltzmann entropy (the first term in the integral) and the contribution from the distribution of the system's temperature (the second term in the integral), the latter of which is consistent with the perspective of information theory [37; 38]. Also, we confirm the fundamental relation in thermodynamics
\[\mathrm{d}\mathcal{E}=T_{\mathrm{r}}\mathrm{d}\mathcal{S}+\tilde{\Lambda} \mathrm{d}\lambda, \tag{30}\]
and
\[\mathrm{d}\mathcal{F}=-\mathcal{S}\mathrm{d}T_{\mathrm{r}}+\tilde{\Lambda} \mathrm{d}\lambda, \tag{31}\]
where \(\tilde{\Lambda}\equiv-k_{\mathrm{B}}T_{\mathrm{r}}\partial\ln\tilde{\mathcal{Z }}/\partial\lambda\) denotes the generalized thermodynamic force conjugate to \(\lambda\) at the quasi-equilibrium state. We want to emphasize here that these generalized quantities \(\mathcal{F},\mathcal{E},\mathcal{S}\), which are extensive in the thermodynamic limit, are no longer extensive due to the finite-size effect of the system.
In the thermodynamic limit, we apply the Gaussian approximation of Eq. (26) (justified by the central limit theorem) as
\[P_{\mathrm{s}}(T,\lambda)\simeq\sqrt{\frac{C_{\mathrm{r}}}{2\pi k_{\mathrm{B}} T_{\mathrm{r}}^{2}}}\exp\left[-\frac{C_{\mathrm{r}}(T-T_{\mathrm{r}})^{2}}{2k_{ \mathrm{B}}T_{\mathrm{r}}^{2}}\right], \tag{32}\]
where \(C_{\mathrm{r}}\equiv C|_{T=T_{\mathrm{r}}}\). Eq. (32) approximates the system's temperature fluctuation at the equilibrium state. Its mean value \(T_{\mathrm{r}}\) and variance \(k_{\mathrm{B}}T_{\mathrm{r}}^{2}/C_{\mathrm{r}}\) are consistent with the theory of equilibrium fluctuation [19; 20; 21]. Then, substituting Eq. (32) into Eq. (27), we obtain
\[\mathcal{F}=F_{\mathrm{r}}-\frac{k_{\mathrm{B}}T_{\mathrm{r}}}{2}\ln\frac{2\pi C _{\mathrm{r}}}{k_{\mathrm{B}}}+O\left(\frac{1}{N}\right), \tag{33}\]
where \(F_{\mathrm{r}}\equiv F|_{T=T_{\mathrm{r}}}\). The first term on the RHS is the equilibrium free energy of the system at temperature \(T_{\mathrm{r}}\) and the second term on the RHS is the finite-size correction to it.
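As a numerical sanity check of Eqs. (27) and (33) (a sketch only; the two-level system of Appendix A is used as a concrete example, with the standard canonical expressions for its energy, entropy, and heat capacity, and arbitrary parameter values):

```python
import numpy as np

kB, Tr, lam, N = 1.0, 1.0, 1.0, 1000     # N two-level particles with energy spacing lam (levels 0, lam)

def energy(T):
    return N * lam / (np.exp(lam / (kB * T)) + 1.0)                     # internal energy E(T)

def entropy(T):
    return energy(T) / T + N * kB * np.logaddexp(0.0, -lam / (kB * T))  # Boltzmann entropy S(T)

def heat_cap(T):
    x = lam / (kB * T)
    return N * kB * x ** 2 * np.exp(x) / (np.exp(x) + 1.0) ** 2         # Schottky heat capacity C(T)

# Generalized partition function (Eq. 27) by direct quadrature, factoring out the large exponent.
T = np.linspace(0.5 * Tr, 2.0 * Tr, 100001)
log_intg = np.log(heat_cap(T) / (kB * T)) + entropy(T) / kB - energy(T) / (kB * Tr)
shift = log_intg.max()
Z_tilde = np.exp(log_intg - shift).sum() * (T[1] - T[0])
F_gen = -kB * Tr * (np.log(Z_tilde) + shift)                             # generalized free energy, Eq. (27)

# Finite-size expansion, Eq. (33): equilibrium free energy at Tr plus the log-heat-capacity correction.
F_r = -N * kB * Tr * np.logaddexp(0.0, -lam / (kB * Tr))
print(F_gen, F_r - 0.5 * kB * Tr * np.log(2.0 * np.pi * heat_cap(Tr) / kB))   # agree up to O(1/N)
```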
## IV Stochastic thermodynamics
In the spirit of stochastic thermodynamics [1; 2; 3], we are going to define stochastic thermodynamic quantities corresponding to Eqs. (17, 22, 25) in this section. Firstly, a trajectory of the system's temperature is defined as \(T_{[0,\tau]}:=\{T(t)|t\in[0,\tau]\}\). According to Eqs. (17, 22), the stochastic work \(w[T_{[0,t]}]\) and the stochastic heat \(q[T_{[0,t]}]\) are respectively given by
\[w[T_{[0,\tau]}]=\int_{0}^{\tau}\Lambda\mathrm{d}\lambda, \tag{34}\]
and
\[\begin{split} q[T_{[0,\tau]}]=&\int_{0}^{\tau}T \circ\mathrm{d}S\\ =&\int_{0}^{\tau}\left(-T\frac{\partial\Lambda}{ \partial T}\mathrm{d}\lambda+C\circ\mathrm{d}T\right)\\ =&\int_{0}^{\tau}\left[\Gamma\left(T_{\mathrm{r}}-T \right)+\xi\right]\mathrm{d}t.\end{split} \tag{35}\]
Thus, we have the conservation law of energy
\[E(\tau)-E(0)=w[T_{[0,\tau]}]+q[T_{[0,\tau]}] \tag{36}\]
at the mesoscopic level.
Corresponding to Eq. (25), the stochastic entropy \(s(t)\) and stochastic free energy \(f(t)\) are respectively given by (also see Ref. [24])
\[\begin{split} s(t)=& S(t)-k_{\mathrm{B}}\ln\frac{k_ {\mathrm{B}}T(t)P(T(t),t)}{C(t)}\\ =& k_{\mathrm{B}}\left[\beta_{\mathrm{r}}E(t)-\beta_ {\mathrm{r}}\mathcal{F}(t)-\ln\frac{P(T(t),t)}{P_{\mathrm{s}}(T(t),\lambda(t) )}\right],\end{split} \tag{37}\]
and
\[\begin{split} f(t)=& E(t)-T_{\mathrm{r}}s(t)\\ =& F(t)+k_{\mathrm{B}}T_{\mathrm{r}}\ln\frac{k_{ \mathrm{B}}T(t)P(T(t),t)}{C(t)}\\ =&\mathcal{F}(t)+k_{\mathrm{B}}T_{\mathrm{r}}\ln\frac {P(T(t),t)}{P_{\mathrm{s}}(T(t),\lambda(t))}.\end{split} \tag{38}\]
Here, \(P(T,t)\) is the solution of Eq. (25), \(S\) (\(F\)) denotes the Boltzmann entropy (nonequilibrium free energy) of the system, and the term \(-k_{\mathrm{B}}\ln[k_{\mathrm{B}}TPC^{-1}]\) (\(k_{\mathrm{B}}T_{\mathrm{r}}\ln[k_{\mathrm{B}}TPC^{-1}]\)) denotes the finite-size correction to \(S\) (\(F\)) from the distribution of the system's temperature. The term \(k_{\mathrm{B}}\beta_{\mathrm{r}}(E-\mathcal{F})\) corresponds to the mean entropy at the quasi-equilibrium state \(\mathcal{S}\) in Eq. (29). And the term \(-k_{\mathrm{B}}\ln(P/P_{\mathrm{s}})\) (after taking the average over \(P\)) corresponds to the relative entropy, which measures how far the temperature distribution is from the quasi-equilibrium state. Consequently, we have \(\mathcal{E}=\langle E\rangle|_{P=P_{s}}\), \(\mathcal{S}=\langle s\rangle|_{P=P_{s}}\), and \(\mathcal{F}=\langle f\rangle|_{P=P_{s}}\). It is worth mentioning that the stochastic free energy \(f\), whose difference from the generalized free energy \(\mathcal{F}\) measures how far the system departs from the quasi-equilibrium state, has not been reported in previous papers (except for Ref. [24]).
Moreover, the stochastic total entropy production \(s_{\mathrm{p}}[T_{[0,\tau]}]\) reads
\[s_{\mathrm{p}}[T_{[0,\tau]}]= s(\tau)-s(0)+s_{\mathrm{r}}[T_{[0,\tau]}] \tag{39}\]
where \(s_{\mathrm{r}}[T_{[0,\tau]}]=-q[T_{[0,\tau]}]/T_{\mathrm{r}}\) is the stochastic entropy change of the reservoir.
Then, we prove the fluctuation theorems based on these stochastic quantities. According to Eq. (24), the probability distribution of the trajectory \(T_{[0,\tau]}\) conditioned on a fixed initial temperature \(T_{0}\equiv T(0)\) reads [39; 40; 41]
\[P[T_{[0,\tau]}|T_{0}]=\mathrm{e}^{-\tilde{\mathcal{S}}[T_{[0,\tau]}]}, \tag{40}\]
where the integral measure is \(\mathcal{D}T\equiv\prod_{i=1}^{N}\mathrm{d}T_{i}\sqrt{C_{i}^{2}/(2\pi T_{i}^{ *}\Delta t)}\), with the Stratonovich discretization \(0=t_{0}<t_{1}<\cdots<t_{N-1}<t_{N}=\tau\), \(\Delta t\equiv t_{i}-t_{i-1}\), \(T_{i}\equiv T(t_{i})\), \(T_{i}^{*}\equiv(T_{i}+T_{i-1})/2\), \(\lambda_{i}\equiv\lambda(t_{i})\), \(\lambda_{i}^{*}\equiv(\lambda_{i}+\lambda_{i-1})/2\), \(C_{i}\equiv C|_{T=T_{i}^{*},\lambda=\lambda_{i}^{*}}\). Here, the action \(\tilde{\mathcal{S}}\) as a generalized Onsager-Machlup functional is given by
\[\begin{split}\tilde{\mathcal{S}}[T_{[0,\tau]}]=&\frac {1}{4\Gamma k_{\mathrm{B}}T_{\mathrm{r}}}\int_{0}^{\tau}\left[C\frac{ \mathrm{d}T}{\mathrm{d}t}-T\frac{\partial\Lambda}{\partial T}\frac{\mathrm{ d}\lambda}{\mathrm{d}t}-\Gamma(T_{\mathrm{r}}-T)-\frac{\Gamma k_{ \mathrm{B}}T_{\mathrm{r}}T}{C}\frac{\partial}{\partial T}\ln\frac{C}{T} \right]^{2}\frac{\mathrm{d}t}{T}\\ &+\frac{1}{2}\int_{0}^{\tau}\frac{\partial}{\partial T}\left[ \frac{T}{C}\frac{\partial\Lambda}{\partial T}\frac{\mathrm{d}\lambda}{ \mathrm{d}t}+\frac{\Gamma}{C}(T_{\mathrm{r}}-T)-\frac{\Gamma k_{\mathrm{B}}T_{ \mathrm{r}}}{2C^{2}}\right]\mathrm{d}t.\end{split} \tag{41}\]
In Eq. (41), we have also chosen the Stratonovich discretization. Such a choice ensures that the time reversal of \(P[T_{[0,\tau]}|T_{0}]\) is also under the Stratonovich discretization [39; 40].
To proceed, let \(P^{\dagger}[T_{[0,\tau]}^{\dagger}|T_{0}^{\dagger}]\) denote the conditional probability distribution of the reverse trajectory \(T_{[0,\tau]}^{\dagger}:=\{T(\tau-t)|t\in[0,\tau]\}\) with another fixed initial temperature \(T_{0}^{\dagger}\equiv T^{\dagger}(0)\) and a reverse protocol \(\lambda^{\dagger}(t):=\lambda(\tau-t)\) (the overscript \(\dagger\) indicates the reverse trajectory). It follows from Eqs. (40, 41) that
\[\begin{split} P^{\dagger}[T_{[0,\tau]}^{\dagger}|T_{0}^{\dagger}]=&{\rm e}^{-\tilde{\mathcal{S}}[T_{[0,\tau]}^{\dagger}]}\\ &={\rm e}^{-\tilde{\mathcal{S}}[T_{[0,\tau]}]}\Big{|}_{\{\frac{\mathrm{d}T}{\mathrm{d}t},\frac{\mathrm{d}\lambda}{\mathrm{d}t}\}\to\{-\frac{\mathrm{d}T}{\mathrm{d}t},-\frac{\mathrm{d}\lambda}{\mathrm{d}t}\}}\,,\end{split} \tag{42}\]
and thus the detailed fluctuation theorem is obtained as
\[\ln\frac{P[T_{[0,\tau]}|T_{0}]}{P^{\dagger}[T_{[0,\tau]}^{\dagger}|T_{0}^{ \dagger}]}=\ln\frac{P(T_{\tau},\tau)}{P(T_{0},0)}+\frac{s_{\mathrm{p}}[T_{[0, \tau]}]}{k_{\mathrm{B}}}, \tag{43}\]
where \(T_{\tau}\equiv T(\tau)\). By adding an arbitrary normalized distribution at the initial time of the reverse process \(P^{\prime}(T_{0}^{\dagger},0)\) and noticing \(\mathcal{D}T=\mathcal{D}T^{\dagger}\), we obtain the integral fluctuation theorem as
\[\left\langle\frac{P^{\prime}(T_{0}^{\dagger},0)}{P(T_{\tau},\tau)}{\rm e}^{-\frac{s_{\mathrm{p}}}{k_{\mathrm{B}}}}\right\rangle=1. \tag{44}\]
Such an equality is formally consistent with the integral fluctuation theorems in previous studies [1; 2]. For a choice of \(P^{\prime}(T_{0}^{\dagger},0)=P(T_{\tau},\tau)\), we obtain the integral fluctuation theorem for total entropy production [1; 2; 24]
\[\left\langle e^{-s_{\mathrm{p}}/k_{\mathrm{B}}}\right\rangle=1. \tag{45}\]
As a corollary, the second law \(\left\langle s_{\mathrm{p}}\right\rangle\geq 0\) follows from the fluctuation theorem by using Jensen's inequality.
When both \(P(T_{0},0),P^{\prime}(T_{0}^{\dagger},0)\) are stationary solutions of the Fokker-Planck equation (the quasi-equilibrium state in Eq. (26)), i. e., \(P(T_{0},0)=P_{\mathrm{s}}(T_{0},\lambda_{0}),P^{\prime}(T_{0}^{\dagger},0)=P_{\mathrm{s}}(T_{\tau},\lambda_{\tau})\), we obtain the generalized Jarzynski equality [24]
\[\left\langle{\rm e}^{-\beta_{\mathrm{r}}w}\right\rangle={\rm e}^{-\beta_{ \mathrm{r}}\Delta\mathcal{F}}, \tag{46}\]
and the generalized principle of maximum work \(\left\langle w\right\rangle\geq\Delta\mathcal{F}\) by using Jensen's inequality [24], where \(\Delta A\equiv A(\tau)-A(0)\) for some time-dependent function \(A\). In the thermodynamic limit, it follows from Eq. (33) that
\[\left\langle{\rm e}^{-\beta_{\mathrm{r}}w}\right\rangle=e^{-\beta_{\mathrm{ r}}\Delta F_{\mathrm{r}}}\sqrt{\frac{C_{\mathrm{r}}(\lambda_{\tau})}{C_{ \mathrm{r}}(\lambda_{0})}}+O\left(\frac{1}{N}\right), \tag{47}\]
and
\[\left\langle w\right\rangle\geq\Delta F_{\mathrm{r}}-\frac{k_{\mathrm{B}}T_{ \mathrm{r}}}{2}\ln\left[\frac{C_{\mathrm{r}}(\lambda_{\tau})}{C_{\mathrm{r}}( \lambda_{0})}\right]+O\left(\frac{1}{N}\right). \tag{48}\]
Here, the ratio of the heat capacities is a finite-size correction to the Jarzynski equality and the principle of maximum work. Eq. (48) indicates that when \(C_{\mathrm{r}}(\lambda_{\tau})>C_{\mathrm{r}}(\lambda_{0})\), the principle of maximum work may be violated by an amount on the order of \(N^{-1}\). We further define the following quantity to characterize such a correction
\[\phi\equiv\left|\frac{\ln\left[C_{\mathrm{r}}(\lambda_{\tau})/C_{\mathrm{r}}( \lambda_{0})\right]}{2\beta_{\mathrm{r}}\Delta F_{\mathrm{r}}}\right|. \tag{49}\]
For example, if the system is specified as \(N=1000\) two-level particles with energy spacing \(\lambda\) (see Appendix A for details), we plot \(\phi\) as a function of temperature \(T_{\mathrm{r}}\) in Fig. 1 for different \(\lambda_{\tau}/\lambda_{0}\). In this figure, we see that \(\phi\) increases significantly as the temperature decreases. It is hence possible in principle to observe the finite-size correction to the Jarzynski equality on some experimental platforms. In addition, for systems with quantum phase transitions [42; 43], the dependence of the heat capacity on parameters near the critical point is remarkable. In such cases, the finite-size correction will be particularly important for the Jarzynski equality.
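The quantity \(\phi\) is straightforward to evaluate numerically; the sketch below does so for this two-level example, assuming the standard canonical free energy and Schottky heat capacity of \(N\) independent two-level particles (Appendix A is not reproduced here) and arbitrary illustrative values of \(\lambda_{0}\) and \(\lambda_{\tau}\).

```python
import numpy as np

kB, N = 1.0, 1000
lam0, lam_tau = 1.0, 2.0                 # illustrative initial and final energy spacings

def free_energy(lam, Tr):
    # Canonical free energy of N independent two-level particles (levels 0 and lam).
    return -N * kB * Tr * np.logaddexp(0.0, -lam / (kB * Tr))

def heat_capacity(lam, Tr):
    # Schottky heat capacity of the same system.
    x = lam / (kB * Tr)
    return N * kB * x ** 2 * np.exp(x) / (np.exp(x) + 1.0) ** 2

Tr = np.linspace(0.2, 2.0, 50)
dF = free_energy(lam_tau, Tr) - free_energy(lam0, Tr)
phi = np.abs(np.log(heat_capacity(lam_tau, Tr) / heat_capacity(lam0, Tr)) / (2.0 * dF / (kB * Tr)))
print(phi[0], phi[-1])                   # phi grows rapidly as Tr decreases, cf. Fig. 1
```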
## V Fluctuating Carnot cycle
As an application of our theory, we study a fluctuating Carnot cycle with a finite-size working substance by using Eq. (24). As illustrated in Fig. 2, the Carnot cycle consists of four processes: \(1\to 2\), adiabatic compression (\(\gamma=0\)); \(2\to 3\), isothermal expansion (hot reservoir's temperature \(T_{\mathrm{h}}\)); \(3\to 4\), adiabatic expansion (\(\gamma=0\)); \(4\to 1\), isothermal compression (cold reservoir's temperature \(T_{\mathrm{c}}\)). Let \(T_{n},n=1,\cdots,4\) denote the corresponding temperature of the working substance at state \(n\). Then, we have \(\left\langle T_{1}\right\rangle=\left\langle T_{4}\right\rangle=T_{\mathrm{c}}\) and \(\left\langle T_{2}\right\rangle=\left\langle T_{3}\right\rangle=T_{\mathrm{h}}\). Let \(\lambda_{n},E_{n},C_{n},S_{n},s_{n}\), \(n=1,\cdots,4\) denote the corresponding work parameter, internal energy, heat capacity, Boltzmann entropy, and stochastic entropy of the working substance, respectively. By using Eq. (25) with \(\gamma=0\), it is straightforward to prove that in the adiabatic processes, both the average Boltzmann entropy and the stochastic entropy are constant, i. e.,
\[\langle S_{1}\rangle=\langle S_{2}\rangle,\quad\langle S_{3}\rangle=\langle S_{4}\rangle, \tag{50}\]
and
\[\langle s_{1}\rangle=\langle s_{2}\rangle,\quad\langle s_{3}\rangle=\langle s_{4}\rangle. \tag{51}\]
Then, let \(P_{n},P_{\text{sn}},\mathcal{F}_{n}\), \(n=1,\cdots,4\) denote the corresponding temperature distribution, quasi-equilibrium state, and generalized free energy of the working substance, respectively. In the quasi-static limit, \(\mathrm{d}\lambda/\mathrm{d}t\to 0\) and the working substance is in a quasi-equilibrium state with \(T_{\text{r}}=T_{\text{h}}\) (\(T_{\text{r}}=T_{\text{c}}\)) all the time in the isothermal expansion (compression) process according to Eq. (25). Meanwhile, according to Appendix B, the working substance at the end of the adiabatic processes is generally not in a quasi-equilibrium state anymore. That is to say, \(P_{1}=P_{\text{s1}},P_{3}=P_{\text{s3}}\), and \(P_{2}\neq P_{\text{s2}},P_{4}\neq P_{\text{s4}}\). Such a result reflects the fact that from state \(2^{-}\to 2^{+}\) (\(4^{-}\to 4^{+}\)), the working substance has quickly thermalized during a vanishingly small time scale \(\gamma^{-1}\), where the average temperature of the working substance remains the same but the variance of the temperature of the working substance changes. Consequently, no work is done during this vanishingly small time and finite irreversible entropy production occurs due to the contribution from the relative entropy (see Eqs. 37-39), which is a finite-size effect first reported in Ref. [44], to the best of our knowledge.
In Fig. 2, we illustrate the finite-size effects of the fluctuating Carnot cycle in terms of the temperature and entropy of the working substance. The end-to-end solid and dashed lines represent their mean values, while the shaded regions signify the fluctuations attributable to finite-size effects. We accentuate the irreversible entropy production depicted within the two cuboids, which diminishes the average efficiency of the Carnot cycle.
In the two isothermal processes, the variances of the input work both vanish (see the example in Ref. [45]). Therefore, the input work in the two isothermal processes is a constant, i. e., \(\mathcal{F}_{3}-\mathcal{F}_{2}+\mathcal{F}_{1}-\mathcal{F}_{4}\) corresponding to the generalized principle of maximum work. In the two adiabatic processes, there is no heat transfer and the input work is equal to the internal energy change according to the conservation law of energy, i. e., \(E_{2}-E_{1}+E_{4}-E_{3}\). Thus using Eq. (37), the total input work reads
\[\begin{split} w_{\text{in}}=& E_{2}-E_{1}+E_{4}-E_{3 }+\mathcal{F}_{3}-\mathcal{F}_{2}+\mathcal{F}_{1}-\mathcal{F}_{4}\\ =& T_{\text{h}}\left(s_{2}-s_{3}+k_{\text{B}}\ln \frac{P_{2}}{P_{\text{s2}}}\right)+T_{\text{c}}\left(s_{4}-s_{1}+k_{\text{B}} \ln\frac{P_{4}}{P_{\text{s4}}}\right).\end{split} \tag{52}\]
Accordingly, the absorbed heat from the hot reservoir reads
\[q_{\text{h}}=T_{\text{h}}\left(s_{3}-s_{2}-k_{\text{B}}\ln\frac{P_{2}}{P_{ \text{s2}}}\right), \tag{53}\]
where \(s_{1},s_{2}\) are independent of \(s_{3},s_{4}\) due to the thermalization in the two isothermal processes. Note that the connections between \(s_{1},s_{2}\) or \(s_{3},s_{4}\) satisfy the energy-conservation equation in the adiabatic process (Eq. 24 with \(\gamma=0\)). It follows from Eq. (39) that the entropy production of the cycle is (also see Refs. [46; 47])
\[\begin{split} s_{\text{p}}=&\frac{q_{\text{h}}+w_{ \text{in}}}{T_{\text{c}}}-\frac{q_{\text{h}}}{T_{\text{h}}}\\ =& s_{4}-s_{1}-s_{3}+s_{2}+k_{\text{B}}\ln\frac{P_{ 4}}{P_{\text{s4}}}+k_{\text{B}}\ln\frac{P_{2}}{P_{\text{s2}}}.\end{split} \tag{54}\]
To study the efficiency of the cycle, we adopt the definition of the stochastic efficiency in Ref. [48]
\[\eta=-\frac{w_{\text{in}}}{\langle q_{\text{h}}\rangle}, \tag{55}\]
which is called the scaled fluctuating efficiency. The moments of the efficiency always exist, and its mean value is equal to the conventional efficiency of a cycle. Using Eqs. (51-53), the average efficiency of the cycle is
\[\begin{split}\langle\eta\rangle=& 1-\frac{T_{\text{c}} \left[\Delta\overline{s}+D(P_{4}||P_{\text{s4}})\right]}{T_{\text{h}}\left[ \Delta\overline{s}-D(P_{2}||P_{\text{s2}})\right]}\\ =&\eta_{\text{C}}-(1-\eta_{\text{C}})\frac{\langle s _{\text{p}}\rangle}{\Delta\overline{s}}+O\left(\frac{1}{N^{2}}\right).\end{split} \tag{56}\]
Here, \(\eta_{\text{C}}\equiv 1-T_{\text{c}}/T_{\text{h}}\) is the Carnot efficiency, \(\Delta\overline{s}\equiv\langle s_{3}\rangle-\langle s_{2}\rangle=\langle s_{4 }\rangle-\langle s_{1}\rangle\) is the average entropy change of the working substance in the isothermal expansion process,
\[D(P||P_{\text{s}})\equiv\int P\ln\frac{P}{P_{\text{s}}}\mathrm{d}T, \tag{57}\]
is the relative entropy, and \(\langle s_{\text{p}}\rangle=D(P_{2}||P_{\text{s2}})+D(P_{4}||P_{\text{s4}})\), which follows from Eqs. (51, 54), is the total average entropy production of the cycle.
Figure 2: The Carnot cycle in the temperature-entropy diagram. The shaded regions represent the temperature fluctuation due to the finite-size effect of the working substance.
Since \(\Delta\overline{s}>0\) and \(\langle s_{\rm p}\rangle\geq 0\), we conclude that the irreversible entropy production due to the temperature fluctuation of the working substance diminishes the average efficiency of the cycle. Consequently, even in the quasi-static limit, the Carnot efficiency remains unattainable. A similar result is also shown in Ref. [44].
In the thermodynamic limit, we are only concerned with the mean value \(\langle A\rangle\) and the variance \(\sigma_{A}^{2}=\left\langle(A-\langle A\rangle)^{2}\right\rangle\) for some \(T\)-dependent function \(A\). As a result, the temperature distribution of the working substance is approximately a Gaussian distribution, i. e., \(T_{n}\sim\mathcal{N}(\langle T_{n}\rangle,\sigma_{T_{n}}^{2})\) for \(n=1,\cdots,4\). Therefore, we find
\[\Delta\overline{s}=S_{\rm 3h}-S_{\rm 2h}+O\left(\frac{k_{\rm B}}{N}\right)=S_{ \rm 4c}-S_{\rm 1c}+O\left(\frac{k_{\rm B}}{N}\right), \tag{58}\]
and
\[\langle s_{\rm p}\rangle=\frac{1}{2}\left[\ln(\kappa\kappa^{\prime})+\frac{1} {\kappa}+\frac{1}{\kappa^{\prime}}-2\right], \tag{59}\]
where \(S_{\rm nc(h)}\equiv S_{n}|_{T=T_{\rm c(h)}}\) for \(n=1,\cdots,4\), \(\kappa\equiv C(T_{\rm c},\lambda_{2})/C(T_{\rm h},\lambda_{1})\), and \(\kappa^{\prime}\equiv C(T_{\rm h},\lambda_{4})/C(T_{\rm c},\lambda_{3})\) (see Appendix B).
As an example, we specify the working substance as the \(N\)-particle system studied in Ref. [44], with the following Hamiltonian
\[H=\sum_{i=1}^{dN}\left(\frac{p_{i}^{2}}{2m}+a\left|\frac{x_{i}}{L}\right|^{ \lambda}\right)+V, \tag{60}\]
where \(m\) denotes the mass of the \(N\) particles, \(a\) the characteristic energy of the system, \(L\) the characteristic length of the system, \(\lambda\) the work parameter, and \(V\) the interactions among these particles (which can be neglected in comparison with the kinetic and potential energies but is strong enough to make the particles ergodic). Making use of the internal energy of the system at the equilibrium state (Eq. 15 in Ref. [44]), we obtain the total average entropy production of the cycle from Eq. (59) as
\[\langle s_{\rm p}\rangle=\frac{1}{2}\ln\left[\frac{\lambda_{1}(\lambda_{2}+2)\lambda_{3}(\lambda_{4}+2)}{(\lambda_{1}+2)\lambda_{2}(\lambda_{3}+2)\lambda_{4}}\right]+\frac{(\lambda_{2}-\lambda_{1})}{\lambda_{1}(\lambda_{2}+2)}+\frac{(\lambda_{4}-\lambda_{3})}{\lambda_{3}(\lambda_{4}+2)}, \tag{61}\]
with \(\kappa=\lambda_{1}(\lambda_{2}+2)/[(\lambda_{1}+2)\lambda_{2}]\), and \(\kappa^{\prime}=\lambda_{3}(\lambda_{4}+2)/[(\lambda_{3}+2)\lambda_{4}]\). Such a result is consistent with the leading order of the expression of the relative entropy shown in Eq. (7) of Ref. [44] in the large-\(N\) limit, while the latter was previously obtained through a complicated calculation. Such consistency indicates that our formalism is universal and is able to exactly capture the entropy production of the cycle due to the finite-\(N\) effects of the working substance.
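The equivalence of Eq. (61) with the general expression Eq. (59) can be verified directly; a short numerical check (with arbitrary illustrative work parameters) is given below.

```python
import numpy as np

def kappa(l_a, l_b):
    # kappa = l_a*(l_b+2) / ((l_a+2)*l_b), the heat-capacity ratio quoted after Eq. (61).
    return l_a * (l_b + 2.0) / ((l_a + 2.0) * l_b)

def sp_eq59(k, kp):
    # Total average entropy production of the cycle, Eq. (59), in units of kB.
    return 0.5 * (np.log(k * kp) + 1.0 / k + 1.0 / kp - 2.0)

def sp_eq61(l1, l2, l3, l4):
    # The same quantity written directly in terms of the work parameters, Eq. (61).
    return (0.5 * np.log(l1 * (l2 + 2) * l3 * (l4 + 2) / ((l1 + 2) * l2 * (l3 + 2) * l4))
            + (l2 - l1) / (l1 * (l2 + 2)) + (l4 - l3) / (l3 * (l4 + 2)))

l1, l2, l3, l4 = 2.0, 1.0, 1.0, 2.0          # illustrative work parameters of the cycle
print(sp_eq59(kappa(l1, l2), kappa(l3, l4)), sp_eq61(l1, l2, l3, l4))   # identical values
```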
Substituting Eqs. (58, 59) into Eq. (56), the average efficiency is obtained as
\[\langle\eta\rangle=\eta_{\rm C}-(1-\eta_{\rm C})\frac{\kappa\kappa^{\prime} \ln(\kappa\kappa^{\prime})+\kappa+\kappa^{\prime}-2\kappa\kappa^{\prime}}{2 \kappa\kappa^{\prime}(S_{\rm 3h}-S_{\rm 2h})}+O\left(\frac{1}{N^{2}}\right). \tag{62}\]
In particular, for a constant heat capacity of the working substance (such as an ideal gas), \(\kappa=\kappa^{\prime}=1\) and the Carnot efficiency is recovered, \(\langle\eta\rangle=\eta_{\rm C}\), to the order of \(N^{-1}\).
Furthermore, we consider the variance of the efficiency \(\sigma_{\eta}^{2}\). It follows from Eq. (55) that \(\sigma_{\eta}^{2}=\sigma_{w_{\rm in}}^{2}/(q_{\rm h})^{2}\). Using the linear relation between the initial and final internal energy in the adiabatic process (Appendix B), we obtain the correlation functions
\[\langle E_{1}E_{2}\rangle-\langle E_{1}\rangle\langle E_{2}\rangle=k_{\rm B}T _{c}T_{\rm h}C_{\rm 1c}+O\left(k_{\rm B}^{2}T^{2}\right), \tag{63}\]
\[\langle E_{3}E_{4}\rangle-\langle E_{3}\rangle\langle E_{4}\rangle=k_{\rm B}T _{c}T_{\rm h}C_{\rm 3h}+O\left(k_{\rm B}^{2}T^{2}\right). \tag{64}\]
Combining Eqs. (52, 53, 58), one finds
\[\sigma_{\eta}^{2}= \frac{\sigma_{E_{2}-E_{1}}^{2}+\sigma_{E_{4}-E_{3}}^{2}}{\langle q _{\rm h}\rangle^{2}} \tag{65}\] \[= \frac{k_{\rm B}(T_{\rm h}-T_{\rm c})^{2}(C_{\rm 1c}+C_{\rm 3h})}{T_{\rm h}^{2}(S_{ \rm 3h}-S_{\rm 2h})^{2}}+O\left(\frac{1}{N^{2}}\right).\]
It is worth mentioning here that such a result also appeared in Ref. [49] for working substances whose spectra possess a scaling property, while our result is not limited to this case. That is to say, our theory is independent of the details of the working substance and is thus universal.
## VI Conclusion and outlook
In this paper, we have studied the temperature fluctuations of a finite-size system. Initially, we derive a stochastic differential equation to describe the evolution of the system's temperature during an isothermal process. The noise term in the equation accounts for finite-size effects resulting from random energy exchanges between the system and its reservoir. Consequently, the system's stationary state represents a quasi-equilibrium state according to the theory of equilibrium fluctuation, in which the generalized thermodynamic quantities (which are extensive in the thermodynamic limit) deviate from being extensive due to finite-size effects.
Furthermore, we develop stochastic thermodynamics based on the temperature fluctuations of the system and substantiate the fluctuation theorems. The obtained results provide finite-size corrections to the Jarzynski equality, which are quantified by the square root of the ratio of the system's heat capacities at the final and initial stages of the driving. Additionally, we find that the principle of maximum work can be violated by an amount on the order of \(N^{-1}\).
To demonstrate the impact of temperature fluctuations at the mesoscopic level in typical thermodynamic processes, we analyze the efficiency of a finite-size heat engine operating in a quasi-static Carnot cycle. Our findings reveal that even under the quasi-static limit, the Carnot efficiency \(\eta_{C}\) remains unattainable due to the irreversible entropy production arising from temperature
fluctuations of the working substance. For some specific models, our general results of the mean value and variance of the efficiency align with previous findings in Refs. [44] and [49], respectively.
In closing, several theoretical predictions made in this work can potentially be verified on a mature experimental platform [50]. As possible extensions of our current results, the finite-time performance [23; 48; 51] and optimization [52; 33; 53] of the proposed fluctuating Carnot cycle are worth further investigation. Moreover, considering both the temperature fluctuations of the working substance and the finiteness of the heat reservoirs [35; 36; 54; 55; 56], finding the power-efficiency trade-off relation [51; 57; 58] and relevant optimizations of the heat engine present another challenging task with implications for mesoscopic heat engines.
###### Acknowledgements.
Y. H. Ma thanks the National Natural Science Foundation of China for support under grant No. 12305037.
|
We study the temperature fluctuations of a mesoscopic N-body system undergoing nonequilibrium processes from the viewpoint of stochastic thermodynamics.
The evolution of the system's temperature in a nonequilibrium process is described by a stochastic differential equation,
in which the noise accounting for finite-size effects arises from the random energy transfer between the system and the reservoir.
As a consequence of these fluctuations, the generalized thermodynamic quantities, which are extensive in the thermodynamic limit, deviate from being extensive.
We also derive finite-size corrections to the Jarzynski equality
and provide insight into how the heat capacity affects these corrections.
Furthermore, the results suggest a possible violation of the principle of maximum work by an amount proportional to 1/N.
We also evaluate the influence of temperature fluctuations on a finite-size quasi-static Carnot engine,
where the irreversible entropy production due to temperature fluctuations diminishes the average efficiency of the cycle. |
2309.04584 | Analysis of active optics correction for a large honeycomb mirror | In the development of space-based large telescope systems, having the
capability to perform active optics correction allows correcting wavefront
aberrations caused by thermal perturbations so as to achieve
diffraction-limited performance with relaxed stability requirements. We present
a method of active optics correction used for current ground-based telescopes
and simulate its effectiveness for a large honeycomb primary mirror in space.
We use a finite-element model of the telescope to predict misalignments of the
optics and primary mirror surface errors due to thermal gradients. These
predicted surface error data are plugged into a Zemax ray trace analysis to
produce wavefront error maps at the image plane. For our analysis, we assume
that tilt, focus and coma in the wavefront error are corrected by adjusting the
pointing of the telescope and moving the secondary mirror. Remaining mid- to
high-order errors are corrected through physically bending the primary mirror
with actuators. The influences of individual actuators are combined to form
bending modes that increase in stiffness from low-order to high-order
correction. The number of modes used is a variable that determines the accuracy
of correction and magnitude of forces. We explore the degree of correction that
can be made within limits on actuator force capacity and stress in the mirror.
While remaining within these physical limits, we are able to demonstrate sub-25
nm RMS surface error over 30 hours of simulated data. The results from this
simulation will be part of an end-to-end simulation of telescope optical
performance that includes dynamic perturbations, wavefront sensing, and active
control of alignment and mirror shape with realistic actuator performance. | Solvay Blomquist, Hubert Martin, Hyukmo Kang, Rebecca Whitsitt, Kevin Derby, Heejoo Choi, Ewan S. Douglas, Daewook Kim | 2023-09-08T20:27:19 | http://arxiv.org/abs/2309.04584v1 | # Analysis of active optics correction for a large honeycomb mirror
###### Abstract
In the development of space-based large telescope systems, having the capability to perform active optics correction allows correcting wavefront aberrations caused by thermal perturbations so as to achieve diffraction-limited performance with relaxed stability requirements. We present a method of active optics correction used for current ground-based telescopes and simulate its effectiveness for a large honeycomb primary mirror in space. We use a finite-element model of the telescope to predict misalignments of the optics and primary mirror surface errors due to thermal gradients. These predicted surface error data are plugged into a Zemax ray trace analysis to produce wavefront error maps at the image plane. For our analysis, we assume that tilt, focus and coma in the wavefront error are corrected by adjusting the pointing of the telescope and moving the secondary mirror. Remaining mid- to high-order errors are corrected through physically bending the primary mirror with actuators. The influences of individual actuators are combined to form bending modes that increase in stiffness from low-order to high-order correction. The number of modes used is a variable that determines the accuracy of correction and magnitude of forces. We explore the degree of correction that can be made within limits on actuator force capacity and stress in the mirror. While remaining within these physical limits, we are able to demonstrate sub-25 nm RMS surface error over 30 hours of simulated data. The results from this simulation will be part of an end-to-end simulation of telescope optical performance that includes dynamic perturbations, wavefront sensing, and active control of alignment and mirror shape with realistic actuator performance.
active optics, large optics, space telescope, thermal gradients, diffraction-limited imaging, telescope simulation, wavefront error, bending mode

Further author information: (Send correspondence to H.M.)
H.M. E-mail: [email protected]
## 1 Introduction
The scope of this study is to demonstrate control of the shape of a large honeycomb mirror as it is perturbed by changing thermal gradients in a dynamic environment. The method of active optics correction has been developed and exercised for other honeycomb mirrors fabricated at the Richard F. Caris Mirror Lab (RFCML) for ground-based telescopes such as the Large Binocular Telescope (LBT) and the MMT. The mirror shape is controlled by actuators that apply force to the rear surface of the mirror. A modal correction is applied, using a limited number of bending modes, or combinations of the individual actuator forces, to alter the shape of the mirror surface [1, 2, 3, 4]. The bending modes are ordered from low to high spatial frequency correction. Using more bending modes reduces the residual surface error but increases the range of actuator forces. Using more modes also increases the uncertainty in actual performance relative to simulated performance. We generally want to use enough bending modes to meet the requirements for wavefront error, and not significantly more. The simulations are valuable in determining a reasonable number of modes for the control system.
The input wavefront errors for this simulation are representative of the errors that might be due to thermal evolution of a spacecraft on orbit. These are a single statistical realization of a hypothetical observatory [5], and serve as a proof of concept. Future work will more fully explore the sensitivity of the bending modes to different wavefront time series (e.g. as done in Lyon and Clampin [6]). In this study, we consider the time series of input wavefront error and assume ideal, near real-time measurement of the wavefront error. Residual error therefore represents the limitations of the active control of mirror shape with a given number of bending modes and is not limited by sensor noise or phase retrieval algorithms used [7].
## 2 Active Support and Correction System
Active optics generally includes control of alignment of the secondary mirror (M2) and control of the shape of the primary mirror (M1), based on feedback from a wavefront sensor. In this paper, we focus on controlling the shape of the primary mirror. This study is part of a larger simulation of the full active optics system for a large telescope, including wavefront sensing, alignment of M2 and bending of M1 [7]. For the analysis presented in this proceeding, we assume tilt has been corrected by pointing the telescope, and focus and coma have been corrected by aligning M2. All remaining wavefront error will be addressed by bending M1 with its actuators.
We consider a case where the dominant wavefront errors come from temperature gradients. Gradients in the telescope structure cause pointing error and misalignment of M2. Gradients in M1 cause thermal distortion of M1.
The M1 support system includes six hard points that define its position in six degrees of freedom, and 166 axial force actuators that control its shape. Forces on the hard points are sensed and transferred to the force actuators, so the hard points apply minimal force at all times [2]. The actuator force capacity exceeds \(\pm\) 1000 N.
## 3 Bending Mode Calculation and Fit
The simulated correction of M1 surface error involves fitting a set of bending modes to the surface error. The bending modes, which are linear combinations of actuator influence functions, form a basis set for the full range of shape change that can be made with the 166 actuators. They are ordered from low to high spatial frequency, or equivalently from most flexible to stiffest. Figure 1 shows several bending modes and illustrates the progression in spatial frequency. Using bending modes allows a modal solution, where the number of modes can be selected to optimize performance.
We start by using the finite-element model to compute the actuator influence function, i.e., the shape change for a unit change in actuator force, for each of the 166 actuators. Laboratory measurements of other large honeycomb mirrors confirm that the measured influence functions agree well with those computed with the finite-element model [2, 3]. We then use singular value decomposition to compute the bending modes, the corresponding sets of actuator forces, and the mode flexibilities [8]. The mode flexibility is the ratio of RMS surface deflection to RMS force over the 166 actuators.
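A minimal numpy sketch of this decomposition is given below; the random matrix merely stands in for the finite-element influence functions, and the variable names and normalization convention are illustrative rather than those of the actual analysis software.

```python
import numpy as np

m, n_act = 5000, 166                                  # surface sample points, actuators
A = np.random.default_rng(0).normal(size=(m, n_act))  # placeholder for the FEA influence functions

# SVD of the influence matrix: columns of U are bending-mode surface shapes and
# columns of V (rows of Vt) are the corresponding unit-norm actuator-force patterns.
U, sing_vals, Vt = np.linalg.svd(A, full_matrices=False)

# Mode flexibility: RMS surface deflection per RMS actuator force.  For unit-norm
# singular vectors this is the singular value rescaled by the two RMS normalizations.
flexibility = sing_vals * np.sqrt(n_act) / np.sqrt(m)

# Modes come out ordered from most flexible (low spatial frequency) to stiffest.
print(flexibility[:3], flexibility[-3:])
```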
We simulate the correction of surface errors by fitting a given number of bending modes to the initial surface error. The fit represents the part of the initial error that can be corrected with the given modes. The residual error represents the part that cannot be corrected.
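The modal fit itself is a least-squares projection onto the first k bending modes; the sketch below (again with placeholder data) shows the fitted and residual parts of a surface-error map and the actuator forces implied by the fit.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_act, k = 5000, 166, 14
A = rng.normal(size=(m, n_act))                    # placeholder influence matrix (see the sketch above)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

surface_error = rng.normal(size=m)                 # placeholder M1 surface-error map (nm)
modes = U[:, :k]                                   # surface shapes of the first k bending modes
coeffs, *_ = np.linalg.lstsq(modes, surface_error, rcond=None)
fit = modes @ coeffs                               # part of the error the k modes can correct
residual = surface_error - fit                     # part that cannot be corrected with k modes
forces = -Vt[:k, :].T @ (coeffs / s[:k])           # actuator forces that remove the fitted part

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(surface_error), rms(residual), rms(forces))
```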
Figure 2 shows the flow of data through the analysis pipeline for the current study. It starts with a map of M1 surface error, which may come from a thermal model or, for the data presented here, from simulated data that are representative of thermal distortion of M1 [5]. For consistency with the simulation of the full active optics system, we use a 231-term Zernike polynomial fit to represent the M1 surface error. We use Zemax to compute the resulting wavefront error in the focal plane. (In the full simulation, we also include here the wavefront error due to misalignment of all the telescope optics.) The active optics correction is based on the measured wavefront error in the focal plane. A second 231-term Zernike fit to the corrected surface is fed back to the full simulation as M1's new surface shape [7]. Wherever a Zernike fit is used, we keep track of the fitting error (data minus fit) and consider it along with other analysis errors when we estimate the full system wavefront error [9].
Figure 1: Selected bending modes of the honeycomb primary mirror with 166 force actuators, illustrating the progression from low to high spatial frequency and stiffness. The modes are normalized to identical RMS surface deflection.
## 4 Numerical Modeling and Simulation Results
As an example of the simulated correction of M1, we start with the uncorrected surface error shown in Figure 3. The dominant error is astigmatic and will be corrected with a single bending mode, but significant errors will remain. The number of modes used in the correction is an important variable. Figure 4 illustrates how effective the correction is for 1, 14 and 68 modes. In this case, while a single mode removes most of the RMS surface error, at least 14 modes are needed to correct the large-scale structure and approach 10 nm RMS surface error. We include the 68-mode correction in order to explore the potential capability to correct thermal distortion. Even with 68 modes, extreme actuator forces are well within the \(\pm\) 1000 N capacity, and the stress in the glass remains at a safe level, but it's not yet clear that we can control as many as 68 modes with good stability and accuracy.
Figure 3: Surface error in nm taken from time series of simulated data. This is the input to the active optics correction. This map represents the point in the time series with the highest RMS surface error, 109 nm. The 3-legged obscuration is a generic representation of a tripod to support M2.
Figure 2: Flow chart for simulating the active optics correction of M1. M1 surface errors are represented by Zernike fits, to maintain compatibility with the simulation of the full active optics system.
Figure 4: Given an input surface error as shown in Figure 3, different numbers of bending modes are fit and subtracted from the initial map to obtain a corrected map. Color bars are labeled in nm of surface error.
For the simulated data, even the 14-mode correction reduces thermal distortion to the point that it will not be the dominant wavefront error. Manufacturing error due to limitations in polishing and in-process measurements will leave mid-scale structure at the level of 10-15 nm RMS surface error.
Figure 5 shows a 30-hour time series of simulated data, including RMS surface error for the input (uncorrected) surface map and the corrected surface using 14 and 68 bending modes. The input map has had tilt, focus and coma removed, corresponding to pointing the telescope and aligning M2. The 14-mode correction meets the goal (12.5 nm RMS surface error [9]) for most of the 30 hours. The 68-mode correction comfortably meets the goal for the full period, and remains consistent with the actuator capacity and the glass stress limits. Further analysis, including simulation of the full active optics system, will guide the choice of number of modes to correct. This parameter can be changed dynamically as conditions change.
Figure 6 shows how the corrected surface improves and the actuator forces increase as we increase the number of bending modes in the correction. The input map for this plot is the map shown in Figure 3. The RMS surface error improves dramatically after correction with the first mode, which happens to match the dominant astigmatism in the input surface error. It continues to improve rapidly as modes 2-14 are added to the correction. The RMS actuator force remains low, less than 12 N, through 14 modes. Including more modes, up to 68, slowly reduces the RMS surface error by another factor of 2.5 while the RMS actuator force increases steadily by a factor of 8. While the actuator forces and glass stress remain acceptable, we expect that a real 68-mode correction will not behave nearly as well as the simulated correction. At best, multiple iterations would be needed to achieve the simulated result.
Figure 5: RMS surface error for the uncorrected surface and for the surface after correction with 14 and 68 bending modes, for a 30 hour time series of simulated data. The goal for the corrected surface RMS is at 12.5 nm.
## 5 Conclusions
This study demonstrates a viable method of correcting for thermal deformation of a large honeycomb primary mirror in space using the mirror modes used for ground-based control of RFCML mirrors. Results to date indicate that the mirror support system with 166 force actuators has excellent authority to bend out thermal deformations with acceptable actuator forces and glass stress. The examples we show illustrate how the accuracy of the corrected surface and the magnitude of actuator forces depend on the number of bending modes used in the correction. We can use the simulation to determine an optimum number of modes for different conditions and generate an envelope of the allowable wavefront error perturbations. We continue to refine the full active optics simulation to include realistic wavefront measurements, actuator performance, and other sources of wavefront error.
## Acknowledgements
Portions of this research were supported by funding from the Technology Research Initiative Fund (TRIF) of the Arizona Board of Regents and by generous philanthropic donations to the Steward Observatory of the College of Science at the University of Arizona.
| In the development of space-based large telescope systems, the capability to perform active optics correction makes it possible to correct wavefront aberrations caused by thermal perturbations, achieving diffraction-limited performance with relaxed stability requirements. We present a method of active optics correction used for current ground-based telescopes and simulate its effectiveness for a large honeycomb primary mirror in space. A finite-element model of the telescope is used to predict misalignments of the optics and primary-mirror surface errors due to thermal gradients. The predicted surface error data are fed into a Zemax ray-trace analysis to produce wavefront error maps at the image plane. In this analysis, tilt, focus, and coma in the wavefront error are assumed to be corrected by adjusting the pointing of the telescope and moving the secondary mirror. The remaining mid- to high-order errors are corrected by physically bending the primary mirror with actuators. |
2309.05842 | Fairness- and uncertainty-aware data generation for data-driven design | The design dataset is the backbone of data-driven design. Ideally, the
dataset should be fairly distributed in both shape and property spaces to
efficiently explore the underlying relationship. However, the classical
experimental design focuses on shape diversity and thus yields biased
exploration in the property space. Recently developed methods either conduct
subset selection from a large dataset or employ assumptions with severe
limitations. In this paper, fairness- and uncertainty-aware data generation
(FairGen) is proposed to actively detect and generate missing properties
starting from a small dataset. At each iteration, its coverage module computes
the data coverage to guide the selection of the target properties. The
uncertainty module ensures that the generative model can make certain and thus
accurate shape predictions. Integrating the two modules, Bayesian optimization
determines the target properties, which are thereafter fed into the generative
model to predict the associated shapes. The new designs, whose properties are
analyzed by simulation, are added to the design dataset. An S-slot design
dataset case study was implemented to demonstrate the efficiency of FairGen in
auxetic structural design. Compared with grid and randomized sampling, FairGen
increased the coverage score at twice the speed and significantly expanded the
sampled region in the property space. As a result, the generative models
trained with FairGen-generated datasets showed consistent and significant
reductions in mean absolute errors. | Jiarui Xie, Chonghui Zhang, Lijun Sun, Yaoyao Zhao | 2023-09-11T21:54:49 | http://arxiv.org/abs/2309.05842v1 | # Fairness- and Uncertainty-Aware Data Generation for Data-Driven Design
###### Abstract
_The design dataset is the backbone of data-driven design. Ideally, the dataset should be fairly distributed in both shape and property spaces to efficiently explore the underlying relationship. However, the classical experimental design focuses on shape diversity and thus yields biased exploration in the property space. Recently developed methods either conduct subset selection from a large dataset or employ assumptions with severe limitations. In this paper, fairness- and uncertainty-aware data generation (FairGen) is proposed to actively detect and generate missing properties starting from a small dataset. At each iteration, its coverage module computes the data coverage to guide the selection of the target properties. The uncertainty module ensures that the generative model can make certain and thus accurate shape predictions. Integrating the two modules, Bayesian optimization determines the target properties, which are thereafter fed into the generative model to predict the associated shapes. The new designs, whose properties are analyzed by simulation, are added to the design dataset. An S-slot design dataset case study was implemented to demonstrate the efficiency of FairGen in auxetic structural design. Compared with grid and randomized sampling, FairGen increased the coverage score at twice the speed and significantly expanded the sampled region in the property space. As a result, the generative models trained with FairGen-generated datasets showed consistent and significant reductions in mean absolute errors._
Keywords: machine learning; data-driven design; fairness and diversity; uncertainty; data generation; adaptive sampling.
## 1 Introduction
Design space exploration (DSE) searches through a wide range of design parameters and configurations for optimal engineering design solutions [1, 2]. With the advent of advanced machine learning (ML) algorithms, data-driven design methods have emerged and allowed rapid, accurate and cost-efficient design generation and DSE [3]. In mechanical design, various data-driven design pipelines and databases have been constructed to aid design tasks such as metamaterial and structural design [4, 5, 6].
Conventional data-driven design (Figure 1) starts with the parameterization of target designs, followed by design of experiments (DOE) techniques that sample from the design space, such as the geometric space [4]. Recently, non-parametric representations such as topology optimization have been implemented in data-driven and generative design [7, 8]. With design representations and experimental plans established, designs can be generated in computer-aided design environments. Thereafter, the mechanical and physical properties of the designs can be analyzed using simulation or real-world experiments. After the data are acquired from the experiments, the relationship between the design space and the property space can be modeled using ML. There are typically two modeling tasks: design performance prediction and generative modeling. Performance prediction models predict the properties of a design given the design parameters. They are frequently used as surrogate models to replace computationally heavy simulations and speed up design optimization [9]. Generative models, characterizing the inverse relationship, generate designs with respect to specified properties or constraints [8]. It is more difficult to learn such one-to-many relationships, in which one input can correspond to multiple outputs [10]. Although such workflows have been commonly implemented and have contributed to various design research discoveries, risks of representation bias stemming from data acquisition might cause fairness issues in the dataset and thus compromise the performance of ML models.
Representation bias describes the phenomenon that some parts of the target population are underrepresented in the dataset [11]. In design datasets, the most salient representation bias resides in the property space, where samples are passively populated [12]. DOE conducted on the design space ensures the generation of diverse design geometries and configurations. Nonetheless, it results in a skewed underlying distribution in the property space due to the nonlinear relationship between design shape and properties. Consequently, design datasets are commonly unbalanced in the property space with intensively searched regions, voids in the sampled regions, and unexplored regions [13]. Representation bias in the dataset will propagate to
the ML models and eventually yield unsatisfactory designs. Unexplored regions imply missing knowledge in the dataset and contribute to inaccurate predictions of unexplored properties. Data imbalance may cause the ML model to focus on the intensively sampled property regions, while overlooking the underrepresented properties.
Current methods to mitigate representation bias in design datasets mainly concentrate on data augmentation such as over-sampling and under-sampling. Over-sampling techniques increase the sample size by partially altering existing samples or generating new synthetic samples [14]. Under-sampling removes similar samples from the overrepresented groups [15]. However, the former might contribute to the overfitting of existing samples and the latter might remove samples with important information [16]. Chan et al. [13] proposed METASET to select an unbiased subset from a large metamaterial shape database. A determinantal point process (DPP) is utilized to model the diversity in both shape and property spaces, which are jointly considered to evaluate the subsets. The selected subset is highly diverse with a small sample size, offering better predictive performance and less training time. Lee et al. [12] proposed t-METASET, which iteratively generates diverse unit cell shapes and acquires diverse properties from the existing samples. Its task-aware functionality guides property sampling toward the designated region. However, these methods only implement subset selection in the property space and thus cannot actively expand the sampled region.
To ensure fair and efficient property space exploration, there needs to be a more reliable method that detects the regions where the existing dataset has insufficient coverage and accurately generates designs to increase coverage. Accurate design generation requires this method to model the relationship between shapes and properties instead of relying on assumptions such as small perturbation. Generative models and reinforcement learning (RL) have recently been implemented to generate design geometries that can achieve desirable properties. Chen and Ahmed [3] presented performance augmented diverse generative adversarial network (GAN) that combines GAN loss with performance augmented DPP loss in the training process. Such a GAN model learns to synthesize the training design data while generating diverse shapes with desirable properties. Considering there are usually multiple target properties in design tasks, Chen and Ahmed [17] integrated performance augmented diverse GAN with multi-objective Bayesian optimization (BO). As demonstrated in the case studies, this pipeline can generate diverse shapes and facilitate the exploration of the full Pareto fronts in the property space. Nobari et al. [18] proposed performance conditional diverse GAN to enable the generation of designs with designated properties. Compared with performance augmented diverse GAN, this model is more flexible as the users can appoint desirable properties instead of maximizing or minimizing properties. Instead of GANs that directly generate designs with desirable properties, RL traverses the property space and moves toward the optimal properties iteratively. Jang et al. [4] trained an RL agent that iteratively generates diverse designs by rewarding the diversity of topology. Compared with conventional greedy search, this method can generate 5% more design shapes on average in the tire design case study. Agrawal and McComb [5] trained an RL-based design agent that explores the design space with varying model fidelity. This framework is computationally efficient because of its embedded mechanism to tradeoff between low- and high-fidelity models during DSE.
The common limitation of the above generative models and RL pipelines is that a specific application must be defined before the initiation of DSE. The goals of these methods are to find the optimal or designated properties within the design space. It is straightforward to define the optimality of some properties such as tensile strength, whose optimality means its maximum. For properties such as elastic modulus (EM) and porosity, optimality is dependent on the use case. For instance, soft robotics would favor designs with relatively small EM, while the EM of human bone implants should be close to the EM of original human bones for improved bone integration. To prepare a general-purpose database for various applications, there needs to be a method that fairly explores the property space with no optimal properties specified. This method can explore and exploit the potential of a type of design to facilitate future DSE and design decision-making.
Adaptive sampling is an efficient method to actively generate new data from insufficiently sampled regions [19]. Typical adaptive sampling techniques select new samples according to the predictive performance or uncertainty of ML models. When determining the new samples using predictive performance, the feature space is usually segregated into subspaces. Based on the test set, the subspaces that exhibit the highest predictive error will be the regions of interest (ROI) for adaptive sampling. For instance, Zhang et al. [20] designed an adaptive sampling technique to iteratively improve the performance of surrogate models in design optimization. The test set is divided into subgroups using K-means clustering and KNN. The subgroup that possesses the highest total prediction error is the ROI. Thereafter, maximum curvature is used to select a set of points from the ROI to generate new samples. Adaptive sampling based on predictive performance has also been implemented for structural design [21], configuration design [22], electromagnetic design [23], and protective coating design [24]. Uncertainty metrics such as entropy of prediction probabilities are also widely deployed in adaptive sampling. Gaussian process regression models are trained as surrogate models in various design optimization works and can guide adaptive sampling because of their inherent uncertainty measurement functionality [19]. Xu et al. [25] and Liu et al. [26] implemented Gaussian process adaptive sampling for Hall effect sensor design optimization and functionally graded cellular structure design optimization, respectively. Nonetheless, the existing adaptive sampling methods lack the ability to deal with inverse problems and one-to-many relationships in generative design.

Figure 1: Schematics of the procedures in data-driven design and the role of FairGen.
In this paper, the authors propose a fairness- and uncertainty-aware data generation (FairGen) pipeline that adaptively samples designs with missing properties (Figure 1). It adds an iterative process to the conventional design pipeline to fairly generate new samples. FairGen not only exploits the voids within the sampled region, but also gradually expands the sampled region to explore the property space. The key contributions and features of this pipeline include:
* Introducing a fairness metric to design data generation to quantify and visualize data coverage.
* Constructing a novel pipeline and generative models to directly generate missing properties in the dataset.
* Building deep ensemble to model the predictive uncertainties of the generative models and guide the generative process.
* Proposing a pipeline to achieve adaptive sampling for data-driven design problems with inverse modeling and one-to-many relationships.
* FairGen rapidly explores the property space to expand the sampled regions.
* FairGen significantly improves the performance of inverse design models.
The remainder of this paper is organized as follows. Section 2 introduces the methodology of FairGen, including the formulation of the coverage and uncertainty modules. Section 3 presents the setting and procedures of the S-slot auxetic design property space exploration case study. Section 4 discusses the results with respect to the coverage increase rate, property space sampled region expansion, and the impact on generative models. Section 5 highlights the remarks of this research.
## 2 Methodology
This section illustrates the elements and procedures of FairGen. Section 2.1 demonstrates the FairGen pipeline. Section 2.2 discusses the coverage module with respect to data fairness, data coverage, and the Voronoi diagram used to construct the coverage map. Section 2.3 discusses the uncertainty module with respect to the mixture density network (MDN) and the deep ensemble method used to capture the predictive uncertainty. Section 2.4 discusses BO integrating the coverage and uncertainty modules to find the target properties.
### FairGen pipeline
Figure 2 visualizes the pipeline and modules of FairGen. This pipeline starts with an initial dataset (D\({}^{0}\)) sampled from the shape space (R\({}^{d}\)) that contains d geometric parameters. The p types of properties of the n designs from the D\({}^{0}\) are analyzed using simulation, then populated in the property space, R\({}^{p}\). At each iteration, the mission is to find the empty regions in the property space and generate designs to supplement them. Thus, a data coverage module is built to indicate the uncovered regions. Due to the limitation of the existing knowledge, it is infeasible to accurately generate all missing properties at once. This becomes an optimization problem in which an optimal set of n\({}_{p}\) target property samples (D\({}^{p}\)) is searched. Thus, BO is implemented at every iteration to find a solution of D\({}^{p}\) that maximally increases the data coverage in the property space. The coverage module computes the covered area as the coverage score (S\({}_{C}\)) when D\({}^{p}\) is added to the existing dataset. After D\({}^{p}\) is solved by BO, the corresponding shape sets (D\({}^{S}\)) that can provide D\({}^{p}\) must be found.
MDN, a generative model, is trained using the existing dataset and predicts the shapes given D\({}^{p}\). However, BO purely maximizing the coverage score will yield target properties that maximize the covered area and are thus far away from the existing samples. MDN trained on the existing samples will generate inaccurate shape predictions that do not correspond to the target properties. This raises a conflict between the expansion of coverage and the predictive performance of the generative model. Therefore, an uncertainty module consisting of multiple MDNs is established to compute the predictive uncertainties regarding D\({}^{p}\). An uncertainty score S\({}_{U}\) characterizing the predictive uncertainties is added to the objective function as a trade-off with the coverage score. This way, BO is encouraged to find a D\({}^{p}\) that both efficiently increases the data coverage and ensures accurate shape prediction. The shapes predicted by the MDNs from the uncertainty module are analyzed in simulation to find the actual properties. The new shape-property set is added to the existing dataset, which forms a new dataset D\({}^{i}\), where i is the number of iterations. This pipeline can be executed iteratively until the desired S\({}_{C}\) is reached or the designated computational resource is exhausted.
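The control flow described above can be summarized in a short sketch. The following Python skeleton is only illustrative: the function names are placeholders standing in for the coverage module, the uncertainty module, BO, and the simulation step, and are not the authors' implementation. The heavy steps are injected as callables so that only the loop structure of Figure 2 is shown.

```python
def fairgen_loop(shapes, props, *, train_mdn, coverage_score, uncertainty_score,
                 optimize, predict_shapes, simulate, n_iter=20, n_models=5, psi=0.1):
    """Skeleton of the iterative FairGen-style loop; every heavy step is injected as a callable."""
    for _ in range(n_iter):
        # Uncertainty module: deep ensemble of MDNs trained on the current dataset.
        mdns = [train_mdn(props, shapes, seed=m) for m in range(n_models)]

        # Objective of Eq. (12): coverage gain penalised by predictive uncertainty.
        def objective(target_props):
            return coverage_score(props, target_props) - psi * uncertainty_score(mdns, target_props)

        targets = optimize(objective)                # BO selects the target properties D^p
        new_shapes = predict_shapes(mdns, targets)   # inverse (generative) prediction D^S
        new_props = simulate(new_shapes)             # simulation returns the realised properties
        shapes = shapes + new_shapes                 # grow the dataset D^i
        props = props + new_props
    return shapes, props
```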
### Coverage module
The first task of measuring representation bias is to establish a metric. There are two perspectives from which to model representation bias: fairness and diversity. Fairness describes the lack of bias and diversity describes the richness of variety [27]. Distance-based diversity metrics have been commonly implemented in the research domain of data-driven design [12, 13, 17, 18]. For example, Chan et al. [13] implemented DPP where Euclidean distance and Hausdorff distance were used to construct similarity kernels for 2-dimensional and 3-dimensional shapes, respectively. The authors argued that diversity metrics such as DPP are more flexible and easier to incorporate into ML pipelines. However, it is hard to use diversity metrics to quantify and visualize data coverage. The quantification and visualization of data coverage at different sample sizes and different D\({}^{p}\)'s help evaluate and guide the data generation process; thus, a data coverage module must be constructed with a suitable fairness metric.
Asudeh et al. [28] defined the notion of coverage of a point in a continuous-valued feature space. Given a dataset D, a query point q, a distance function \(\Delta\), a vicinity value \(\rho\), and a threshold value k, the coverage of q by D is defined:
\[Cov_{\rho,k}(q,D)=\begin{cases}true\quad\text{ {if} }|\{t\in D|\Delta(t,q)\leq\rho\}|\geq k\\ false\quad\text{ {otherwise}}\end{cases} \tag{1}\]
This definition essentially checks whether the query point lies within the vicinity defined by \(\rho\) and \(\Delta\) of at least k data points from the dataset D. With user-defined \(\rho\) and k, the region covered by the dataset can be computed as:
\[S_{C}(D)=\{q|Cov(q,D)=True\} \tag{2}\]
In FairGen, the coverage of the property space is to be improved. The covered area is the coverage score of the coverage module to quantify coverage and evaluate the selection of target properties. BO will utilize the coverage score to find a set of target properties that optimally improves data coverage. The definition of data coverage is clear and straightforward to understand and implement. The covered region can also be plotted for users to monitor coverage progress and data generation efficiency. However, the computational complexity increases rapidly with the magnitude of k, and the size and dimension of the dataset. A naive algorithm that enumerates through all \(n!/[k!(n-k)!]\) data point combinations and finds all mutually covered regions is computationally inefficient. The overlap among the covered regions requires additional and complex computation.
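As a concrete illustration, the point-wise coverage test of equation (1) and a brute-force grid approximation of the covered area can be sketched as follows. This is not the Voronoi-based computation adopted in FairGen (introduced next); the bounds and step size are arbitrary illustrative values.

```python
import numpy as np
from scipy.spatial import cKDTree

def is_covered(query, data, rho=0.08, k=1):
    """Eq. (1): q is covered if at least k samples of the dataset lie within distance rho."""
    return len(cKDTree(data).query_ball_point(query, r=rho)) >= k

def coverage_score(data, rho=0.08, k=1, bounds=(-1.0, 3.0), step=0.02):
    """Grid approximation of the covered area S_C in a standardised 2-D property space:
    a query point is covered iff its k-th nearest sample lies within rho."""
    axis = np.arange(bounds[0], bounds[1], step)
    qx, qy = np.meshgrid(axis, axis)
    queries = np.column_stack([qx.ravel(), qy.ravel()])
    dist, _ = cKDTree(data).query(queries, k=k)
    kth = dist if k == 1 else dist[:, -1]          # distance to the k-th nearest sample
    return float((kth <= rho).mean()) * (bounds[1] - bounds[0]) ** 2
```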
Asudeh et al. [28] proposed using Voronoi diagram to reduce the computational complexity when calculating data coverage [29, 30]. Given two samples, \(t_{i}\) and \(t_{j}\) from dataset D, any point on the line \(h(i,j)=\{q|\Delta(q,t_{i})=\Delta(q,t_{j})\}\) is equidistant to the two points. The half-space \(h^{+}(i,j)=\{q|\Delta(q,t_{i})\leq\Delta(q,t_{j})\}\) includes \(t_{i}\), and any point in this half-space is closer to \(t_{i}\). A polygon \(V(i)=\bigcap_{j\neq i}h^{+}(i,j)\) is a Voronoi cell of sample i in which any point is closer to \(t_{i}\) than other samples in D. In this way, the aggregation of all Voronoi cells is the Voronoi diagram of the first order. Similarly, for a \(k^{th}\) order Voronoi diagram, the k-nearest neighbors of any point in a Voronoi cell V(S) belong to S, where \(S\subset D\) and \(|S|=k\). For an arbitrary value of k used in data coverage, a \(k^{th}\) order Voronoi diagram can be constructed. To find the covered area, an algorithm only needs to enumerate through the Voronoi cells, and only computes the concurrently covered area by the associated k samples in S. This method does not suffer from overlap as the feature space has been segregated into Voronoi cells.
Figure 3 demonstrates the use of Voronoi diagram to find the covered area by 1000 samples in the property space. Using the method proposed by Boots et al. [30], a \(k^{th}\) order Voronoi diagram can be constructed in a time complexity of \(O(k^{2}n\log n)\) in a 2-dimensional space. For each Voronoi cell, the region covered by the associated data point is solved. The aggregation of all the regions is equivalent to the covered region by the dataset. Therefore, the area of the covered region is computed as the \(S_{C}\) that reflects how well the property space is covered. \(S_{C}-S_{C}^{\prime}\) can be the metric to evaluate the selection of D\({}^{p}\), where \(S_{C}\) and \(S_{C}^{\prime}\) are the coverage score after and before D\({}^{p}\) is added to the property space, respectively.
Figure 3: Data coverage using first order Voronoi diagram in a standardized 2-dimensional property space with k=1 and \(\rho\)=0.08.
Figure 2: FairGen pipeline to iteratively generate missing properties in the property space.
Through FairGen iterations and BO, the coverage module may consume considerable computational resources as Voronoi diagrams will be constructed repeatedly to compute new \(S_{C}\) values. The advantage of Voronoi diagram is that a new diagram can be generated based on the preceding diagram to speed up the computation.
From an optimization perspective, the coverage improvement metric \(\mathrm{S_{C}-S_{C}^{\prime}}\) can be simplified to \(\mathrm{S_{C}}\) during optimization since \(\mathrm{S_{C}^{\prime}}\) is a constant. Moreover, \(\mathrm{S_{C}}\) as the objective function of BO encourages the selection of target properties that are far away from the existing properties. Taking those properties as the input, the MDN model will generate shapes that do not correspond to them. Thus, an uncertainty module is constructed to resolve this issue.
### Uncertainty module
The uncertainty module calculates the predictive uncertainties of the MDN models for a given \(\mathrm{D^{P}}\). The predictive uncertainties form an uncertainty score (\(S_{U}\)) that penalizes the objective function to prevent selecting a \(\mathrm{D^{P}}\) that yields uncertain and thus potentially inaccurate shape predictions. There are two types of uncertainties: aleatoric and epistemic uncertainties [31]. Aleatoric uncertainty describes the inherent randomness such as sensor noise and measurement errors; epistemic uncertainty characterizes missing knowledge such as missing data or variables [32]. In such a context, the predictive uncertainty is to be modeled and utilized to guide BO.
Deep ensemble is a scalable and robust method to model predictive uncertainty [33]. To estimate the predictive uncertainty, multiple probabilistic neural network (NN) models are trained with different weight initialization and training data shuffling. The models are treated as a uniformly weighted mixture model where the predictions are combined as:
\[p(y|x)=M^{-1}\sum_{m=1}^{M}p_{\theta_{m}}(y|x,\theta_{m}) \tag{3}\]
where \(\mathrm{x}\) is the input, \(\mathrm{y}\) is the prediction, \(\mathrm{M}\) is the number of models, and \(\theta\) are the parameters. For a regression problem, the prediction is a mixture of Gaussians:

\[M^{-1}\sum_{m=1}^{M}N(\mu_{\theta_{m}}(x),\sigma_{\theta_{m}}^{2}(x)) \tag{4}\]
where \(\mu\) and \(\sigma^{2}\) are the mean and variance, respectively. This mixture can be approximated as one Gaussian distribution where the mean and variance are:
\[\mu_{*}(x)=M^{-1}\sum_{m=1}^{M}\mu_{\theta_{m}}(x) \tag{5}\]
\[\sigma_{*}^{2}(x)=M^{-1}\sum_{m=1}^{M}\left(\sigma_{\theta_{m}}^{2}(x)+\mu_{\theta_{m}}^{2}(x)\right)-\mu_{*}^{2}(x) \tag{6}\]
Suppose the true relationship in Figure 4 is to be modeled with some training data collected. Given the same input, each model will provide a prediction, \(\mathrm{y_{m}}\), as a Gaussian distribution. The five predictions are approximated using one Gaussian distribution. If the input is within the region where data is available, the variance of the prediction is small, indicating small predictive uncertainty. If the input has no training data nearby, the variance of the predictions is large, characterizing a large predictive uncertainty. The deep ensemble method essentially investigates the difference among the M distributions learned by the M models. The same rationale is utilized to build the uncertainty module and obtain a \(\mathrm{D^{P}}\) with low predictive uncertainty through BO.
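A minimal numerical sketch of this moment-matching step (equations (3)-(6)) is given below; the toy values are illustrative only.

```python
import numpy as np

def combine_ensemble(mus, sigmas):
    """Approximate the uniform mixture of M Gaussian predictions with a single Gaussian
    (Eqs. 5-6). mus and sigmas are length-M arrays of member means and standard deviations."""
    mu_star = mus.mean()
    var_star = (sigmas ** 2 + mus ** 2).mean() - mu_star ** 2
    return mu_star, var_star

# Toy usage: members that disagree on the mean inflate the combined variance.
print(combine_ensemble(np.array([0.9, 1.1, 1.6, 0.4, 1.0]),
                       np.array([0.1, 0.1, 0.2, 0.1, 0.1])))
```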
In generative design, generative models such as MDN are trained to predict the shapes that possess the input properties. MDNs, proposed by Bishop [34], use the output values of NNs to parametrize a mixed Gaussian distribution and then train the NNs to achieve consistency between the training dataset and the mixed distribution. Figure 5 depicts the structure of MDN comprised of a deep NN and a mixed Gaussian. The input layer receives the target properties. The output of the deep NN is reparametrized to construct a batch of Gaussian distributions, which are combined to form a Gaussian mixture. Design shapes are then sampled from the mixed Gaussian distribution. MDN is chosen to build the uncertainty module because it has embedded uncertainty measurement functionality. Thus, the deep ensemble method to characterize predictive uncertainty can be extended to MDN.
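A minimal PyTorch sketch of such an MDN is shown below. The backbone depth and layer widths are illustrative and do not reproduce the six-hidden-layer architecture used later in the case study; only the structure matters here, namely an NN whose outputs are reparametrized into mixture proportions, means, and variances, trained by maximizing the mixture likelihood of the training shapes.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

class MDN(nn.Module):
    """Mixture density network: properties in, Gaussian mixture over shape parameters out."""
    def __init__(self, n_props=2, n_shape=4, n_gauss=10, hidden=64):
        super().__init__()
        self.G, self.d = n_gauss, n_shape
        self.backbone = nn.Sequential(nn.Linear(n_props, hidden), nn.Tanh(),
                                      nn.Linear(hidden, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_gauss)                    # mixing proportions
        self.mu = nn.Linear(hidden, n_gauss * n_shape)          # component means
        self.log_sigma = nn.Linear(hidden, n_gauss * n_shape)   # component log std devs

    def forward(self, x):
        h = self.backbone(x)
        pi = torch.softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.G, self.d)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.G, self.d)
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of the target shapes y under the predicted mixture."""
    mixture = MixtureSameFamily(Categorical(pi), Independent(Normal(mu, sigma), 1))
    return -mixture.log_prob(y).mean()
```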
Figure 4: Modeling predictive uncertainty using ensemble method.
The previous deep ensemble scenario in Figure 4 describes a mixture of several single univariate Gaussian distributions. Modeling the predictive uncertainty of MDNs requires a mixture of several batches of multivariate Gaussian distributions (Figure 6 (a)). Each batch of Gaussian distributions is from one MDN model and each Gaussian distribution has d dimensions. Each model learns G distributions instead of one distribution in the previous example. Therefore, the deep ensemble method must be modified to investigate the difference among the M batches of G distributions learned by the M models. The assumption is that the M models are trained to learn the same G distributions, which characterize the true marginal distributions of the output variables.
The first step is to find the correspondence of the G distributions from different MDNs using the training data. Although the MDNs learn the same ground truth distributions, the orders can be different. The approximation method using equations (5) and (6) must be conducted among the corresponding distributions indicated by the arrows in Figure 6 (a). When the training input X of size \(\mathbf{n\times p}\) is fed into an MDN, three matrices describing the proportions (\(\mathbf{n\times G}\)), means (\(\mathbf{n\times G\times d}\)), and variances (\(\mathbf{n\times G\times d}\)) of the G distributions will be the output. The corresponding distributions should have mean matrices close to each other. With this trait, the correspondence of distributions from multiple MDNs can be discovered by calculating the differences among the mean matrices.
After the correspondence is established, the corresponding distributions are approximated using one Gaussian distribution:
\[\mu_{*,g}(x)=M^{-1}\sum_{m=1}^{M}\mu_{\theta_{m,g}}(x) \tag{7}\]
\[\sigma_{*,g}^{2}(x)=M^{-1}\sum_{m=1}^{M}\left(\sigma_{\theta_{m,g}}^{2}(x)+\mu_{\theta_{m,g}}^{2}(x)\right)-\mu_{*,g}^{2}(x) \tag{8}\]
for \(\forall\)\(g=1,2,...,\)\(G\)
where each \(\sigma_{*,g}^{2}(x)\) has a size of \(1\times\text{d}\). To obtain an uncertainty score that characterizes the predictive uncertainty of the MDN models regarding a property input x, the variances are summed across the G aggregated distributions and d dimensions:
\[S_{U}(x)=\sum_{g=1}^{G}\sigma_{*,g}^{2}(x)\times J_{d} \tag{9}\]
where \(J_{d}\) is a \(\text{d}\times 1\) vector of ones.
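Equations (7)-(9) can be sketched as follows. A greedy nearest-mean matching stands in for the correspondence step described above, and the mixing proportions are omitted for brevity; this is only an illustration, not the authors' implementation.

```python
import numpy as np

def uncertainty_score(mus, sigmas):
    """Eqs. (7)-(9) for a single property input x.
    mus, sigmas: arrays of shape (M, G, d) with the component means and standard
    deviations predicted by the M ensemble MDNs."""
    M, G, d = mus.shape
    aligned_mu = [mus[0]]
    aligned_var = [sigmas[0] ** 2]
    for m in range(1, M):
        # Greedy correspondence: match each component of model 0 with the closest mean of model m.
        dists = np.linalg.norm(mus[m][:, None, :] - mus[0][None, :, :], axis=-1)  # (G, G)
        order = dists.argmin(axis=0)
        aligned_mu.append(mus[m][order])
        aligned_var.append(sigmas[m][order] ** 2)
    aligned_mu = np.stack(aligned_mu)     # (M, G, d)
    aligned_var = np.stack(aligned_var)   # (M, G, d)
    mu_star = aligned_mu.mean(axis=0)                                       # Eq. (7)
    var_star = (aligned_var + aligned_mu ** 2).mean(axis=0) - mu_star ** 2  # Eq. (8)
    return float(var_star.sum())                                            # Eq. (9)
```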
Using the uncertainty module, the predictive uncertainty can be obtained at an arbitrary x in the property space. The example of a predictive uncertainty heatmap is plotted in Figure 6 (b). This heatmap indicates that the predictive uncertainty is low in the region where data is abundant and thus conveys sufficient knowledge to the model. As x travels toward the regions where data are sparse or absent, the predictive uncertainty increases, signaling a high potential for inaccurate predictions. Flexibility is the reason why the uncertainty score is used as a penalty instead of a constraint. By tuning the penalty factor (\(\psi\)), FairGen can switch between exploration and exploitation modes to actively search outside or within the sampled regions. Moreover, there will be fluctuations in the overall uncertainty level, which can be compensated for by the penalty factor.
This module might impose a high computational cost on the pipeline as multiple MDNs must be trained at every FairGen
Figure 5: Structure of an MDN.
Figure 6: Uncertainty module using deep ensemble method to model predictive uncertainty: a) Gaussian mixtures; and b) predictive uncertainty heatmap.
iteration. To speed up the uncertainty module, parallel training of the multiple models can be implemented as they are independent of each other. Transfer learning can help reduce training time. Instead of training from a randomly initialized model at every FairGen iteration, the models trained during the last iteration can be re-trained with the new dataset.
### Bayesian optimization
The optimization function in FairGen finds the optimal \(D^{p}=\left\{x_{1},x_{2},...,x_{n_{p}}\right\}\) as the input to the generative models for design generation. At the i\({}^{\text{th}}\) iteration, the coverage and uncertainty modules calculate the coverage and uncertainty scores, accounting for the entire D\({}^{p}\):
\[S_{C}(D)=S_{C}(D^{i}\cup D^{p}) \tag{10}\]
\[S_{U}(D^{p})=\sum_{l=1}^{n_{p}}\sum_{g=1}^{G}\sigma_{*,g}^{2}(x_{l})\times J_{d} \tag{11}\]
The objective function of the BO can be formulated as:
\[\underset{D^{p}}{\text{max}}\qquad f(D^{p})=S_{C}(D)-\psi S_{U}(D^{p}) \tag{12}\]
After D\({}^{p}\) is determined by BO, the MDNs trained during the construction of the uncertainty module are utilized to generate design shapes. As D\({}^{p}\) is found with the penalty of their predictive uncertainties, some accurate estimations of the design shapes are likely to be obtained from the MDNs. Thereafter, the designs generated are analyzed using simulation to acquire the real properties. Finally, the shapes and properties generated during this iteration are added to the dataset. The next iteration can be executed with the updated dataset to further explore the property space.
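One possible way to wire the two scores into an off-the-shelf Bayesian optimizer is sketched below, assuming scikit-optimize's gp_minimize as the backend and reusing the coverage_score and uncertainty_score helpers sketched earlier; mdn_predict stands for a function returning the stacked (mus, sigmas) of the ensemble at one property point. Since gp_minimize minimizes, the objective of equation (12) is negated.

```python
import numpy as np
from skopt import gp_minimize   # scikit-optimize, one possible BO backend

def make_objective(existing_props, mdn_predict, psi=0.1, n_targets=3, n_props=2):
    """Build the negated objective of Eq. (12); gp_minimize minimises, hence the sign flip."""
    def neg_objective(flat):
        targets = np.asarray(flat).reshape(n_targets, n_props)
        s_c = coverage_score(np.vstack([existing_props, targets]))        # coverage sketch above
        s_u = sum(uncertainty_score(*mdn_predict(x)) for x in targets)    # Eq. (11)
        return -(s_c - psi * s_u)
    return neg_objective

# One (low, high) pair per coordinate of the flattened D^p, in the standardised property space:
# res = gp_minimize(make_objective(props, mdn_predict), [(-1.0, 3.0)] * 6,
#                   n_calls=50, n_initial_points=10, random_state=0)
# best_targets = np.asarray(res.x).reshape(3, 2)
```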
## 3 Results
This section presents the S-slot design case study with respect to the design problem, the FairGen setting, and the results.
### S-slot design space exploration
In this paper, a case study of S-shaped perforated auxetic metamaterial design is conducted. S-slot designs have been proven to have an enhanced fatigue life due to their lower von Mises stress compared to the traditional circular design [35]. A dataset will be generated using FairGen and compared with conventional methods. The design spaces in this case study, including the shape and property spaces, are defined in this subsection. As shown in Figure 7, the S-slot is defined by four parameters including slot tail height (h), slot cap length (a), slot cap height (b), and cap rotation (\(\alpha\)). The slot thickness, vertical spacing (VS) and horizontal spacing (HS) are fixed in this case study.
Maximum von Mises stress (MS) and elastic modulus (EM) are investigated in this DSE problem. As stress concentrations are the main reason for crack initiation, the MS of S-slot designs needs to be considered during the design process. Ideally, the MS in the design should be as small as possible. EM is also a mechanical property frequently discussed in research articles related to auxetic metamaterials [36]. The definition of optimal EM is determined based on the application as mentioned in the introduction. The goal of this case study is to generate a design dataset to build a generative model that predicts the design shapes given the required MS and EM. This dataset should efficiently explore the property space to possess abundant generative design knowledge. Although the design should have a small MS, the data generation process is not driven toward small MS regions to demonstrate a general case.
We adopted the same numerical simulation as in the previous research [37] using static linear analysis with 3-dimensional triangle shell elements (S3R) on a unit cell with periodic boundary conditions in Abaqus to generate our simulation dataset. Although the elastic-plastic behavior is not considered, this simulation takes a relatively low computational cost and still provides stress distribution information related to crack initiation.
### FairGen setting and iterations
The initial dataset consists of the shapes and properties of 1000 designs sampled using grid search from the shape space. The properties are standardized to the range of around [-1, 3] to facilitate the subsequent ML and FairGen operations. For the coverage module, \(\rho\) is 0.08 because a 2% percentage error of the property is acceptable in property prediction tasks. k is 1 because the initial dataset has only partially explored the property space. The uncertainty module includes 5 MDN models, which have six hidden layers, 10 Gaussian distributions, and 3000 training epochs. The uncertainty penalty is 0.1. BO will find the optimal 3 target properties in 50 iterations and 10 extra random walks. In this setting, it was found that selecting more than 3 target properties is likely to yield some unreasonable property selections. Experiments were run on a computer with a 12\({}^{\text{th}}\) Gen Intel i7 processor with 16 gigabytes of available RAM on Windows 11. The models were trained in the central processing unit.
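For reference, the settings listed above can be collected in a single configuration object; the key names below are illustrative.

```python
# Case-study settings collected in one place (key names are illustrative).
fairgen_config = {
    "rho": 0.08,                 # coverage vicinity in the standardised property space
    "k": 1,                      # minimum number of neighbours required for coverage
    "n_mdn_models": 5,           # deep-ensemble size (uncertainty module)
    "mdn_hidden_layers": 6,
    "mdn_gaussians": 10,
    "mdn_epochs": 3000,
    "uncertainty_penalty": 0.1,  # psi in Eq. (12)
    "n_target_properties": 3,    # size of D^p selected per iteration
    "bo_iterations": 50,
    "bo_random_walks": 10,
}
```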
Figure 7: S-slot design. a) Geometric parameters defining the S-slot; and b) Slot layout.
At the beginning of every FairGen iteration, the existing dataset was used to initialize the Voronoi diagram in the coverage module and train 5 MDNs in the uncertainty module. The two modules output the coverage and uncertainty scores for the D\({}^{\text{p}}\) selected at every BO iteration. The scores were combined to compute the objective function, which guides the Bayesian optimizer to select the D\({}^{\text{p}}\) for the next BO iteration. The final D\({}^{\text{p}}\) selected by BO both optimally increased the data coverage and yielded reasonable shape predictions. New designs were generated using the 5 MDNs trained in the uncertainty module with D\({}^{\text{p}}\) as the input. For each property in D\({}^{\text{p}}\), 3 designs were generated from each MDN, resulting in 45 new designs per FairGen iteration. Thereafter, the new designs were subject to a manufacturability check to filter out infeasible designs such as S-slot intercept and thin wall. The properties of the feasible designs were obtained from simulation, and then added to the existing dataset. Some designs with properties that extended the coverage to the lower-right part of the property space were regarded as outliers because they possess high maximum stress on the design. Such properties are undesirable and might bring some errors from the simulation.
Figure 8 showcases the properties of the generated designs at some FairGen iterations. At the 5\({}^{\text{th}}\) iteration, S\({}_{\text{C}}\) was increased from 3.5 at the beginning to 4.2 (Figure 8 (a)). One target property aimed to exploit a void within the sampled region. The generated properties successfully filled the void. Two target properties tried to explore the uncovered region. Many new designs were generated that considerably expanded the sampled region. At the 10\({}^{\text{th}}\) iteration, one target property exploited a void and densified the surrounding region (Figure 8 (b)). The other two target properties led to the finding of two designs that expanded the sampled region. At the 15\({}^{\text{th}}\) iteration, two target properties exploited the sampled region and one target property explored the rightmost unexplored region (Figure 8 (c)). At the 20\({}^{\text{th}}\) iteration, one target property searched a void region, and two properties explored the rightmost region (Figure 8 (d)).
## 4 Discussion
After 20 FairGen iterations, 799 new designs have been generated based on the 1000 initial designs. To form a comparison and investigate the effectiveness of FairGen, 3000 designs were generated using grid sampling and randomized sampling from the shape space, respectively. The former is a conventional DOE method with a strong bias toward the designated geometrical parameters [38]. The latter utilizes a Latin Hypercube sampling (LHS) that encourages shape diversity [39]. The comparison among the three sampling methods will be analyzed with respect to the data coverage, property space exploration, generative modeling, and computational cost.
### Data coverage and property space exploration
Figure 9 (a) reveals the increase in data coverage as the number of samples increased using the three sampling techniques. Grid sampling started from a lower coverage score than randomized sampling because of its strong bias. FairGen
Figure 8: Iterative results of FairGen in the case study at a) 5\({}^{\text{th}}\) iteration with 1178 samples; b) 10\({}^{\text{th}}\) iteration with 1389 samples; c) 15\({}^{\text{th}}\) iteration with 1579 samples; and d) 20\({}^{\text{th}}\) iteration with 1769 samples
started from the same coverage score as grid sampling because it was initialized with a dataset based on grid sampling. Although randomized sampling offered a higher initial coverage score, its data coverage increased at the same speed as that of grid sampling. Also, both curves show a trend toward convergence. On the contrary, the FairGen coverage score curve shows no such trend. Using FairGen, the data coverage was rapidly improved and quickly surpassed randomized sampling at the second iteration. Eventually, FairGen sampling reached a coverage score of 5.8 while the other two methods were below 4.8.
Figure 9 (b) visualizes the datasets generated by the three methods. Grid sampling provided the worst property space exploration effect as most of its samples are covered by the other two methods. Grid sampling intensively sampled the low-MS and low-EM region, while the rest of the property space is either sparsely populated or unexplored. Samples were likely to stick together and repetitively cover a region. Randomized sampling also intensively searched the low-MS and low-EM region, which was less severe than grid sampling. Samples were likely to evenly disperse instead of forming blocks, but also created some greater voids. FairGen significantly avoided the intensive search effect and generated more evenly distributed properties. It almost established the full contour of the sampled area with only a small portion established by others. In reality, the best design at a certain level of elastic modulus should possess the smallest maximum von Mises stress. Figure 9 (b) indicates that FairGen offered the smallest maximum von Mises stress at almost all elastic modulus levels with fewer samples, especially at high elastic moduli. In conclusion, FairGen has a better capability to explore and exploit the potential of the design in DSE.
### Generative modeling
The purpose of increasing data coverage is to improve the performance of the ML models. This subsection investigates the effect of FairGen on generative models. MDN models were trained using the dataset acquired from the three sampling techniques. 50 test properties were randomly sampled within the sampled region of the property space. For each test property, each MDN predicted 10 shapes. In total, each MDN predicted 500 shapes, whose properties were analyzed using simulation. The real properties were compared with the target properties to find the predictive errors. To avoid being misled by randomness, tests were conducted at different data sizes: 1200, 1400, 1600, and 1800 designs in the training set (Table 1). This way, both the predictive error and the trend can be the evidence for comparison.
The mean absolute error (MAE) of model predictions can sometimes be misleading for generative models as some outliers might be generated. Thus, both the MAEs (Table 1) and the absolute prediction error scatter plots (Figure 10) are provided. The horizontal and vertical axes in Figure 10 represent the absolute prediction errors of MS and EM, respectively. Table 1 indicates that all three sampling techniques helped reduce the MAE as the number of samples increased. The MAEs of FairGen were always 1/3 smaller than grid sampling and on average 1/8 smaller than randomized sampling. This could be verified by the scatter plots. When trained with 1200 designs (Figure 10 (a)), large prediction errors were obtained from all models. The performances of FairGen and randomized sampling are close to each other and are significantly better than grid sampling. As more designs were generated, the prediction errors of the three methods became smaller and smaller. Meanwhile, the models trained using FairGen-generated datasets performed better than the models trained using randomly sampled datasets (Figure 10 (b)-(d)). The generative modeling test results revealed that data generated using FairGen efficiently explored the property space to embed more knowledge regarding generative design.
\begin{table}
\begin{tabular}{c c c c} \hline \hline n & FairGen & Grid sampling & Randomized sampling \\ \hline
1200 & 0.2067 & 0.2989 & 0.1835 \\
1400 & 0.1499 & 0.2095 & 0.1610 \\
1600 & 0.1408 & 0.2081 & 0.1671 \\
1800 & 0.1286 & 0.1903 & 0.1574 \\ \hline \hline \end{tabular}
\end{table}
Table 1: MAEs of the MDNs trained with different numbers of training examples generated from FairGen, grid sampling, and randomized sampling.
Figure 9: Comparison among FairGen, grid sampling, and randomized sampling with respect to a) data coverage (k=1 and \(\rho\)=0.08); and b) property space exploration.
### Computational cost
The goal of FairGen is to reduce the time and resources required to build an unbiased dataset. It has been shown that FairGen provides higher data coverage and better generative modeling capabilities. Nonetheless, it also adds the coverage module, uncertainty module, and BO to the data generation pipeline. The upper bound of the time complexity of Voronoi diagram construction is \(O((d+1)n^{d/2}\,k^{d/2+1})\)[40]. For a 2-dimensional Voronoi diagram where \(k=1\), the cost can be reduced to \(O(n\log n)\)[22]. The number of Voronoi cells is bounded by \(O(n^{d/2}\,k^{d/2})\), which yields an upper bound of n cells in this case study [28]. For each cell, the complexity of identifying the covered region is \(O(k(d+1))\). Thus, the complexity to compute the entire covered area is bounded by \(O((d+1)n^{d/2}\,k^{d/2+1})\), which is \(O(3n)\) in this case study. The computational cost of building the uncertainty module is equivalent to training 5 MDNs. At every FairGen iteration, the two modules must be constructed again. At every BO iteration, the Voronoi-diagram-based covered area and the output of the 5 MDNs are computed. This case study has relatively small n, k, and d such that the baseline pipeline is not computationally heavy. For large values of n, k and d, methods such as transfer learning [41] and data coverage approximation [28] can be utilized to significantly reduce the computational cost.
For the computation unit in this case study, it took around 37 seconds to complete the simulation of one design. From 1000 to 1799 samples, the time to initialize the coverage module ranged from 2 to 3 seconds. The time to build the uncertainty module increased from 50 to 78 seconds. The entire BO consumed around 120 to 240 seconds. The computational time spent on 20 FairGen iterations was around 4960 seconds. The time to generate 799 designs using FairGen is equivalent to 933 designs generated by geometric sampling techniques. With reasonable extra computational time, FairGen achieved exceptional property exploration and generative modeling results.
## 5 Conclusions
This paper proposed and demonstrated the FairGen pipeline that efficiently explores the property space in DSE problems. The existing methods cannot directly generate missing properties in a design dataset to explore the potential of the design. This leads to missing knowledge and unsatisfactory ML model performance. FairGen finds the missing properties and actively generates designs that provide those properties to complement the dataset. Its coverage module detects unexplored regions in the property space using a fairness metric. The uncertainty module evaluates the predictive uncertainty in the property space to avoid sampling from the regions about which the generative models are uncertain. BO integrates the coverage and uncertainty modules to solve for the target properties that both maximally increase the data coverage and yield reasonable shape predictions. Thereafter, the target properties are input into the generative models to generate the associated shapes, whose properties are analyzed using simulation. The new designs are
Figure 10: Scatter plots of the absolute prediction errors yielded by different sampling techniques at a) n=1200; b) n=1400; c) n=1600; and d) n=1800.
added to the dataset and the above steps can be implemented iteratively to improve data coverage.
In the S-slot case study, FairGen was implemented to investigate its efficiency, starting with a dataset that has 1000 designs sampled using grid geometric sampling. After 20 iterations, 799 new designs were generated. The coverage score was increased from 3.5 to 5.8 whereas grid sampling and randomized sampling could only increase the coverage score to 4.8 at 3000 samples. FairGen also significantly expanded the sampled region in the property space more than the other sampling techniques. The expanded area means designs with better properties can be obtained from the dataset generated using FairGen. The generative modeling test revealed that the models trained using FairGen generated dataset reduced the MAE by 1/3 and 1/8 on average compared with the datasets generated using grid sampling and randomized sampling, respectively. Computationally, the time spent on generating 799 designs using baseline FairGen is equivalent to generating 933 designs using other sampling methods in the current setting of the case study and computational resources.
The limitation of FairGen is the lack of a shape diversity mechanism. Future work will focus on the simultaneous improvement of shape and property fairness. Moreover, FairGen can be modified to actively drive data generation toward desirable property regions.
## Acknowledgements
This work is funded by McGill University Graduate Excellence Fellowship Award [grant number 00157]; Mitacs Accelerate program [grant number IT13369]; and McGill Engineering Doctoral Award (MEDA).
## Declaration of Competing Interest
The authors declare that they have no known competing interests.
| Datasets are the backbone of data-driven design. Ideally, a dataset should be fairly distributed in both the shape and property spaces so that the underlying relationships can be explored efficiently. However, classical design of experiments emphasizes shape diversity and therefore yields a biased exploration of the property space. Recently developed methods either select subsets from large datasets or rely on assumptions with severe limitations. In this paper, fairness- and uncertainty-aware data generation (FairGen) is proposed. FairGen actively detects and generates the missing properties starting from a small dataset. At each iteration, its coverage module computes the data coverage to guide the selection of target properties. The uncertainty module ensures that the generative models can make certain shape predictions. Integrating the two modules, Bayesian optimization... |
2309.11837 | Stellar model calibrations with the Ai Phe binary system. Open questions
about the robustness of the fit | We explore the robustness of the calibration of stellar models achievable
with Ai Phe binary system. By means of the SCEPtER pipeline, we investigated
the impact of different assumptions about the surface efficiency of microscopic
diffusion. In the reference scenario, we allowed modification of the surface
metallicity due to microscopic diffusion, while in the alternative scenario we
assumed that competing mixing from other sources cancels out this effect. Due
to the fact that the primary star has already experienced the first dredge-up
while the secondary has not, the tested scenarios show interesting differences.
While the estimated age is quite robust ($4.70^{+0.13}_{-0.14}$ Gyr and
$4.62^{+0.13}_{-0.06}$ Gyr), the calibration of the convective core
overshooting parameter $\beta$ reveals noticeable differences. The reference
scenario suggests a wide multi-modal range of possible values of $\beta$ around
0.10; the alternative scenario computations point towards a sharp and lower
$\beta$, around 0.04. The impossibility to obtain an unambiguous fit confirms
the difficulty in achieving a sensible calibration of the free parameters of
stellar models using binary systems, even when very accurate masses and radii
are available. | G. Valle, M. Dell'Omodarme, P. G. Prada Moroni, S. Degl'Innocenti | 2023-09-21T07:27:17 | http://arxiv.org/abs/2309.11837v1 | # Stellar model calibrations with the Ai Phe binary system
###### Abstract
Context:
Aims:Relying on recently available and very precise observational data for the Ai Phe binary system, we explore the robustness of the calibration of stellar models achievable with this system.
Methods:We adopt the SCEPtER pipeline with a fitting grid of stellar models computed for different initial chemical compositions and convective core overshooting efficiencies. We investigated the impact of different assumptions about the surface efficiency of microscopic diffusion, whose efficiency is still debated in the mass range of the system. We obtained the fit of this system adopting two alternative scenarios. In the reference scenario, we allowed modification of the surface metallicity due to microscopic diffusion, while in the alternative scenario we assumed that competing mixing from other sources cancels out this effect.
Results:Due to the fact that the primary star has already experienced the first dredge-up while the secondary has not, the tested scenarios show interesting differences. While the estimated age is quite robust, changing from \(4.70^{+0.13}_{-0.14}\) Gyr to \(4.62^{+0.13}_{-0.06}\) Gyr, the calibration of the convective core overshooting parameter \(\beta\) reveals noticeable differences. The reference scenario suggests a wide multi-modal range of possible values of \(\beta\), peaking around 0.10; on the contrary the alternative scenario computations point towards a sharp and lower \(\beta\), peaking around 0.04.
Conclusions:The impossibility to obtain an unambiguous fit confirms the difficulty in achieving a sensible calibration of the free parameters of stellar models using binary systems, even when very accurate masses and radii are available. The results also suggest that the biases due to the assumptions underlying the stellar track computations may be different from one binary system to another.
## 1 Introduction
Detached double-lined eclipsing binaries allow precise and accurate measurements of the masses, radii, and effective temperatures of the components, and it is reasonable to assume that the system components share a common age and initial chemical composition. Therefore, these systems are routinely adopted as excellent experimental environments in which to test models of stellar evolution and structure in order to gain a deeper understanding of some physical processes, such as the treatment of convection or diffusion (see among many Andersen et al., 1991; Torres et al., 2010; Valle et al., 2017; Claret & Torres, 2017; Valle et al., 2023). Many authors have attempted to determine the exact relation, if any, between the stellar mass and the value of the convective core overshooting parameter, with opposing findings (see Anders & Pedersen, 2023, for a review). It is widely recognised that only systems observed with outstanding precision can provide further insight into this topic (see e.g. Valle et al., 2017; Miller et al., 2020; Helminiak et al., 2021; Anders & Pedersen, 2023).
A perfect target for this investigation is AI Phoenicis (AI Phe, HD 6980), an eclipsing binary system composed of two stars with masses of around 1.2 \(M_{\odot}\), for which very high-precision masses and radii are available. Several works in the literature propose an age estimate for this system (e.g. Andersen et al., 1988; Ribas et al., 2000; Kirkby-Kent et al., 2016). Recently, observations by Miller et al. (2020) significantly improved the precision of the estimated effective temperatures for the system. It is therefore interesting to investigate how these new observations impact the fit of the system. In the present paper, we attempt to calibrate the age and the convective core overshooting efficiency whilst assisted by this new data set. Besides the obvious interest in obtaining these estimates, we are particularly interested in the exploration of possible systematic effects that may undermine the robustness of the obtained calibrations.
The Ai Phe system is composed of a primary in the early red giant branch (RGB) and a secondary in the early subgiant branch (SGB) phase. The stellar masses of the two objects are in a range where the convective envelope of both stars nearly vanishes in the main sequence (MS). The lack of such an envelope may lead to noticeable variations of the surface chemical abundances during the MS evolution owing to different mixing processes. Many stellar models adopted for binary system studies account for the effect of microscopic diffusion during the stellar evolution (e.g. PARSEC, MIST, DSEP, GARSTEC, or Pisa models Nguyen et al., 2022; Choi et al., 2016; Dotter et al., 2008; Weiss & Schlattl, 2008; Dell'Omodarme et al., 2012), which has the potential to significantly modify the surface chemical composition and to change the internal characteristics in a non-negligible way. Indeed, including the effect of microscopic diffusion in stellar model computations has been shown to be of fundamental importance for correctly predicting the internal structure of the Sun (see e.g. Bahcall et al., 2001; Christensen-Dalsgaard & Di Mauro, 2007), while the diffusion efficiency in Galactic GC stars is still debated (see e.g. Korn et al., 2007; Gratton et al., 2011; Nordlander et al., 2012; Gruyters et al., 2014). The adoption of unmoderated microscopic diffusion results in a steep drop of the surface [Fe/H] in the MS phase, a drop that nearly cancels out in the first part of the RGB when the external convection sinks down to regions in which helium and heavy elements previously diffused (first dredge-up). Microscopic diffusion is not the only mixing mechanism able to modify the surface chemical abundances, and other competing mechanisms have been investigated, such as rotational mixing, turbulence, mass advection, and radiative acceleration (e.g. Eggenberger et al., 2010; Vick et al., 2010; Deal et al., 2020; Dumont et al., 2021). However, the effort of including the effects of mechanisms competing with microscopic diffusion in the stellar model computations in a physically consistent way is still ongoing and will require considerable theoretical progress (see e.g. Moedas et al., 2022).
As a consequence of these theoretical difficulties, a firm prediction of the surface metallicity for stars in the mass range of Ai Phe before the first dredge-up is problematic. This poses an interesting question as to the robustness of the fit obtained for similar systems, because knowledge of the surface metallicity is recognised to be of utmost importance in order to break the age-metallicity degeneracy and obtain any meaningful calibration from binary systems (see e.g. Lastennet & Valls-Gabaud, 2002; Torres et al., 2010; Higl & Weiss, 2017). Different stellar codes, relying on different assumptions about the microscopic diffusion efficiency, and given the inclusion of extra-mixing mechanisms, may obtain different age and/or overshooting efficiency calibrations on the same target system. Given the constant and progressive refinement of the observational precision, quantifying this potential bias is extremely relevant.
The structure of the paper is as follows. In Sect. 2, we discuss the method and the grids used in the estimation process. The result of the calibration is presented in Sect. 3 with an analysis of its robustness and comparison with the literature in Sect. 4. Some concluding remarks are provided in Sect. 5.
## 2 Methods and observational constraints
### Fitting technique
The analysis is conducted adopting the SCEPtER pipeline1, a well-tested technique for fitting single and binary systems (e.g. Valle et al., 2014, 2015; D'Amico et al., 2017; D'Amico et al., 2017). The pipeline estimates the parameters of interest (i.e. the system age, its initial chemical abundances, and the core overshooting parameter) adopting a grid maximum likelihood approach.
Footnote 1: Publicly available on CRAN: [http://CRAN.R-project.org/package=SCEPtERbinary](http://CRAN.R-project.org/package=SCEPtERbinary)
The method we use is explained in detail in Valle et al. (2015); here, we provide only a brief summary for convenience. For every \(j\)-th point in the fitting grid of precomputed stellar models, a likelihood estimate is obtained for both stars:
\[\mathcal{L}^{1,2}_{j}=\left(\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\sigma_{i}} \right)\times\exp\left(-\frac{\chi^{2}}{2}\right), \tag{1}\]
\[\chi^{2}=\sum_{i=1}^{n}\left(\frac{o_{i}-g_{i}^{j}}{\sigma_{i}}\right)^{2}, \tag{2}\]
where \(o_{i}\) are the \(n\) observational constraints, \(g_{i}^{j}\) are the \(j\)-th grid point corresponding values, and \(\sigma_{i}\) are the observational uncertainties.
The joint likelihood of the system is then computed as the product of the single star likelihood functions. It is possible to obtain estimates both for the individual components and for the whole system. In the former case, the fits for the two stars are obtained independently, while in the latter case the two objects must have a common age (with a tolerance of 1 Myr), identical initial helium abundance, and initial metallicity.
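A minimal numpy sketch of this likelihood evaluation is given below; the actual SCEPtER pipeline additionally enforces the common age (within 1 Myr) and the identical initial helium abundance and metallicity when pairing the grid points of the two stars.

```python
import numpy as np

def chi2(obs, sigma, grid_values):
    """Eq. (2) for one star; grid_values has one row per grid model."""
    return (((obs - grid_values) / sigma) ** 2).sum(axis=1)

def joint_likelihood(obs1, sig1, grid1, obs2, sig2, grid2):
    """Eqs. (1)-(2): product of the two single-star likelihoods, evaluated on grid
    points paired so that age and initial chemical composition coincide."""
    norm1 = 1.0 / (np.sqrt(2.0 * np.pi) * sig1).prod()
    norm2 = 1.0 / (np.sqrt(2.0 * np.pi) * sig2).prod()
    like1 = norm1 * np.exp(-0.5 * chi2(obs1, sig1, grid1))
    like2 = norm2 * np.exp(-0.5 * chi2(obs2, sig2, grid2))
    return like1 * like2
```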
The error on the estimated parameters is obtained by means of Monte Carlo simulations. We generate \(N=10\,000\) artificial binary systems, sampling from a multivariate Gaussian distribution centred on the observational data, taking into account the correlation structure among the observational data for the two stars. As in Valle et al. (2017), we assume a correlation of 0.95 between the primary and secondary effective temperatures, and 0.95 between the metallicities of the two stars. Regarding mass and radius correlations, the high precision of the estimates means that these parameters are of no importance, but we set them to 0.8 for the mass and -0.9 for the radius, which are typical values for this class of stars (Valle et al., 2015, 2017). Different correlation values for mass and radius lead to negligible modifications of the results.
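The correlated perturbation of the observables can be sketched as follows, with the eight observables ordered as the masses, radii, effective temperatures, and metallicities of the two components; the helper name is illustrative.

```python
import numpy as np

# Observables ordered as (M1, M2, R1, R2, Teff1, Teff2, [Fe/H]1, [Fe/H]2).
corr = np.eye(8)
corr[0, 1] = corr[1, 0] = 0.8     # masses
corr[2, 3] = corr[3, 2] = -0.9    # radii
corr[4, 5] = corr[5, 4] = 0.95    # effective temperatures
corr[6, 7] = corr[7, 6] = 0.95    # metallicities

def sample_systems(means, sigmas, corr, n=10_000, seed=0):
    """Draw correlated Gaussian perturbations of the observed values."""
    cov = corr * np.outer(sigmas, sigmas)
    return np.random.default_rng(seed).multivariate_normal(means, cov, size=n)
```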
### Observational data
As observational constraints, we use the masses, radii, metallicities [Fe/H], and effective temperatures of both stars. The adopted values and their uncertainties reported in Table 1 are taken from Miller et al. (2020).
The uncertainties in the effective temperature reported in Miller et al. (2020) are 16 K and 22 K for the primary and secondary component, respectively. These values do not account for the systematic effects that might modify the calibration scale, which were quantified in that paper as 11 K. In light of the existing difference in the calibration scale among different literature sources, we adopt a conservative approach to this observational constraint, assuming an uncertainty of 50 K for both stars.
### Stellar models grid
The grids of models were computed for the exact masses of the target stars, from the pre-main sequence up to the start of the RGB or to the RGB tip for the more evolved, primary star in the Ai Phe system. The initial metallicity [Fe/H] was varied from \(-0.4\) dex to 0.3 dex with a step of 0.01 dex. We adopted the solar heavy-element mixture by Asplund et al. (2009); a test conducted adopting the Grevesse & Sauval (1998) heavy-element mixture showed negligible differences in the results. Several initial helium abundances were considered at fixed metallicity, by adopting the commonly used linear relation \(Y=Y_{p}+\frac{\Delta Y}{\Delta Z}Z\) with the primordial abundance of \(Y_{p}=0.2471\) from
\begin{table}
\begin{tabular}{l c c} \hline \hline & primary & secondary \\ \hline \(M\) (\(M_{\odot}\)) & \(1.2438\pm 0.0008\) & \(1.1938\pm 0.0008\) \\ \(R\) (\(R_{\odot}\)) & \(2.9303\pm 0.0023\) & \(1.8036\pm 0.0022\) \\ \(T_{\rm eff}\) (K) & \(5094\pm 50\) & \(6199\pm 50\) \\ \([{\rm Fe/H}]\) & \(-0.14\pm 0.1\) & \(-0.14\pm 0.1\) \\ \hline \end{tabular}
\end{table}
Table 1: Masses, radii, effective temperatures, and surface metallicities adopted as observational constraints in the fit of the Ai Phe binary system.
Planck Collaboration et al. (2020). The helium-to-metal enrichment ratio \(\Delta Y/\Delta Z\) was varied from 1.0 to 3.0 with a step of 0.1 (Gennaro et al. 2010).
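For illustration, the initial helium and metal abundances corresponding to a grid point can be derived from [Fe/H] and \(\Delta Y/\Delta Z\) as sketched below. The solar (Z/X) value is an assumption quoted here for the Asplund et al. (2009) mixture and is not taken from the paper.

```python
Y_P = 0.2471      # primordial helium abundance (Planck Collaboration 2020)
ZX_SUN = 0.0181   # assumed solar (Z/X) for the Asplund et al. (2009) mixture

def initial_composition(feh, dy_dz):
    """Initial (Y, Z) from [Fe/H] and the enrichment ratio, combining
    Y = Y_p + (dY/dZ) Z, X + Y + Z = 1 and Z/X = (Z/X)_sun * 10**[Fe/H]."""
    q = ZX_SUN * 10.0 ** feh
    z = q * (1.0 - Y_P) / (1.0 + q * (1.0 + dy_dz))
    return Y_P + dy_dz * z, z
```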
Models were computed with the FRANEC code, in the same configuration as was adopted to compute the Pisa Stellar Evolution Data Base2 for low-mass stars (Dell'Omodarme et al. 2012). The models were computed assuming the solar-scaled mixing-length parameter \(\alpha_{\rm ml}=1.74\). The extension of the extra-mixing region beyond the Schwarzschild border was considered only for the primary star and was parametrised in terms of the pressure scale height \(H_{\rm p}\): \(l_{\rm ov}=\beta H_{\rm p}\), with \(\beta\) in the range [0.00; 0.28] with a step of 0.005. The code adopts step overshooting assuming an instantaneous mixing in the overshooting treatment. The radiative temperature gradient is adopted in the overshooting region (see Degl'Innocenti et al. 2008, for more details of the overshooting implementation). Atomic diffusion was included adopting the coefficients given by Thoul et al. (1994) for gravitational settling and thermal diffusion. To prevent extreme variations in the surface chemical abundances for stars without a convective envelope, a diffusion inhibition mechanism similar to the one discussed in Chaboyer et al. (2001) was adopted. The diffusion velocities were multiplied by a suppression parabolic factor that takes a value of 1 for 99% of the mass of the structure and 0 at the base of the atmosphere. Further details of the stellar models are fully described in Valle et al. (2015, 2016) and references therein.
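The diffusion-damping factor described above can be illustrated with a simple parabolic profile; the exact functional form adopted in FRANEC is not reproduced here, so the expression below only mimics the stated boundary behaviour (unity over the inner 99% of the mass, zero at the base of the atmosphere).

```python
import numpy as np

def diffusion_suppression(m_frac):
    """Illustrative parabolic damping of the diffusion velocities: unity below
    m/M = 0.99 and zero at the base of the atmosphere (m/M = 1)."""
    x = np.clip((np.asarray(m_frac) - 0.99) / 0.01, 0.0, 1.0)
    return 1.0 - x ** 2
```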
Footnote 2: [http://astro.df.unipi.it/stellar-models/](http://astro.df.unipi.it/stellar-models/)
Raw stellar evolutionary tracks were reduced to a set of tracks with the same number of homologous points according to the evolutionary phase. Details about the reduction procedure are reported in the Appendix of Valle et al. (2013). Given the accuracy in the observational radius data, a linear interpolation in time was performed for every reduced track in order to ensure that the separation in radius between consecutive track points in the 10\(\sigma\) range from the observational targets was less than one-quarter of the observational radius uncertainty.
## 3 Stellar model calibrations
The standard approach of the FRANEC evolutionary code is based on a damping of diffusion velocities in the outermost layers of the stars, but it was not enough to mitigate the drop in the MS, which is around 0.1 dex on average but can reach 0.2 dex. This drop nearly cancels out after the first dredge-up. Therefore, the surface metallicities of the two stars are predicted by stellar models to be significantly different. However, the only metallicity constraint available for the system comes from the analysis by Andersen et al. (1988). These authors measured the metallicity of both stars and detected a spread of 0.04 dex, the more evolved stars having higher surface metallicity. However, the presence of systematic errors and biases suggested a prudential common estimate of \(-0.14\pm 0.1\).
In light of the theoretical difficulty discussed in Sect. 1 to unambiguously predict the surface metallicity of stars in the mass range of Ai Phe before the first dredge-up, in the following we investigate two different configurations. In the first fit, we adopt the surface [Fe/H] value resulting from the stellar evolutionary code; in a second fit, we modify it by fixing its value to the initial one. In this second scenario, we still have the effect of the microscopic diffusion on the stellar interior, because we merely block its effect on the surface. The choice to fully inhibit the efficiency of the microscopic diffusion allows us to mimic the effect shown by Moedas et al. (2022) in their Fig. 1, that is, a cancellation of the surface metallicity drop owing to the effect of the radiative levitation. While this assumption may be too drastic for a model of 1.2 \(M_{\odot}\), nonetheless it sets an extreme reference. The comparison of the calibrations obtained under the two scenarios allows us to investigate their robustness to different choices of the efficiency of the microscopic diffusion in this critical mass range. It should also be noted that the adoption of the initial [Fe/H] as an observational constraint is justified for Ai Phe system, because the primary star already experienced the first dredge-up. In a different binary system, with both stars still on the MS, this assumption would be questionable because the initial [Fe/H] value could not safely be assumed as representative of at least one of them.
### Surface [Fe/H] taking into account diffusion
Despite the remarkable precision of the observational constraints, the fitting procedure was unable to clearly identify a unique solution, as several acceptable solutions for the system are possible. The kernel density estimator of the marginalised core overshooting parameter \(\beta\) (left panel in Fig. 1) suggests the presence of possible multiple solutions for the system fit. Three of them are located at low or intermediate \(\beta\) values up to \(\beta\approx 0.14\), while one is near the upper edge of the explored range. The examination of the joint 2D density in the age versus overshooting parameter plane helps us to gain insight into the solution substructure. As shown in the right panel of Fig. 1, the three peaks at low and intermediate overshooting have a similar age, while the peak close to the upper \(\beta\) range has a significantly lower age.
According to the position of the peaks, we identified four different solutions, labelled S1 to S4 at increasing \(\beta\) values, as follows:
* S1: solutions with \(\beta<0.055\);
* S2: solutions with \(0.055\leq\beta<0.11\);
* S3: solutions with \(0.11\leq\beta<0.15\);
* S4: solutions with \(\beta\geq 0.15\).
The results of the fit, divided according to these regions, are reported in Table 2. The presence of different islands of solutions stems directly from the degeneracy in the impact of the chemical composition and \(\beta\) on the stellar age. This effect, which is already discussed in the literature (e.g. Kirkby-Kent et al. 2016; Valle et al. 2017; Constantin & Baraffe 2018), prevents us from firmly constraining the \(\beta\) value because a set of observational constraints can be reproduced by different values of the parameters governing the stellar evolution.
Looking in detail at the proposed solutions, it appears that S4 has a significantly poorer goodness of fit (\(\chi^{2}=7.3\)) than the others (\(\chi^{2}\approx 3\)). A formal assessment of the goodness of fit is only asymptotically appropriate when the \(\chi^{2}\) statistic is evaluated over a discrete grid (see Frayn & Gilmore 2002; Valle et al. 2021 for more detail on this topic), approaching the underlying distribution for an infinitely dense grid. However, a rough estimate, assuming 2 degrees of freedom (6 observational constraints and 4 parameters), provides a \(P\) value of about 0.02, suggesting that the fit S4 is remarkably poor. Therefore, in the following we restrict the investigation to the solution at low and intermediate \(\beta\).
The structure of the solutions is further complicated because S3 is composed of a pool of models. The dominant peak at an age of about 4.7 Gyr corresponds to low-helium models (\(\Delta Y/\Delta Z\approx 1.1\)), while a secondary, much lower peak is located around 4.2 Gyr. This secondary peak corresponds to a very high helium-to-metal enrichment ratio close to 3.0. However, the relevance of this secondary peak is low, as it accounts for about 12% of
the models in the S3 island. Removing the solutions around the secondary peak increases the S3 best-fit age to \(4.67\pm 0.15\) Gyr, with little modification of the S3 initial chemical abundances.
As expected, the effect of microscopic diffusion is clearly shown in Table 2, which reports a difference of 0.10 to 0.20 dex in surface [Fe/H] fitted values. The large error in the [Fe/H] constraints mitigates the problem, allowing the pipeline to find solutions at different but compatible surface metallicities.
The agreement between observational data and best-fit evolutionary tracks is displayed in Fig. 2 in the radius versus effective temperature plane. The corresponding best-fit models are not shown in order to improve readability, but correspond to the points where the radius assumes the observational value. The identified evolutionary stages show a primary ascending the RGB and a secondary in the SGB, in agreement with the analysis of Kirkby-Kent et al. (2016). The evolutionary stage of the secondary is slightly different among S1, S2, and S3, progressively moving towards an earlier SGB phase. Overall, the pooled age estimate from S1 to S3 solutions is \(4.70^{+0.13}_{-0.14}\) Gyr when considering the secondary peak in the S3 island. Neglecting the secondary solution in the S3 basin has a negligible impact, only modifying the age estimate by 0.01 Gyr.
The proposed fits show a clear preference for low-helium models; more precisely, a share of about 75% of the solutions in the S1 to S3 areas lies at \(\Delta Y/\Delta Z<1.2\). This result is at odds with a recent investigation performed on the Hyades cluster by Tognelli et al. (2021), who obtained a value of \(\Delta Y/\Delta Z=2.03\pm 0.15\). For comparison, only 4% of the Ai Phe system solutions lie in the \(1\sigma\) range [1.87, 2.18]. We performed a direct test by restricting the fitting grid to a \(2\sigma\) range [1.7, 2.3] around \(\Delta Y/\Delta Z=2.0\). In this scenario, the algorithm finds a solution for only 8% of the Monte Carlo experiments. The solution is strongly peaked at \(\beta=0.125\pm 0.005\) and an age of \(4.56\pm 0.12\) Gyr in the S3 island. The \(\chi^{2}\) of the solution is 4.1, suggesting an acceptable agreement with data. However, as discussed above, better solutions (according to the goodness-of-fit statistic) exist for the non-restricted grid.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & S1 & S2 & S3 & S4 \\ \hline \(Y\) & \(0.260^{+0.001}_{-0.000}\) & \(0.261\pm 0.001\) & \(0.263^{+0.016}_{-0.003}\) & \(0.277\pm 0.003\) \\ \(Z\) & \(0.0115^{+0.0086}_{-0.0003}\) & \(0.0123^{+0.0008}_{-0.0006}\) & \(0.0124^{+0.0018}_{-0.0010}\) & \(0.0096^{+0.0008}_{-0.0007}\) \\ \(\beta\) & \(0.042^{+0.006}_{-0.007}\) & \(0.084^{+0.014}_{-0.011}\) & \(0.125^{+0.003}_{-0.002}\) & \(0.265\pm 0.012\) \\ age (Gyr) & \(4.62^{+0.05}_{-0.05}\) & \(4.76^{+0.02}_{-0.02}\) & \(4.64^{+0.16}_{-0.25}\) & \(4.07^{+0.12}_{-0.10}\) \\ \hline \multicolumn{5}{c}{Fit parameters} \\ \hline \(T_{\rm eff,1}\) (K) & 5065 & 5034 & 5040 & 5213 \\ \(T_{\rm eff,2}\) (K) & 6227 & 6189 & 6249 & 6216 \\ \(R_{1}\) (\(R_{\odot}\)) & 2.9307 & 2.9301 & 2.9309 \\ \(R_{2}\) (\(R_{\odot}\)) & 1.8033 & 1.8038 & 1.8038 & 1.8029 \\ \([{\rm Fe/H}]_{1}\) & -0.08 & -0.04 & -0.04 & -0.14 \\ \([{\rm Fe/H}]_{2}\) & -0.26 & -0.20 & -0.15 & -0.26 \\ \hline \(\chi^{2}\) & 3.4 & 2.8 & 3.2 & 7.3 \\ \hline \end{tabular}
\end{table}
Table 2: Results of the Ai Phe binary system fitting in the four identified solution islands with variable surface [Fe/H].
Figure 1: Fit of the Ai Phe system with surface [Fe/H] abundances resulting from model calculations with diffusion. _Left_: Density of probability for the estimated core overshooting parameter of the Ai Phe system. _Right_: Joint 2D density of probability for the estimated overshooting parameter \(\beta\) and the system age.
Interestingly, the result for the Ai Phe fit agrees with the findings of Valle et al. (2017) for the TZ For binary system, a system with an evolved primary star --already in the central He burning phase-- and a secondary close to hydrogen depletion. The fit of that system, performed using an identical pipeline and stellar tracks computed with the same stellar evolutionary code, resulted in a \(\Delta Y/\Delta Z\) close to 1.0, ruling out solutions at higher helium-to-metal enrichment ratios.
### Surface [Fe/H] fixed at the original value
The solutions proposed by the fit performed whilst imposing that surface [Fe/H] abundance be fixed at the initial value, as in the case of inefficient microscopic diffusion, show some differences with respect to those discussed in the previous paragraph. Figure 3 shows the presence of two islands, identified as follows:
* F1: Solutions with \(\beta<0.065\) and an age of less than 4.75 Gyr. A long tail extends towards higher ages, but it is not considered part of this island.
* F2: Solutions with \(\beta\geq 0.15\).
The corresponding best-fit values and the estimated parameters are collected in Table 3.
It is apparent that the F1 solution corresponds to S1. As opposed to the previously discussed solutions, the two prominent S2 and S3 islands disappear, the only remnant of S2 being the long tail towards higher age stemming from F1. Interestingly, there is no remnant of the S3 solution at \(\beta\approx 0.12\). Solution F2 corresponds to S4. Similarly to the previous scenario, this solution has a relatively high \(\chi^{2}\) and can therefore be disregarded as a system fit.
Including the tail in the F1 island (see Fig. 3) modifies the estimated age of the system to \(4.62^{+0.13}_{-0.06}\) Gyr, which is only 2% younger than the age estimated before. The corresponding convective core overshooting parameter is \(\beta=0.042^{+0.027}_{-0.005}\).
Similarly to the previous results, in this case low-helium models are also preferred. As in the previous section, we directly tested a restriction of the grid in the \(2\sigma\) range around \(\Delta Y/\Delta Z=2.0\). This change has drastic consequences for the results: only about 1% of the Monte Carlo experiments converged towards a bimodal solution, that is, at \(\beta=0.032^{+0.030}_{-0.008}\) and at an age of \(4.59^{+0.12}_{-0.04}\) Gyr (F1 island) and \(\beta=0.276^{+0.005}_{-0.054}\) and an age of \(4.22^{+0.04}_{-0.06}\) Gyr (F2). As opposed to the previous section, the \(\chi^{2}\) values of these solutions are relatively high, at 11.2 and 9.9, respectively, suggesting very poor fits, and confirming the difficulties the algorithm has in providing solutions in this scenario.
## 4 Discussion
Despite the extraordinary precision in observational constraints, we were unable to obtain an undisputed fit of the system. This indetermination, which possibly limits the calibrating power of some binary systems, has already been recognised and discussed in the literature (e.g. Valle et al., 2017; Constantino & Baraffe, 2018; Johnston et al., 2019).
The system fits under the two explored scenarios show some similarity, but also several significant differences. First of all, we find the estimated age of Ai Phe to be very robust to different assumptions about the variation of [Fe/H] during stellar evolution. The improved precision in the stellar radii and effective temperatures allowed us to reach a 3% precision in the system age. However, this reassuring finding cannot be taken as a general rule, because systems composed of stars of different masses or in different evolutionary phases may react differently to changes in the surface [Fe/H]. On the other hand, a sharp calibration of the core overshooting parameter was not achievable.
\begin{table}
\begin{tabular}{l c c} \hline \hline & F1 & F2 \\ \hline \(Y\) & \(0.260^{+0.001}_{-0.0003}\) & \(0.274^{+0.004}_{-0.007}\) \\ \(Z\) & \(0.0114^{+0.0003}_{-0.0003}\) & \(0.0093^{+0.0005}_{-0.0007}\) \\ \(\beta\) & \(0.040\pm 0.004\) & \(0.265^{+0.005}_{-0.015}\) \\ age (Gyr) & \(4.58^{+0.05}_{-0.014}\) & \(4.09\pm 0.12\) \\ \hline \multicolumn{3}{c}{Fit parameters} \\ \hline \(T_{\rm eff,1}\) (K) & 5072 & 5224 \\ \(T_{\rm eff,2}\) (K) & 6237 & 6226 \\ \(R_{1}\) (\(R_{\odot}\)) & 2.9301 & 2.9309 \\ \(R_{2}\) (\(R_{\odot}\)) & 1.8040 & 1.8029 \\ \([{\rm Fe}/{\rm H}]_{1}\) & -0.07 & -0.14 \\ \([{\rm Fe}/{\rm H}]_{2}\) & -0.07 & -0.14 \\ \hline \(\chi^{2}\) & 1.9 & 7.2 \\ \hline \end{tabular}
\end{table}
Table 3: Results of the Ai Phe binary system fitting with fixed surface [Fe/H].
Figure 2: Comparison between the observational values of effective temperature and radius of the two stars (grey circle) and the evolutionary tracks for the different solutions found in the analysis. _Left_: Solution S1. The error bars correspond to 1 \(\sigma\) errors, and the errors in radii are too small to see. _Middle_: Same as in the left panel but for solution S2. _Right_: Same as in the left panel but for solution S3.
The differences between the obtained calibrations under the two scenarios are quite relevant. In the scenario with unmodified surface [Fe/H], the degeneracy between \(\beta\) and the initial chemical composition allows solutions in a wide range of \(\beta\). The estimate is barely constrained, and found to be in the range of 0.04 to 0.12. When the surface metallicity is kept fixed at its original value, the pipeline points towards a cleaner solution in the range of 0.03 to 0.07. The two dominant solutions around \(\beta\) from 0.07 to 0.12 disappear. While none of these fits can be considered as a gold standard, the differences obtained by only modifying the variation of surface [Fe/H] during stellar evolution --a parameter that is currently far from fully understood-- are striking. As such, this investigation raises some doubts as to the robustness of the calibrations obtained in the mass range between 1.2 \(M_{\odot}\) and 1.5 \(M_{\odot}\).
Thanks to the availability of high-quality observations, this system has been extensively investigated in the literature, with the most recent investigation performed by Kirkby-Kent et al. (2016). Relying on WASP photometry, which allowed accuracy on the radius of about 1% and effective temperature uncertainty of about 150 K, the authors suggested an age of \(4.39\pm 0.32\) Gyr for an initial helium value of \(0.26^{+0.02}_{-0.01}\). Even after the relevant improvement in the observation accuracy, these values are in agreement with those reported here at the 1\(\sigma\) level.
Regarding the convective core overshooting efficiency, relying on older and less accurate observations, Claret & Torres (2016) and Claret & Torres (2017) performed a calibration of the overshooting parameter working with both step and diffusive overshooting approaches. These authors obtain values of \(\beta=0.04\) and 0.00 for the primary and secondary star, respectively, in the first scenario, and \(f_{\rm ov}=0.00\) in the second scenario at an age of 4.38 Gyr. Many differences exist between the adopted fit frameworks, input physics in the stellar track computation, and observational constraints, making a direct comparison with those results questionable. Moreover, as discussed by Valle et al. (2018), a comparison of overshooting parameters without considering the overshooting scheme implemented in the evolutionary codes should be avoided. A safer comparison is between physical quantities, such as the convective core mass, whenever possible. This is not the case for the Ai Phe system, because the secondary is in an evolutionary stage where the convective core has already vanished.
While the pipeline was able to produce a satisfactory fit of the system, there are nonetheless some possible distortions and systematic effects that are worth discussing. First of all, the fit was performed at fixed input physics. It is well known that different pipelines, adopting different estimation algorithms and different stellar tracks, might obtain different estimates of the system fundamental parameters (Reese et al., 2016; Stancliffe et al., 2016; Silva Aguirre et al., 2017; Valle et al., 2017; Gallenne et al., 2023). A conservative estimation of the precision achievable on the age of Ai Phe is probably double the figure obtained by a single pipeline approach, that is, about 7%-10%.
We performed the fit assuming identical initial chemical composition and a common age. These assumptions are easily justified because the formation scenario supposes both stars formed nearly simultaneously from the same matter. Some other assumptions are more questionable. A common solar-scaled mixing length value was adopted for both stars and this value was kept fixed during the evolution. Trampedach et al. (2013) investigated the mixing length variation during stellar evolution using radiation hydrodynamics simulations. As discussed by Kirkby-Kent et al. (2016), this effect might cause the primary star in the Ai Phe system to have a slightly larger mixing-length parameter value. However, this effect was considered to have little impact on the estimated age. Some authors (e.g. Claret, 2007; Graczyk et al., 2016) deal with this problem by introducing further degrees of freedom in the fit, allowing the two stars to have different mixing-length values. However, this approach may lead to over-fitting the system and, something that is more problematic, may mask possible mismatches between stellar tracks and real observations by only adjusting the mixing length. While this fine tuning may be necessary to obtain a reliable fit of a system (see the discussion in Gallenne et al., 2023), it is possibly only a 'cosmetic' remedy.
Figure 3: Fit of the Ai Phe system with fixed surface [Fe/H] abundance. (_Left_): Joint 2D probability density in the \(\beta\) vs. age plane. (_Right_): Comparison between the observational values of effective temperature and radius of the two stars (grey circle) and the evolutionary tracks for the solution F1 found in the analysis. The error bars correspond to 1 \(\sigma\) errors.
Masking the existing impossibility of fitting a system under canonical assumptions, and hiding the difficulties by allowing every star to have its own mixing-length value, does not contribute to developing more sensible stellar models.
Identical considerations apply to the choice of having a common core-overshooting parameter value for both stars. Ultimately, allowing the stars to have independent overshooting efficiencies or mixing-length values would lead to additional free parameters, which would reflect not only the additional mixing, but also other effects from any given source, which would remain hidden (see the discussion in Valle et al. 2017 for greater details on this topic). Moreover, while a difference in the overshooting parameter can be theoretically required and justified when the two stars have different masses, it is somewhat questionable for Ai Phe, given that the two stars have nearly identical masses.
## 5 Conclusions
Taking advantage of recently available, very precise observational data for the Ai Phe double-lined eclipsing binary system (Miller et al., 2020), we attempted to constrain the age and the efficiency of the convective core overshooting of the two stars under different assumptions. To do this, we used the SCEPtER pipeline (Valle et al., 2014, 2015, 2015) on a dense grid of stellar models computed ad hoc.
We were able to obtain a satisfactory but multi-modal fit for the system at age \(4.70^{+0.13}_{-0.14}\) Gyr, with an overshooting parameter \(\beta\) in the range of \(0.04\)-\(0.12\). The estimated age was in agreement with the results of Kirkby-Kent et al. (2016), who suggested an age of \(4.39\pm 0.32\) Gyr. The fitting grid of stellar tracks adopted for these estimates was computed including the effect of microscopic diffusion, which alters the surface metallicity [Fe/H] during stellar evolution and thus impacts the fit of the system. Because the efficiency of microscopic diffusion in the mass range of the system --around 1.2 \(M_{\odot}\)-- is still debated, we tested an alternative scenario by blocking the update of the surface metallicity, but allowing microscopic diffusion in the interior layers. This test is quite relevant because the two stars are in different evolutionary phases: while the primary has already experienced the first dredge-up, almost recovering its initial surface metallicity, the secondary has not. The age fitted in this second scenario, namely \(4.62^{+0.13}_{-0.06}\) Gyr, agrees well with the age from the former fit. The most relevant difference is in the convective core overshooting calibration, because this scenario points towards a sharp solution in the 0.03 to 0.07 range.
The comparison of the two solutions provides satisfactory confirmation of the robustness of the age estimates obtained by our pipeline. The same conclusion was obtained for the CPD-54 810 binary system (Valle et al., 2023). On the other hand, it suggests great care should be taken when adopting binary systems for parameter calibrations, because the obtained parameters may only reflect the decisions by the modellers in their stellar model computations (see e.g. Constantino & Baraffe 2018; Johnston et al. 2019). In this specific case, precise measurements of individual surface [Fe/H] would be of utmost importance in helping us to judge which scenario is the most reliable. While this is relevant for the Ai Phe system, given the evolutionary phases of the two stars, it may not be so for a different system. As already discussed in previous works (Valle et al., 2017, 2023), every system provides different challenges and poses interesting questions about the possibility of using them to constrain free parameters in stellar model computations.
###### Acknowledgements.
We thank our anonymous referee for the very useful comments and suggestions. G.V., P.G.P.M. and S.D. acknowledge INFN (iniziativa specifica TAsP).
We assess the stability of the results for the AI Phe binary system. Using the SCEPtER pipeline under different assumptions about the surface [Fe/H], we examine the influence of microscopic diffusion. In the reference scenario, microscopic diffusion is allowed to modify the surface metallicity, while in the alternative scenario competing mixing from other sources is assumed to cancel this effect. Because the primary star has experienced the first dredge-up while the secondary has not, the tested scenarios show interesting differences. Although the estimated age is very stable (\(4.70^{+0.13}_{-0.14}\) Gyr and \(4.62^{+0.13}_{-0.06}\) Gyr), there is a clear difference in the calibration of the convective core overshooting parameter \(\beta\). In the reference scenario, the possible values of \(\beta\)
2306.00167 | Reinforced Borrowing Framework: Leveraging Auxiliary Data for
Individualized Inference | Increasingly during the past decade, researchers have sought to leverage
auxiliary data for enhancing individualized inference. Many existing methods,
such as multisource exchangeability models (MEM), have been developed to borrow
information from multiple supplemental sources to support parameter inference
in a primary source. MEM and its alternatives decide how much information to
borrow based on the exchangeability of the primary and supplemental sources,
where exchangeability is defined as equality of the target parameter. Other
information that may also help determine the exchangeability of sources is
ignored. In this article, we propose a generalized Reinforced Borrowing
Framework (RBF) leveraging auxiliary data for enhancing individualized
inference using a distance-embedded prior which utilizes data not only about
the target parameter, but also uses different types of auxiliary information
sources to "reinforce" inference on the target parameter. RBF improves
inference with minimal additional computational burden. We demonstrate the
application of RBF to a study investigating the impact of the COVID-19 pandemic
on individual activity and transportation behaviors, where RBF achieves 20-40%
lower MSE compared with existing methods. | Ziyu Ji, Julian Wolfson | 2023-05-31T20:24:52 | http://arxiv.org/abs/2306.00167v1 | # Reinforced Borrowing Framework: Leveraging Auxiliary Data for Individualized Inference
###### Abstract
Increasingly during the past decade, researchers have sought to leverage auxiliary data for enhancing individualized inference. Many existing methods, such as multisource exchangeability models (MEM), have been developed to borrow information from multiple supplemental sources to support parameter inference in a primary source. MEM and its alternatives decide how much information to borrow based on the exchangeability of the primary and supplemental sources, where exchangeability is defined as equality of the target parameter. Other information that may also help determine the exchangeability of sources is ignored. In this article, we propose a generalized Reinforced Borrowing Framework (RBF) leveraging auxiliary data for enhancing individualized inference using a distance-embedded prior which utilizes data not only about the target parameter, but also uses different types of auxiliary information sources to "reinforce" inference on the target parameter. RBF improves inference with minimal additional computational burden. We demonstrate the application of RBF to a study investigating the impact of the COVID-19 pandemic on individual activity and transportation behaviors, where RBF achieves 20-40% lower MSE compared with existing methods.
Bayesian method; individualized inference; multisource data borrowing; supplemental data
## 1 Introduction
Due to the increasing availability of high-resolution individual-level data, inference on individual units is of interest to researchers across many disciplines. For example, in the context of mobile health (mHealth), it is feasible to collect substantial amounts of data about individual study participants via continuous sensor-based monitoring over time [1]. However, this intensive monitoring can be burdensome to participants, and hence it may not be possible to collect enough data about each individual to make precise inferences. Researchers have proposed various methods to borrow information from other similar individuals or data sources to increase the precision of individual-level inference, for example by using fusion learning via maximum likelihood [2, 3], cooperative learning based on model linkage graph [4], or leveraging auxiliary summary statistics to improve the inference of an internal study under meta-analysis settings [5].
Recently, multisource exchangeability models (MEM, proposed by Kaizer et al. [6]) have gained prominence as a data borrowing method, particularly in the context of clinical trials. The popularity of MEM is likely inherited from its close relationship to Bayesian model averaging (BMA) [7, 8] as BMA has been widely used in many different disciplines since first proposed in the 1990s [9, 10, 11]. BMA takes the weighted average across multiple Bayesian models in order to achieve comprehensive and robust posterior inference of a target parameter. MEM combines the idea of BMA and the exchangeability-nonexchangeability (EX-NEX) model [12], and has been applied to improve treatment effect estimation in pivotal or basket clinical trials [13, 14]. The MEM framework has been extended in several directions: Brown et al. [15] proposed iterated MEM (IMEM) which reduced the computational complexity of MEM, allowing it to be applied with a larger number of supplemental sources; Kotalik et al. [16] combined the idea of MEM with regression models and considered treatment effect heterogeneity; Ling et al. [17] applied capping priors on MEM, which controlled the extent of borrowing by placing a cap on the effective supplemental sample size; and Ji et al. [18] developed data-driven MEM (dMEM) that improved performance by filtering out highly nonexchangeable supplemental sources and incorporating data from a large number of sources in the final inference without substantially increasing computational burden.
Existing MEM techniques for making posterior inferences about a parameter in a primary source use information about that same parameter in secondary sources to determine how much to borrow from them. For example, if the target parameter is the mean, then the sample means of the same parameter from the secondary sources (and their precisions, calculated using standard deviations) are used to determine the amount of borrowing. However, by only leveraging information on the target parameter, MEM and its current extensions do not consider other data in the primary and supplemental sources that may also be useful in determining how much information to borrow from that source. When the target parameter does not provide accurate and precise information indicating the exchangeabilities of the sources, we could utilize this auxiliary information to improve inference.
In our motivating example, we consider data from the COVID Travel Impact (CTI) Study which investigated the impact of the recent COVID-19 pandemic on individual activity and transportation behaviors. One of the measures of interest is the perceived risk of COVID-19 infection during daily activities estimated by self-reported measurements such as the number of close contacts. Suppose we are interested in making inferences about Person A's mean perceived COVID-19 risk. We also observe two other study participants, Person B and Person C, with too little or too noisy data for us to confidently determine their ground-truth exchangeability with Person A on the COVID-19 risk. However, Person B and Person C may have other characteristics that make them more or less similar to Person A; for example, perhaps Person B (like Person A) generally works from home while Person C is an essential worker who usually works in-person in high risk areas. In this case, it is more likely that the underlying mean risk of Person B is actually closer to that of Person A, despite the vague indication from the target parameter. In this example, the
individual-level characteristic "work site" can help us decide how much to borrow from each person as it indirectly contributes information to the inference on infection risk. Further, we may also want to use the information on other measures captured simultaneously with the measure on which we are making inferences. For example, in addition to the self-perceived risk of infection, CTI participants were also asked about the perceived level of congestion (i.e., how "busy" an area was) during daily trips and activities. This subjective measurement is highly correlated with the measurements of infection risk and may better represent the contact level during the activity, so it could influence how we want to adjust the borrowing behavior.
In this article, we propose a reinforced borrowing framework (RBF) using a distance-embedded prior within MEM which utilizes data not only about the target parameter, but also uses other auxiliary information sources to "reinforce" (i.e., improve) inference on the target parameter. The RBF provides a flexible approach to incorporate different types of auxiliary information into the data borrowing process based on MEM. The "reinforcement" provided by the RBF can yield substantial improvements for individual inference (approximately 20% reduction in MSE for our motivating CTI example). The method is straightforward to implement, poses minimal additional computational burden, and is compatible with existing extensions of MEM such as iMEM and dMEM.
The remainder of the article is structured as follows. Section 2 gives a brief overview of BMA and a more detailed presentation of MEM, while introducing the notation used in the rest of the sections. Section 3 introduces our proposed method using distance-embedded priors and Sections 3.1 and 3.2 demonstrate the prior construction with different types of auxiliary data. Section 3.3 discusses how to incorporate the reinforced borrowing prior into MEM. Section 4 provides a series of simulation studies analyzing the performance of our method under different data environments. Section 5 illustrates a real-world application of the method on the COVID Travel Impact (CTI) Study. Finally, the article is concluded by a brief discussion in Section 6.
## 2 Overview and Notation
We begin with an overview of BMA and MEM, which our proposed method builds on. Just as its name implies, BMA takes the weighted average of multiple Bayesian models in order to get the final inference. Given a single Bayesian model \(\Omega_{k}\) estimating a parameter \(\theta\) with observed data \(D\), the conditional posterior is \(P(\theta\mid D,\Omega_{k})\propto P(\theta\mid\Omega_{k})P(D\mid\theta,\Omega_{k})\). Then, BMA is the weighted average of the \(K\) Bayesian models: \(P(\theta\mid D)=\sum_{k=1}^{K}w_{k}P(\theta\mid D,\Omega_{k})\), where the weight is
\[w_{k}=P(\Omega_{k}\mid D)=\frac{P(D\mid\Omega_{k})P(\Omega_{k})}{\sum_{i=1}^ {K}P(D\mid\Omega_{i})P(\Omega_{i})}, \tag{1}\]
which is the probability that the model \(\Omega_{k}\) is true given the data. For each \(k\), \(w_{k}\) could be calculated by using the marginal likelihood of data \(P(D\mid\Omega_{k})\), and a prior probability \(P(\Omega_{k})\) that the model \(\Omega_{k}\) is true. Therefore, under the framework of BMA, the final inference is not only determined by a single model, but jointly considers multiple models, thereby providing the potential to integrate different data sources.
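As an illustration, the posterior model weights in equation (1) can be computed from (log) marginal likelihoods and model priors; the following is a minimal sketch (the function name and the log-scale normalization are our own choices, not from the original paper):

```python
import numpy as np

def bma_weights(log_marginals, log_priors=None):
    """Posterior model weights w_k of Eq. (1), computed on the log scale for stability."""
    log_marginals = np.asarray(log_marginals, dtype=float)
    if log_priors is None:                       # flat prior P(Omega_k) = 1/K
        log_priors = np.zeros_like(log_marginals)
    log_post = log_marginals + log_priors
    log_post -= log_post.max()                   # avoid overflow before exponentiating
    w = np.exp(log_post)
    return w / w.sum()
```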
The idea of MEM is directly derived from BMA. Suppose the data \(D\) includes one primary source \(S_{p}\) and \(H\) supplemental sources \(S_{1},...,S_{H}\), and the goal is to make inference on a parameter \(\theta(p)\) of the primary data source \(S_{p}\) by borrowing information from the same parameters \(\theta(1),\ldots,\theta(H)\) of the supplemental sources. A secondary source \(h\) is said to be _exchangeable_ with respect to \(\theta(p)\) if \(\theta(h)=\theta(p)\); we write \(s_{h}=1\) if source \(h\) is exchangeable and \(s_{h}=0\) otherwise. The goal of MEM is to identify the supplemental sources most likely to be exchangeable with the primary source \(p\) and borrow most strongly from them in making inferences on \(\theta(p)\). In MEM, each Bayesian model \(\Omega_{k}\) is
defined by a set of supplemental sources assumed exchangeable with the primary source, which could be defined as a set of values of \(s_{h}\) such as \(s_{k1}=1\), \(s_{k2}=0\),..., \(s_{kH}=1\), etc. The prior probability \(P(\Omega_{k})\) when calculating the posterior weight of model \(\Omega_{k}\) can be written as \(P(\Omega_{k})=P(S_{1}=s_{k1})\times...\times P(S_{H}=s_{kH})\). Also, MEM requires \(P(D\mid\Omega_{k})\) in the calculation of posterior weight, which is expressed by \(P(D\mid\Omega_{k})=\int P(D\mid\theta(p),\Omega_{k})P(\theta(p)\mid\Omega_{k} )d\theta(p)\).
When the data follow Gaussian, Poisson, or Binomial distributions with known parameters, there are closed-form marginal likelihood and posterior expressions for the key components of MEM. For example, with a flat prior on \(\theta(p)\) and Gaussian data, the posterior distribution of parameter \(\theta(p)\) given data \(D\) with known variances is:
\[P(\theta(p)\mid D)=\sum_{k=1}^{K}w_{k}P(\theta(p)\mid\Omega_{k},D)=\sum_{k=1}^{K}w_{k}\,N\left(\frac{\frac{n_{p}}{\sigma_{p}^{2}}\tilde{y}_{p}+\sum_{h=1}^{H}s_{kh}\frac{n_{h}}{\sigma_{h}^{2}}\tilde{y}_{h}}{\frac{n_{p}}{\sigma_{p}^{2}}+\sum_{h=1}^{H}s_{kh}\frac{n_{h}}{\sigma_{h}^{2}}},\ \left(\frac{n_{p}}{\sigma_{p}^{2}}+\sum_{h=1}^{H}s_{kh}\frac{n_{h}}{\sigma_{h}^{2}}\right)^{-1}\right), \tag{2}\]
where the sample means \(\tilde{y}_{p}\) and \(\tilde{y}_{h}\) are the estimators of the target parameter in the primary source \(p\) and supplemental source \(h\) respectively; \(n_{p}\) and \(n_{h}\) are the numbers of observations; \(\sigma_{p}\) and \(\sigma_{h}\) are the assumed known standard deviations of the target parameter. When calculating the posterior weights, we usually choose equal prior probabilities on all models, so the posterior weights are proportional to the marginal likelihood \(P(D\mid\Omega_{k})\) and have closed-form expressions (see [19]). Maximum likelihood-based empirical estimates of \(\sigma_{p}\) and \(\sigma_{h}\) can be used; extensions that treat the covariance matrix as unknown have been published [20, 6].
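A minimal sketch of the Gaussian case in equation (2) is given below: given model weights \(w_{k}\) (e.g., obtained from equation (1) with the closed-form marginal likelihoods referenced above) and the exchangeability indicators \(s_{kh}\), it returns the posterior mixture mean and variance. All names are illustrative, and the known-variance assumption of equation (2) is retained:

```python
import numpy as np

def mem_gaussian_posterior(w, S, ybar_p, n_p, sigma_p, ybar_h, n_h, sigma_h):
    """Posterior mixture mean and variance of theta(p) under Eq. (2).

    w : (K,) posterior model weights; S : (K, H) indicators s_kh;
    ybar_*, n_*, sigma_* : sample means, sample sizes, and assumed-known SDs."""
    w, S = np.asarray(w, float), np.asarray(S, float)
    prec_p = n_p / sigma_p ** 2
    prec_h = np.asarray(n_h, float) / np.asarray(sigma_h, float) ** 2
    prec_k = prec_p + S @ prec_h                                  # per-model posterior precision
    mean_k = (prec_p * ybar_p + S @ (prec_h * np.asarray(ybar_h, float))) / prec_k
    post_mean = np.sum(w * mean_k)
    post_var = np.sum(w * (1.0 / prec_k + mean_k ** 2)) - post_mean ** 2  # variance of the mixture
    return post_mean, post_var
```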
## 3 Reinforced Borrowing Framework: Using a Distance-Embedded Prior in MEM
In this section, we describe our proposed method, the Reinforced Borrowing Framework (RBF), which utilizes information beyond the target parameter to help refine the parameter inference from MEM. The basic idea of the RBF is that the similarity of supplemental sources with the primary source on aspects _other_ than the target parameter can be used to modify (i.e., "reinforce") the standard MEM model weights that are based on the apparent exchangeability of the target parameter. As we show, reinforcing the weights in this way can not only improve efficiency by shrinking the posterior variance but can also correct potential selection bias in the primary source.
We distinguish between two types of auxiliary information that can contribute to our borrowing procedure: information about _auxiliary characteristics_ and information about _auxiliary parameters_. For each _characteristic_, we only have one observation per unit or individual, while for each _parameter_ we have multiple observations that are "aligned" (i.e., collected synchronously) with the target parameter. In what follows, for ease of exposition, we will abuse terminology slightly and use the term "auxiliary parameter" (sometimes just "parameter") to refer to data that could be used to estimate something other than the target parameter.
A diagram of the method for auxiliary characteristics is shown as Figure 1, which has notation consistent with the expressions in the following subsection. The method for auxiliary parameters is similar to that for characteristics. In our proposed method, we use distance metrics calculated from the auxiliary characteristics or parameters as priors \(P(\Omega_{k})\) of each Bayesian model, a technique we refer to as a "distance-embedded prior". The main goal of borrowing from those auxiliary data sources is to better determine the exchangeability of the supplemental sources with the primary source when the target parameter provides noisy or misleading information, leading to additional precision, potentially lower bias, and thus better parameter inference. Notice that we want more accurate inferences rather than simply higher precision, meaning that we also wish to avoid borrowing from sources that only _appear_ to be exchangeable, even though borrowing from them may further increase the estimated precision in an overly confident manner. Sometimes, sources that appear exchangeable based only on the limited data collected for the target parameter may not be truly exchangeable with the primary source (as the motivating example in Section 1 illustrates).
Another advantage of our proposed method is that borrowing auxiliary information could potentially correct selection bias in the primary source affecting the estimation of the target parameter. When the auxiliary characteristics or parameters carry the correct information regarding the true exchangeability of the supplemental sources, inference on the target parameter for the primary source can be driven towards the real underlying distribution by borrowing more from the truly exchangeable supplemental sources. For example, if the target parameter is mean heart rate in beats per minute and the primary source is a person who tends to measure their heart rate only when doing physical activity, then the observed distribution of heart rates would be biased upward. If some characteristics that are less subject to measurement bias (e.g., age, BMI) are highly correlated with heart rate, we can use data on these characteristics to make better decisions about how much to borrow from supplemental sources and thereby correct the bias in the parameter inference of the primary source.
### Distance-embedded Prior for Auxiliary Characteristics
Auxiliary characteristics are measured at the source level, i.e., they are based on one observation per source. Suppose the goal is to estimate \(\theta(p)\) for a primary data source \(p\), which also has \(L\) observed auxiliary characteristics \(\{\eta_{1}(p),...,\eta_{L}(p)\}\). Meanwhile, there are \(H\) supplemental sources with observations on the target parameter \(\theta(1),\ldots,\theta(H)\) and the same set of \(L\) characteristics \(\{\eta_{1}(1),\ldots,\eta_{1}(H),\eta_{2}(1),\ldots,\eta_{L}(H)\}\). In each source, multiple outcome samples are observed from the distribution containing the target parameter \(\theta\), but each of the characteristics has only one observation per source, e.g. age, income level, BMI, and other one-time qualitative measurements. In a Bayesian model \(\Omega_{k}\), \(s_{k1},...,s_{kH}\in\{0,1\}\) are the indicators of source exchangeability, where \(s_{kh}=1\) means supplemental source \(h\) is assumed to be exchangeable with the primary source on \(\theta(p)\). Characteristics \(\eta_{l}\) are assumed to be correlated with \(\theta\) with a correlation \(r_{l}\).
Specifically, we modify the prior on posterior weights using the distance between the primary and supplemental characteristics. Suppose the distance between primary and supplemental source \(h\) on characteristic \(\eta_{l}\) is noted as \(d_{lh}\), then the distance-embedded prior for Bayesian model \(\Omega_{k}\) (provided \(\sum_{h=1}^{H}s_{kh}>0\)) is:
\[\pi_{d}(\Omega_{k})=\frac{\sum_{l=1}^{L}\lambda_{l}\cdot\frac{(\sum_{h=1}^{H}s_{kh})^{2}}{\sum_{h=1}^{H}s_{kh}d_{lh}}}{\sum_{k=1}^{K}\sum_{l=1}^{L}\lambda_{l}\cdot\frac{(\sum_{h=1}^{H}s_{kh})^{2}}{\sum_{h=1}^{H}s_{kh}d_{lh}}},\ \lambda_{l}=b_{l}\cdot|r_{l}|\cdot 1\left(|r_{l}|>\rho\right) \tag{3}\]
where \(b_{l}\) is the weight for \(\eta_{l}\) that normalizes the potentially different scales of the characteristics, and \(\rho\) is a predetermined positive threshold on the correlation that keeps only the auxiliary characteristics with larger absolute correlations with the target parameter; more details are discussed in Section 3.3.
The denominator normalizes the prior so that it sums to \(1\) over all \(k\). The numerator of equation (3) sums, over characteristics, the product of two parts: the weighting term \(\lambda_{l}\) and the distance term. The weighting term contains the weight \(b_{l}\) on \(\eta_{l}\), which standardizes the different characteristics, as well as the correlation between \(\theta\) and \(\eta_{l}\), which quantifies the relationship between the parameter of interest and the characteristic. The distance term is the number of assumed exchangeable sources for the corresponding Bayesian model times the inverse of the average distance between the primary source and those sources, and thus quantifies, in a normalized way, the similarity between the primary source and the selected (meaning that \(s_{kh}=1\)) supplemental sources of the Bayesian model \(\Omega_{k}\).
The prior in (3) is not well-defined when the Bayesian model \(\Omega_{k}\) has \(\sum_{h=1}^{H}s_{kh}=0\) thus \(s_{k1},...,s_{kH}=0\), which means none of the supplemental sources are assumed exchangeable with the primary source. So, we apply a flat prior to this circumstance to complete the prior specification, which means the prior is represented as \(\frac{1}{2^{H}}\) when \(\sum_{h=1}^{H}s_{kh}=0\). The finalized distance-embedded prior for any \(\Omega_{k}\) is:
\[\pi_{d}(\Omega_{k})=1\left(\sum_{h=1}^{H}s_{kh}>0\right)\cdot\frac{2^{H}-1}{2^ {H}}\cdot\frac{\sum_{l=1}^{L}\lambda_{l}\cdot\frac{(\sum_{h=1}^{H}s_{kh})^{2}} {\sum_{h=1}^{H}s_{kh}d_{lh}}}{\sum_{k=1}^{K}\sum_{l=1}^{L}\lambda_{l}\cdot\frac {(\sum_{h=1}^{H}s_{kh})^{2}}{\sum_{h=1}^{H}s_{kh}d_{lh}}}+1\left(\sum_{h=1}^{H }s_{kh}=0\right)\cdot\frac{1}{2^{H}}. \tag{4}\]
Parameters \(d,\lambda,b,r\) are estimated and plugged into the formula in a data-driven manner. Usually, we use the squared Euclidean distance (SED) as the distance metric: \(\hat{d}_{lh}=(\hat{q}_{l}(h)-\hat{q}_{l}(p))^{2}\), where \(\hat{q}_{l}(h)\) and \(\hat{q}_{l}(p)\) are the observed values of characteristic \(\eta_{l}\) for supplemental source \(h\) and the primary source, respectively. In one of our preliminary studies, when comparing with the Euclidean distance, adding up the squared Euclidean distances of the assumed exchangeable sources had better performance in matching the ground-truth exchangeability status of the Bayesian models. The framework is also applicable to other distance metrics if they are better fits. The weight for characteristics could be constructed using either the pooled standard deviation ratio, \(\hat{b}_{l}=\hat{\sigma}_{l}/\sum_{k=1}^{L}\hat{\sigma}_{k}\), where \(\hat{\sigma}_{k}\) is the pooled standard deviation of characteristic \(\eta_{k}\) with observations from all the sources (including the primary source), or the inverse of the pooled variance, \(\hat{b}_{l}=1/\hat{\sigma}_{l}^{2}\). When using the inverse of the pooled variance as the weight for characteristics, the constructed distance between sources turns out to be the squared Mahalanobis distance. The performance of the two weights is similar, so we will use the pooled standard deviation ratio as the default in this paper. In addition, if the target parameter is a mean, we use Pearson's correlation coefficient as the correlation estimator \(\hat{r}_{l}\), while RBF is also flexible enough to accommodate other correlation estimators or even other ways to quantify the similarity between two variables. If selecting an estimator other than the mean, we suggest choosing the correlation estimator between the target parameter and auxiliary characteristic correspondingly. For example, if the estimator is a minimum or maximum, it is better to choose Spearman's correlation over Pearson's correlation, since Spearman's correlation is based on the ranked values rather than the raw data and is thus more consistent with estimators related to ranking.
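The construction in equations (3) and (4) is summarized in the sketch below, assuming the characteristics have already been min-max normalized with the pooled min and max (Section 3.3). The correlation estimator (Pearson correlation between the source-level sample means and each characteristic), the small constant guarding against zero distances, and all function and variable names are our own illustrative choices rather than the authors' implementation:

```python
import numpy as np
from itertools import product

def distance_embedded_prior(eta_primary, eta_supp, ybar_primary, ybar_supp, rho=0.3):
    """Distance-embedded prior pi_d over all 2^H exchangeability models (Eqs. 3-4)."""
    H, L = eta_supp.shape
    all_eta = np.vstack([eta_primary, eta_supp])                 # (H+1, L) characteristics
    all_ybar = np.concatenate([[ybar_primary], ybar_supp])       # (H+1,) source-level sample means

    # Pearson correlation between the source-level means and each characteristic
    r = np.array([np.corrcoef(all_ybar, all_eta[:, l])[0, 1] for l in range(L)])
    sd = all_eta.std(axis=0, ddof=1)
    b = sd / sd.sum()                                            # pooled-SD-ratio weights b_l
    lam = b * np.abs(r) * (np.abs(r) > rho)                      # lambda_l with correlation screening

    d = (eta_supp - eta_primary) ** 2                            # squared Euclidean d_{lh}, shape (H, L)

    models = list(product([0, 1], repeat=H))                     # all 2^H exchangeability models
    scores = np.zeros(len(models))
    for k, s in enumerate(models):
        s = np.asarray(s)
        m = s.sum()
        if m == 0:
            continue                                             # null model handled below
        denom = d.T @ s + 1e-12                                  # sum_h s_kh d_{lh}, one value per l
        scores[k] = np.sum(lam * m ** 2 / denom)

    prior = np.full(len(models), 1.0 / 2 ** H)                   # flat prior, kept for the null model
    nonnull = scores > 0
    if nonnull.any():
        prior[nonnull] = (2 ** H - 1) / 2 ** H * scores[nonnull] / scores.sum()
    return models, prior
```

The returned prior sums to one, with the null model kept at \(1/2^{H}\) and the remaining mass distributed over the non-null models as in equation (4); it can then be blended with the flat prior as in equation (6) below.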
### Distance-embedded Prior for Auxiliary Parameters
So far, we have assumed that the multiple observations in the primary and supplemental sources contribute only to estimating the target parameter. In some cases, there may also be additional observations that could contribute to estimating other parameters that are potentially correlated with the target parameter. We refer to these as _auxiliary parameters_. Note that the distinction between auxiliary characteristics and auxiliary parameters is the number of per-source observations used to define/estimate them; an auxiliary characteristic is defined by a single observed value per source, while an auxiliary parameter is estimated based on multiple observations per source. We could borrow information from \(L\) auxiliary parameters, denoted \(\zeta_{1},...,\zeta_{L}\). Parameter \(\zeta_{l}\) is assumed to have correlation \(r_{l}\) with \(\theta\), and the distance between the primary source and supplemental source \(h\) on parameter \(\zeta_{l}\) is denoted \(d_{lh}\). For each source, we have vectors of measurements on the auxiliary parameters, and there are two ways to define the distance function. First, the distance could be defined by distance metrics between distributions, such as the Kullback-Leibler (KL) divergence, Hellinger distance, or Bhattacharyya distance, which all belong to the family of f-divergences. Among those distance metrics, the KL divergence is the most commonly used, and a symmetric version of the KL divergence is Jeffreys divergence,
which is used in our distance function:
\[\hat{d}_{lh}=D_{KL}(\hat{\xi}_{l}(p)\,||\,\hat{\xi}_{l}(h))+D_{KL}(\hat{\xi}_{l}(h)\,||\,\hat{\xi}_{l}(p)), \tag{5}\]
where \(\hat{\xi}_{l}(p)\) and \(\hat{\xi}_{l}(h)\) are the observed probability density functions of the parameter \(\zeta_{l}\) for the primary source and supplemental source \(h\), respectively. The Hellinger distance and Bhattacharyya distance are also both symmetric and could likewise be applied as the distance function. On the other hand, the parameters could also be treated as characteristics by collapsing them to the form of a single observation per source. When collapsing the original auxiliary parameter, as the default we would recommend using the same estimator as for the target parameter; otherwise, users should prudently use their best judgment and expertise on the specific practical circumstance to select appropriate estimators that represent the exchangeability in the target parameter. The corresponding estimated correlations between the parameters and distances between the primary source and supplemental sources can then be calculated in a manner similar to that for the characteristics described in Section 3.1.
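As one concrete possibility, if each empirical density \(\hat{\xi}_{l}(\cdot)\) is approximated by a Gaussian fitted to the observed sample (an assumption made here only to obtain a closed form; the paper does not prescribe a particular density estimator), Jeffreys divergence in equation (5) can be computed as in this sketch:

```python
import numpy as np

def jeffreys_divergence_gaussian(x_primary, x_supp):
    """Symmetric KL (Jeffreys) divergence between two samples, with each empirical
    density approximated by a Gaussian fitted to its own sample."""
    m1, v1 = np.mean(x_primary), np.var(x_primary, ddof=1)
    m2, v2 = np.mean(x_supp), np.var(x_supp, ddof=1)
    kl_12 = 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    kl_21 = 0.5 * (np.log(v1 / v2) + (v2 + (m1 - m2) ** 2) / v1 - 1.0)
    return kl_12 + kl_21
```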
### Distance-embedded Prior in MEM
When applying the prior on MEM, we seek to balance between the amount of borrowing from the target parameter and the amount of borrowing from auxiliary characteristics or parameters. Hence, we propose a new prior for model \(\Omega_{k}\) which is a convex combination of the original flat prior and the constructed distance-based prior:
\[\pi(\Omega_{k})=(1-\mathbf{a})\cdot\frac{1}{2^{H}}+\mathbf{a}\cdot\pi_{d}(\Omega_{k}). \tag{6}\]
\(\pi(\Omega_{k})\) can be plugged into the formula that calculates \(\mathbf{w}_{k}\) for MEM.
The value of \(\mathbf{a}\in[0,1]\) in (6) determines the relative influence of the original and distance-based priors. Usually, we recommend selecting \(\mathbf{a}=1\) to fully leverage the benefit of information borrowing. However, when the correlation between the target parameter and auxiliary characteristics or parameters \(r_{l}\) is small, the distance-embedded prior will introduce bias into the model and influence the performance of the final MEM. There are two solutions for those cases: we could either select \(\mathbf{a}<1\) to borrow less from the auxiliary information and control the loss caused by the extra bias, or set \(\rho>0\) to drop the parameters which are weakly correlated with the target parameter and have a high risk of introducing unnecessary bias into the model. In order to maximize the gain and minimize the potential loss, we recommend applying the second approach. A rule-of-thumb selection of \(\rho\) based on our simulations is 0.3, as little seems to be gained when auxiliary characteristics or parameters have lower correlations than this. If there is particular concern about weakly correlated characteristics potentially doing more harm to the final performance when there is selection bias in the primary source, we recommend increasing \(\rho\) accordingly as an adjustment.
In addition, before information borrowing we normalize the auxiliary characteristics (and similarly the parameters) with the pooled min and max to make sure the characteristics are on the same 0-1 scale. Note that the usual standardization is not appropriate in our case because we assume that our data arise from a mixture of normals. Thus, when there are both exchangeable and nonexchangeable sources, conventional location-scale normalization can over-shrink the data, which may cause the distance calculation to be nonlinearly and undesirably shrunk for some of the characteristics or parameters.
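A minimal sketch of this final step, combining the pooled min-max normalization with the convex combination in equation (6); the function names are ours:

```python
import numpy as np

def pooled_minmax_normalize(x_all):
    """Min-max normalize each characteristic using the pooled min/max over all sources."""
    lo, hi = x_all.min(axis=0), x_all.max(axis=0)
    return (x_all - lo) / np.where(hi > lo, hi - lo, 1.0)

def reinforced_prior(pi_d, H, a=1.0):
    """Convex combination of the flat prior and the distance-embedded prior (Eq. 6)."""
    return (1 - a) / 2 ** H + a * np.asarray(pi_d)
```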
## 4 Simulation studies
The goal of our simulation studies is to demonstrate the performance of our proposed distance-embedded prior. Via three different simulation scenarios, we show in this section that our proposed approach not only achieves better performance over the regular MEM with flat prior on posterior weights when using auxiliary characteristics or auxiliary parameters, but can also correct selection bias in the primary source through the data borrowing process. The reinforced borrowing framework provides the most benefit for the inference of a target parameter when there is a limited amount of primary data available which can be "reinforced" with auxiliary characteristics and parameters. So, our simulation settings are designed to reflect such scenarios. To evaluate the methods, the measures include posterior variances, biases, mean square errors (MSE) or root mean square errors (RMSE), and the posterior weight of the ground-truth 'correct' model. We note that our approach is not expected to yield a large increase in the effective supplemental sample sizes [21] (ESSS) over regular MEM because we are mainly directing the method to borrow from the appropriate sources instead of borrowing more. Results on ESSS are available as supplemental Materials.
For all scenarios, the simulations are repeated 1000 times with different random seeds for data generation. To better demonstrate the basic behavior of our method, in Simulations I and II we set the correlation threshold \(\rho\) (introduced in Section 3.3) to 0, so that no auxiliary characteristics or parameters are screened out. In Simulation III, we do not change the correlations and provide two scenarios with \(\rho=0.5\) as comparisons. All calculations are completed with R version 4.0.2 [22].
### Simulation Scenario I: Borrowing from Characteristics
The first scenario illustrates the case in which the distance-embedded prior is constructed based on auxiliary characteristics. The characteristics are assumed to be correlated with the target parameter but not necessarily with each other, because it is common to take the characteristics of the sources as observed. In addition, although the covariance matrix among the characteristics is not strictly diagonal, and correlation between characteristics would duplicate information if the characteristics were treated as independent, such correlation affects all supplemental sources equally; it merely rescales the distance metrics and has minimal impact on the final weights. The observations of the target parameter are drawn from \(N(0,1)\) for the primary source and the 5 exchangeable supplemental sources, and from \(N(1,1)\) for the 5 nonexchangeable supplemental sources. The sample size of the primary source is fixed at 10, while supplemental sources have sample sizes from 5 to 15. The auxiliary characteristics are generated independently with predetermined correlations \(r_{l}\) with the ground truth of the source-level means of the target parameter. More details about generating the auxiliary characteristics are available in the Supplemental Material.
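The exact generation scheme is given in the authors' supplement; the sketch below shows one standard way (our assumption, not necessarily the authors' scheme) to obtain a characteristic with an approximate target correlation \(r\) to the source-level means, by mixing the standardized means with independent Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Source-level true means: primary + 5 exchangeable sources at 0, 5 nonexchangeable at 1
mu = np.array([0.0] * 6 + [1.0] * 5)

def characteristic_with_correlation(mu, r, rng):
    """Characteristic with approximate correlation r to the source-level means."""
    z = (mu - mu.mean()) / mu.std()
    return r * z + np.sqrt(1.0 - r ** 2) * rng.standard_normal(len(mu))

eta = np.column_stack([characteristic_with_correlation(mu, r, rng) for r in (0.99, 0.7, 0.5)])
```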
Different correlation combinations between the target parameter and characteristics are tested in this scenario. The results are shown in Figure 2. Each bar in the figure stands for a separate scenario with different sets of correlations between the characteristics and the means of the target parameter. As the figure shows, almost all the scenarios have better performance than the original MEM, except the case with characteristics all weakly correlated with the target parameter. In the best case with correlations of 0.99, 0.7 and 0.5, the distance-embedded prior decreases the posterior variance by a median of 36.8% and the bias by 55.5%, leading to a reduction in MSE of 41.4% (RMSE lower by 23.5%). The posterior weight of the correct model increases by 68.8%. When comparing across different scenarios, the performances are close between having 3 characteristics with a correlation of 0.7 and having 5 characteristics with the same correlations. In general, the higher the correlation between the auxiliary characteristic and the target parameter, the more advantage could be obtained by the distance-embedded prior. Weakly correlated characteristics (correlation \(\leq\) 0.3) appear to harm the performance in terms of posterior variance, bias, and RMSE, but if there
are also characteristics with high correlation in the study, the effect would be offset to some extent. For example, in one scenario the advantage gained from a characteristic with a correlation of 0.7 compensates for the disadvantage introduced by two characteristics with correlations 0.3 and 0.1, yielding 9.5% lower posterior variance, 7.9% lower bias, and an MSE around 9% lower. Also, even when the characteristics are only weakly correlated with the target parameter, the weight of the correct model is still increased by 43.4%, which means the prior keeps directing the MEM towards the ground truth.
### Simulation Scenario II: Borrowing from Auxiliary Parameters
Under this scenario, we test the performance of the distance-embedded prior when borrowing from auxiliary parameters. For simplicity, we assume that there are the same sample sizes from each of the auxiliary parameters, and that this number matches the sample size from the target parameter. We generate the target parameter of a primary source and 5 exchangeable supplemental sources from \(N(0,p(1-p))\) and generate the 5 nonexchangeable sources from \(N(1,p(1-p))\). We randomly draw 10 samples for the primary source and the sample sizes of supplemental sources are randomly selected from 5 to 15. Then \(p\) is dynamically calculated from the sample sizes as the proportion of the exchangeable samples to make sure the correlations are correctly preserved when generating auxiliary parameters using the Cholesky decomposition. Details are described in the Supplemental Material. The distance function used in this study is Jeffreys divergence since it provides the most consistent results compared with the other divergences mentioned above.
The results are illustrated in Figure 3. Note that the numbers in the title of each bar are the correlations between the target parameter and the parameters; the full covariance matrices are in Table 1 in the Supplemental Material. The performance for auxiliary parameters follows a similar trend to that for auxiliary characteristics, although the magnitudes are smaller. The best-performing scenario is still the case with correlations 0.99, 0.7, and 0.5, which has 31.9% lower posterior variance, 19.1% lower bias, and 20.4% lower MSE (10.8% lower RMSE) when compared with a regular MEM. Adding more parameters with the same correlation of 0.7 provides only a modest benefit to the final MSE (12.4% lower for 3 parameters vs 14.5% lower for 5 parameters). The shape of the auxiliary parameter distribution also does not have much impact on the result when we generate the parameters from either an exponential or a normal distribution. The parameters with lower correlations do not hurt the performance as much as in the characteristics scenario, but there is barely any advantage to borrowing from them. Still, the weights on the correct model are always higher for the distance-embedded prior.
### Simulation Scenario III: Correcting the Selection Bias of the Target Parameter Using Characteristics
To illustrate the method without making the problem overly complicated, we consider the selection bias of the primary source as represented by the truncated normal distribution. The target parameter of supplemental sources is either from regular or truncated normal distributions, with randomly sampled truncating thresholds. The correlations of the 3 characteristics are set to be \(0.99,0.7\) and \(0.5\), respectively. Note that the characteristics are correlated with the means of the ground-truth unbiased target parameter instead of the observed means of the truncated normal distributions. The bias in the results is also calculated with respect to the true underlying mean of 0.
We test different truncating thresholds and directions in different scenarios. The results are shown in Figure 4. There are 4 basic scenarios in this simulation study: \(\theta>0\) means the target parameter of the primary source is sampled from a truncated normal \(TN_{x>0}(0,1)\); \(\theta<0\) means the target parameter of the primary source is from
\(TN_{x<0}(0,1)\); '\(\theta>0,\eta<u,u\sim Unif(-1,1)\)' is when the target parameter of the primary source is from a truncated normal \(TN_{x>0}(0,1)\), the target parameter of the supplemental sources from truncated normal \(TN_{x>u}(0,1)\) if exchangeable otherwise \(TN_{x>u}(1,1)\), where \(u\sim Unif(-1,1)\); and '\(\theta>0,\eta<u,u\sim Unif(-1,1)\) or \(u\sim Unif(0,2)\)' is a similar case with the target parameter of the exchangeable supplemental sources having \(u\sim Unif(-1,1)\) and non-exchangeable supplemental sources having \(u\sim Unif(0,2)\). There are two extra cases with \(\rho=0.5\) for \(\theta>0\) and \(\theta<0\). Among all the scenarios, \(\theta>0\) with \(\rho=0.5\) works the best, with a 10.8% decrease in bias and 17.7% reduction in MSE, compared with a 6.9% decrease in bias and 10.6% reduction in MSE for the case with \(\rho=0\). The changes in bias and MSE are substantial in absolute values because the bias from the original MEM is around 0.8. The posterior variances of both \(\theta>0\) scenarios (\(\rho=0.5\) and \(\rho=0\)) are inflated by over 20%, but we consider the increase to actually be a sign of good performance, because the original MEM would borrow more than it should from non-exchangeable sources, leading to an artificially small posterior variance. The posterior weights of the \(\theta>0\) cases are 7 to 8 times those of the original MEM, meaning that the distance-embedded prior greatly encourages borrowing from truly exchangeable sources and discourages borrowing from nonexchangeable sources that appear to be exchangeable. The idea is different for the \(\theta<0\) scenario, where we try to encourage borrowing in general because none of the sources appears to be exchangeable and the null model takes most of the posterior weight. The posterior variances decrease by 17.4% and 3.4% for \(\rho=0.5\) and \(\rho=0\), respectively. The decreases in bias are 7.8% and 4.1%, and the reductions in MSE are 14.5% and 6.7%. For the other two scenarios with biased supplemental sources, the method is robust to the two different ways of truncating the supplemental sources, but both cases show only modest improvements of around 3.5% in bias and around 7.3% reductions in MSE, while the posterior variances decline by 13.0% and 25.1%, respectively.
## 5 Application
During the past two years, daily lives and routines have been reshaped by the COVID-19 pandemic. In order to develop policies that improve the quality of life during and after the pandemic, the COVID Travel Impact (CTI) Study was conducted to investigate the impact of the pandemic on people's daily trips and activities. CTI enrolled 160 participants living in the Minneapolis-St. Paul area that successfully completed the intake survey followed by 14 days of data collection using a mobile application, Daynamica, that records daily trips and activities using smartphone sensor data and delivers surveys asking participants to provide additional COVID-related information about those trips and activities. In an end-of-day survey, participants were prompted to provide information about their trips and activities during the day.
In our study, to further understand the risk of exposure in different subpopulations, we are interested in how to better estimate the number of contacts during activities. In event surveys collected during the day, there are questions asking participants to report the approximate number of contacts (as an integer) and the level of congestion or crowdedness (on a 1-4 scale) during each trip or activity. In the end-of-day survey, the participants were asked about their level of concern about having contracted COVID that day (on a 1-5 scale). For this illustration, our target parameter is the mean number of contacts during activities for a given individual. All other information is transformed to characteristics by taking the individual-level average, which means for each question, either from the event surveys or the end-of-day survey, there is only one number per person to borrow from. Also, to further highlight the potential value of our method when used in conjunction with MEM, we focus on the subpopulation of \(n=11\) participants who only work from home (having \(n=10\) supplemental sources for each primary source makes standard MEM computationally feasible, though as we note in the Discussion, RBF is also compatible with approaches that do
source selection for MEM). Our goal is to do inference on the mean for each of the 11 participants, so each individual in turn serves as the primary source. For a given primary source, we randomly subsample 10 measurements of the target parameter from each source. The process is repeated 500 times for each participant treated as the primary source, using the original MEM, which only borrows from the target parameter, and RBF, which borrows from both the characteristics and the target parameter. A naive approach that directly calculates the sample mean and the standard deviation is also applied as a comparison. Leveraging the estimates from the 500 iterations on the 11 participants, the posterior variance, bias, MSE, and ESSS are calculated to compare the performance of the three approaches. Note that bias is estimated as the posterior mean minus the 'ground-truth' mean of all the observations from the individual. There are on average 73 total observations per individual, which is usually enough for good parameter inference, so we treat the overall mean of all the observed data as the ground truth of the target parameter.
The summarized results are shown in Figure 5. The top panel shows the posterior weights in RBF and MEM. On the x-axis, each column represents the results of borrowing from a single individual viewed as the primary source, and the colors represent the average sum of posterior weights from the models containing the supplemental source on the y-axis, across 500 iterations. The RBF panel shows visibly higher contrast, meaning that the data borrowing process is reinforced in RBF by dynamically upweighting or downweighting certain supplemental sources for different borrowers. In the bottom panel, RBF leads to lower posterior variance, lower bias, lower MSE, and a slightly higher ESSS compared with both the original MEM and the naive approach. RBF has a 27.4% reduction in the median posterior variance compared with the original MEM (0.093 vs 0.128) and a 3.1% decrease in the median absolute bias (0.356 vs 0.368), which leads to an 18.0% decline in the median MSE (0.341 vs 0.416). All differences in performance metrics substantially exceed the magnitude that would be expected by chance given our stochastic resampling process. The median ESSS of RBF is 10.6% larger than that of MEM (26.047 vs 23.553) and, as expected, the resulting disparity is even larger between RBF and the naive approach.
## 6 Discussion
In this paper, we have proposed the reinforced borrowing framework (RBF) as a novel comprehensive data leveraging method inspired by multisource exchangeability models (MEM). By combining data not only from the supplemental sources on the same target parameter, but also from additional data sources, such as external or auxiliary parameters and characteristics which are closely correlated with the target parameter, RBF increases the efficiency of individual parameter inference and decreases the potential for selection bias. When compared with the previous application of MEM on heterogeneous treatment effects [16], RBF avoids joint modeling when leveraging high dimensional data. In addition, RBF is easily compatible with the current extensions of MEM such as iMEM [15] and dMEM [18] for source selection and clustering when there are a large number of available supplemental sources. Overall, RBF is a flexible and efficient data borrowing framework that outperforms the existing methods in a range of scenarios.
In a secondary simulation study (not shown in this manuscript), we found that under certain simulation settings, the approach of collapsing auxiliary parameters to their means and then treating parameters the same way as characteristics may sometimes significantly outperform using distribution-based distance metrics, such as f-divergences. Under our simulation setups, the divergences tend to overreact to the difference in the shape or spread of the distributions between the primary and supplemental sources, especially when the sample size is small and the sample SD estimation is sensitive to the sample selection. For instance, the divergence between the primary and a supplemental source might be overestimated when there are outliers in the observation and the SD estimation is inflated. However,
we still believe in the potential advantage of using divergences as distance metrics when the shape or spread of the auxiliary parameter distribution also implies the exchangeability or non-exchangeability of the target parameter. This is because, when the standard deviation of the auxiliary parameter is large enough to obscure the signal of exchangeability in the mean or median, using the sample mean of the auxiliary parameter may produce wrongly estimated correlations and mislead the borrowing process. However, the advantages and disadvantages could be conditional on the specific scenario. In general, we recommend exploring the distribution of both the parameters of interest and the auxiliary parameters before applying the method.
RBF also has some limitations in its current form. First, RBF is now demonstrated only for estimating the mean. It could be challenging to derive a closed-form posterior distribution for some of the other parameters, and an MCMC process may substantially increase computational complexity. This should not be considered a weakness specifically for RBF since similar limitations apply to almost all Bayesian-based methods. Second, the advantage of RBF over MEM on posterior variance and MSE is more clear when the available sample size for individual sources is modest. When there is more data, parameter inference does not require as much "reinforcement" and hence the extra information borrowed by RBF does not make much difference. However, as we show, even in larger sample size scenarios RBF could still provide value when selection bias impacts the data available for estimating the target parameter. Third, to provide substantial reinforcement of data borrowing, auxiliary characteristics and parameters must be moderately to highly correlated with the target parameter. In practical problems, there will not always be measures available with the high degree of correlation needed to achieve the largest possible gains from RBF.
## Appendix: Data Generation in Simulation
### Simulation Scenario I: Borrowing from Characteristics
The parameter of interest is from either \(N(0,1)\) for the primary source and 5 exchangeable supplementary sources or from \(N(1,1)\) for 5 nonexchangeable supplementary sources. The sample size of the primary source is fixed at 10, while supplementary sources have sample sizes from 5 to 15. The auxiliary characteristics are generated independently with predetermined correlations \(r_{I}\) with the ground truth of the source-level means of the parameter of interest, which is a vector of 6 zeros and 5 ones, denoted \(Y\). For each characteristic \(X_{I}\), we first randomly sample a vector \(X^{\prime}_{I}\) from a Gaussian distribution, and the correlated characteristic is \(X_{I}=r_{I}\cdot SD(residual(Y))\cdot Y+\sqrt{1-r_{I}^{2}}\cdot SD(Y)\cdot residual(Y)\), where \(residual(Y)\) is the vector obtained by removing the component of \(Y\) from \(X^{\prime}_{I}\), and is therefore orthogonal to \(Y\). Note that the generated characteristics would almost always have lower estimated correlations in practice compared with the desired correlations because of the randomness of the parameter of interest.
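For illustration, a minimal NumPy sketch of this construction (the function and variable names are ours and not from the original implementation; the residual is obtained by regressing \(X^{\prime}_{I}\) on \(Y\) with an intercept):

```python
import numpy as np

def correlated_characteristic(y, r, rng):
    """Generate a characteristic with (in-sample) correlation r with y.

    Follows X_I = r * SD(resid) * Y + sqrt(1 - r^2) * SD(Y) * resid,
    where resid is the part of a random draw orthogonal to Y.
    """
    x_prime = rng.normal(size=y.shape)               # random vector X'_I
    design = np.column_stack([np.ones_like(y), y])   # intercept + Y
    coef, *_ = np.linalg.lstsq(design, x_prime, rcond=None)
    resid = x_prime - design @ coef                  # orthogonal to Y, zero mean
    return r * resid.std(ddof=1) * y + np.sqrt(1 - r**2) * y.std(ddof=1) * resid

rng = np.random.default_rng(0)
y = np.array([0.0] * 6 + [1.0] * 5)                  # source-level means (6 exchangeable, 5 not)
x = correlated_characteristic(y, r=0.7, rng=rng)
print(np.corrcoef(y, x)[0, 1])                       # 0.7 up to floating-point error
```

Because the residual is exactly orthogonal to \(Y\) in-sample, the constructed characteristic attains the desired correlation with the mean vector exactly; the lower correlations noted above arise once the observed, noisy data replace the source-level means.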
### Simulation Scenario II: Borrowing from Auxiliary Parameters
For simplicity, we assume that each auxiliary parameter has the same number of observations, and that this number matches the number of observations of the parameter of interest. To generate parameters with a fixed correlation matrix \(R\) and covariance matrix \(\Sigma_{R}=Diag(\sigma)\cdot R\cdot Diag(\sigma)\), where \(Diag(\sigma)\) is a diagonal matrix with the desired standard deviations of the parameters on its diagonal, a common approach is to use the Cholesky decomposition \(\Sigma_{R}=C^{T}C\). Suppose all observations of our parameter of interest form a random column vector \(Y\). To obtain the matrix \(X\) with the auxiliary parameters as columns such that the full parameter matrix \((Y,X)\) has correlation matrix \(R\), we first generate a random sample matrix \(X^{\prime}\), with each row drawn identically and independently, of the same dimensions as \(X\). The parameter matrix is then obtained as \((Y,X)=(Y,X^{\prime})\,C\): requiring the first element of \(Diag(C)\) to equal \(1\) preserves the first column, and the covariance matrix of \((Y,X)\) becomes \(C^{T}\,Cov((Y,X^{\prime}))\,C\). We can set \(Cov((Y,X^{\prime}))=I\), since the standard deviation of \(Y\) is determined to be \(1\) when the first element of \(Diag(C)\) is \(1\) and the columns of \(X^{\prime}\) are independent random samples. A positive definite covariance matrix example is shown below.
\[\Sigma_{R}=I\begin{pmatrix}1&0.7&0.3&0.1\\ 0.7&1&0.4&0.1\\ 0.3&0.4&1&0.05\\ 0.1&0.1&0.05&1\end{pmatrix}I\]
Note that the standard deviation of \(Y\) is the pooled standard deviation across all observations from the different sources. Thus, we have to adjust the standard deviations proportionally when generating the sources, since the heterogeneity between the sources introduces extra variability into \(Y\). Since our parameter of interest is supposed to be Gaussian, the variance of the mixture Normal is \(\sigma^{2}+p(1-p)(\mu_{0}-\mu_{1})^{2}\), where \(p\) is the proportion of observations that are exchangeable, exchangeable sources are drawn from \(N(\mu_{0},\sigma^{2})\), and non-exchangeable sources from \(N(\mu_{1},\sigma^{2})\). So, given exchangeable mean \(\mu_{0}=0\) and non-exchangeable mean \(\mu_{1}=1\), with the desired standard deviation of the mixture Normal, and hence of \(Y\), equal to \(1\), we could solve for \(\sigma=\sqrt{p(1-p)}\). In practice, we use the actual selected sample sizes to calculate \(p\) to keep the theoretical standard deviation of \(Y\).
We generate the parameter of interest of the primary source and \(5\) exchangeable supplementary sources from \(N(0,p(1-p))\) and generate the \(5\) nonexchangeable sources from \(N(1,p(1-p))\). There are \(10\) samples in the primary source and the sample sizes of the supplementary sources are randomly selected from \(5\) to \(15\). Then \(p\) is dynamically calculated from the sample sizes as the proportion of exchangeable samples. The auxiliary parameters are then generated as described above using the Cholesky decomposition. To keep the process simple, we randomly generate each column in \(X^{\prime}\) from \(N(0,1)\) or \(Exp(1)\), depending on our assumptions on the distribution of the auxiliary parameters. The distance function used in this study is the Jeffreys divergence, since it provides the most consistent results compared with the other divergences mentioned before.
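A minimal sketch of the Cholesky-based construction, under the simplifying assumption of unit variances so that the first diagonal element of \(C\) is automatically 1 (variable names are illustrative, and with finite samples the empirical covariance only approximates \(\Sigma_{R}\)):

```python
import numpy as np

R = np.array([[1.0, 0.7, 0.3, 0.1],
              [0.7, 1.0, 0.4, 0.1],
              [0.3, 0.4, 1.0, 0.05],
              [0.1, 0.1, 0.05, 1.0]])    # target correlation = covariance (unit SDs)

rng = np.random.default_rng(1)
n = 10
y = rng.normal(size=n)                    # parameter of interest (simplified: N(0,1))
x_prime = rng.normal(size=(n, 3))         # independent columns for auxiliary parameters

C = np.linalg.cholesky(R).T               # upper triangular, R = C.T @ C, C[0, 0] = 1
full = np.column_stack([y, x_prime]) @ C  # first column stays equal to y
x_aux = full[:, 1:]                       # auxiliary parameters correlated with y
```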
### Simulation Scenario III: Borrowing from Auxiliary Parameters
The original distribution for the primary source and the exchangeable supplementary sources is always \(N(0,1)\), and that for the non-exchangeable sources is \(N(1,1)\). The sample sizes and correlated characteristics are generated in the same way as in Simulation Scenario I.
**Over the past decade, researchers have sought to leverage supplemental data in order to improve individualized inference. Existing methods such as multisource exchangeability models (MEM) have been developed as alternatives that support parameter inference by borrowing information from supplemental sources. MEM and its alternatives decide how much information to borrow based on the exchangeability between the primary source and the supplemental sources, where exchangeability denotes equality of the target parameter. Other information that could help determine this exchangeability is, however, ignored. In this paper, we propose a generalized reinforced borrowing framework (RBF) that strengthens the use of supplemental data when inferring the target parameter. RBF provides a "reinforcement" mechanism that augments inference by using not only data on the target parameter but also a variety of auxiliary sources of information.** |
2309.06393 | Real-time VaR Calculations for Crypto Derivatives in kdb+/q | Cryptocurrency market is known for exhibiting significantly higher volatility
than traditional asset classes. Efficient and adequate risk calculation is
vital for managing risk exposures in such market environments where extreme
price fluctuations occur in short timeframes. The objective of this thesis is
to build a real-time computation workflow that provides VaR estimates for
non-linear portfolios of cryptocurrency derivatives. Many researchers have
examined the predictive capabilities of time-series models within the context
of cryptocurrencies. In this work, we applied three commonly used models -
EWMA, GARCH and HAR - to capture and forecast volatility dynamics, in
conjunction with delta-gamma-theta approach and Cornish-Fisher expansion to
crypto derivatives, examining their performance from the perspectives of
calculation efficiency and accuracy. We present a calculation workflow which
harnesses the information embedded in high-frequency market data and the
computation simplicity inherent in analytical estimation procedures. This
workflow yields reasonably robust VaR estimates with calculation latencies on
the order of milliseconds. | Yutong Chen, Paul Bilokon, Conan Hales, Laura Kerr | 2023-09-11T14:51:30 | http://arxiv.org/abs/2309.06393v1 | # Real-time VaR Calculations for Crypto Derivatives in kdb+/q
###### Abstract
The cryptocurrency market is known for exhibiting significantly higher volatility than traditional asset classes. Efficient and adequate risk calculation is vital for managing risk exposures in such market environments where extreme price fluctuations occur in short timeframes. The objective of this thesis is to build a real-time computation workflow that provides VaR estimates for non-linear portfolios of cryptocurrency derivatives. Many researchers have examined the predictive capabilities of time-series models within the context of cryptocurrencies. In this work, we applied three commonly used models - EWMA, GARCH and HAR - to capture and forecast volatility dynamics, in conjunction with the delta-gamma-theta approach and the Cornish-Fisher expansion, to crypto derivatives, examining their performance from the perspectives of calculation efficiency and accuracy. We present a calculation workflow which harnesses the information embedded in high-frequency market data and the computational simplicity inherent in analytical estimation procedures. This workflow yields reasonably robust VaR estimates with calculation latencies on the order of milliseconds.
## 1 Introduction
Since the creation of Bitcoin over a decade ago, cryptocurrencies have undergone a notable transformation, evolving from a speculative concept to a distinct asset class known as virtual assets. These digital assets are now increasingly acknowledged by investors as a diversifier for their portfolios [1]. The growth of derivative markets has contributed to enhanced market efficiency and liquidity, yet it has also intensified price jumps and mass liquidation events, as observed in the market of Bitcoin (BTC) [2, 3] and Ethereum (ETH) [4]. The
significant fluctuations in the underlying cryptocurrency assets, amplified through the leverage of derivative products, expose market participants to exceptionally high risks. Consequently, investors need adequate tools for making informed investment choices and managing market risk.
In this thesis, we focus on the application of _Value-at-Risk (VaR)_, which is the most prominently adopted risk management tool for assessing market risk, to portfolios consisting of cryptocurrency derivatives, specifically the futures and options traded on Deribit. By focusing on downside risk, VaR answers the fundamental question from investors: how much loss might my portfolio incur over some investment horizon.
While employing a sophisticated inference model to capture volatility dynamics, together with a simulation-based approach for VaR estimation, often leads to a better understanding of risk exposures, its practicality can be hindered by calculation latency, which forces risk estimates to rely on stale market data. It is necessary to balance the trade-off between accuracy and latency, especially in the context of a high-frequency trading environment.
This thesis aims to develop a real-time solution for VaR calculation tailored to cryptocurrency derivative portfolios. This solution aims to provide reasonably accurate risk estimation while maintaining computational efficiency. To accomplish this objective, this thesis leverages the capabilities of kdb+, a specialised database designed to support real-time analytics on high-frequency time-series data.
### kdb+ and q
kdb+ is a vector-oriented in-memory database developed by KX [5]. The database is optimised for real-time analysis on large and continuously growing volumes of time-series data, making it a popular choice among financial institutions. q is the built-in array programming and query language in kdb+.
The outstanding performance of kdb+ comes from its optimised data storage workflow, its columnar representation of data and its small footprint of just over 800kb [6]. To support immediate queries on ingested data, data is first published to a real-time database (RDB) in memory. At the end of each day, the data is then migrated to an on-disk historical database (HDB) where it is stored as memory-mapped files, eliminating CPU operations required for object translation between memory and disk. The columnar format allows for efficient disk writes and facilitates more targeted data retrieval when queried with q-SQL, which mimics SQL syntax for convenience but operates on tables via vector operations on column lists rather than on a row-by-row basis.
kdb+ implements a tick capture architecture, kdb+tick [7], which subscribes to incoming high-frequency data and updates the RDB or performs relevant analytics in real time, as customised by q scripts. The details of the specific use case in this thesis will be discussed later.
Figure 1: kdb+tick architecture [7]
### Deribit
In this thesis, financial time series data are sourced from the Deribit exchange, a leading cryptocurrency exchange that specialises in the futures and European-style options market. Launched in June 2016, Deribit now holds over 90% of the market share in crypto options trading, with an average daily trading volume of over 1 billion dollars [8]. Notably, Deribit offers both futures and European-style cash-settled options for Bitcoin (BTC) and Ethereum (ETH), which are the crypto derivatives examined in this thesis. Therefore, utilizing data from Deribit ensures adequate coverage of the relevant financial instruments of interest.
### Contribution
Recognising the importance of appropriate risk management for investors participating in the cryptocurrency derivatives market, this work aims to develop a system that calculates Value-at-Risk (VaR) for cryptocurrency derivatives portfolios in real time. The main contributions of our work are as follows:
* **design of a real-time VaR computation workflow**: we developed a step-by-step procedure that applies analytical approaches at each stage of the computation. The workflow starts with applying a parsimonious volatility model in conjunction with efficient OLS estimators to high-frequency market data for volatility forecasting. Subsequently, the parametric approaches of delta-gamma-theta approximation and Cornish-Fisher expansion are employed to perform VaR estimation for crypto portfolios. Replacing the commonly used simulation approach with a reasonably robust analytical approach, our architecture produces real-time VaR forecast based on the most recent market data available.
* **implementation of a VaR calculation system in kdb+ and q**: we implemented a kdb+ tick architecture and a VaR calculation system in q. The tick architecture was customised to pre-process the high-frequency market data, thus facilitating more efficient and robust calculation process. The calculation system was developed to leverage the in-memory compute engine to optimise the calculation latency. When tested on a portfolio holding all available cryptocurrency derivatives on Deribit, our architecture successfully delivered VaR estimations with the shortest latency at 14.2 milliseconds and the space usage of 1MB.
* **implementation of a web-based interface in KX dashboards**: to enhance the accessibility of the calculation system, we also developed a web-based interface to deliver calculation results and other associated market metrics.
* **a comparative analysis of three common volatility models**: as part of the work, we examined the trade-off between calculation latency and accuracy for using EWMA, DCC-GARCH and HAR models in real-time volatility forecasting, in the context of tail risk metric Value-at-Risk (VaR).
### Structure
The remainder of this work is structured as follows:
**Section 2** begins with an overview of fundamental financial concepts and instruments relevant to this thesis. We then detail the common approaches to calculating the VaR metric. Furthermore, we review prior research on the stylised facts of the cryptocurrency market and the popular volatility models used to capture these observed dynamics.
**Section 3** presents the design principles and implementation details of the real-time VaR calculation system. The system comprises three key components: data service, calculation service and user inference. Following this sequence, we thoroughly examined the design and implementation choices made.
**Section 4** elaborates on the evaluation of the computation workflow. This evaluation is conducted from two dimensions: calculation latency and VaR estimate accuracy. We analysed and summarised the results derived from these empirical assessments.
**Section 5** concludes the thesis, summarising its achievements and discussing potential areas for future work.
## 2 Background
### Derivatives
Derivatives are financial products that derive values from the performance of a specific or basket of financial instruments referred to as _underlyings_. Derivatives are typically represented as contracts between two or more parties, specifying terms and conditions under which payments would be made from one party to the other. Use cases of derivatives include hedging, speculation and leveraging. Additionally, derivatives also provide an alternative medium to gain exposure to specific assets, offering flexibility and potentially lower costs compared to direct ownership. Common derivative products include options, futures, forwards and swaps. The former two are introduced in the following sections as they are the focus of this thesis.
#### 2.1.1 Futures
Futures are standardised exchange-traded contracts that imply an obligation for the parties involved to buy or sell the underlying asset at a predetermined price, known as the _strike price_, and on a specified future date, referred to as the _expiration date_ or _maturity date_. The payoff for the futures buyer at expiration is calculated as the difference between the price of the underlying asset \(S_{t}\) and the strike price \(K\), implying a linear relationship between the payoff and \(S_{t}\), as shown in Figure 2(a).
#### 2.1.2 Options
In contrast to futures contracts, options represent the right, rather than the obligation, to buy or sell the underlying asset at a predetermined strike price and on or before a specified expiration date. Call options grant the holder the right to buy, and put options grant the holder the right to sell. Depending on when the right to buy or sell can be exercised, options are categorised as European-style or American-style. European-style options allow the holder to exercise the right on the expiration date, while American-style options provide the holder with the flexibility to exercise the right at any time before the expiration date.
One important implication of this right to buy or sell is that when the underlying price is not in favour of the option holder, the holder could choose not to exercise it. This discretion introduces non-linearity into the relationship between the movement of the underlying asset price and the change in corresponding options positions, where payoff for option holders is floored at \(0\), as illustrated in Figure 2.
Figure 2: Payoff of futures and options at expiration for derivative buyer/holder.
#### 2.1.3 Cryptocurrency Derivatives
An important feature that distinguishes cryptocurrency derivatives from derivatives on traditional assets is the mechanism of cash settlement. In traditional derivatives, cash settlement takes place in the quote currency, which is the currency used to price the underlying.
To illustrate, in the case of a futures contract on the British Pound with a strike price denominated in US Dollars, the cash payment would be made in US Dollars. Conversely, for majority of cryptocurrency derivatives, particularly those traded on Deribit, profits or losses will be settled in the corresponding cryptocurrency, akin to the British Pound in the aforementioned example.
### Value-at-Risk (VaR)
Formally introduced with the RiskMetrics Technical Documents by JP Morgan in 1996, Value-at-Risk (VaR) has evolved into an established measure of market risk exposure in the banking industry since the Basel Accord III mandated the use of VaR [9]. For a given portfolio, VaR measures the maximum potential loss in market value that is not expected to be exceeded over a given time horizon \(t\) with a probability of \(p\), assuming no trading of portfolio positions occurs during \(t\) [10]. More formally, let \(R_{t}\) be a random variable representing the change in portfolio value over time horizon \(t\); VaR is then the \((1-p)\)th quantile of the distribution of \(R_{t}\), such that
\[P(R_{t}\leq VaR)=1-p \tag{1}\]
While the choice of \(p\) and \(t\) are discretionary, common parameters are 1-day and 2-week for \(t\) and 95% and 99% for \(p\). Figure 3 provides an example of 95% 1-day VaR.
#### 2.2.1 Measuring VaR
According to Holton [10], VaR is a risk measure that combines exposure and uncertainty in its representation. Therefore, VaR estimation typically involves 3 procedures:
* a mapping procedure to quantify exposure: Considering a vector of market variables \(\mathbf{R_{t}}\), this procedure expresses the value of the portfolio \(P_{t}\) as a function of these variables. For instance, in the case of a portfolio holding Bitcoin call options with strike \(K\) and expiration date \(t\), the portfolio value can be expressed as \(P_{t}=f(\mathbf{R_{t}})=\max\{\mathbf{R_{t}}-\mathbf{K},\mathbf{0}\}\), where \(\mathbf{R_{t}}\) is the market variable vector containing only the Bitcoin price.
* an inference procedure to quantify uncertainty: To quantify uncertainty in market variables over time \(t\), the procedure starts with assuming them as random variables following a stochastic process and modelling their movements via some choice of distributions, such as log-normal or Student \(t\), thus arriving at their conditional distribution at time \(t\) based on information available at time \(t-1\). The widely applied EWMA and GARCH models will be discussed in Sections 2.3.2 and 2.3.3.
Figure 3: 95% 1-day VaR for a hypothetical portfolio, representing the 5th percentile of the distribution of the change in portfolio value over 1 day.
* a transformation procedure to combine exposure and uncertainty: Leveraging the result from previous two procedures, risk is presented with a characterisation of conditional distribution of portfolio value, in alignment with the quantile concept of VaR. Various approaches exist to compute or estimate this risk measure. In general, higher accuracy comes at the cost of computation complexity. In the next section, we present the two main classes of these approaches.
#### 2.2.2 Parametric Approach
The parametric approach, also known as the variance-covariance approach, transforms the conditional distribution of market variables into that of the portfolio using analytical equations. A simplistic example is to assume a joint Gaussian distribution on the market variables, \(\mathbf{R_{t}}\sim\mathcal{N}(\mu_{t},\,\Sigma_{t})\), where \(\mu_{t}\) and \(\Sigma_{t}\) are estimated from historical market data. If the portfolio can be expressed as a linear transformation of these Gaussian random variables through \(P_{t}=f(\mathbf{R_{t}})=\mathbf{a}^{T}\mathbf{R_{t}}+b\), the portfolio value also follows a Gaussian distribution, \(P_{t}\sim\mathcal{N}(\mathbf{a}^{T}\mu_{t}+b,\,\mathbf{a}^{T}\Sigma_{t}\mathbf{a})\). With a tractable form, portfolio VaR can be derived analytically as a defined number of standard deviations below the expectation.
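As an illustration of this linear-Gaussian case, a short sketch (names and numbers are illustrative only; non-linear option portfolios are handled later via the delta-gamma-theta approach):

```python
import numpy as np
from scipy.stats import norm

def parametric_var(a, b, mu, sigma, conf=0.95):
    """VaR of the change in value P = a'R + b, R ~ N(mu, sigma), reported as a positive loss."""
    mean = a @ mu + b
    std = np.sqrt(a @ sigma @ a)
    quantile = norm.ppf(1 - conf, loc=mean, scale=std)  # (1-p)th quantile of P
    return -quantile

a = np.array([2.0, 5.0])                      # exposures to BTC and ETH returns
sigma = np.array([[0.0009, 0.0006],
                  [0.0006, 0.0012]])          # 1-day covariance of returns
print(parametric_var(a, b=0.0, mu=np.zeros(2), sigma=sigma, conf=0.99))
```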
Figure 4: Methodology for measuring VaR [10].
#### 2.2.3 Non-Parametric Approach
The non-parametric approach relies on simulation for the transformation. Changes in market variables are sampled and the portfolio is re-evaluated under each sampled market condition. The VaR estimate is then obtained by applying an appropriate sample estimator to these re-evaluations.
Depending on the specific sampling procedure employed, transformation can be further categorised into Monte Carlo transformation or historical transformation: in the former, changes in market variables are obtained from applying Monte Carlo method, which involves simulating scenarios based on specified distributions; in the latter, changes in market variables are sampled from historical data, thereby capturing the observed variability in the past market conditions.
### Volatility Model
With the widely used assumption that the conditional mean of returns equals zero, and the argument that the construction of the conditional mean is largely derived from economic theories on the behaviour of market variables [10], most of the work in the inference procedure is to model the conditional variances and covariances of market variables. This section first introduces some stylised facts observed in cryptocurrency time series data, then presents the common models used in volatility forecasting.
#### 2.3.1 Dynamics in cryptocurrency returns
When assessing price movements in financial instruments, the focus is commonly on modelling the log returns observed in the time series data, defined as
\[r_{t}=\ln\frac{P_{t}}{P_{t-1}}. \tag{2}\]
Firstly, returns are more comparable than prices. To illustrate, BTC trades around USD 27000 while ETH trades around USD 1900, so a price movement of USD 1000 implies a mild change in the BTC market but a significant one in ETH. Secondly, the additive property of log returns linearises the effect of compounding. When analysing cumulative returns over multiple periods, log returns can be summed directly. Thirdly, the non-negativity of the argument of the log function is well aligned with the non-negativity of asset prices.
Below are some stylised facts consistently found in existing literature on cryptocurrency:
* **Volatility Clustering:** empirical analysis [11, 12] established statistically significant evidence for conditional heteroskedasticity in returns behaviour, specifically the pattern that large change in crypto prices tend to be followed by large changes. Volatility shows a tendency to persist, and in statistical term, to auto-correlate. This is one of the key features leveraged in volatility forecast.
* **Volatility Asymmetry:** studies examining different periods of cryptocurrency market have found consistent evidence that volatility responds asymmetrically to past returns [12, 13, 14, 15]. For majority of cryptos, researchers observed reverse leverage effect that positive shocks increasing volatility more than negative shocks, a feature that resonates with safe-haven assets, such as gold. For BTC and ETH, however, such safe-haven feature has diminished over time. In studies covering more recent data by Gupta et al [16] and Aharon et al [17], it is noted that following the introduction of derivatives and expansion in market capitalisation, BTC and ETH resemble stocks where leverage effect dominates.
* **Excess Kurtosis:** consistently large excess kurtosis was observed for major cryptocurrencies [12, 18], implying that their return distributions have heavier tails than Gaussian distribution. In particular, in Fung's comprehensive analysis covering 254 cryptos, it was observed that the kurtosis ranges from 4.63 up to 283.61 [11].
* **Strong Intra-market Correlation:** evidence of volatility co-movement has been identified among pairs of major cryptoassets, but as illustrated in Figure 5 the direction of conditional correlation is unstable [19]. This may explain the poorer out-of-sample forecasting power of the multivariate GARCH model observed in Chi et al's work [15], where the model including interactive effects between BTC and ETH had a lower adjusted R-square than the univariate GARCH model.
* **Structural Break:** due to exogenous factors, such as geopolitical factors and social events, sudden changes in the unconditional variance of asset returns occur, implying structural breaks in the volatility model. While researchers found evidence of structural breaks in cryptocurrency markets via relevant statistical tests, they also noted that failure to account for structural breaks leads to overestimation of volatility persistence [17, 20].
#### 2.3.2 EWMA model
Implemented as part of the RiskMetrics forecasting methodology [21], the Exponentially Weighted Moving Average (EWMA) is a simple but powerful measure to capture the dynamics in volatility and covariance. Since volatility reacts quickly to shocks, and the effect of a shock declines exponentially as time passes, the EWMA estimator incorporates this mechanism by applying higher weights to more recent observations and lower weights to those further in the past, through a decay factor \(\lambda\):
\[\sigma^{2}_{T+1|1,2,\ldots T}=(1-\lambda)\sum_{t=1}^{T}\lambda^{t-1}(r_{t}- \bar{r})^{2} \tag{3}\]
where \(0<\lambda<1\). The estimator can also be expressed in a recursive format for forecasting volatility:
\[\sigma^{2}_{t|t-1}=\lambda\sigma^{2}_{t-1|t-2}+(1-\lambda)r_{t-1}^{2} \tag{4}\]
For covariance forecast, a similar estimator can be applied to each pair of market variables:
\[\sigma^{2}_{ij}=(1-\lambda)\sum_{t=1}^{T}\lambda^{t-1}(r_{it}-\bar{r}_{i})(r_ {jt}-\bar{r}_{j}) \tag{5}\]
with an equivalent recursive form.
\[\sigma^{2}_{ij,t|t-1}=\lambda\sigma^{2}_{ij,t-1|t-2}+(1-\lambda)r_{it}r_{jt} \tag{6}\]
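A small sketch of the recursive EWMA update applied to a full covariance matrix, using the zero-mean form of equations (4) and (6); the decay factor 0.94 is the classic RiskMetrics daily value and is used here only for illustration:

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """Recursive EWMA covariance: S_t = lam * S_{t-1} + (1 - lam) * r_{t-1} r_{t-1}'."""
    returns = np.asarray(returns)            # shape (T, n), e.g. BTC and ETH log returns
    cov = np.cov(returns[:20].T)             # seed with a short sample covariance
    for r in returns[20:]:
        cov = lam * cov + (1.0 - lam) * np.outer(r, r)
    return cov                               # one-step-ahead covariance forecast

rng = np.random.default_rng(0)
demo_returns = rng.normal(scale=0.02, size=(500, 2))
print(ewma_covariance(demo_returns))
```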
#### 2.3.3 GARCH model
The consensus model for capturing the volatility pattern of financial time series is the generalised autoregressive conditional heteroskedasticity (GARCH) model proposed by Bollerslev [22], together with its variants. In the standard GARCH(\(p\), \(q\)) model, the conditional variance \(\sigma^{2}_{t}\) is a weighted average of a constant, the conditional variances of previous periods, and the squared error terms, also referred to as innovations, of previous periods.
Figure 5: Conditional correlation between the price returns of Bitcoin and Ethereum, ranging from -0.70 to 0.96 [19].
#### GARCH(p,q)
\[r_{t}=\mu_{t}+\varepsilon_{t} \tag{7}\] \[\varepsilon_{t}=\sigma_{t}z_{t},\quad z_{t}\sim\mathcal{N}(0,\,1) \tag{8}\] \[\sigma_{t}^{2}=\omega+\sum_{i=1}^{p}\alpha_{i}\varepsilon_{t-i}^{2}+\sum_{j=1}^{q}\beta_{j}\sigma_{t-j}^{2} \tag{9}\]
where \(\omega>0,\alpha_{i}\geq 0\) and \(\beta_{j}\geq 0\). The parsimonious model used in most empirical study for crypto volatility is GARCH(1, 1) where \(p\)=1 and \(q\)=1.
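A minimal sketch of the GARCH(1,1) variance recursion for given parameters; in practice \(\omega\), \(\alpha\) and \(\beta\) are estimated by maximum likelihood, and the parameter values below are purely illustrative:

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """Filter sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2
    through the sample and return the one-step-ahead variance forecast."""
    var = np.var(returns)                  # initialise at the unconditional variance
    for r in returns:                      # zero conditional mean, so eps_t = r_t
        var = omega + alpha * r**2 + beta * var
    return var

rng = np.random.default_rng(0)
r = rng.normal(scale=0.02, size=1000)
print(garch11_forecast(r, omega=1e-6, alpha=0.08, beta=0.90))
```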
#### asymmetric GARCH
Such variants of GARCH model are particularly popular in volatility forecast for cryptocurrency due to the well-recognised phenomenon of asymmetric behaviour. Common asymmetric GARCH models are presented in Table 1.
The relative performance among these asymmetric models is dataset dependent. While GJR-GARCH showed better performance in terms of log likelihood than EGARCH in the data from August 2015 to December 2018 analysed by Cheikh et al [13], in the study by Fung et al [11] using March 2019 to March 2021 data, TGARCH showed better results.
#### error term in GARCH model
Although GARCH with a Gaussian error term already produces heavier tails than the Gaussian distribution, it often fails to capture the tail behaviour observed in returns sufficiently [26]. Therefore, it is common to assume that error terms follow non-Gaussian distributions, such as the Student-t, skewed Student-t and generalised normal distributions.
In particular, the cross-sectional analysis by Fung et al [11] concluded that the Student's t error distribution in general has better performance in describing the dynamics of cryptoassets, followed by its skewed version. This is also consistent with the observation by Peng et al [27] that heavy-tailed distributions produce better results than the Gaussian distribution.
#### HAR model
With the availability of high-frequency market data, a popular alternative to GARCH-type estimators is the heterogeneous autoregressive (HAR) model proposed by Corsi [28], which utilises realised variance (RV) as a non-parametric estimator to model latent quadratic variance.
#### from Quadratic Variance to Realised Variance
Let the logarithmic price \(P\) of an asset follow a stochastic process defined as:
\[dP_{t}=\mu(t)dt+\sigma(t)dW_{t} \tag{10}\]
where \(\mu(t)\) and \(\sigma(t)\) are the corresponding drift and volatility processes, respectively, and \(W_{t}\) is a standard Brownian motion [29]. Since the logarithmic transformation linearises the compounding effect in cumulative returns, the return over the time interval from \(t\) to \(t+\tau\) is:
\[r_{t,\tau}=\int_{t}^{t+\tau}\mu(s)\,ds+\int_{t}^{t+\tau}\sigma(s)\,dW_{s} \tag{11}\]
and the corresponding quadratic variance (QV) is
\[QV_{t,\tau}=\int_{t}^{t+\tau}\sigma^{2}(s)\,ds. \tag{12}\]
\begin{table}
\begin{tabular}{c c} \hline \hline Model & Volatility Equation \\ \hline EGARCH [23] & \(\log(\sigma_{t}^{2})=\omega+\alpha(|\varepsilon_{t-1}|-E(|\varepsilon_{t-1}|))+ \gamma\varepsilon_{t-1}+\beta\log(\sigma_{t-1}^{2})\) \\ GJR-GARCH [24] & \(\sigma_{t}^{2}=\omega+\alpha\varepsilon_{t-1}^{2}+\gamma I(\varepsilon_{t-1} <0)\varepsilon_{t-1}^{2}+\beta\sigma_{t-1}^{2}\) \\ TGARCH [25] & \(\sigma_{t}=\omega+\alpha|\varepsilon_{t-1}|+\gamma|\varepsilon_{t-1}|I( \varepsilon_{t-1}<0)+\beta\sigma_{t-1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of asymmetric GARCH models.
Though \(QV_{t,\tau}\) cannot be observed directly, realised variance, which can be treated as an observed variable in the presence of high-frequency data, is a consistent estimator of QV, as shown in the seminal work by Andersen and Bollerslev [30]. Realised variance refers to the sum of squared intraday returns over a defined time interval, with returns calculated at finer intervals [29]. For example, to calculate the RV over 1 day, we can divide the daily trading hours into 144 periods of 10 minutes and measure the return over each of them; the RV is then the sum of the 144 squared returns.
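As a concrete sketch of this 144-period example, assuming a pandas price series indexed by timestamp (names are illustrative):

```python
import numpy as np
import pandas as pd

def daily_realised_variance(prices: pd.Series) -> pd.Series:
    """RV per day = sum of squared 10-minute log returns within that day."""
    p10 = prices.resample("10min").last().dropna()   # 10-minute sampling
    r10 = np.log(p10).diff().dropna()                # intraday log returns
    return (r10 ** 2).resample("1D").sum()           # realised variance per day
```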
#### HAR-RV
The standard HAR model presented in Corsi's work specifies that the 1-step-ahead daily \(RV_{t+1d}\) can be modelled by HAR(3)-RV as:
\[logRV_{t+1d}=\beta_{0}+\beta_{d}logRV_{t}^{(d)}+\beta_{w}logRV_{t}^{(w)}+\beta _{m}logRV_{t}^{(m)}+\epsilon_{t} \tag{13}\]
where \(RV_{t}^{(d)}\), \(RV_{t}^{(w)}\), \(RV_{t}^{(m)}\) are the daily, weekly and monthly observed realised variance respectively [28]. Log of RVs is used to ensure the positiveness.
Though the HAR-RV model is a simple model which can be fitted with Ordinary Least Squares (OLS) estimators, it is capable of capturing the high persistence in most realised variance series and producing sound volatility forecasts [28]. Previous work by Aalborg et al [31] has found robust performance of HAR models in the context of Bitcoin volatility. In addition, Bergsli et al [32] compared HAR models with GARCH models and reported superior performance of HAR models over all GARCH models as measured by mean squared error (MSE), with the difference in performance largest for short-term forecasting horizons, such as 1 day.
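A sketch of fitting equation (13) by OLS, using the conventional 1-day, 5-day and 22-day windows for the daily, weekly and monthly components (an illustrative implementation, not the one used in this thesis):

```python
import numpy as np
import pandas as pd

def fit_har_rv(rv: pd.Series):
    """OLS fit of log RV_{t+1} on daily, weekly and monthly log RV components."""
    lrv = np.log(rv)
    df = pd.DataFrame({
        "d": lrv,                                   # daily component
        "w": lrv.rolling(5).mean(),                 # weekly component
        "m": lrv.rolling(22).mean(),                # monthly component
        "target": lrv.shift(-1),                    # next-day log RV
    }).dropna()
    X = np.column_stack([np.ones(len(df)), df["d"], df["w"], df["m"]])
    beta, *_ = np.linalg.lstsq(X, df["target"].to_numpy(), rcond=None)
    return beta                                     # [beta_0, beta_d, beta_w, beta_m]
```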
Later studies have also explored other variants of the HAR model. For example, to correct for measurement errors in daily realised variance, Bollerslev et al proposed a model incorporating realised quarticity, which is referred to as the HARQ model [33]. HARQ-type models improve forecast performance by placing less weight on historical values of RV when the measurement error is higher. Furthermore, Yu also explored including leverage effects and jump components, and reported that the leverage effect contributes more than jump components for out-of-sample volatility forecasting for BTC [34].
#### realised covariance estimator
In parallel with the univariate semimartingale process of the logarithmic price, the realised covariance estimator is derived from the covariation of the processes of two logarithmic prices. For example, Bollerslev et al have extended the univariate HARQ model to the multivariate space as the MHARQ model [35].
Two issues arise as we move to multivariate realised measures: 1) the asynchronicity of transactions in different assets, and 2) the Epps effect, which describes the bias towards zero in realised correlation as the sampling frequency increases [36]. Possible solutions to these issues include the work by Hayashi and Yoshida [37], which uses a sampling scheme that includes all overlapping intraday returns based on the actually observed price series, and the multivariate kernel estimator introduced by Barndorff-Nielsen et al [38].
## 3 Design & Implementation
This section details the design considerations and the associated implementation decisions of the system.
### Architecture Overview
As shown in Fig 6, the processes in the computation workflow can be categorised into three groups:
* data service responsible for collecting, processing and storing market data
* calculation service responsible for calculating VaR estimate in real time
* user interface responsible for delivering calculation result and other market data analytics
### Real-time Data Sourcing with kdb+
Regarding the primary data source, we employed the Deribit API service, which supports the subscription to tick level data over Websockets. Tick data refers to timestamped entries of market data including bid and ask prices, last traded price, open interest, among other relevant metrics. Each transaction prompts the generation of a tick record associated with the traded product. For the period from 2023.04.11 to 2023.08.31, the average number of tick records received per day is \(3.4\times 10^{7}\). To handle such enormous volume of high-frequency data and create a real-time market data stream for Value at Risk (VaR) calculation, we used the aforementioned kdb+ tick architecture in conjunction with this API service [7]. Below we introduce the components within this architecture.
#### 3.2.1 Feedhandler
For Deribit data to be consumed by kdb+, we implemented a Python script as the feedhandler. It subscribes to tick-level data of all available futures and options products on Deribit as well as the price indices for BTC and ETH. The tick data are transmitted via a Websocket connection in JSON format. Upon receipt, the script maps the received data into the suitable table schema within tickerplant and then pushes them to the corresponding table in tickerplant for downstream distribution.
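The outline below sketches the structure of such a feedhandler with the `websockets` library; the channel name, the ticker fields and the `push_to_tickerplant` stub are indicative placeholders rather than the exact ones used in this work, and the message format should be checked against the Deribit API documentation:

```python
import asyncio
import json
import websockets

DERIBIT_WS = "wss://www.deribit.com/ws/api/v2"

def push_to_tickerplant(table: str, row: dict) -> None:
    """Placeholder: map a row into the tickerplant schema and forward it over IPC."""
    print(table, row)

async def feedhandler(channels):
    async with websockets.connect(DERIBIT_WS) as ws:
        # subscribe to tick-level channels (JSON-RPC style request)
        await ws.send(json.dumps({
            "jsonrpc": "2.0", "id": 1,
            "method": "public/subscribe",
            "params": {"channels": channels},
        }))
        async for raw in ws:
            msg = json.loads(raw)
            data = msg.get("params", {}).get("data")
            if data:
                push_to_tickerplant("future", {"sym": data.get("instrument_name"),
                                               "time": data.get("timestamp"),
                                               "mark": data.get("mark_price")})

# asyncio.run(feedhandler(["ticker.BTC-PERPETUAL.raw"]))
```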
#### 3.2.2 Tickerplant
As data are pushed in by the feedhandler, the tickerplant process acts as a publisher, distributing the received data to both real-time databases and subscribers. Simultaneously, it writes records to a log file on disk for data recovery purpose. Since we would like to use the latest market data in calculation workflow, we implemented the standard zero-latency tickerplant structure through which each update from the feedhandler gets written to disk and published to subscribers independently.
Figure 6: Architecture Diagram
#### 3.2.3 Real-time Database and subscribers
As complementary components of the publisher process, we implemented subscriber processes that subscribe to tables published by the tickerplant. To maintain a clear separation of responsibilities, we have implemented two distinct instances of real-time subscribers: namely, the price subscriber and the streaming subscriber.
#### Price subscriber
The price subscriber plays a pivotal role in supporting the core VaR calculation process. This subscriber is responsible for handling all queries on intraday market data relevant for VaR calculation. It achieves this by maintaining keyed tables that capture the time-weighted market price of the underlying cryptocurrency indices and individual products throughout the intraday trading period, namely indextwap, futuretwap and optiontwap. Keyed tables in kdb+ are dictionaries that, instead of mapping keys to values, link one table of keys to another table of values, such that each row in the keys table is mapped to a corresponding row in the values table [5]. The choice of keyed tables over simple tables is motivated by the memory saving of dictionary lookup and by the pj used in the price subscriber.
For cryptocurrency indices table indextwap, prices used are the median of mid prices sourced from several major exchanges [39]. For futures and options products offered by Deribit, prices used are mark prices, which is a Deribit-specific metric representing the fair value of the specific products and being used in derivatives contracts valuation.
The price subscriber pre-processes the incoming tick-level data by aggregating it into the corresponding 1-min interval record before inserting it into the relevant table. The key reason for this preprocessing is to use the pre-averaged prices to mitigate the impact of market microstructure noise, a topic we delve into further in Section 3.3.2. In addition, since the tick data for different indices and products arrive at random times, the averaging process synchronises the market data. Furthermore, it significantly reduces the amount of intraday data held in memory.
```
Require: d, INDEXTWAP
function upIndex(d)
    if d's type = list then        ▷ d's type is list if it is replayed from the tickerplant logfile
        d ← listToTable(d)
    end if
    d ← plusJoin(d, INDEXTWAP)
    d ← updateAverage(d, INDEXTWAP)
    upsertToTable(d, INDEXTWAP)
end function
```
**Algorithm 1** Preprocess tick data to 1-min TWAP data
In Algorithm 1 above we use pseudocode to illustrate the pre-processing function for published index tick data. Similar functions exist for published futures and options tick data. The update to the TWAP data is delivered through the plusJoin and updateAverage steps. The plus join pj is a native operator in kdb+ which left joins INDEXTWAP to \(d\) and applies addition to duplicate columns.
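To make the pre-averaging step concrete outside kdb+, the sketch below keeps running price sums and tick counts per (sym, minute) bucket, which is what the plus join accumulates, and derives the per-minute average as their ratio; this is a simplified stand-in for the time weighting applied in the actual subscriber:

```python
from collections import defaultdict
from datetime import datetime

buckets = defaultdict(lambda: [0.0, 0])          # (sym, minute) -> [price sum, tick count]

def update(sym: str, ts: datetime, price: float) -> None:
    """Accumulate a tick into its 1-minute bucket (mirrors the plus-join accumulation)."""
    key = (sym, ts.replace(second=0, microsecond=0))
    buckets[key][0] += price
    buckets[key][1] += 1

def average(sym: str, minute: datetime) -> float:
    total, count = buckets[(sym, minute.replace(second=0, microsecond=0))]
    return total / count if count else float("nan")
```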
At the end of day, the price subscriber unkeys these tables and saves them to the Historical Database (HDB) on disk using the built-in function .Q.dpfts. This archival process ensures their availability for subsequent inference procedures from the following trading day onwards.
#### Streaming subscriber
This subscriber process serves as a publisher of real-time market analytics designed for consumption by downstream dashboards. Additionally, it facilitates data queries originating from the KX dashboard. At 1-second intervals, this subscriber extracts implied volatility values for each option product categorised by maturity and strike. These values are then used to generate the volatility surface for the individual underlying cryptocurrencies, which is subsequently published to the downstream KX dashboard. Furthermore, as it receives tick records from the upstream publisher tickerplant, the subscriber maintains future and option tables of raw tick data to support additional analytics queries from the dashboard, such as open, low,
high, close prices over specified intervals. To optimise memory usage, the streaming subscriber routinely purges earlier entries in its tables over the course of the day.
#### 3.2.4 Historical Database
Historical market data are used in the inference procedure of VaR calculation as well as in the backtests performed in evaluation. They are accessed via Historical Database (HDB) process. HDB process holds data before the current date with tables stored on disk [40].
For the best search and retrieval efficiency, data are stored partitioned by date and splayed by column, such that they are divided into separate partition directories named after the date. Inside each date directory are directories named after the tables, each containing separate files for the splayed columns.
The partitioned structure limits operations to the relevant columns. For example, when performing the inference procedures detailed in Section 3.3.2, we only require the sym, time and twap columns from the INDEXTWAP table; therefore the data query deserialises into memory only the files for the columns it requires.
#### 3.2.5 Interprocess Communication among kdb+ instances
During real-time calculation, the calculation process needs to query the data processes to obtain historical and real-time market data to perform the calculation procedures. This is achieved with the built-in IPC functionality in q. Since in our implementation the processes run on the same machine, Unix domain sockets are preferred over localhost IPC connections, as they often have lower latency and higher throughput.
For a kdb+ process, such as real-time subscriber and HDB, to listen on a port, it is started with command line parameter -p. These allow data processes to wait on incoming connections. For the calculation process to communicate with data processes, we used hopen to open connections to real-time subscriber and HDB and obtain the respective connection handles from return values. As data query is required, we use the appropriate connection handle to message the data processes. In our implementation, these remote queries were wrapped in separate utility functions, such as.util.getidxtwap for obtaining twap data for inference procedures.
### Value-at-Risk Calculation
To calculate VaR for a portfolio \(P_{i}\), the system first queries historical market data from HDB process to perform inference on the conditional distribution of underlying cryptocurrencies, then uses real-time market data from market price subscriber to map and transform it to the conditional distribution of portfolio return. Below we provide an overview of the calculation process, followed by details on the implemented inference, mapping and transformation procedures.
#### 3.3.1 Calculation Workflow
The key inputs for portfolio VaR include portfolio holdings, confidence level and time horizon. We maintain a table portfolio in the calculation process to track holdings added to each portfolio which is identified by column 'pid. Anticipating changes in portfolio holdings from time to time, it is essential to acknowledge that the duration of this holdings table is tied to the specific calculation process. This table only resides in the memory associated with the process and gets erased when the process terminates.
The workflow starts by identifying portfolio holdings and relevant cryptocurrency indices for which it needs to perform inference procedures. Derivative products on Deribit follow the naming convention of crypto-maturity(-strike-optiontype). For example, for BTC future maturing on 2023.12.29, its identifier is BTC-29DEC23. This allows us to parse the corresponding underlying index from their identifiers. For the scope of this work, we only have derivatives on BTC and ETH. This implies at most two indices are relevant for the inference procedure.
The process then communicates with data processes to query for historical market data. Depending on the inference algorithms chosen, the time series data for TWAP are processed into log return series or realised
variance series, and then passed to the inference algorithms, together with the VaR time horizon, to model and forecast the conditional distribution of the underlying cryptocurrency indices. Data used for the mapping procedure are maintained in the keyed tables LatestProduct and LatestIndex residing in the calculation process. The process can query these tables directly for real-time market data on current prices and option greeks, and pass them to the mapping algorithms to establish a tractable representation of portfolio returns in terms of index returns.
The outputs from these two algorithms, namely the forecast and the portfolio mapping, are used as inputs to the transformation algorithm to estimate the quantile of portfolio return corresponding to the specified confidence level. Finally, this return quantile is transformed into an absolute value, which is the VaR estimate.
We summarise the calculation workflow in Algorithm 2 below.
```
Require: pid, ci, t, portfolio
function .VAR.estimate(d)
    p ← getPortfolioPositions(pid)
    idx ← extractIndex(p)
    idxdata ← getInferenceData(idx)
    tickdata ← getRealTimeTick(p)
    cov ← Inference(idxdata, t)
    greeks ← Mapping(p, tickdata)
    quantile ← Transformation(cov, greeks, ci)
    return quantile * marketValue
end function
```
**Algorithm 2** Portfolio VaR calculation
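The transformation step relies on the Cornish-Fisher expansion mentioned earlier; a sketch of that final step is given below, taking the first four moments of the portfolio P&L, which this work obtains from the delta-gamma-theta approximation, as inputs (the numbers are illustrative):

```python
from scipy.stats import norm

def cornish_fisher_var(mean, std, skew, excess_kurt, conf=0.99):
    """VaR (positive loss) from a Cornish-Fisher adjusted quantile of the P&L distribution."""
    z = norm.ppf(1.0 - conf)                         # Gaussian quantile, e.g. -2.33 at 99%
    z_cf = (z
            + (z**2 - 1.0) * skew / 6.0
            + (z**3 - 3.0 * z) * excess_kurt / 24.0
            - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)
    return -(mean + std * z_cf)

# e.g. a negatively skewed, fat-tailed P&L distribution
print(cornish_fisher_var(mean=0.0, std=1200.0, skew=-0.8, excess_kurt=4.0, conf=0.99))
```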
#### 3.3.2 Inference Procedure
The purpose of the inference procedure is to characterise the distribution of the factors driving portfolio value changes, conditional on all information available as of time \(t\). Within the scope of this work, the underlying cryptocurrency prices, namely the btc_usd and eth_usd index prices on Deribit, were employed as such factors.
A widely used assumption in the inference procedure is that time series of cryptocurrency returns exhibit a conditional expectation of zero. While this assumption may be violated over longer time horizon, its applicability remains pertinent for the context of VaR calculation which often focuses on time horizon ranging from intraday to a maximum of 2 weeks. This assumption implies the key output of the inference procedure is a covariance matrix for crypto indices relevant to the portfolio holdings.
#### Analysis of tick data
Before introducing the inference models, we start by examining the return series of Bitcoin (BTC) and Ethereum (ETH) to understand the statistical properties of these time series data and the associated challenges in modelling their volatility dynamics. The data used in this exploratory analysis were collected with the aforementioned kdb+ tick architecture. The dataset covers the period from 2023.04.11 to 2023.07.31.
Footnote 1: Due to network issues, data were incomplete for certain days within the period. Since we use log returns over a fixed interval as samples, raw data have been processed such that missing data only reduced the total number of samples for analysis and did not distort the observed statistical properties.
The return series are calculated as the natural logarithmic differences of the TWAP of different averaging intervals as below:
\[R_{it}=\ln\frac{P_{i,t}}{P_{i,t-\tau}} \tag{14}\]
where
\[P_{i,t}=\text{TWAP in USD of crypto i by averaging ticks received between ($t-\tau$,$t$)}\]
In Table 2, we report the statistical properties of BTC and ETH return series calculated from TWAP at 1-min, 5-min, 10-min, 30-min, 1-hr, 2-hr, 6-hr, 12-hr and daily sampling frequencies.
As the interval increases, we observe that the scale of the mean increases naturally. At the daily interval, the mean return of the indices is on the scale of basis points, -0.0036% for BTC and -0.0552% for ETH. This supports the zero conditional expectation assumption over short forecast horizons. We also observe that, with time-weighted average prices, the annualised volatility peaks at the 1-hour sampling interval, at \(41.4\%\) for BTC and \(45.3\%\) for ETH. The ETH return series shows consistently higher volatility than the BTC return series.
Footnote 2: 1 basis point \(=0.01\%\)
In addition, we can see negative skewness in return series for shorter sampling intervals, suggesting that negative returns are more often than positive returns when focusing on short intervals. For sampling intervals for 6-hr and 12-hr, we observed that skewness turned positive for BTC. This observation of positiveness skewness at longer sampling interval for BTC is similar to those obtained by other researchers, such as Liu and Tsyvinski [41].
At all sampling intervals, we observe consistently large kurtosis for both cryptocurrencies, in excess of the kurtosis of 3 for standard Gaussian distribution. As sampling interval gets more granular, kurtosis in return series increase significantly. The evidence for leptokurtic distribution is consistent with the stylised empirical facts of cryptocurrency time series as discussed in Chi and Hao [15] and Tan et al [18].
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Crypto & Interval & Observations & Mean & Vol.(p.a) & Skewness & Kurtosis \\ \hline \multirow{8}{*}{BTC} & 1-min & 102871 & 0.0000\% & 32.1\% & -1.83 & 75.75 \\ & 5-min & 20579 & 0.0002\% & 39.9\% & -1.34 & 48.7 \\ & 10-min & 10290 & 0.0004\% & 39.3\% & -1.42 & 43.4 \\ & 15-min & 6860 & 0.0005\% & 39.5\% & -0.40 & 30.0 \\ & 30-min & 3428 & 0.00119 & 41.0\% & -1.03 & 34.2 \\ & 1-h & 1712 & 0.0019\% & 41.4\% & -1.98 & 38.8 \\ & 2-h & 857 & 0.0026\% & 39.4\% & -1.12 & 23.0 \\ & 6-h & 286 & 0.0072\% & 36.3\% & 0.16 & 7.4 \\ & 12-h & 143 & 0.0185\% & 34.9\% & 0.47 & 5.2 \\ & 1-d & 73 & -0.0036\% & 37.7\% & \textless{}0.00 & 4.5 \\ \hline \multirow{8}{*}{ETH} & 1-min & 102871 & -0.0001\% & 35.2\% & -0.51 & 138.9 \\ & 5-min & 20579 & -0.00039\% & 42.7\% & -0.35 & 59.6 \\ & 10-min & 10290 & -0.0006\% & 42.1\% & -0.63 & 49.4 \\ & 15-min & 6860 & -0.0009\% & 42.9\% & 0.03 & 42.2 \\ & 30-min & 3428 & -0.0018\% & 44.1\% & -0.88 & 34.3 \\ & 1-h & 1712 & -0.00399\% & 45.3\% & -2.02 & 37.9 \\ & 2-h & 857 & -0.0078\% & 44.0\% & -0.86 & 23.2 \\ & 6-h & 286 & -0.0254\% & 41.8\% & -0.38 & 78.9 \\ & 12-h & 143 & -0.0217\% & 41.5\% & -0.14 & 5.6 \\ & 1-d & 73 & -0.0552\% & 44.9\% & -0.36 & 4.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Descriptive statistics of BTC and ETH return series measured in USD. Vol.(p.a) represents volatility annualised by scaling the raw standard deviation with \(\sqrt{\frac{\text{no. of minutes in a year}}{\text{no. of minutes in the interval}}}\).
The non-normality of the return series is further confirmed by the results of the Jarque-Bera tests in Table 3, which report statistically significant evidence to reject the null hypothesis of normality. This suggests the use of a volatility model that captures non-Gaussian distributions.
We also applied the Dickey-Fuller test to the return series to test for the presence of a unit root. At all sampling intervals, the null hypothesis is rejected in favour of stationarity. Moreover, serial correlation in the squared returns is tested with the Durbin-Watson (DW) statistic, which ranges from 0 to 4: DW \(<2\) indicates positive auto-correlation while DW \(>2\) indicates negative auto-correlation. Comparing the DW statistics to the critical value at the 5% significance level, we observe statistically significant evidence for positive auto-correlation at the 1-min sampling interval. For longer sampling intervals, the DW statistics range between 1.8 and 2.2 for both BTC and ETH, which does not allow us to conclude on the nature of the serial correlation.
Furthermore, we applied the Ljung-Box test to the squared return series to assess the evidence of auto-correlation up to 2, 5 and 10 lags respectively. We observed statistically significant evidence for serial correlation up to the 12-hr interval for BTC and up to the 2-hr interval for ETH. This demonstrates the existence of a volatility clustering effect in the return series of BTC and ETH at finer sampling intervals, as illustrated in Figure 7. This autocorrelated feature of the squared return series motivates the inclusion of a memory component in volatility forecasting when using intraday measures.
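The autocorrelations behind Figure 7 can be reproduced with a small helper; the sketch below assumes r is the vector of returns at one sampling interval (names are illustrative):
```
/ lag-k autocorrelation of the squared return series
acf: {[x;k] (k _ x) cor neg[k] _ x}
acf[r*r] each 1 + til 10          / autocorrelations for lags 1..10
```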
Last but not least, we analysed the statistical properties of the daily realised variances and correlations of the sampled data. We observed that the realised variances for BTC and ETH are significantly positively skewed, as
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Crypto} & \multirow{2}{*}{Interval} & \multirow{2}{*}{Jarque-Bera} & \multirow{2}{*}{Dickey-Fuller} & \multicolumn{3}{c}{Ljung-Box} & \multirow{2}{*}{Durbin-Watson} \\ \cline{5-7} & & & & lags\(=2\) & lags\(=5\) & lags\(=10\) & \\ \hline \multirow{20}{*}{BTC} & 1-min & 22740340.0 & -43.01 & 11380.79 & 15437.40 & 19124.01 & 1.43 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 5-min & 1793323.0 & -23.55 & 1148.95 & 1692.41 & 2016.72 & 2.05 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 10-min & 702944.0 & -24.33 & 434.69 & 677.76 & 780.83 & 1.89 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 15-min & 208781.1 & -25.14 & 306.89 & 498.43 & 608.39 & 1.89 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 30-min & 140043.6 & -20.74 & 209.51 & 246.02 & 456.80 & 2.00 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 1-hr & 92473.8 & -15.24 & 8.10 & 44.63 & 115.74 & 2.15 \\ & & (0.00) & (0.00) & (0.02) & (0.00) & (0.00) & \\ & 2-hr & 14463.7 & -8.41 & 13.08 & 36.75 & 37.12 & 2.10 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 6-hr & 228.4 & -10.8 & 11.43 & 21.60 & 24.66 & 2.09 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 12-hr & 33.6 & -10.95 & 11.55 & 15.68 & 24.15 & 1.84 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 1-d & 7.00 & -8.99 & 1.99 & 5.80 & 7.85 & 2.14 \\ & & (0.03) & (0.00) & (0.08) & (0.33) & (0.64) & \\ \hline \multirow{20}{*}{ETH} & 1-min & 79131540.0 & -41.76 & 6090.82 & 7263.51 & 7827.83 & 1.48 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 5-min & 2744632.0 & -24.32 & 324.49 & 477.98 & 538.76 & 1.99 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 10-min & 922648.8 & -24.52 & 116.75 & 189.74 & 223.52 & 1.86 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 15-min & 43895.1 & -14.88 & 88.61 & 112.52 & 132.46 & 1.88 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 30-min & 140427.5 & -23.92 & 65.22 & 75.32 & 169.13 & 1.97 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & \\ & 1-hr & 87818.2 & -28.76 & 7.41 & 21.11 & 29.30 & 2.10 \\ & & (0.00) & (0.00) & (0.02) & (0.00) & (0.00) & \\ & 2-hr & 14640.2 & -11.68 & 5.25 & 7.45 & 7.84 & 2.01 \\ & & (0.00) & (0.00) & (0.07) & (0.19) & (0.64) & \\ & 6-hr & 420.5 & -9.30 & 2.07 & 3.00 & 4.65 & 2.00 \\ & & (0.00) & (0.00) & (0.35) & (0.70) & (0.71) & \\ & 12-hr & 40.9 & -8.95 & 1.45 & 3.59 & 12.70 & 1.80 \\ & & (0.00) & (0.00) & (0.47) & (0.61) & (0.24) & \\ & 1-d & 13.1 & -9.41 & 1.87 & 3.83 & 5.79 & 2.22 \\ & & (0.001) & (0.00) & (0.39) & (0.57) & (0.83) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Diagnostic tests of BTC and ETH return series measured in USD. p-values are reported in brackets.
demonstrated in Figure 8. This motivates the use of log transformation in the inference procedures later. For realised correlations, no statistically significant autocorrelation structure was observed.
**EWMA** As a benchmark, and also for its simplicity, we implemented the EWMA estimator to characterise the conditional distribution of the return series of the underlying indices. To benefit from the information in intraday data, we applied the EWMA model to return series derived from the 30-min TWAPs over the previous 5 days. The choice of 30-min is consistent with the derivative contract terms on Deribit, which specify that the underlying price used for settlement is the 30-min TWAP before expiry. The return series are calculated from TWAP data as natural logarithmic differences between TWAPs at a 30-min interval:
\[r_{t}=\ln\frac{TWAP_{t}}{TWAP_{t-30min}} \tag{15}\]
Figure 7: Autocorrelations in squared returns calculated from TWAPs at different sampling intervals for BTC and ETH. Statistically insignificant \(\rho\)s are indicated by faded bars.
To construct the covariance matrix of underlying indices, the EWMA model of
\[\sigma^{2}_{ij,t|t-1}=\lambda\sigma^{2}_{ij,t-1|t-2}+(1-\lambda)r_{it}r_{jt} \tag{16}\]
is applied to different combinations of index return series. When \(eps1\) and \(eps2\) are the same series, the EWMA algorithm produces an estimate of the variance term. When they differ, the EWMA algorithm acts as an estimator of the covariance term.
Since only BTC and ETH have tradable derivatives on Deribit and are therefore in the scope of this work, we expect the largest covariance matrix to be \(2\times 2\). Given the symmetry of the covariance matrix, this implies the recursion is applied to a maximum of 3 pairs of indices: btc-btc, btc-eth and eth-eth.
Figure 8: Histograms for 1-day realised variance and log-transformed 1-day realised variance.
The first term in the EWMA recursion is initialised with the sample variance/covariance of all returns in the lookback period. This recursion, detailed in Algorithm 3, is implemented via the accumulator over (/) in q.
```
Require: eps1, eps2, lambda, t
function .ewma.forecast(eps1, eps2, lambda, t)
    epscross <- eps1 * eps2
    if eps1 = eps2 then
        sigma <- sampleVariance(eps1, eps2)
    else
        sigma <- sampleCovariance(eps1, eps2)
    end if
    for each item eps in epscross do
        sigma <- lambda * sigma + (1 - lambda) * eps
    end for
    term <- 48 * t * sigma        / 30-min estimate is scaled up to the relevant forecast horizon
    return term
end function
```
**Algorithm 3** EWMA estimator
The raw variance and covariance terms calculated from the recursion are for a period of 30 minutes. To use them for the requested VaR calculation, we scale them up linearly to the corresponding time horizon. For example, if the VaR calculation is for a 2-day horizon, the term is multiplied by \(96\).
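A minimal q sketch of this recursion and scaling, assuming eps1 and eps2 are the 30-min log return series over the lookback window, s0 the sample variance/covariance seed, and a decay factor of 0.94 (the decay value is an illustrative assumption, not taken from the text):
```
lambda: 0.94
step: {[s;e] (lambda*s) + (1-lambda)*e}        / one EWMA update
sigma30: step/[s0; eps1*eps2]                  / fold over the cross products with over (/)
sigma2d: 96 * sigma30                          / scale the 30-min term to a 2-day horizon
```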
In our implementation, the primary contributor to the computation duration is the iterative process involving 240 returns. If the portfolio expands to include derivatives associated with other cryptocurrency indices or digital assets, the decisive factor impacting the complexity of computations will be the number of underlyings: the number of recursions for each term in the covariance matrix remains fixed, but the number of terms in the covariance matrix grows quadratically with the number of underlyings. While this increase in dimension could bring challenges in terms of computation latency, kdb+ is capable of supporting parallel processing with threads and multiple processes. We could only exploit this benefit to a minimal extent, given there are only two indices within the scope of this work.
**Multivariate GARCH** The other common family of models used to capture volatility dynamics is the GARCH family. In this work, we also implemented the DCC-GARCH(1,1) model with a Student-t distribution to capture the dynamics of the covariance matrix and accommodate the heavy tails observed in the return series. Since the model needs to be fitted with maximum likelihood estimation, we employed the embedPy package, which calls a Python process and performs computations in Python. The return series and relevant parameters are wrapped as Python objects which are then processed by the mgarch package, which leverages scipy.optimize to minimise the negative log likelihood function of the model.
Taking into consideration that the underlying price used in settling derivative contracts is the 30-min TWAP of the corresponding index before expiry, the return series used for model fitting are based on the 30-min TWAPs of the underlying indices. Consequently, the covariance matrix forecast applies the same processing to TWAP data to obtain return series and, at the end of estimation, goes through the same scaling process as the EWMA model to adjust for the appropriate VaR time horizon.
**HAR** As an alternative to the previous inference algorithms, which attempt to model the dynamics of the return series, the Heterogeneous Autoregressive (HAR) model is used to capture the volatility dynamics directly.
For realised variance to be used as a consistent estimator of quadratic variation (QV), an important assumption is a frictionless market or, in stochastic process terms, that the log-price process is a continuous semimartingale. However, as the sampling frequency increases, this assumption is violated due to the presence of market microstructure noise, such as bid-ask spreads, rounding errors, price discreteness, etc. [42]. To attenuate the impact of microstructure noise, we adopted the pre-averaging approach of Podolskij and Vetter [43]. In this approach, we treat the log price semimartingale \(P_{t}\) as a latent process, and the observed price process \(X_{t}\) includes a noise term \(\epsilon_{t}\):
\[X_{t}=P_{t}+\epsilon_{t}. \tag{17}\]
When we average the observed prices around \(t\), the noise term of the averaged price has a lower variance than those of individually observed prices. Thus using the averaged price to calculate realised variance will produce an estimate that is closer to the optimal estimate obtained from the true latent semimartingale. In our implementation, this averaging process is delivered in the real-time price subscriber discussed in Section 3.2.3 which pre-processes high frequency tick level data to the corresponding 1-min TWAP.
To estimate RV as a variance proxy, we used the return series sampled at 5-minute intervals, such that the realised variance over a period of \(m\) minutes can be computed as
\[RV_{t+1}^{m}=\Sigma_{i=1}^{m/5}r_{t+i}^{2}, \tag{18}\]
where \(r_{t+i}\) is the \(i\)th 5-minute return within the time interval of \(m\) minutes. In the case of 12-h RV, \(m=144\).
Similarly, the realised covariance and realised correlation for underlying indices \(1\) and \(2\) over a period of \(m\) minutes can be computed as
\[RCov_{12,t+1}^{m}=\Sigma_{i=1}^{m}r_{1,t+i}r_{2,t+i} \tag{19}\]
and
\[RCorr_{12,t+1}^{m}=\frac{RCov_{12,t+1}^{m}}{\sqrt{RV_{1,t+1}^{m}RV_{2,t+1}^{m }}}. \tag{20}\]
respectively.
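A minimal q sketch of these realised measures, assuming ret5 holds 5-min log returns with columns time, sym and r, and r1, r2 are aligned 5-min return vectors for the two indices over one window (all names are illustrative):
```
/ Equation 18: 12-hr realised variance per index
rv12: select rv: sum r*r by sym, dt:`date$time, half:12 xbar `hh$time from ret5
/ Equations 19 and 20: realised covariance and correlation over one window
rcov: sum r1*r2
rcorr: rcov % sqrt (sum r1*r1) * sum r2*r2
```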
To forecast the covariance matrix, we adopted a modified version of the HAR-DRD model proposed by Oh and Patton [44]. In the original model, the covariance matrix is decomposed into a diagonal matrix of variances and a correlation matrix. Each individual log-variance element is estimated with a separate univariate HAR model and the correlation matrix adopts a DCC-type estimator. The covariance matrix forecast is obtained by composing these forecasts. While we kept the decomposition approach in our implementation, we employed the HARQ model with a leverage term (LHARQ) to forecast the variance terms and the HAR model to forecast the correlation terms, as detailed below.
For variance terms, we specify the LHARQ model using 12-hr RV as
\[\begin{split} logRV_{t+1}=&\beta_{0}+\beta_{1}r_{t }^{-}+\beta_{2}logRV_{t}+\beta_{3}log(\sqrt{RQ_{t}}RV_{t})\\ &+\beta_{4}\overline{logRV_{t,t-1}}+\beta_{5}\overline{logRV_{t, t-4}}+\epsilon_{t}\end{split} \tag{21}\]
,where \(r_{t}^{-}=\)min\((r_{t},0)\), \(\overline{logRV_{i,j}}\) represents the log of averaged 12-hr RV from \(t=j\) to \(t=i\), \(RQ_{t}\) is the realised quarticity term employed to account for estimation error in RV terms, defined as [33]
\[RQ_{t+1}^{m}=\frac{m/5}{3}\Sigma_{i=1}^{m/5}r_{t+i}^{4}. \tag{22}\]
Instead of the standard HAR approach which models daily RV directly, we applied it to model the 12-hr RV, which is then scaled up to the relevant forecasting periods. This design was motivated by the analysis of the autocorrelations of the squared return series shown in Figure 7. When returns were sampled at the 12-hr interval, there was statistically significant evidence for serial correlation; however, when we extended the sampling period to 1 day, the evidence became insignificant.
We use the last 12-hr RV and the average of 12-hr RV over the past 2 and 5 days to parsimoniously capture the high persistence in volatility. In addition, the logarithmic transformation is applied to ensure the positivity of forecasts. At the same time, log-transformed RVs are closer to a standard Gaussian distribution based on metrics of skewness and kurtosis, and are therefore more suitable for OLS estimation. The inclusion of the leverage term was motivated by the empirical evidence obtained when applying the model with and without the leverage term to the data used for exploratory analysis: firstly, the inclusion of the leverage term improved the \(R^{2}\) goodness-of-fit from 0.281 to 0.394 for BTC and from 0.182 to 0.245 for ETH; secondly, the coefficient of
the leverage term is statistically different from 0 at the 95% confidence level. This evidence is consistent with the aforementioned work by Yu [34], which found that the leverage effect has significant impacts on future BTC volatility.
In order to use this model for forecasting, we fit it with an OLS estimator. The parameters are estimated by minimising the sum of squared errors:
\[\hat{\beta}=\underset{\beta_{0},\beta_{1},\beta_{2},\beta_{3},\beta_{4},\beta_{5}}{\text{argmin}}\Sigma(logRV_{t}-\widehat{logRV_{t}})^{2} \tag{23}\]
A key advantage of using an OLS estimator is the existence of a tractable representation of the estimated parameters. In the context of implementation, as each VaR calculation is triggered, this model is fitted with the OLS estimator in real time using the built-in lsq function in q, which leverages Cholesky decomposition for matrix inversion.
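A minimal sketch of this OLS step, assuming y is the vector of log 12-hr RVs and X the regressor matrix with one row per observation and a leading column of 1s for the intercept (how X is assembled from the RV, RQ and leverage terms is omitted here):
```
beta: first (enlist y) lsq flip X      / least-squares coefficients via the built-in lsq
yhat: sum beta * flip X                / fitted values
```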
For the correlation terms, we do not apply any transformation to the realised correlation data as its distribution is neither significantly skewed nor heavy-tailed. The HAR model for realised correlation is specified as
\[RCorr_{t+1}=\beta_{0}+\beta_{1}RCorr_{t}+\beta_{2}\overline{RCorr_{t,t-1}}+ \beta_{3}\overline{RCorr_{t,t-4}}+\epsilon_{t}. \tag{24}\]
While we can use the OLS estimator in this model directly, there is no constraint imposed on the range of the forecasted \(RCorr_{t+1}\). We noted instances where the forecast fell outside the valid range of -1 to 1. As a simple remedy, we implemented an additional check in the forecast process: if an invalid correlation forecast is produced, we replace it with the average realised correlation over the past 5 days. The correlation forecast is transformed to a covariance forecast by multiplying it by the square roots of the corresponding variance terms.
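A minimal sketch of that fallback, where hist5d is assumed to hold the realised correlations of the past 5 days and fixCorr is an illustrative name:
```
fixCorr: {[fc;hist5d] $[fc within -1 1f; fc; avg hist5d]}
```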
**Real-time inference with caching** Recall that the primary objective of the system is to facilitate the real-time calculation of portfolio VaR. For a more accurate representation of current market conditions, the inference procedure should be executed every time a VaR calculation is triggered, incorporating the latest information from the market. In the context of cryptocurrencies, the market operates 24/7, so the conventional notion of daily opening and closing is of limited relevance within the scope of this work. When we refer to "querying historical market data of \(t\) days", we are specifically referring to data spanning the past \(24t\) hours from the present moment.
In the case of the HAR model, real-time inference includes queries to the real-time subscriber for all 1-min TWAP data from the start of the day and to the HDB for data pertaining to the previous 15 days. Consequently, there should only be a subtle variation in the dataset employed for real-time inference between consecutive VaR calculation requests. To reduce the latency in data sourcing, we maintain a table named cachedTwapHAR in the calculation process to serve as a cache. Each time an inference procedure runs, it first queries the local table for data within the previous 15 days. If any data are still missing, it then queries the price subscriber and the HDB only for the data that do not yet exist in cachedTwapHAR. In the case where inference for both BTC and ETH is required, the first inference request takes about 70ms to source all the data and save them in the cache, while subsequent requests take 14ms on average.
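A simplified sketch of the cache-first lookup, assuming cachedTwapHAR holds 1-min TWAPs with columns time, sym and twap:
```
start: .z.p - 15D00:00:00                                 / start of the 15-day lookback
cached: select from cachedTwapHAR where time >= start     / data already held locally
/ only the window not yet present is then requested from the price subscriber
/ and the HDB, and upserted back into cachedTwapHAR
```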
#### 3.3.3 Mapping Procedure
The mapping procedure is responsible for identifying the relation between portfolio returns and underlying index returns. Below we introduce the mapping algorithm used for a single holding, followed by an aggregated version implemented for the entire portfolio. The mapping algorithm as a whole produces the \(\tilde{\delta}\), \(\tilde{\Gamma}\) and \(\tilde{\theta}\) defined later in Equation 31.
**Holding Level** For linear products, changes in holding value can be represented as a linear function of changes in underlying value. Specifically, in the case of crypto futures products, this mapping has a coefficient of 1.
\[V_{t+\tau}-V_{t}=P_{t+\tau}-P_{t} \tag{25}\]
For non-linear products, such as crypto options, we use a quadratic mapping whereby the change in option value is approximated via a second-order Taylor series expansion. This is also known as the delta-gamma approach [45]. Since the value of an option decreases naturally as time passes, we further include a theta term in this mapping procedure.
The change in option value is represented as a quadratic function of the change in underlying value
\[V_{t+\tau}-V_{t}=\delta(P_{t+\tau}-P_{t})+\frac{1}{2}\Gamma(P_{t+\tau}-P_{t})^{ 2}+\theta\tau \tag{26}\]
Denoting the option return and underlying return as \(r_{t}\) and \(R_{t}\) respectively, the above can be expressed in returns terms [21]
\[r_{t}=\delta\frac{P_{t}}{V_{t}}R_{t}+\frac{1}{2}\Gamma\frac{P_{t}^{2}}{V_{t}}R _{t}^{2}+\theta\frac{\tau}{V_{t}} \tag{27}\]
Here we note that linear mappings in Equation 25 can be expressed as a simplified version of Equation 27 with \(\delta=1\), \(\Gamma=0\) and \(\theta=0\).
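A minimal q sketch of the holding-level mapping in Equation 27 (the function name is illustrative):
```
/ option return approximated from the underlying return R
optRet: {[delta;gamma;theta;P;V;tau;R]
  ((delta*P%V)*R) + ((0.5*gamma*(P*P)%V)*R*R) + theta*tau%V}
/ a linear (futures) position is the special case optRet[1;0;0;P;V;tau;R]
```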
#### 3.3.4 Portfolio Level
For a portfolio of \(n\) holdings, the portfolio return is a weighted average of returns on each holding
\[r_{p,t}=\sum_{i=1}^{n}w_{i}r_{i,t} \tag{28}\]
where
\[w_{i}=\frac{V_{i}}{\sum_{i=1}^{n}V_{i}} \tag{29}\]
Defining the coefficient terms in Equation 27 as follows [21]
\[\begin{split} R_{t}&=\begin{bmatrix}R_{1,t}&R_{2,t}&\cdots&R_{n,t}\end{bmatrix}^{T}\\ \tilde{\delta}&=\begin{bmatrix}w_{1}\frac{P_{1,t}}{V_{1,t}}\delta_{1}&w_{2}\frac{P_{2,t}}{V_{2,t}}\delta_{2}&\cdots&w_{n}\frac{P_{n,t}}{V_{n,t}}\delta_{n}\end{bmatrix}^{T}\\ \tilde{\Gamma}&=\begin{bmatrix}w_{1}\frac{P_{1,t}^{2}}{V_{1,t}}\Gamma_{1}&0&\cdots&0\\ 0&w_{2}\frac{P_{2,t}^{2}}{V_{2,t}}\Gamma_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&w_{n}\frac{P_{n,t}^{2}}{V_{n,t}}\Gamma_{n}\end{bmatrix}\\ \tilde{\theta}&=\begin{bmatrix}\frac{w_{1}}{V_{1,t}}\theta_{1}&\frac{w_{2}}{V_{2,t}}\theta_{2}&\cdots&\frac{w_{n}}{V_{n,t}}\theta_{n}\end{bmatrix}^{T}\\ \tau&=\text{time horizon for VaR calculation}\end{split} \tag{30}\]
We could represent the portfolio return in matrix algebra as:
\[r_{p,t}=\tilde{\delta}^{T}R_{t}+\frac{1}{2}R_{t}^{T}\tilde{\Gamma}R_{t}+\tau\sum_{i=1}^{n}\tilde{\theta}_{i} \tag{31}\]
As shown above, \(\tilde{\Gamma}\) is a diagonal matrix. Since derivative products in the scope of this work are based on a single underlying, cross gamma terms always equal to zero. Taking into account the required operation on \(\tilde{\Gamma}\) in the transformation algorithm later, instead of maintaining a matrix, we used a vector to represent the diagonal elements without losing any information.
#### 3.3.5 Real-time mapping with latest market data
From the implementation perspective, the price sensitivities used in mapping algorithms are common metrics for options. They are part of the data feed from Deribit API. While these metrics are available from the price subscriber, to harness the benefit that q, as an integrated programming language, can operate on data directly and to avoid the latency brought by transferring the data between processes, especially in the case of portfolios with large number of holdings, we created two dedicated keyed tables -LatestIndex and LatestProduct - in the calculation process, to maintain the latest prices and greeks for each product and index.
By setting it up as another subscriber to the tickerplant, the calculation process listens to updates from the tickerplant and updates the corresponding table with the latest market data. To illustrate the efficiency gain from maintaining these two tables, we compared the execution time of a simple select query for 1,000 products on the local LatestProduct table to the execution time of a remote query for the same group of products to the price subscriber process. Taking the average time of 100 executions, the local query takes merely 1ms to return while the remote query needs over 40ms to complete. Given the derivative universe on Deribit contains about 1,300 products and only prices and greeks data are needed for mapping, maintaining these two tables in the calculation process does not interfere with the calculation latency.
Following the definitions in Equation 30, the relevant market data are transformed to the target outputs \(\tilde{\delta}\), \(\tilde{\Gamma}\) and \(\tilde{\theta}\) via the dedicated function .VaR.adjustgreeks. Since these coefficients go through matrix algebra in the transformation algorithm later, it brings a further efficiency gain if we can reduce their dimensions. Leveraging the fact that some derivatives in the portfolio holdings may share the same underlying crypto index, we included an additional step to compress all coefficients simultaneously by aggregating the entries whose corresponding assets share the same underlying. Since we only have the BTC and ETH indices within the scope of this work, all the coefficient terms can be reduced to vectors of dimension \(2\times 1\).
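A sketch of that aggregation step, assuming a table coeffs with one row per holding and columns underlying, delta, gamma and theta already scaled as in Equation 30 (names are illustrative):
```
compressed: select delta: sum delta, gamma: sum gamma, theta: sum theta by underlying from coeffs
```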
#### 3.3.6 Transformation Procedure
One of the key objectives of this work is to minimise the latency of the VaR calculation: the calculation should be completed within milliseconds. To achieve this, we require an analytical procedure to transform the characterisation of the conditional distribution of the market factors into that of the portfolio value at \(t+1\).
Due to the presence of options positions in the portfolio, the mapping from underlying returns to the portfolio return is non-linear, and it is difficult to characterise the distribution of \(r_{p,t}\) in a tractable form. However, in order to calculate the VaR of the portfolio, we do not require the probability density function or cumulative distribution function of the true distribution. Instead, the focus is on specific quantiles of the true distribution, such as 1% or 5%.
To calculate the quantile function evaluated at these values, we implemented the Cornish-Fisher expansion, which estimates standardised quantile of the true distribution as a polynomial of the corresponding quantile of the standardised Gaussian distribution, with coefficients being functions of the moments of the true distribution [46].
The standardised \(\alpha\)th quantile of the true distribution can be estimated as
\[z_{v,\alpha}=z_{\alpha}+\frac{1}{6}(z_{\alpha}^{2}-1)S+\frac{1}{24}(z_{ \alpha}^{3}-3z_{\alpha})(K-3)-\frac{1}{36}(2z_{\alpha}^{3}-5z_{\alpha})S^{2} \tag{32}\]
where \(z_{\alpha}\) denotes the standardised Gaussian quantile, and S and K denote skewness and kurtosis respectively. The S and K used in the estimation can be obtained from central moments calculated for the true distribution. Since we have expressed the true distribution as a quadratic function of the random variables \(R_{t}\) in Equation 31, the central moments and the S and K parameters used for calculating the portfolio return are as follows [47, 48]:
\[\mu_{1}=E[r_{p,t}]=\frac{1}{2}tr(\tilde{\Gamma}\Sigma_{t})+\tau\sum_{i=1}^{n}\tilde{\theta_{i}}\] \[\mu_{2}=E[(r_{p,t}-\mu_{1})^{2}]=\tilde{\delta}^{T}\Sigma_{t}\tilde{\delta}+\frac{1}{2}tr(\tilde{\Gamma}\Sigma_{t})^{2}\] \[\mu_{3}=E[(r_{p,t}-\mu_{1})^{3}]=3\tilde{\delta}^{T}\Sigma_{t}\tilde{\Gamma}\Sigma_{t}\tilde{\delta}+tr(\tilde{\Gamma}\Sigma_{t})^{3}\] \[\mu_{4}=E[(r_{p,t}-\mu_{1})^{4}]=12\tilde{\delta}^{T}\Sigma_{t}(\tilde{\Gamma}\Sigma_{t})^{2}\tilde{\delta}+3tr(\tilde{\Gamma}\Sigma_{t})^{4}+3\mu_{2}^{2}\] \[S=\frac{\mu_{3}}{\mu_{2}^{1.5}}\] \[K=\frac{\mu_{4}}{\mu_{2}^{2}}\]
With the compression step in the previous mapping algorithm, the calculation of the central moments involves matrix multiplication of dimension at most 2, which brings only negligible calculation latency.
The corresponding \(\alpha\)th quantile is then calculated from [48]
\[q_{v,\alpha}=\mu_{v}+\sigma_{v}z_{v,\alpha} \tag{33}\]
where \(\sigma_{v}\) is the square root of \(\mu_{2}\). The VaR of interest is obtained by transforming the return into a market value as \(q_{v,\alpha}V_{p,t}\).
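A minimal q sketch of Equations 32 and 33, assuming za is the standardised Gaussian quantile and S, K are the skewness and kurtosis parameters fed into the expansion (function names are illustrative):
```
cfq: {[za;S;K]
  t1: (S%6) * (za*za)-1;
  t2: ((K-3)%24) * (za*za*za)-3*za;
  t3: ((S*S)%36) * (2*za*za*za)-5*za;
  (za+t1+t2)-t3}                      / Equation 32
qv: {[mu;sig;zva] mu + sig*zva}       / Equation 33; the VaR is then qv scaled by the portfolio value
```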
### Visualisation
The VaR calculation is delivered to users through published workspaces in KX dashboards, which is an interactive data visualisation tool developed by KX. It offers a seamless integration with the kdb+ processes by supporting kdb+/q queries as well as real-time streaming queries.
For this work, we have built a workspace comprising three tabs: Futures and Options, which display streamed analytics from the data service component, and VaR Calculations, which serves as the user interface for the VaR calculation process. Below we introduce their functionalities in detail.
#### 3.4.1 Streaming analytics
In order to offer contextual understanding for the VaR estimates, we have created components displaying common market metrics. On a per-product basis, the tab streams information such as minute-by-minute open, low, high, close prices (OLHC), the implied volatility surface, etc. As the user selects different products from the data table, as shown in the upper section of Figures 9 and 10, the OLHC chart and the 3D volatility surface chart update to display the data for the corresponding product.
The interactivity of the dashboard is facilitated through View State variables, which store values and make them accessible across tabs and components of the workspace. Each entry in the data table is associated with a specific product ticker. Upon clicking a particular row, the product ticker is stored in the view state variable Futures/sym for the futures tab and Options/sym for the options tab. In the data sourcing query for the OLHC chart, we implemented a dynamic query with the required view state variable as a parameter. As the view state variable updates on a user click, the kdb+ query updates to obtain data for the corresponding product.
In both the futures and options tabs, products are initially grouped by their underlying assets, as depicted in the navigation bar situated on the left part of Figures 9 and 10. Recognising the diverse spectrum of products available for options, an additional layer of grouping by maturity has been introduced. Users need to select a maturity date to view the options available at different strike levels for that maturity. Similar to the aforementioned linkage established between the data table and the OLHC chart, the radio buttons responsible for selecting maturity dates and the data table are connected through the view state variable Options/maturity.
#### 3.4.2 VaR calculation
VaR calculation tab serves as the interface for the core functionality of the system, which is the real-time computation of portfolio VaR.
The workflow starts with users adding positions to their portfolios. The portfolios are identified by a portfolio ID. As positions are added, the data table in the bottom half of Figure 11(a) updates to display the latest holdings. This automatic update is achieved through polling, whereby the dashboard triggers a client-side poll of the database at a predefined interval.
Once portfolios are constructed, users switch to the Calculate tab using the navigation bar on the left. Within the Calculate tab, as displayed in Figure 11(b), users can select the portfolio for which the VaR calculation is required from the dropdown, then provide the confidence level and time horizon parameters together with the chosen volatility model.
Upon clicking the Calculate button, the dashboard triggers a request to the calculation process, calling the .VaR.estimate function with 4 parameters: portfolio id, confidence interval, time horizon and inference
Figure 10: Workspace tab for options
Figure 9: Workspace tab for futures
model. The choice of inference model defaults to HAR. The process returns the result as a dictionary, which is mapped to the corresponding view states in the dashboard to be displayed to users.
## 4 Evaluation
Within this section, we will evaluate the system against the objectives of this work. The assessment has two parts: firstly, we assess the performance of the system in terms of computation latency; following that, we empirically examine the accuracy of VaR estimates using different backtest strategies.
### Latency Performance
#### 4.1.1 Evaluation Setup
For the evaluation of calculation latency, we utilised the system command \ts in kdb+, which executes the calculation and records the execution time in milliseconds and the space used in bytes. The assessment encompassed portfolios with different numbers of holdings, ranging from 1 to 1000. For this evaluation, we ran all the processes on a single server with an AMD EPYC processor (4 cores, 8GB memory).
As outlined in Figure 4 and Algorithm 2, the overall workflow of VaR computation includes three key calculation steps: volatility inference on underlying crypto indices, portfolio mapping through delta-gamma-theta approximation and transformation via Cornish-Fisher expansion. For this evaluation, we define the time consumption for inference to be \(t_{1}\), the time for portfolio mapping to be \(t_{2}\), the time for transformation to be \(t_{3}\), and the time for other miscellaneous steps as \(t_{\epsilon}\), thus we have the overall system response latency as:
\[t=t_{1}+t_{2}+t_{3}+t_{\epsilon}. \tag{34}\]
Figure 11: Workspace tab for VaR calculation
#### 4.1.2 Result
We started with the evaluation of the calculation latency of the different inference models. Given that only BTC and ETH derivatives are available on Deribit, the maximum number of underlying indices to perform inference on is two, leaving the effective calculation complexity at \(\mathcal{O}(1)\) with respect to the number of holdings.
A comparative analysis of the latency of the three inference methodologies shows that EWMA has superior performance to GARCH and HAR in terms of calculation latency. From the breakdown presented in Table 4, we note that the higher latency of the HAR model is largely driven by the inference algorithm. This is caused by the additional steps to transform log returns into realised measures for inference: while the EWMA and GARCH models merely require log return data over 5 days with 48 data points per day, which can be used directly in the inference and forecast steps, the HAR model needs to transform per-minute data over 15 days with 1440 data points per day before using them for inference. In addition, in comparison to the EWMA model, which performs iterations of simple algebra, the HAR model involves solving LU decompositions for model fitting and consequently has a higher computation latency. Another point to highlight is the high latency of the DCC-GARCH model, which is caused by the kdb+ process calling Python engines to fit the model with maximum likelihood estimation.
To assess the calculation latency of the mapping procedure, we broke it down into the time taken to source real-time market data and the time taken to apply the mapping algorithm. To further illustrate the efficiency gain from maintaining the LatestProduct and LatestIndex tables in the calculation process, we compared the latency of local and remote queries. As indicated in Table 5, as the portfolio holds more products, the latency and space used for data sourcing increase naturally as more data are required, resulting in a larger efficiency gain from using local tables in the calculation process.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Steps & EWMA & DCC-GARCH & HAR \\ \hline Data Sourcing & 12.4 & 6.3 & 20.5 \\ Inference & 1.3 & 7932 & 18.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Execution time in milliseconds for volatility inferencing with different volatility models, assuming the inference needs to be performed for both BTC and ETH. Data sourcing latency for HAR assumes the use of cache table introduced in Section 3.3.2.
Recall that in Section 3.3.3 we compressed the coefficients used in the transformation algorithm to vectors of dimension \(2\times 1\). The calculation in this step only involves simple matrix algebra, followed by the analytical formula of the Cornish-Fisher expansion, so the number of holdings does not affect the calculation latency. The average execution time over 100 runs of the transformation algorithm is 0.03ms.
The examination of per-stage execution times reveals that a predominant portion of the observed latency can be attributed to the inference procedures. In the context of the EWMA model, almost all the latency arises from sourcing historical data for the inference procedure. For the HAR model, the latency is evenly split between data sourcing and performing the inference and forecast.
We finally tested the full-workflow calculation latency using a portfolio holding every derivative product available on Deribit, with the total number of holdings at nearly 1,300. Overall, with Unix sockets for IPC, using the EWMA model for inference produces the smallest calculation latency of 14.2ms, followed by 38.2ms for the HAR model.
### Accuracy Performance
To assess the performance in terms of accuracy, we conducted VaR backtests with portfolios for the period from 2023.07.25 to 2023.08.12, with 24 equally spaced timestamps on each day, totalling 413 samples (see footnote 3). Portfolio holdings for each sample were drawn from the derivative products available in the corresponding time horizon. The backtests for VaR focus on two properties: the unconditional coverage property and the independence property [49]. We can only conclude on the correctness of the conditional coverage of the VaR forecast when both properties are satisfied.
Footnote 3: Data in certain intervals within the period are missing; we removed samples affected by missing data, thus the total number of samples is less than 19 days \(\times\) 24 samples per day = 456.
#### 4.2.1 Unconditional Coverage Test
Given VaR is a downside risk metric, the focus of the evaluation is on the underestimation of risk. Let a violation event \(i\) be defined as
\[i=\begin{cases}1,&\text{if realised loss for the portfolio over time horizon }t\geq\text{ estimated VaR}\\ 0,&\text{else}\end{cases} \tag{35}\]
The most succinct test is to compare the observed number of violations to the expected number of violations through the binomial distribution test. A key assumption in this test is that violation events are identically and independently distributed, such that the total number of violations \(I\) can be seen as following a binomial
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Holdings} & \multicolumn{4}{c}{Data Sourcing} & \multirow{2}{*}{Mapping} \\ \cline{2-2} \cline{4-7} & \multicolumn{2}{c}{Local} & \multicolumn{2}{c}{Remote} \\ \cline{2-7} & Time & Space & Time & Space & Time & Space \\ \hline
1 & \(<\) & 12.5 & 21.2 & 8.8 & \(<\) & 2.9 \\
5 & \(<\) & 13.2 & 22.8 & 9.2 & \(<\) & 3.3 \\
10 & \(<\) & 13.2 & 24.6 & 9.6 & \(<\) & 4.5 \\
20 & \(<\) & 15.1 & 23.9 & 10.1 & \(<\) & 6.5 \\
50 & 0.1 & 16.6 & 25.6 & 30.1 & \(<\) & 10.4 \\
100 & 0.1 & 16.7 & 30.4 & 57.1 & 0.1 & 18.6 \\
200 & 0.2 & 21.1 & 31.5 & 59.3 & 0.1 & 35.0 \\
500 & 0.5 & 37.5 & 38.7 & 156.9 & 0.3 & 67.7 \\
1000 & 1.0 & 43.4 & 45.9 & 1256.3 & 0.5 & 133.3 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Execution time and space used for the mapping procedure with local and remote queries. Run time and space used are computed as the average over 100 executions and presented in milliseconds and KB respectively. \(<\) indicates an execution time of less than 0.1 ms.
distribution, with \(p\) corresponding to the defined quantile \(\alpha\) of the VaR estimates. The null hypothesis is:
\[H_{0}:I\sim\mathcal{B}(413,\,p) \tag{36}\]
For this backtest, we considered the 95% 1-day VaR, \(97.5\%\) 1-day VaR and 99% 1-day VaR. This leads to \(p\) of \(5\%\), \(2.5\%\) and \(1\%\) for the binomial distribution respectively. We ran the VaR calculation procedure on the 413 sample portfolios. In Table 6 below, we report the results of the coverage test.
In general, when the number of violations is higher than the expected value indicated in column \(I_{\text{expected}}\), it implies that the VaR measure underestimates the risk; when the actual violations are lower than the expected value, it indicates overestimation. For a VaR model to be accepted at the confidence level of \(95\%\), we expect the number of violations to be within 2 standard deviations of the expected value.
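A small sketch of this acceptance band in q, where viol is the observed number of violations, n the number of samples and p the violation probability implied by the VaR level:
```
cover: {[viol;n;p]
  mu: n*p;
  sd: sqrt n*p*1-p;                 / right-to-left: n*(p*(1-p))
  abs[viol-mu] <= 2*sd}
```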
By assessing the p-values, we observe statistically significant evidence at the \(99\%\) confidence level to reject the DCC-GARCH estimator at all three VaR levels; the issue of underestimation is more pronounced for the DCC-GARCH estimator than for the EWMA estimator, as indicated by the larger number of violations across the three VaR levels. For the EWMA estimator, while it showed a robust performance for the \(95\%\) 1-day VaR, it should be rejected at the \(99\%\) confidence level for the \(97.5\%\) and \(99\%\) 1-day VaR. For the HAR estimator, the number of violations stays close to the expected value and the null hypothesis cannot be rejected at any of the three VaR levels.
Due to the known limitation of using delta-gamma-theta approach to approximate the tail behaviour of the return distribution, we created an additional benchmark by applying the ex-post realised covariance matrix to the mapping and transformation procedures. The result of this benchmark is presented in column \(I_{\text{realised}}\). By evaluating the violations data for the implemented inference procedures against this benchmark, we can assess the performance of inference estimators in isolation.
The results of the coverage test are further illustrated in Figure 12, where we compare the VaR estimates against the actual losses in terms of returns. Violations from the EWMA, DCC-GARCH and HAR estimators, as well as from the ex-post realised measures, are indicated as coloured vertical lines above the trend lines. Notably, the EWMA and DCC-GARCH violations are concentrated around 2023.07.31 and 2023.08.02. This prompts us to apply another set of tests to assess the dependence between violations in the next section.
#### 4.2.2 Independence Test
Another crucial aspect to evaluate when validating VaR models is the independence property. In order for VaR forecasts to exhibit accurate conditional coverage, it is essential that past violations do not convey any information about future violations. The failure to meet this criterion can lead to the clustering of violations, which suggests a time lag in the responsiveness of VaR measures to changing market conditions.
In this part of the evaluation, we adopted two tests to assess the independence property: Christoffersen's independence test and a multivariate linear regression procedure with an F-test. The former was proposed by Christoffersen as the first approach to test for unconditional coverage and independence separately [49]. In this test, relations between VaR violations in successive periods are modelled via a first-order Markov chain. One deficiency of this initial attempt was its limited power in identifying general forms of clustering, as only one lag was considered. The latter belongs to the family of regression-based backtesting procedures which were proposed to overcome the aforementioned limitation by incorporating violations with different lags as
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline VaR Level & Observations & \(I_{\text{expected}}\) & \(I_{\text{realised}}\) & \(I_{\text{EWMA}}\) & \(I_{\text{GARCH}}\) & \(I_{\text{HAR}}\) \\ \hline \multirow{2}{*}{\(95\%\)} & \multirow{2}{*}{413} & \multirow{2}{*}{21} & 13 & 23 & 33 & 17 \\ & & & (0.974) & (0.328) & (0.006) & (0.825) \\ \multirow{2}{*}{\(97.5\%\)} & \multirow{2}{*}{413} & \multirow{2}{*}{11} & 8 & 19 & 30 & 10 \\ & & & (0.811) & (0.009) & (0.000) & (0.58) \\ \multirow{2}{*}{\(99\%\)} & \multirow{2}{*}{413} & \multirow{2}{*}{4} & 7 & 14 & 19 & 6 \\ & & & (0.124) & (0.000) & (0.000) & (0.235) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Coverage test for VaR calculated with the EWMA, GARCH and HAR estimators, benchmarked against VaR calculated with the ex-post realised covariance matrix. \(I_{\text{expected}}\) has been rounded to an integer. p-values for the binomial test are reported in brackets.
Figure 12: Coverage Test for 1-day VaR
well as other information available at the time of forecast. We chose this test because the research by Berger and Moys showed that with small sample sizes, the parsimonious F-test is capable of providing an adequate evaluation of VaR measures [50].
Given that our 413 samples of 1-day VaR estimates were drawn at 1-hour intervals over a period of 19 days, it is important to account for the correlation structure in these overlapping observations before applying the independence tests. To mitigate this issue of inherent correlation structure arising from overlapping observations, we separate the samples into distinct groups where the samples within each group are drawn from more distant time periods. We acknowledge that the optimal separation strategy is to have 24 groups, with each group containing only samples of the same timestamp, which would completely mitigate the issue of overlapping observations. Due to the limited number of samples available, such a separation would result in multiple groups with no violations, to which the F-test cannot be applied. Here we split the data into 6 groups, with group \(i\) containing observations with timestamps at the \(i\)th, \(i+6\)th, \(i+12\)th and \(i+18\)th hours of each day, totalling nearly 70 samples per group.
#### 4.2.3 Christoffersen's independence test
Following the definition of violation event \(i\) in the binomial test, we define \(N_{ij}\) as the number of days in which state \(j\) occurred in the subsequent period of state \(i\). For example, \(N_{01}\) represents the number of days for which non-violation is followed by a VaR violation.
Let \(\pi_{0}\) represent the conditional probability of a violation occurring given there was no violation in the previous period, and \(\pi_{1}\) the conditional probability of a violation occurring given there was a violation in the previous period. They can be estimated as \(\pi_{0}=\frac{N_{01}}{N_{00}+N_{01}}\) and \(\pi_{1}=\frac{N_{11}}{N_{10}+N_{11}}\). Let \(\pi\) represent the unconditional probability of a violation occurring; it can be estimated as \(\pi=\frac{N_{01}+N_{11}}{N_{00}+N_{01}+N_{10}+N_{11}}\).
With the null hypothesis for independence as
\[H_{0}:\pi_{0}=\pi_{1}=\pi \tag{37}\]
, the test statistic for this test is
\[LR=-2\ln\frac{(1-\pi)^{N_{00}+N_{10}}(\pi)^{N_{01}+N_{11}}}{(1-\pi_{0})^{N_{00 }}\pi_{0}^{N_{01}}(1-\pi_{1})^{N_{10}}\pi_{1}^{N_{11}}} \tag{38}\]
This likelihood ratio is asymptotically distributed as a chi-square distribution with 1 degree of freedom, leading to critical values of 2.706, 3.841 and 6.635 at the \(10\%\), \(5\%\) and \(1\%\) significance levels. We present the test statistics and results at the \(95\%\) confidence level in Table 7.
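A sketch of this statistic in q, assuming v is the 0/1 violation series of one group (the ll helper guards the 0*log 0 case; degenerate groups, e.g. with no violations at all, are not handled here):
```
lrInd: {[v]
  prv: -1 _ v; nxt: 1 _ v;
  n00: sum (prv=0) and nxt=0; n01: sum (prv=0) and nxt=1;
  n10: sum (prv=1) and nxt=0; n11: sum (prv=1) and nxt=1;
  p0: n01 % n00+n01; p1: n11 % n10+n11;
  p: (n01+n11) % n00+n01+n10+n11;
  ll: {[n;pr] $[n=0; 0f; n*log pr]};
  num: ll[n00+n10; 1-p] + ll[n01+n11; p];
  den: ll[n00; 1-p0] + ll[n01; p0] + ll[n10; 1-p1] + ll[n11; p1];
  neg 2*num-den}                    / Equation 38
```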
At the \(5\%\) significance level, we observed evidence to reject the independence hypothesis for certain groups under the EWMA estimator. Such evidence is statistically significant at all three VaR levels tested, appearing in group 3 for VaR levels of 95% and 97.5% and in group 5 for the 99% VaR level. For the GARCH and HAR estimators, there was
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{VaR Level} & \multirow{2}{*}{Estimator} & \multicolumn{6}{c}{Likelihood Ratio} & \multirow{2}{*}{Result} \\ \cline{3-3} \cline{5-10} & & Group 0 & Group 1 & & Group 2 & Group 3 & Group 4 & Group 5 & average \\ \hline \multirow{3}{*}{95\%} & EWMA & 0.28 & 2.91 & 0.79 & 10.44\({}^{**}\) & 1.16 & 2.88\({}^{*}\) & 3.07\({}^{*}\) & accept \\ & GARCH & 0.50 & 3.29\({}^{*}\) & 0.92 & 1.25 & 1.16 & 1.68 & 1.47 & accept \\ & HAR & 0.28 & 0.28 & 1.71 & 0.03 & 0.50 & 0.12 & 0.49 & accept \\ \hline \multirow{3}{*}{97.5\%} & EWMA & 0.28 & 4.91\({}^{**}\) & 0.50 & 10.44\({}^{**}\) & 0.50 & 2.88\({}^{*}\) & 3.25\({}^{*}\) & accept \\ & GARCH & 0.50 & 3.28\({}^{*}\) & 0.92 & 2.12 & 0.50 & 1.68 & 1.50 & accept \\ & HAR & 0.28 & 0.03 & 0.12 & N/A & 0.50 & N/A & 0.15 & accept \\ \hline \multirow{3}{*}{99\%} & EWMA & 0.50 & 0.03 & 0.28 & 0.03 & 0.28 & 4.88\({}^{**}\) & 0.99 & accept \\ & GARCH & 0.28 & 2.91\({}^{*}\) & 0.92 & 0.12 & 0.28 & 2.88\({}^{*}\) & 1.23 & accept \\ \cline{1-1} & HAR & 0.12 & N/A & 0.12 & N/A & 0.12 & N/A & 0.06 & accept \\ \hline \multicolumn{3}{l}{\({}^{**}\)\(p<0.01\), \({}^{**}\)\(p<0.05\), \({}^{*}\)\(p<0.1\); N/A where no violations hence no data for LR statistics} & & & & \\ \end{tabular}
\end{table}
Table 7: Independence test by group for VaR violations observed with EWMA, DCC-GARCH and HAR estimators. The last column reports the average likelihood ratio weighted by number of samples in each group.
no significant evidence to reject the null hypothesis of independence. When we look at the averaged results, we do not reject any of the models for failing the independence test at the \(95\%\) confidence level.
#### 4.2.4 Regression test
In this test, violations data were fitted to the multivariate linear model proposed by Christoffersen and Diebold [51]:
\[I_{t}=\alpha+\Sigma_{i=1}^{k}\beta_{1,i}I_{t-i}+\Sigma_{j=1}^{l}\beta_{2,j}g( \cdot)+u_{t} \tag{39}\]
, where \(g(\cdot)\) represents a function on the information set available as of time \(t\). In our evaluation, we used \(k=4\) and \(l=0\). The independence property can be assessed by a simple F-test using the hypothesis
\[H_{0}:\beta_{1,1}=\beta_{1,2}=\beta_{1,3}=\beta_{1,4}=0. \tag{40}\]
In Table 8 we record the test statistics for each group at different VaR levels and with different volatility estimators. In contrast to the conclusion from the previous independence test, we observed statistically significant evidence to reject the independence property for the EWMA estimator at the VaR levels of \(95\%\) and \(97.5\%\). For the GARCH estimator, the F-test applied to groups 3 and 5 found evidence to reject the null hypothesis for the \(97.5\%\) and \(99\%\) VaR, but this evidence becomes insignificant when assessed at the averaged level. For the HAR estimator, there was no statistically significant p-value to reject the null hypothesis of independence. We note that the per-estimator and per-group results from the F-test were reasonably consistent with the results from the previous independence test.
### Summary of Accuracy Performance
We summarise the results of the backtests applied to assess the unconditional coverage and independence properties of VaR forecast in Table 9 below. Overall, HAR has the best performance in terms of accuracy.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{VaR Level} & \multirow{2}{*}{Estimator} & \multicolumn{6}{c}{Test statistics} & \multirow{2}{*}{Result} \\ \cline{3-3} \cline{5-8} & & Group 0 & Group 1 & Group 2 & Group 3 & Group 4 & Group 5 & average \\ \hline \multirow{3}{*}{\(95\%\)} & EWMA & 0.17 & 1.90 & 0.80 & 17.30\({}^{\text{\textminus}}\) & 1.27 & 3.13\({}^{\text{\textminus}}\) & 4.10\({}^{\text{\textminus}}\) \\ & GARCH & 0.78 & 1.83 & 0.78 & 1.69 & 0.41 & 1.19 & 1.11 \\ & HAR & 0.17 & 0.17 & 0.87 & 0.02 & 0.78 & 0.07 & 0.35 \\ \hline \multirow{3}{*}{\(97.5\%\)} & EWMA & 0.17 & 9.56\({}^{\text{\textminus}}\) & 0.78 & 17.30\({}^{\text{\textminus}}\) & 0.36 & 3.31\({}^{\text{\textminus}}\) & 5.25\({}^{\text{\textminus}}\) \\ & GARCH & 0.78 & 1.83 & 0.78 & 2.78\({}^{\text{\textminus}}\) & 0.78 & 1.19 & 1.35 \\ & HAR & 0.17 & 0.07 & 0.07 & N/A & 0.78 & N/A & 0.27 \\ \hline \multirow{3}{*}{\(99\%\)} & EWMA & 0.34 & 0.02 & 1.54 & 0.02 & 0.17 & 9.39\({}^{\text{\textminus}}\) & 1.90 \\ & GARCH & 0.17 & 1.90 & 0.78 & 7.18\({}^{\text{\textminus}}\) & 0.17 & 3.13\({}^{\text{\textminus}}\) & 2.21 \\ \cline{1-1} & HAR & 0.07 & N/A & 0.07 & N/A & 0.07 & N/A & 0.07 \\ \hline \multicolumn{3}{l}{\({}
## 5 Conclusion & Future Work
### Summary
In this work, we have presented a real-time VaR calculation workflow for portfolios of cryptocurrency derivatives.
From the perspective of workflow design, we applied a parsimonious volatility forecast model which can be fitted with OLS estimators to ensure computational efficiency. To further reduce the calculation latency, the delta-gamma-theta approach was chosen as a replacement for the commonly used historical or Monte Carlo simulation approaches to approximate the non-Gaussian distribution of portfolio returns. As the final step in this computation workflow, we applied the Cornish-Fisher expansion to enhance the estimation of the tail quantiles of the distribution with higher-order moments.
From the perspective of implementation, we leveraged the column-oriented approach of the kdb+ database and its in-memory compute engine to ensure efficiency in dealing with high-frequency market data. We developed a customised kdb+ tick architecture that pre-processes tick-level data to facilitate the calculation workflow. For the calculation process, we included tables for caching and for the latest market data to decrease the latency from IPC. Where effective, parallel processing is used to further enhance the calculation efficiency. To improve the usability of the system, we developed a complementary web-based interface in KX Dashboards, which provides users with access to the VaR calculation within a few clicks, as well as other relevant market metrics to assist risk management practice.
As part of this work, we also conducted a comparative analysis involving three distinct families of volatility models: EWMA, GARCH and HAR. These inference models were evaluated based on two critical dimensions: calculation latency and VaR estimation accuracy. While EWMA exhibited superior performance in terms of calculation latency due to its computational simplicity, it fell short in adequately capturing the dynamics of volatility, resulting in clustered violations and underestimations of the risk level. In contrast, the HAR model emerged as the top performer in terms of inference accuracy by successfully passing the backtests on unconditional coverage and independence.
### Future Work
#### 5.2.1 Positive semi-definiteness of covariance matrix
In the context of our inference algorithm, the elements of the covariance matrix are forecasted independently. We have observed instances in which the forecasted correlation terms deviate from the valid range of -1 to 1, resulting in a covariance matrix that is not positive semi-definite. This statistically incoherent covariance matrix leads to negative values for second-order central moments, consequently rendering the skewness parameter unsuitable for the Cornish-Fisher expansion.
One potential correction procedure involves projecting the symmetric matrix onto the space of positive semi-definite matrices, as illustrated in the study by Fan et al [52]. Other systematic remedies have been proposed by researchers. For instance, the reparameterisation approach introduced by Archakov and Hansen transforms the \(n\times n\) correlations matrix into unrestricted vectors of length \(n(n-1)/2\) for modelling correlations. This ensures the correlations matrix forecast obtained through inverse mapping retains the intrinsic property of positive definiteness [53].
#### 5.2.2 High dimensional covariance matrix
The challenge of forecasting a high-dimensional covariance matrix is twofold: firstly, the number of covariance or correlation terms increases quadratically with the number of underlyings, raising issues of computational efficiency; secondly, considering the growth in the number of digital assets, the number of samples available could be smaller than the number of covariance terms to forecast.
#### 5.2.3 Validity of Cornish-Fisher expansion
While Cornish-Fisher expansion provides a relatively easy and parsimonious way of dealing with non-normal return distributions, its usefulness may be compromised by two pitfalls as discussed in the work of Maillard [54]:
* **Domain of validity**: For the Cornish-Fisher expansion to produce a well-defined monotonic quantile function, it requires the actual skewness \(S\) and excess kurtosis \(K\) of the return distribution to satisfy the condition [55]: \[\frac{S^{2}}{9}-4(\frac{K}{8}-\frac{S^{2}}{6})(1-\frac{K}{8}-\frac{5S^{2}}{36})\leq 0\] (41) When \(S\) and \(K\) fall outside of this domain, the Cornish-Fisher expansion is no longer applicable. We can consider applying the rearrangement procedure introduced by Chernozhukov et al to restore the monotonic property inherent in quantile functions [56].
* **Skewness and kurtosis parameter**: It is important to differentiate between the actual skewness \(S\) and excess kurtosis \(K\) of the true distribution and the \(\hat{S}\) and \(\hat{K}\) parameters applied in the transformation, as they only coincide when their values are small [54]. Maillard presented actual skewness and excess kurtosis as polynomials of \(\hat{S}\) and \(\hat{K}\) parameters: \[S=f(\hat{S},\hat{K})=\frac{6S-76S^{3}+510S^{5}+36SK-468S^{3}K+108 SK^{2}}{(1+6K^{2}-24S^{2}K+25S^{4})^{1.5}}\] (42) \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad 3+3348K^{4}-28080S^{2}K^{3}+1296K^{3}+252K^{2}+24K\] \[K=g(\hat{S},\hat{K})=\frac{-123720S^{6}K+8136S^{4}K-504S^{2}K-60 48S^{2}K^{2}}{(1+6K^{2}-24S^{2}K+25S^{4})^{2}}-3\] (43) and demonstrated that to appropriately apply the Cornish-Fisher expansion, it is necessary to reverse these polynomials to solve for \(\hat{S}\) and \(\hat{K}\). While analytical expressions for the reversed relations are not available, to improve the quality of transformation, we could use numeric solvers for these polynomials, such as the modified Newton's method proposed in the work of Lamb et al [57].
The cryptocurrency market is known to exhibit significantly higher volatility than traditional asset classes. Efficient and adequate risk calculation is important for managing risk exposures in short-horizon market conditions where extreme price moves can occur. The aim of this paper is to build a real-time computation workflow that provides VaR estimates for non-linear portfolios of cryptocurrency derivatives. Many researchers have examined the forecasting ability of time series models in the context of crypto assets. In this work, we use three common models, EWMA, GARCH and HAR, combined with the delta-gamma-theta approach and the Cornish-Fisher expansion, to capture and forecast volatility dynamics and to examine computational efficiency and accuracy. We present a computation workflow that exploits the information in high-frequency market data and the simplicity of the analytical estimation procedures. This workflow
2305.00437 | Temperature-Dependent and Magnetism-Controlled Fermi Surface Changes in
Magnetic Weyl Semimetals | The coupling between band structure and magnetism can lead to intricate Fermi
surface modifications. Here we report on the comprehensive study of the
Shubnikov-de Haas (SdH) effect in two rare-earth-based magnetic Weyl
semimetals, NdAlSi and CeAlSi$_{0.8}$Ge$_{0.2}$. The results show that the
temperature evolution of topologically nontrivial Fermi surfaces strongly
depends on magnetic configurations. In NdAlSi, the SdH frequencies vary with
temperature in both the paramagnetic state and the magnetically ordered state
with a chiral spin texture, but become temperature independent in the
high-field fully polarized state. In CeAlSi$_{0.8}$Ge$_{0.2}$, SdH frequencies
are temperature-dependent only in the ferromagnetic state with magnetic fields
applied along the $c$ axis. First-principles calculations suggest that the
notable temperature and magnetic-configuration dependence of Fermi surface
morphology can be attributed to strong exchange coupling between the conduction
electrons and local magnetic moments. | Nan Zhang, Xianyong Ding, Fangyang Zhan, Houpu Li, Hongyu Li, Kaixin Tang, Yingcai Qian, Senyang Pan, Xiaoliang Xiao, Jinglei Zhang, Rui Wang, Ziji Xiang, Xianhui Chen | 2023-04-30T09:34:13 | http://arxiv.org/abs/2305.00437v1 | # Temperature-Dependent and Magnetism-Controlled Fermi Surface Changes in Magnetic Weyl Semimetals
###### Abstract
The coupling between band structure and magnetism can lead to intricate Fermi surface modifications. Here we report on the comprehensive study of the Shubnikov-de Haas (SdH) effect in two rare-earth-based magnetic Weyl semimetals, NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\). The results show that the temperature evolution of topologically nontrivial Fermi surfaces strongly depends on magnetic configurations. In NdAlSi, the SdH frequencies vary with temperature in both the paramagnetic state and the magnetically ordered state with a chiral spin texture, but become temperature independent in the high-field fully polarized state. In CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\), SdH frequencies are temperature-dependent only in the ferromagnetic state with magnetic fields applied along the \(c\) axis. First-principles calculations suggest that the notable temperature and magnetic-configuration dependence of Fermi surface morphology can be attributed to strong exchange coupling between the conduction electrons and local magnetic moments.
Footnote †: These authors contributed equally to this work.
The Fermi surface (FS), an equipotential surface in momentum space that marks the discontinuity in the distribution of fermions, is only rigorously defined at zero temperature (\(T\)) [1]. The thermal broadening of the distribution function at finite \(T\) causes a shift of the chemical potential \(\mu\)[2], subsequently changing the size of the FS. While such changes correspond to variations in the frequency (\(F\)) of quantum oscillations (according to the Onsager relation, \(F=\frac{\hbar}{2\pi e}A\), where \(A\) is the extremal cross-sectional area of FS) in principle, this thermal correction in \(F\) is usually too weak to be detected experimentally [3]. Hence, \(F\) is routinely treated as \(T\)-independent in quantum oscillation experiments. Intriguing exceptions do exist. For example, non-parabolic band dispersion gives rise to an additional "topological" correction to the \(T\) dependence of \(\mu\), which may give a frequency shift up to \(\Delta F/F\sim 1\%\)[4]. Another case stems from the Stoner picture in itinerant ferromagnets, considering a \(T\)-dependent \(F\) induced by the evolution of exchange splitting that continuously modifies the occupation of two spin-polarized subbands [5; 6].
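As a quick numerical illustration of the Onsager relation quoted above, the snippet below converts an extremal cross-sectional area of the FS into an oscillation frequency; the area used here is an arbitrary example value, not a measured one.

```python
import math
from scipy.constants import hbar, e

def onsager_frequency(area_m2):
    """Onsager relation F = (hbar / 2 pi e) * A, with A in m^-2 and F in tesla."""
    return hbar / (2 * math.pi * e) * area_m2

# Example: an extremal orbit of area 1e-2 Angstrom^-2, i.e. 1e18 m^-2.
print(f"F = {onsager_frequency(1e18):.1f} T")   # on the order of 1e2 T
```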
Magnetic topological materials have recently become an intense focus of research. They provide a fertile playground for studying the coupling between magnetic orders and electronic band topology, as the two can change concurrently at topological transitions triggered by alternation of system symmetries [7; 8; 9; 10]. Such coupling can also lead to unusual quantum oscillations whose \(F\) depends on \(T\)[10; 11; 12; 13]; the underlying mechanisms are not well understood yet, since in most cases the magnetism is local and thus beyond the Stoner picture. In this Letter, we study the Shubnikov-de Haas (SdH) effect (quantum oscillations in electrical resistivity \(\rho_{xx}\)) in two Weyl semimetals possessing local magnetism, i.e., NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\). We show that in NdAlSi the SdH effect exhibits distinct spectra in three phase regimes with different magnetic configurations, namely the high-\(T\) paramagnetic (PM) state, the low-\(T\) canted up-down-down (u-d-d) ordered state [14] and the field-induced polarized (FIP) state [15]. Pronounced \(T\) dependence of SdH frequencies can be observed in both the PM and the canted u-d-d states, but ceases to manifest itself in the FIP state. In CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\), \(T\)-dependent SdH frequencies occur in the ferromagnetic (FM) state below ordering temperature \(T_{C}\) with \(H\) applied along the crystalline \(c\) axis, yet are absent in the PM state and the FM state with \(H\) in the \(ab\) plane. Our first-principles calculations ascribe such complex FS evolution to the strong exchange splitting of the Weyl fermion bands caused by the coupling with local \(4f\) electrons.
Single crystals of NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\) were obtained by the flux method (see Sec. I in Supplemental Material [16], which includes Refs.[17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]). The two compounds are isostructural: both crystallize in a noncentrosymmetric tetragonal structure [see Fig. S1(a) in [16]] with space group \(I4_{1}\)_md_ (No.109), which allows the emergence of Weyl nodes even in the paramagnetic state [14; 17]. Magnetoresistance (MR) measurements were performed using a standard four-probe configuration in
a 14 T superconducting magnet and a 33 T water-cooled Bitter magnet; as both materials are magnetic, we use \(B\) field instead of the applied \(H\) field in the analysis of quantum oscillations, considering sample magnetization and the demagnetizing effect (see Sec. II in Supplemental Material [16]). To analyze the SdH oscillations, fast Fourier transforms (FFT) were performed on the oscillatory MR (\(\Delta\rho_{xx}\)) that was obtained from a polynomial background subtraction. First-principles and structure calculations were carried out in the framework of density-functional theory (DFT) (see Sec. III in Supplemental Material [16]).
NdAlSi becomes magnetically ordered at \(T_{\rm m}\) = 7.3 K (Fig. S1 [16]), where it probably enters an incommensurate spin-density-wave (SDW) order before the establishment of a commensurate ferrimagnetic order at 3.3 K [14; 15]. Both orders manifest a chiral, canted u-d-d spin configuration [see the inset of Fig. 1(f)]. Because our experimental probe cannot determine the magnetic commensurability, we refer to the low-field magnetic ordering in NdAlSi as the canted u-d-d state in this work. This state terminates at a metamagnetic transition field \(H_{\rm m}\)[14; 15] (\(\mu_{0}H_{\rm m}\simeq\) 5.2 T at 2 K for \(H\parallel c\)), which is indicated by a sharp jump in MR (Fig. S2, [16]). At \(T\) = 2 K, the FIP state occurs immediately above \(H_{\rm m}\) with Nd \(4f\) moments completely aligned by \(H\); at higher \(T\), such full polarization is realized at \(H_{\rm p}>H_{\rm m}\) (Fig. S2, [16], note that \(H_{\rm p}\) is a characteristic field for a crossover behavior rather than a transition). In Fig. 1(a) we plot the SdH patterns measured under \(H\parallel c\) up to 14 T at various \(T\) (for several temperatures, data up to 33 T are also shown). Remarkably, single-frequency SdH oscillations [Fig. 1(b)] appear above \(H_{\rm m}\) at \(T\) = 2 K; we take the onset of this feature as the threshold \(H_{\rm p}(T)\) for spin polarization. Between \(H_{\rm m}(T)\) and \(H_{\rm p}(T)\), the SdH patterns resemble those at \(T>T_{\rm m}\); thus we assign this field interval to the PM state. An \(H-T\) phase diagram is obtained for NdAlSi, as presented in Fig. 1(f). With increasing \(T\), \(H_{\rm m}\) and \(H_{\rm p}\) decrease and increase, respectively, creating a fan-shaped PM regime in between.
FFT analysis reveals that the SdH oscillations are composed of two main branches \(\alpha\) and \(\beta\) in the canted u-d-d state [Fig. 1(d)], consistent with previous studies [14; 15]. Similarly, two FFT peaks \(\alpha^{\prime}\) and \(\beta^{\prime}\) are resolved in the PM state [Fig. 1(c)]. In the FIP state, a single component \(\delta\) and its second harmonic dominate the SdH pattern [Fig. 1(b)]. These results corroborate magnetism-controlled FS morphology in NdAlSi. More interestingly, as indicated by the arrows in Figs. 1(c) and 1(d), the SdH frequencies are \(T\)-dependent in both the canted u-d-d state and the PM states. The behaviors of \(F(T)\) for all SdH branches are summarized in Fig. 1(e). Branches \(\alpha\) and \(\beta\) in the canted u-d-d state shift to higher and lower frequencies upon increasing \(T\) towards \(T_{\rm m}\), respectively: the increase (decrease) of \(F_{\alpha}(F_{\beta})\) from 2 K (\(F_{\alpha}\) = 40 T, \(F_{\beta}\) = 77 T) to 5 K (\(F_{\alpha}\) = 46.5 T, \(F_{\beta}\) = 73.6 T), corresponds to an FS expansion(shrinkage) of approximately 16% (4.5%). Above \(T_{\rm m}\), \(\alpha\) and \(\beta\) smoothly evolve into \(\alpha^{\prime}\) and \(\beta^{\prime}\), suggesting the same origin of the corresponding frequencies; \(F_{\alpha^{\prime}}\) (\(F_{\beta^{\prime}}\)) also inherits the \(T\)-dependence of \(F_{\alpha}\) (\(F_{\beta}\)), though the variations become less remarkable (see Secs. IV and V in Supplemental Material [16] for detailed analysis). Such \(T\)-dependent SdH frequencies in the PM state presumably also causes the peculiar oscil
Figure 1: (a) The oscillatory resistivity \(\Delta\rho_{xx}\) in NdAlSi as a function of inverse field [see Fig. S2 in [16] for raw data]. Data were measured with \(H\parallel c\). The thin solid curve marks out the metamagnetic transition field \(H_{\rm m}\). Gray thick curve denotes the crossover between the PM and FIP states. (b)-(d) FFT spectra of the SdH oscillations in NdAlSi in the (b) FIP, (c) PM and (d) canted u-d-d states. Arrows guide the eye. (e) Main FFT frequencies plotted against \(T\). Error bars are defined as the half FFT peak width at 90% of the peak height. Dashed and solid lines denote the frequency changes based on the analysis of Lifshitz-Kosevich (LK) fits and shifts of SdH peak positions (\(\Delta B/B\)), respectively (more discussions are presented in [16]). (f) \(H-T\) phase diagram for NdAlSi obtained from magnetization and MR measurements (Figs. S1 and S2 in [16]). Spin configurations for the u-d-d and FIP states are illustrated. The PM and FIP states are separated here by a threshold field \(H_{\rm p}\) for spin polarization [dashed line in (a); see also Fig. S2 in [16]]. The dark shaded area bounded by \(T_{\rm m}\) and \(H_{\rm L}\) (a low-field jump in MR; see Fig. S2 in [16]) may represent an SDW order [15].
lations in the \(\rho_{xx}(T)\) curves in NdAlSi measured under constant \(H\)[37]. In contrast to previous results [14], we confirm that \(F_{\delta}\simeq 100\,\)T in the FIP state does not change with \(T\) within our experimental resolution [Fig. 1(e); see also Figs. S2(e) and S2(f) in [16]].
The fact that the temperature evolution of FSs depends on the magnetic configuration implies an intricate coupling between band structure and magnetism in NdAlSi. To further look into the fermiology, we study the angle-dependent SdH effect. The FFT spectra in the FIP state for varying magnetic-field orientations \(\theta\) (angle from \(c\) axis toward \(a\) axis) are presented in Fig. 2(a). (Note that \(H_{\rm m}\) monotonically increases to \(\sim 11\,\)T as \(H\) rotates to \(\theta\sim 70^{\circ}\), and is unrecognizable above this angle; see Figs. S2(c) and S2(d) in [16].) With increasing \(\theta\), \(F_{\delta}\) becomes higher; for \(\theta\gtrsim 40^{\circ}\), another branch \(\epsilon\) appears on the low-frequency side of \(\delta\) [arrows in Fig. 2(a)]. The angle dependence of \(F_{\delta}\) can be fitted by an ellipsoidal FS model that is elongated along the \(c\)-axis with a long-to-short axis ratio of 1.92 [solid line in Fig. 2(b)]. Moreover, the way the SdH patterns evolves with \(\theta\) alludes to changes in spin degeneracy in the corresponding FS. As shown in Fig. 2(c), with \(H\parallel c\), the SdH spectrum in the FIP state can be well described by a single-component Lifshitz-Kosevich (LK) model [29; 38]:
\[\Delta\rho_{xx}=A_{SdH}B^{1/2}\frac{X}{\sinh(X)}\exp(-\frac{\pi m^{*}}{eB\tau _{D}})\cos[2\pi(\frac{F_{\delta}}{B}+\phi)], \tag{1}\]
where \(X=(2\pi^{2}k_{B}Tm^{*})/e\hbar B\), \(m^{*}\) is the cyclotron mass, \(k_{B}\) the Boltzmann constant, \(\tau_{D}\) the Dingle relaxation time, \(\phi\) the phase of SdH oscillations, \(A_{\rm SdH}\) an amplitude coefficient, and \(F_{\delta}\) = 100 T. No Zeeman splitting of SdH peaks/valleys is observed at \(\theta=0^{\circ}\), perhaps implying a spin-polarized FS. When \(H\) is tilted from the \(c\)-axis, signatures indicative of Zeeman splitting appears for \(\theta\gtrsim 10^{\circ}\): the LK fits to SdH patterns at \(\theta=10^{\circ}\) require the inclusion of the second harmonic for \(F_{\delta}\), whereas at \(\theta=35\)\({}^{\circ}\) the fitted amplitude of second harmonic is even larger than the fundamental [Fig. 2(c); see Supplemental Material [16] for details]. Such phenomena point toward nearly spin-degenerate bands at higher \(\theta\)[39]. Considering the \(F-\theta\) relation and the putative spin-polarized nature of branch \(\delta\), we assign it to the hole FS pocket along the \(Z-\Sigma^{\prime}\) direction (labeled as "\(\lambda\)") in the first Brillouin zone [Fig. 2(d)]. Due to the damped SdH signals at higher \(\theta\) [Figs. S2(g) and S2(h) in [16]], we cannot unambiguously identify all the other branches; discussions of the possible corresponding extremal orbit areas for these frequencies are presented in the Sec. VI of Supplemental Material [16]. In particular, it is most likely that \(\beta\) (\(\beta^{\prime}\)) and \(\alpha\) (\(\alpha^{\prime}\)) also stem from the FS pocket \(\lambda\) and correspond to its spin-majority/outer and spin-minority/inner sheets, respectively, in the canted u-d-d (PM) state.
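A minimal numerical sketch of the single-component LK model of Eq. (1), of the kind one might embed in a least-squares fit of the oscillatory MR; all parameter values below are placeholders rather than the fitted values reported here.

```python
import numpy as np

KB = 1.380649e-23       # Boltzmann constant (J/K)
HBAR = 1.054571817e-34  # reduced Planck constant (J s)
E = 1.602176634e-19     # elementary charge (C)
M0 = 9.1093837015e-31   # free-electron mass (kg)

def lk_oscillation(B, T, F, m_star, tau_D, phi, A_sdh):
    """Single-component Lifshitz-Kosevich model of Eq. (1)."""
    X = 2 * np.pi**2 * KB * T * m_star / (E * HBAR * B)
    thermal_damping = X / np.sinh(X)                     # temperature damping factor
    dingle_damping = np.exp(-np.pi * m_star / (E * B * tau_D))  # Dingle damping factor
    return A_sdh * np.sqrt(B) * thermal_damping * dingle_damping * np.cos(2 * np.pi * (F / B + phi))

# Placeholders: F = 100 T and m* = 0.3 m0 follow the text; tau_D, phi and A_sdh are arbitrary.
B = np.linspace(6.0, 14.0, 400)
delta_rho = lk_oscillation(B, T=2.0, F=100.0, m_star=0.3 * M0, tau_D=1e-12, phi=0.0, A_sdh=1.0)
print(delta_rho[:3])
```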
The \(T\)-dependent SdH frequencies have been reported in PrAlSi [13] but are missing in LaAlSi [40]; both are isostructural to NdAlSi and are potential Weyl semimetals. Therefore, the presence of local \(4f\) magnetic moments on the rare-earth site must be the crucial factor inducing such a phenomenon. We verify this by measuring the SdH effect in another isostructural compound with \(4f\) magnetism, CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\). Among five different members in the series of magnetic Weyl semimetals CeAlSi\({}_{1-x}\)Ge\({}_{x}\) (\(0\leq x\leq 1\)), the composition \(x=0.2\) exhibits the most pronounced SdH effect; see Fig. S4[16]. This material is ferromagnetic below \(T_{\rm C}=6.5\,\)K; the overall behavior of magnetization (Fig. S1 in [16]) and DFT-determined magnetic structure (Sec.III in Supplemental Material [16]) are similar to that reported in CeAlSi [17], in which the Ce \(4f\) moments are ordered in an noncollinear ferromagnetic state with in-plane easy axes [18]. Intriguingly, \(T\)-dependent SdH patterns are
Figure 2: (a) FFT spectra of SdH oscillations in NdAlSi measured with varying tilt angle \(\theta\) (from \(c\) toward \(a\) axis) at \(T=1.7\,\)K. FFTs are performed above \(H_{\rm m}\), _i.e.,_ in the FIP state. (b) SdH frequencies as functions of \(\theta\) for the canted u-d-d (circles), PM (triangles), and FIP (squares) states. The solid line is fit to an ellipsoidal FS model (see text). (c) Best fits of the Lifshitz-Kosevich (LK) model (solid lines) to the oscillatory MR in the FIP state (circles) measured at \(\theta=0^{\circ}\) (black), \(10^{\circ}\) (blue), \(35^{\circ}\) (purple), and \(T=2\,\)K. Inset: \(T\)-dependent SdH oscillation amplitudes measured at \(\theta\simeq 30^{\circ}\) and the LK fit (solid line) (d) DFT-calculated Fermi surfaces (FSs) in the FIP state of NdAlSi. Dark purple and green colors represent hole and electron FS pockets, respectively. An expanded view of the hole pocket (\(\lambda\)) along the \(Z-\Sigma^{\prime}\) direction is provided. The extremal orbits are highlighted accordingly.
only observed with \(H\parallel c\) and below \(T_{\rm C}\) [Fig. 3(a)], whereas in the PM state above \(T_{\rm C}\) and for the FM state in \(H\perp c\) they are absent (Fig. S3 in [16]). Figures 3(b) and 3(c) depict the \(T\) dependence of FFT frequencies for the SdH measurements under \(H\parallel c\) and \(H\parallel a\), respectively. \(F_{1}\) measured in the former \(H\) orientation is the unique branch that responds remarkably to the variation of \(T\): it takes the value of 47 T (53 T) above \(T_{\rm C}\) in our sample #1(#2), yet decreases to 33 T (45 T) at 2 K [Fig. 3(b)]. All other branches display weak or negligible \(T\) dependence [Figs. 3(b) and 3(c)]. Based on the DFT-calculated extremal orbits, we propose that most of the detected SdH frequencies stem from the FSs \(\lambda\) and \(\xi\) along the \(Z-\Sigma^{\prime}\) and \(\Gamma-\Sigma\) directions, respectively [Fig. 3(d)]; in particular, the branch \(F_{1}\) is most likely to be associated with a spin-minority pocket (see Sec.VI in Supplemental Material [16]).
Several mechanisms with distinct underlying physics can lead to temperature-induced FS modification. The topological correction [4] may contribute to but cannot fully account for the large SdH frequency shifts we observed [41]. In Kondo lattices, continuous change of the sizes of FSs with temperature can occur, reflecting the delocalization of \(f\) electrons upon cooling due to their hybridization with itinerant \(d\) electrons [33]. In the two compounds we study here, however, \(f-d\) hybridization is absent and the \(4f\) electrons are completely localized. For instance, in the FIP state in NdAlSi, the cyclotron mass for \(F_{\delta}\) is only 0.3 \(m_{0}\) [inset of Fig. 2(c); \(m_{0}\) is the mass of a free electron], excluding any Kondo-type band renormalization. In Stoner ferromagnets, the exchange splitting of bands scales with the magnetization, thus it is also a function of \(T\)[5; 6]. In NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\), this scenario is also inapplicable because the local magnetism herein invalidates the Stoner model. Nonetheless, the \(T\) dependence of FSs appears to be sensitive to magnetic configuration, implying that the origin must be the \(T\)-dependent exchange coupling between the conduction electrons (Weyl fermions) and local \(4f\) moments.
Our DFT calculations show that in both NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\) the Weyl fermions at \(\epsilon_{F}\) are predominantly Nd/Ce \(5d\) electrons (Fig. S5 in [16]); they thus have considerable intra-atomic exchange interactions with the local \(4f\) electrons, giving rise to band
Figure 3: (a) \(\Delta\rho_{xx}\) measured in CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\) with \(H\parallel c\) at different temperatures. Below \(T_{c}=6.5\) K (horizontal line), SdH extrema start to shift with varying \(T\) (dotted arrow). (b),(c) \(T\) dependence of SdH frequencies (see Fig. S3 in [16] for FFT spectra) for (b) \(H\parallel c\) and (c) \(H\parallel a\). In (b), the solid and hollow circles are data obtained in two samples #1 and #2 in the field intervals of 8 T \(\leq B\leq\) 14 T and 8 T \(\leq B\leq\) 33 T, respectively. Data presented in (c) were measured in sample #1, with FFT performed between 8 and 14 T. (d) The DFT calculated hole (yellow) and electron (blue) FS pockets for the polarized state with Ce \(4f\) moments aligning along \(c\) in an out-of-plane \(H\) (arrow). The expanded view shows the outer (spin-majority) and inner (spin-minority) FS sheets for the pocket \(\lambda\). Circles highlight the possible extremal orbits.
Figure 4: (a)-(d) The evolution of spin-resolved band structures of NdAlSi obtained from first-principles calculations; the local moment of Nd\({}^{3+}\) is constrained to \(\langle M_{z}\rangle\) values of (a) 0, (b) 1.0 \(\mu_{B}\), (c) 2.0 \(\mu_{B}\), and (d) 3.0 \(\mu_{B}\). Note that (a) corresponds to the PM phase under zero field and (d) corresponds to the case of free magnetism (i.e., a FM state from self-consistent calculations). Red and blue colors indicate the \(z\) component of spin-up and spin-down states, respectively. (e) The spin-dependent extremal (minimum or maximum) cross-sectional areas on FS pocket \(\lambda\), as a function of the polarized local moment, \(\langle M_{z}\rangle\), of the Nd\({}^{3+}\) ions in NdAlSi. (f) and (g) show the spin-resolved band structures of CeAlSi in the FM (Ce \(4f\) moments align along the \(c\) axis) and PM phases, respectively. All calculations include the SOC.
splitting [34] that varies with temperature. The total energy splitting of the bands contains three terms: \(\Delta E\) = \(\Delta_{0}\) + \(E_{ex}\) + \(E_{z}\), where \(\Delta_{0}\) is the zero-field band splitting due to the antisymmetric spin-orbit coupling (SOC) in these noncentrosymmetric materials; \(E_{ex}\) and \(E_{z}\) are the exchange splitting [42] and the Zeeman splitting, respectively. We mention that since both \(\Delta_{0}\) and \(E_{z}\) = \(g_{s}\mu_{B}B\) (\(g_{s}\) is the Lande \(g\) factor) are independent of \(T\), \(E_{ex}\) is solely responsible for the observed temperature-induced FS changes [43]. Considering a simplified notion of the exchange splitting: \(E_{ex}\)\(\propto\)\(I_{ex}\)(\(M_{z}\)) (where \(\langle M_{z}\rangle\) is the polarized component of the magnetic moment for 4\(f^{3}\)\(J\) = 9/2 multiplet along \(H\parallel z\)), we propose that the \(T\)-dependent SdH spectrum in the PM state of NdAlSi principally originates from the variation of \(\langle M_{z}\rangle\) at fixed \(H\)[44; 45].
In real materials, the 4\(f\)-5\(d\) exchange interaction can be much more complicated than the model mentioned above. Nevertheless, our DFT calculations successfully capture the contribution of \(E_{ex}\) to the \(T\)-dependent band structure by tracing its variation upon changing \(\langle M_{z}\rangle\). As displayed in Figs. 4(a)-4(d), the band splitting in NdAlSi is remarkably enhanced with increasing \(\langle M_{z}\rangle\). In particular, the hole band along the \(Z-\Sigma^{\prime}\) direction [band \(\lambda\), Fig. 2(d)] exhibits nearly two fold degeneracy at \(\epsilon_{F}\) with \(\langle M_{z}\rangle\) = 0 [Fig. 4(a)]; once the 4\(f\) spin polarization is induced by external \(H\), the two subbands with opposite \(z\)-direction spin components split significantly. The spin-minority subband eventually sinks below \(\epsilon_{F}\) for \(\langle M_{z}\rangle\) > 2\(\mu_{B}\) [Fig. 4(e)], leaving only one spin-polarized subband that gives the SdH branch \(\delta\). This evolution is in agreement with our experimental results. In NdAlSi, we assign the SdH branches \(\beta\), \(\beta^{\prime}\) and \(\alpha\), \(\alpha^{\prime}\) to the outer and inner FS sheets of band \(\lambda\), respectively (Sec. VI in Supplemental Material [16]); these two groups of \(F\) shift toward opposite directions (up and down, respectively) with increasing \(\langle M_{z}\rangle\) upon cooling. DFT calculations qualitatively reproduce such a process [Fig. 4(e)]. In the FIP state, the inner FS disappears, consistent with our observation of a single branch \(F_{\delta}\) which is \(T\) independent (due to the saturation of \(\langle M_{z}\rangle\)) and is likely to be spin polarized. The fact that \(F_{\delta}\) is notably higher than \(F_{\beta}\) (\(F_{\beta^{\prime}}\)) may reflect a sudden increase of exchange coupling strength upon entering the FIP state [46; 47]; a rough estimation based on the DFT-calculated band dispersion yields an enhancement of \(E_{ex}\) of \(\sim\) 46 meV.
For CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\), the complex magnetic structure [17; 18] hinders a direct comparison between theoretical and experimental results. Band structures computed by DFT [Figs. 4(f) and 4(g)] show that in the fully polarized state with 4\(f\) spins aligning along \(c\), the band splitting is much larger than that in the PM state. Consequently, it is more reasonable to assign the \(T\)-dependent SdH branch \(F_{1}\) [Fig. 3(b)] to an extremal orbit on an inner (minority) FS which shrinks upon increasing spin polarization (Sec. VI in Supplemental Material [16]). On the other hand, the almost \(T\)-independent SdH frequencies measured with \(H\perp c\) [Fig. 3(c)] probably imply different responses to the exchange coupling from bands with distinct orbital characters: for \(H\parallel c\) (\(H\perp c\)), the main SdH frequencies arise from band \(\lambda\) (\(\xi\)) (Sec. VI in Supplemental Material [16]) that is dominated by the \(d_{xy}\) and \(d_{x^{2}-y^{2}}\) (\(d_{z^{2}}\) and \(d_{yz}\)) orbitals [Fig. S5(c) [16]]. Such complex behavior highlights the influence of SOC in the exchange coupling discussed above, which requires further theoretical investigation to clarify its role. See Sec. VII in Supplemental Material [16] for more details.
We mention that the exchange-splitting-induced FS changes may explain the \(T\)-dependent quantum oscillation frequencies observed in a number of magnetic topological materials [10; 11; 48], though the effect is usually more significant in rare-earth compounds [9; 12; 13; 49] as a result of the large \(\langle M_{z}\rangle\) of the localized 4\(f\) electrons. Moreover, it has been pointed out that, with an effective exchange coupling between the localized and itinerant electrons, an indirect Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction between the local moments can be established in topological semimetals, which is mediated by the (partially) spin-polarized Dirac/Weyl fermions [50; 51; 52]. In a noncentrosymmetric crystal, the antisymmetric SOC can further modify the form of such RKKY interaction, giving rise to chiral spin textures [51]; this scenario explains the origin of the complex magnetic structure in NdAlSi [14]. The experimental evidence for strong local-itinerant exchange coupling presented here further verifies the RKKY mechanism and thus helps us understand how the rich magnetic orderings emerge in topological materials.
In summary, we have presented SdH oscillation measurements in different magnetic regimes in Weyl semimetals NdAlSi and CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\). The SdH frequencies reveal \(T\)-dependent FS changes that rely on the magnetic configurations: such changes are notable in both the canted u-d-d state and the PM state yet disappear in the high-\(H\) FIP state in NdAlSi, whereas they only show up in the FM state with \(H\parallel c\) in CeAlSi\({}_{0.8}\)Ge\({}_{0.2}\). These phenomena can be essentially understood as outcomes of the exchange interactions between the Weyl fermions and the rare-earth 4\(f\) local moments, which can persist into the PM state in the presence of finite 4\(f\) spin polarization. Our observations of exchange-interaction-induced FS modifications potentially open up a route for realizing manipulation of topological orders in magnetic topological materials.
We are grateful for the assistance of Chuanying Xi and Yong Zhang in high magnetic field experiments. We acknowledge insightful discussions with Aifeng Wang, Tao Wu, Jianjun Ying and Zhenyu Wang. This work was supported by the National Natural Science Foundation of China (Grants No. 12274390, No. 11888101 and No. 12222402), the Fundamental Research Funds for
the Central Universities (WK3510000014), the Strategic Priority Research Program of Chinese Academy of Sciences (XDB25000000), the Innovation Program for Quantum Science and Technology (2021ZD0302802) and Anhui Initiative in Quantum Information Technologies (AHY160000). J.L.Z. was supported by the Excellence Program of Hefei Science Center CAS 2021HSC-UE011. Z.X. acknowledges the USTC startup fund. R. W. acknowledges support by the Beijing National Laboratory for Condensed Matter Physics.
バンド構造と磁性の結合は、複雑なフェルミ面の変化を引き起こす可能性があります。ここでは、二種類の希土類磁性Weyl半金属NdAlSiおよびCeAlSi$_{0.8}$Ge$_{0.2}$におけるShubnikov-de Haas (SdH) 効果の包括的な研究を報告します。その結果、トポロジカルに非自明なフェルミ面の温度変化は磁気配置に強く依存することが示されました。NdAlSiでは、SdH周波数はカイラルなスピン構造を持つ磁気秩序状態および常磁性状態のいずれでも温度とともに変化しますが、高磁場の完全分極状態では温度に依存しなくなります。CeAlSi$_{0.8}$Ge$_{0.2}$では、SdH周波数は磁場を$c$軸方向に印加した強磁性状態でのみ温度依存性を示します。第一原理計算は、フェルミ面形状の顕著な温度・磁気配置依存性が、伝導電子と局在磁気モーメントの間の強い交換結合に起因することを示唆しています。 |
2309.16180 | A More General Theory of Diagnosis from First Principles | Model-based diagnosis has been an active research topic in different
communities including artificial intelligence, formal methods, and control.
This has led to a set of disparate approaches addressing different classes of
systems and seeking different forms of diagnoses. In this paper, we resolve
such disparities by generalising Reiter's theory to be agnostic to the types of
systems and diagnoses considered. This more general theory of diagnosis from
first principles defines the minimal diagnosis as the set of preferred
diagnosis candidates in a search space of hypotheses. Computing the minimal
diagnosis is achieved by exploring the space of diagnosis hypotheses, testing
sets of hypotheses for consistency with the system's model and the observation,
and generating conflicts that rule out successors and other portions of the
search space. Under relatively mild assumptions, our algorithms correctly
compute the set of preferred diagnosis candidates. The main difficulty here is
that the search space is no longer a powerset as in Reiter's theory, and that,
as consequence, many of the implicit properties (such as finiteness of the
search space) no longer hold. The notion of conflict also needs to be
generalised and we present such a more general notion. We present two
implementations of these algorithms, using test solvers based on satisfiability
and heuristic search, respectively, which we evaluate on instances from two
real world discrete event problems. Despite the greater generality of our
theory, these implementations surpass the special purpose algorithms designed
for discrete event systems, and enable solving instances that were out of reach
of existing diagnosis approaches. | Alban Grastien, Patrik Haslum, Sylvie Thiébaux | 2023-09-28T05:47:52 | http://arxiv.org/abs/2309.16180v1 | # A More General Theory of Diagnosis from First Principles
###### Abstract
Model-based diagnosis has been an active research topic in different communities including artificial intelligence, formal methods, and control. This has led to a set of disparate approaches addressing different classes of systems and seeking different forms of diagnoses. For instance Reiter's "Theory of Diagnosis from First Principles" primarily targets static systems, considers that diagnoses are minimal sets of faults consistent with the system's model and the observation, and efficiently explores the powerset of faults by means of simple consistency tests. In contrast, diagnosis approaches to discrete event dynamic systems, pioneered by Sampath, Zanella, and others, traditionally reconstruct all system traces consistent with the observation, either explicitly or through a precompiled structure. In this paper, we resolve such disparities by generalising Reiter's theory to be agnostic to the types of systems and diagnoses considered. This more general theory of diagnosis from first principles defines the minimal diagnosis as the set of preferred diagnosis candidates in a search space of hypotheses. Computing the minimal diagnosis is achieved by exploring the space of diagnosis hypotheses, testing sets of hypotheses for consistency with the system's model and the observation, and generating conflicts that rule out successors and other portions of the search space. Under relatively mild assumptions, our algorithms correctly compute the set of preferred diagnosis candidates. The main difficulty here is that the search space is no longer a powerset as in Reiter's theory, and that, as consequence, many of the implicit properties (such as finiteness of the search space) no longer hold. The notion of conflict also needs to be generalised and we present such a more general notion. We present two implementations of these algorithms, using test solvers based on satisfiability and heuristic search, respectively, which we evaluate on instances from two real world discrete event problems. Despite the greater generality of our theory, these implementations surpass the special purpose algorithms designed for discrete event systems, and enable solving instances that were out of reach of existing diagnosis approaches.
## 1 Introduction
Discrete event systems (Cassandras & Lafortune, 1999) (DESs) are models of dynamic systems that represent states and events in a discrete manner. DESs are a natural model of many kinds of event-based systems, such as, for example, protocols (Holzmann, 1991) or business processes (van der Aalst, 2013), and also often form a natural abstraction of hybrid discrete-continuous dynamical systems. The diagnosis problem, in the context of dynamical systems, is to infer from a system model and partial observation of events emitted by the system some diagnostically relevant properties of its current state or behaviour - for example, whether any abnormal events have occurred, and if so, which ones, how many times and in what order?
Since the seminal work of Sampath et al. (Sampath, Sengupta, Lafortune, Sinnamohideen, & Teneketzis, 1995), DESs diagnosis methods have examined all sequences of events that represent possible system behaviours under the system model and the observation, and have extracted the diagnostic information from those sequences.
This contrasts with the approach developed by the Artificial Intelligence community for static systems: known as "diagnosis from first principles" (i.e., model-based diagnosis, as opposed to expert-based diagnosis) the approach pioneered by de Kleer, Reiter and Williams (Reiter, 1987; de Kleer & Williams, 1987) uses a theorem prover to test the consistency of diagnostic hypotheses with the model and the observation. By working directly at the level of hypotheses relevant to the diagnosis, this approach avoids enumerating all explanations of the observation (which are, in general, exponentially many).
When trying to understand why such a "test-based" diagnosis approach for DESs did not eventuate, two main reasons come to mind. The first is the absence of an efficient "theorem prover" for checking the consistency of a set of hypotheses and an observed DES, which is a problem akin to planning or model checking. However, there has been considerable work in these areas in the last decades so that available tools can now be used for diagnosis (cf., (Grastien, Anbulagan, Rintanen, & Kelareva, 2007; Sohrabi, Baier, & McIlraith, 2010; Haslum & Grastien, 2011)). The second reason is that the diagnose algorithm proposed by Reiter (Reiter, 1987) was designed to diagnose circuits, and therefore returns only a set of faults. DESs, in contrast, can experience multiple occurrences of the same fault event, and the diagnoser may be required to determine the number of repetitions of faults, or order in which they took place. Reiter's algorithm cannot be applied in this setting and extending it in this direction raises major issues. Our main contribution in this paper is to resolve these issues and generalise the test-based diagnosis framework to a larger class of diagnostic hypothesis spaces, appropriate to DESs and other models of dynamical systems.
We present a general definition of model-based diagnosis, independent of the form of the system model and the form of diagnosis required. This definition encompasses the existing theory of diagnosis of circuits as a special case, but also applies to dynamic system models, such as DESs, and beyond. As a result, DES diagnosis problems can be solved using the same techniques as for circuit diagnosis.
More precisely, we formulate the diagnosis problem as follows: given a set of _hypotheses_ (abstractions of the system behaviour that discriminate only according to aspects that are relevant to the diagnosis) and a preference relation over the hypotheses, the diagnosis is defined as the set of minimal (most-preferred) _diagnosis candidates_, where a candidate is a hypothesis that is consistent with the model and the observation. _Diagnosis_ is therefore the problem of exploring the _hypothesis space_ to identify these minimal diagnosis candidates. We present different _exploration strategies_ that require only an oracle capable of testing whether a given set of hypotheses intersects the diagnosis. This test solver plays a role similar to the theorem prover in Reiter's algorithm. Importantly, we show that the test solver does not have to be given an explicit, enumerated set of hypotheses. Instead, the set of hypotheses to test is implicitly represented as those that satisfy a set of _diagnostic properties_; the test solver's task is then to find a candidate that satisfies these properties. The implicit representation of hypothesis sets allows the diagnosis algorithm to test infinite sets of hypotheses that can be represented by a finite set of properties.
The exploration strategies we propose fall into two classes: The "preferred-first" strategies start by testing the most preferred hypotheses, until candidates are found; these candidates are then minimal. The "preferred-last" strategies generate and refine candidates until their minimality
is proven. For each exploration strategy, we determine the conditions on the hypothesis space that are necessary to ensure termination of the diagnosis algorithm.
Reiter's diagnose algorithm follows a preferred-first strategy, but additionally uses _conflicts_ to improve its efficiency. Conflicts enable the test solver to provide more information when the outcome of a test is negative. We generalise this idea and incorporate it into our preferred-first strategy. In our framework, a conflict is a chunk of the hypothesis space, which may be larger than the set of hypotheses tested, that is proven to contain no candidate. We show that they can be represented as sets of diagnostic properties that are inconsistent with the observed system. Because at least one of these properties must be negated, conflicts focus the exploration of the hypothesis space and thus accelerate the search for a diagnosis.
This work was motivated by our experience with real-world DES diagnosis problems occurring in a number of application domains, including in particular power systems alarm processing and business process conformance checking, which we describe below. Existing model-based diagnosis approaches were unable to cope with the complexity of these problems. We use these problems to benchmark various instances of our approach, differing in the hypotheses space, the strategy for exploring it, and the test solver implementation chosen, against other DES diagnosis methods. We show that our approach, using a test solver based on SAT, is able to solve most of these problems, significantly outperforming earlier state-of-the-art algorithms. We also obtain good performance with a test solver based on heuristic search.
The present article builds on our earlier conference publications (Grastien, Haslum, & Thiebaux, 2011; Grastien, Haslum, & Thiebaux, 2012; Grastien, 2014). The first article formulates diagnosis as a search problem on the hypothesis space and introduces the idea of a search strategy; the second one explains how conflicts can be exploited for the specific case of diagnosis of discrete event systems; and the last one shows how the theory can be applied to hybrid systems. Compared to these original works, we now present a unified theory motivated by a number of real world examples. This theory is more thoroughly developed, complete with proofs, and comprehensively evaluated wrt other algorithms.
This paper is organised as follows: In the next section, we provide some motivating examples for the present work. Section 3 gives a definition of the diagnosis problem that is independent from the modeling framework and the hypothesis space. Section 4 introduces the key concept of representation of sets of hypotheses by sets of properties and explains how questions relevant to diagnosis are formulated as diagnosis tests. Section 5 demonstrates how these definitions are instantiated for two different modeling frameworks: diagnosis of circuits and diagnosis of discrete event systems. Section 6 presents different strategies for exploring the hypothesis space. In Section 7, we discuss the relation to previous work, in particular that which our theory generalises. Section 8 describes two implementations of test solvers for discrete event systems diagnosis, and Section 9 the results of our experiments with these implementations. Section 10 concludes.
## 2 Motivating Examples
In this section, we briefly present examples of diagnosis problems for discrete event and hybrid dynamical systems. Each one of these problems requires a more expressive concept of diagnosis than the classical "set of faults" definition, and thus serves to motivate our general framing of the problem and our generalisation of test-based diagnosis algorithms.
### Conformance Checking and Data Cleaning
Deciding if a record of events matches or does not match a specified process, or obeys or does not obey a set of rules, is a problem that arises in several contexts. It is known as _conformance_ or _compliance checking_ in the Business Process Modelling (BPM) literature (van der Aalst, 2013; Hashmi, Governatori, Lam, & Wynn, 2018). Although there are many BPM formalisms, most of them model discrete event systems. Conformance checking may be just deciding whether the recorded event trace matches the process specification (in diagnosis terms, whether the system's execution is normal or abnormal), but often one seeks to find a best _trace alignment_(De Giacomo, Maggi, Marella, & Sardina, 2016): a set of insertions (events missing from the trace), deletions (spurious events in the trace) and substitutions (erroneous events in the trace) that together are sufficient to make the event trace match the process. In diagnosis terms, these adjustments to the trace are fault events, and a best trace alignment corresponds to a minimal diagnosis candidate. Note that in such a candidate, the same fault event may occur multiple times, for example if the trace has multiple spurious events of the same type. Thus, the space of diagnosis hypotheses can not be modelled simply as sets of fault events. The problem of process model adaptation, which examines event traces corresponding to multiple executions of the process and seeks a minimal modification of the process specification that suffices to make all traces match, can likewise be viewed as an instance of DES diagnosis.
Another example of diagnosis of recorded event traces occurs in longitudinal, or temporal, databases, where each record denotes a change in the status of some entity occurring at some time. The ordered set of records relating to one entity forms a timeline, or event trace, of that entity. In the case study described by Boselli et al. (Boselli, Cesarini, Mercorio, & Mezzanzanica, 2014), each entity is a person, and each record pertains to a change in their employment status: starting work for a new employer, ceasing work, extending a fixed-term position or converting a current job between part-time and full-time, or between fixed-term and continuing. Entity timelines are typically subject to integrity constraints, rules that prescribe events that cannot or must happen. For example, a person must have started work with an employer before that job can cease, or be converted or extended; a person can only hold one full-time job at a time, and thus cannot start a part-time job if already on a full-time position, or start a full-time job if already working at all, but a person can start a new part-time job if they are already working another part time.
However, errors and omissions in data entry mean that entity timelines often do not satisfy the database rules. Rather than rejecting such records, the problem of _data cleaning_ is to find a minimal set of corrections that will restore consistency to the timeline (Dallachiesa, Ebaid, Eldawy, Elmagarmid, Ilyas, Ouzzani, & Tang, 2013; Geerts, Mecca, Papotti, & Santorino, 2013; Boselli et al., 2014). For example, consider the following timeline, from Boselli et al.'s data set:
| Date | Worker | Event type | Full/Part | Term/Cont. | Employer |
| --- | --- | --- | --- | --- | --- |
| \(d_{1}\) | 1370 | start | full-time | fixed-term | 8274 |
| \(d_{2}\) | 1370 | cease | full-time | fixed-term | 8274 |
| \(d_{3}\) | _1370_ | _convert_ | _full-time_ | _fixed-term_ | _8274_ |
| \(d_{4}\) | _1370_ | _cease_ | _full-time_ | _fixed-term_ | _8274_ |
| \(d_{5}\) | 1370 | start | full-time | fixed-term | 36638 |

The records on dates \(d_{3}\) and \(d_{4}\) violate the integrity constraints, because they record a conversion event for a position that has already ceased, and a double cessation record for the same position.
Like trace alignment, these corrections may be insertion of missing records, deletion of spurious records, or changes to individual fields of a record, including changes to the timing of records, and
thus the order of events in the timeline, and as in that case each correction can occur multiple times in a timeline. Thus, viewed as a DES diagnosis problem, a minimal diagnosis candidate is a multiset of fault events. In the example above, the minimal diagnosis candidates include replacing the conversion on \(d_{3}\) with a "start" event (i.e., the person starting work again for the same employer), or deleting the cessation event on \(d_{2}\) and changing either full-time or fixed-term status in the records on \(d_{3}\) and \(d_{4}\). Because there is no way to know with certainty which diagnosis candidate corresponds to the true sequence of events, a data cleaning diagnoser needs to return the complete set of minimal fault event multisets, for a human to decide which corrections to apply or whether to investigate further.
Note that when the diagnostic hypotheses are multisets (or sequences) of faults rather than simple sets, the hypothesis space is infinite, and even the set of candidates or the diagnosis may be infinite. Close attention must therefore be given to avoiding non-termination of the diagnosis algorithm. In this paper, we present a number of algorithms that are able to compute the complete diagnosis, also in infinite hypothesis spaces, along with sufficient assumptions to guarantee their termination (Section 6).
### Alarm Processing
In large complex systems, such as power grids or telecommunication networks, faults can produce non-trivial effects. Alarms are time-stamped system-generated messages intended to aid operators in diagnosing fault conditions and take timely corrective actions. However, system complexity and the local nature of alarm conditions mean that when a fault occurs, its secondary effects often result in "alarm cascades" which obscure rather than inform about the root cause. This problem has been recognised for some time (Prince, Wollenberg, & Bertagnolli, 1989), and there have been several attempts to use AI techniques to ease the interpretation of alarms through filtering, prioritising and explaining them (Cordier, Krivine, Laborie, & Thiebaux, 1998; Cordier & Dousson, 2000; Taisne, 2006; Larsson, 2009; Bauer, Botea, Grastien, Haslum, & Rintanen, 2011). Framing the problem as dynamical system diagnosis treating unexplained alarms as fault events means that a diagnoser can identify secondary alarms, and thus focus attention on root causes (Bauer et al., 2011; Haslum & Grastien, 2011).
Alarm logs have an important temporal dimension. For example, in a power network, the event of a circuit breaker opening can explain a following voltage drop alarm on the power line protected by the breaker, if the breaker opening isolates the line. This implies that the _sequence_ of fault (unexplained) events in the diagnosis also matters: An unexplained circuit breaker opening followed by an unexplained voltage drop does not carry the same meaning as the same two unexplained alarms in the opposite order (the former implies that it could not be inferred from the model and observation that the breaker opening was sufficient to isolate the line). Thus, the diagnostic hypotheses in this setting are sequences of fault events, rather than sets.
Sequences of faults pose particular problems for classical diagnosis algorithms. Decomposition, for instance, is no longer as easy: In the simple case when diagnostic hypotheses are sets of faults, inferring independently that faults \(f_{1}\) and \(f_{2}\) are present implies that any candidate fault set must contain \(\{f_{1},f_{2}\}\) as a subset. However, when diagnostic hypotheses are fault sequences, inferring the presence of fault events \(f_{1}\) and \(f_{2}\) does not distinguish between sequences in which \(f_{1}\) occurs before \(f_{2}\) and those with the two events in opposite order. Existing conflict-directed algorithms for diagnosis over fault-set hypotheses are based on such a decomposition. We show in this paper how the notion of conflict can be generalised to any type of diagnostic hypothesis space. This is done
by making the concept of _properties_ of a hypothesis explicit, and defining a set of properties that is sufficient to represent every relevant hypothesis set, for any hypothesis space (Section 4.2).
### Diagnosis of Hybrid Systems
Hybrid systems are a class of models of dynamic systems that exhibit both discrete mode changes and continuous evolution. Hybrid systems naturally model physical processes under discrete control, such as electrical systems (Kurtoglu, Narasimhan, Poll, Garcia, Kuhn, de Kleer, van Gemund, & Feldman, 2009; Fox, Long, & Magazzeni, 2012) and heating, ventilation, and air conditioning (HVAC) systems (Behrens & Provan, 2010; Ono, Graybill, & Williams, 2012; Lim, van den Briel, Thiebaux, Backhaus, & Bent, 2015). Diagnosis of hybrid systems can exhibit all the complexities of discrete event systems, and more. Consider, for example, the possible fault modes of sensors in the Adapt benchmark system (Kurtoglu et al., 2009): When operating normally, a sensor's output is the real-valued sensed value plus a bounded random noise. However, the sensor can fail by becoming stuck at a fixed reading, by returning a value at a fixed offset from the true reading, or by becoming subject to drift, which is an offset value that increases over time. At a discrete abstraction level, this is simply four possible fault modes, but a fully precise diagnosis should also identify the offset constant or drift rate for those fault modes.
The consistency-based approach can be applied to diagnosis of hybrid systems (Grastien, 2014), and has some advantages over approaches that require a predictive model to simulate the system, which are unable to handle unspecified or unpredictable behaviour modes (Hofbaur & Williams, 2004). However, as we will show in this paper, there are limitations to what can be guaranteed. If the diagnosis is required to estimate real-valued fault parameters, such as the offset or drift rate of a faulty sensor, the hypothesis space is _dense_, in which case a finite minimal diagnosis may not exist.
## 3 The Diagnosis Problem
In this section, we first present a generic definition of the diagnosis problem, based on the notion of hypothesis space. The hypothesis space is motivated by the fact that different diagnostic environments (static systems and dynamic systems, in particular) require different types of diagnoses. We then illustrate the generic definition with different types of hypothesis spaces and discuss their relative expressiveness. Finally, we discuss a number of properties of these spaces that will influence the type of strategy that may be used to explore the space.
### Diagnosis Definition
We consider a system with a model _Mod_, i.e., a description of all behaviours the system can exhibit. We assume this model is "complete", by which we mean that if a behaviour \(\sigma\) is possible in the system, then this behaviour is allowed by the model; we then write \(\sigma\in\textit{Mod}\). A (partial) observation \(o\) of the system is a predicate on behaviours: \(o(\sigma)\) is true if behaviour \(\sigma\) is consistent with what has been observed. We make no assumptions about how _Mod_ and \(o\) are represented other than that they are of a form that the test solver (that is, the theorem prover, model checker, etc, that will be used to reason about the system) can work with. Typically, they will be given in some compact form (such as a set of logical constraints, a factored representation of a discrete event system, or similar).
Given the model \(\mathit{Mod}\) and an observation \(\mathit{o}\), the purpose of diagnosis is not to retrieve the exact behaviour (or set of possible behaviours), but to infer the diagnostic information associated with it. For instance, we may want to identify which faults have occurred in the system and in which order. The diagnostic abstraction of a behaviour is called a "hypothesis" and we write \(\mathit{hypo}(\sigma)\) for the (single) hypothesis associated with behaviour \(\sigma\). We write \(\mathbb{H}\) for the hypothesis space and we assume that hypotheses are mutually exclusive (i.e., \(\mathit{hypo}:\mathit{Mod}\rightarrow\mathbb{H}\) is a function). Because the system is only partially observable and may not be diagnosable1, it is generally not possible to precisely retrieve the hypothesis \(\mathit{hypo}(\sigma)\). Instead, the _diagnosis_ is the collection of hypotheses that are consistent (compatible) with both the model and the observation; such hypotheses are called "diagnosis candidates". From now on, we will use \(\delta\) to represent a candidate, whilst \(h\) will refer to a hypothesis that may not be a candidate.
Footnote 1: Diagnosability is the property that a fault will always be precisely identified; there is generally a correlation between diagnosability and the uniqueness of the diagnosis candidate (Grastici & Torta, 2011).
**Definition 1** (Diagnosis): _Given a model \(\mathit{Mod}\), an observation \(\mathit{o}\), and a hypothesis space \(\mathbb{H}\), the diagnosis is the subset \(\Delta(\mathit{Mod},\mathit{o},\mathbb{H})\) of hypotheses supported by at least one behaviour consistent with the observation:_
\[\Delta(\mathit{Mod},\mathit{o},\mathbb{H})=\{\delta\in\mathbb{H}\mid\exists \sigma\in\mathit{Mod}:\mathit{o}(\sigma)\wedge\mathit{hypo}(\sigma)=\delta\}. \tag{1}\]
Because it asks only for consistency between the candidate and the observation, this definition of diagnosis is weaker than that of an abductive diagnosis (Brusoni, Console, Terenziani, & Theseider Dupre, 1998), which requires each candidate to logically imply (part of) the observation.
To make the diagnosis more precise, it is common to impose a minimality condition. The hypothesis space is equipped with a partial order relation \(\preceq\) such that if \(\delta\preceq\delta^{\prime}\), then \(\delta\) is preferred to \(\delta^{\prime}\), meaning \(\delta^{\prime}\) may be removed from the diagnosis. Recall that a partial order relation is antisymmetric, i.e., \((h\preceq h^{\prime})\wedge(h^{\prime}\preceq h)\Rightarrow(h=h^{\prime})\). In the rest of the paper, we assume without loss of generality the existence of a unique most preferred hypothesis \(h_{0}\) of \(\mathbb{H}\). This will normally correspond to the nominal system behavior, but if necessary (e.g., if there were multiple such behaviors or even an infinite number of them), one can always take \(h_{0}\) to be a dummy hypothesis inconsistent with the system model.
We want to ignore the candidates that are not minimal with respect to \(\preceq\), where the subset of minimal elements of a set \(H\subseteq\mathbb{H}\) is defined as \(\min_{\prec}H=\{h\in H\mid\nexists h^{\prime}\in H.\ h^{\prime}\prec h\}\). We also want every ignored candidate to be supported by at least one minimal candidate. We then say that the minimal diagnosis _covers_ the diagnosis. Formally, given two subsets \(H\) and \(H^{\prime}\) of \(\mathbb{H}\), \(H\) covers \(H^{\prime}\) if
\[\forall h^{\prime}\in H^{\prime}.\ \exists h\in H.\ h\preceq h^{\prime}.\]
The definition of the minimal diagnosis is as follows:
**Definition 2** (Minimal Diagnosis): _Given a model \(\mathit{Mod}\), an observation \(\mathit{o}\), and a hypothesis space \(\mathbb{H}\), the subset \(\Delta_{\preceq}(\mathit{Mod},\mathit{o},\mathbb{H})\) of candidates in \(\Delta(\mathit{Mod},\mathit{o},\mathbb{H})\) that are minimal with respect to \(\preceq\) is the minimal diagnosis if it covers the diagnosis._
In most diagnosis environments, it will be the case that a minimal diagnosis always exists. However, in Subsection 3.3 we show an example of where it does not. To simplify notation, we will in the rest of the paper omit the parameters from the diagnosis and the minimal diagnosis, i.e., we will simply write \(\Delta\) and \(\Delta_{\preceq}\).
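As a small illustration of Definition 2, the helper below extracts the minimal elements of a finite, explicitly enumerated set of candidates. The predicate `preceq(a, b)`, which returns True iff `a` is preferred or equal to `b`, is an assumed interface; the subset preference used in the example corresponds to the set hypothesis space introduced in the next subsection.

```python
def minimal_elements(hypotheses, preceq):
    """Keep the hypotheses that no other hypothesis is strictly preferred to."""
    hs = list(hypotheses)
    return [h for h in hs
            if not any(preceq(g, h) and not preceq(h, g) for g in hs)]

# Example: candidates as frozensets of faults, preference = subset inclusion.
delta = [frozenset(), frozenset({"f1"}), frozenset({"f1", "f2"})]
print(minimal_elements(delta, lambda a, b: a <= b))   # [frozenset()]
```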
### Examples of Hypothesis Spaces
The simplest hypothesis space is the Binary Hypothesis Space (BHS), where each behaviour is classified only as either nominal or faulty. This leads to a fault detection problem rather than a diagnosis one. The preferred hypothesis is generally the nominal hypothesis.
The most commonly used hypothesis space is the Set Hypothesis Space (SHS). Given a set \(F\) of faults, a hypothesis is the subset \(h\subseteq F\) of faults that appear in the behaviour. Preference is given to hypotheses that contain a subset of faults: \(h\preceq h^{\prime}\Leftrightarrow h\subseteq h^{\prime}\).
Another popular hypothesis space is the Minimal Cardinality Set Hypothesis Space (MC-SHS). Hypotheses are defined similarly to SHS, as the set of faults that affect the system. The preference relation, however, is defined through the number of faults, with \(h\) preferred over \(h^{\prime}\) if it has the smaller cardinality (number of faults in the hypothesis):
\[h\preceq h^{\prime}\Leftrightarrow\bigg{(}h=h^{\prime}\ \vee\ |h|<|h^{\prime}|\bigg{)}.\]
For the case where the probability of faults varies, each fault \(f\) is associated with an a-priori probability \(Pr(f)\in(0,0.5)\), and the a-priori probability of hypothesis \(h\) is then \(Pr(h)=\Pi_{f\in h}\ Pr(f)\times\Pi_{f\in F\setminus h}\ (1-Pr(f))\). The preference relation of the A-priori Probability Set Hypothesis Space (AP-SHS) then maximises the a-priori probability:
\[h\preceq h^{\prime}\Leftrightarrow\bigg{(}h=h^{\prime}\ \vee\ Pr(h)>Pr(h^{\prime})\bigg{)}.\]
Bylander et al. proposed more elaborate definitions based on qualitative plausibilities of the hypotheses (Bylander, Allemang, Tanner, & Josephson, 1991).
Our theory does not handle diagnosis problems in which probability is maximised a-posteriori, i.e., after the likelihood of the hypothesis given the observations has been factored in (Lucas, 2001).
In dynamic systems, faults may occur several times. The Multiset Hypothesis Space (MHS) associates each fault with the number of occurrences of this fault: \(h:F\rightarrow\mathbf{N}\). A hypothesis is preferred to another if it has no more occurrences of any fault: \(h\preceq h^{\prime}\Leftrightarrow(\forall f\in F,\ h(f)\leq h^{\prime}(f))\).
If we wish to also distinguish the order of occurrences of faults, a hypothesis in the Sequence Hypothesis Space (SqHS) is a (possibly empty) sequence of faults: \(h\in F^{\star}\). A hypothesis is preferred to another if the former is a subsequence of the latter. Formally, if \(h=[f_{1},\ldots,f_{k}]\) and \(h^{\prime}=[f_{1}^{\prime},\ldots,f_{n}^{\prime}]\), then \(h\preceq h^{\prime}\Leftrightarrow\exists g:\{1,\ldots,k\}\rightarrow\{1, \ldots,n\}:\ (\forall i\in\{1,\ldots,k-1\},\ g(i)<g(i+1))\)\(\wedge\ (\forall i\in\{1,\ldots,k\},\ f_{i}=f_{g(i)}^{\prime})\). For instance, hypothesis \([a,b]\) is preferable to hypothesis \([c,a,d,b]\).
We can also strengthen the preference order to treat faults differently, for instance, to reflect their relative likelihood. As an example, we consider the Ordered Multiset Hypothesis Space (OMHS). The hypotheses in this space are the same as in MHS, i.e., mappings from each fault to the number of times it occurred, but we also have an ordering of the faults, and any number of occurrences of a fault \(f^{\prime}\) is preferred to a single occurrence of a fault \(f\prec f^{\prime}\). Formally, \(h\preceq h^{\prime}\Leftrightarrow\forall f^{\prime}\in F,\ h(f^{\prime})>h^{ \prime}(f^{\prime})\Rightarrow\exists f\in F:(f\prec f^{\prime})\wedge(h(f)<h^ {\prime}(f))\). This corresponds to fault \(f^{\prime}\) being infinitely more likely than fault \(f\).
Finally, we consider faults that are represented by a continuous value. This can be used to model, for example, the situation where the fault is a drift in a model parameter. We assume a single continuous-valued fault. This is a very simple case, but it will be sufficient for illustrative purposes. In the Continuous Hypothesis Space (CHS), a hypothesis is a positive real value: \(h\in\mathbf{R}^{+}\). Preference is given to smaller values: \(h\preceq h^{\prime}\Leftrightarrow h\leq h^{\prime}\).
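The preference relations above translate directly into code. The sketch below fixes illustrative representations (SHS and MC-SHS hypotheses as frozensets of fault names, MHS hypotheses as fault-to-count dicts, SqHS hypotheses as tuples of fault events); these representations are assumptions, and each predicate returns True iff its first argument is preferred or equal to its second.

```python
def preceq_shs(h, h2):
    return h <= h2                          # subset of faults

def preceq_mcshs(h, h2):
    return h == h2 or len(h) < len(h2)      # strictly fewer faults, or equal

def preceq_mhs(h, h2):
    faults = set(h) | set(h2)
    return all(h.get(f, 0) <= h2.get(f, 0) for f in faults)

def preceq_sqhs(h, h2):
    # h is preferred iff it is an order-preserving subsequence of h2
    it = iter(h2)
    return all(f in it for f in h)

print(preceq_sqhs(("a", "b"), ("c", "a", "d", "b")))   # True, as in the text
print(preceq_mhs({"a": 1}, {"a": 2, "b": 1}))          # True
```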
### Properties of Hypothesis Spaces
In this section, we define the terminology related to hypothesis spaces which will be used to define our framework and to formulate termination conditions for different exploration strategies.
**Relations Between Hypotheses** If \(h\preceq h^{\prime}\), we say that \(h\) is an _ancestor_ of \(h^{\prime}\) and that \(h^{\prime}\) is a _descendant_ of \(h\) (note that, since \(\preceq\) is non-strict, \(h\) is an ancestor and a descendant of itself). If \(h\prec h^{\prime}\) and there is no \(h^{\prime\prime}\) such that \(h\prec h^{\prime\prime}\prec h^{\prime}\), then we say that \(h^{\prime}\) is a _child_ of \(h\) and \(h\) is a _parent_ of \(h^{\prime}\).
**Finiteness** The first condition we consider is whether the hypothesis space is finite. Infinite hypothesis spaces must be dealt with more cautiously, as they may prevent the diagnosis algorithm from terminating. In a finite space, any systematic exploration strategy (i.e., one that does not revisit a previously rejected hypothesis) will terminate. BHS, SHS, MC-SHS, and AP-SHS are finite.
**Well Partial Orderedness** A binary relation on a set \(\mathbb{S}\) is a _well partial order_ iff it is a (non-strict) partial order and every non-empty subset of \(\mathbb{S}\) has a finite and non-empty set of minimal elements according to the order (e.g., Kruskal, 1972). That is,
\[\forall S\subseteq\mathbb{S}.\quad S\neq\emptyset\ \Rightarrow\ 0<|\min_{\preceq}(S)|<\infty.\]
If the preference order \(\preceq\) is a well partial order on \(\mathbb{H}\), we say that \(\mathbb{H}\) is _well partially ordered_ (by \(\preceq\)). A well partial order is always well-founded, meaning it has no infinite descending chains.
The continuous hypothesis space given in the previous section (CHS) is not well partially ordered. To see this, consider the set of hypotheses that correspond to a strictly positive value, i.e., \(S=\{h\in\mathbb{H}_{\mathrm{CHS}}\mid h>0\}\). This set has no minimal value, which means that \(\min_{\preceq}(S)\) is empty. All the other hypothesis spaces discussed in the previous section are well partially ordered. For the non-trivial cases of MHS and SqHS, this follows from the work of Nash-Williams (Nash-Williams, 1963) on well-quasi-ordered finite trees.
Well partially ordered hypothesis spaces have several useful properties: First, that the minimal diagnosis always exists and is finite (this is shown in Theorem 1 below). Second, that the set of parents and the set of children of any given hypothesis are both finite. This follows from the fact that all parents of a hypothesis are themselves unordered; thus, they are all minimal in the set of the hypothesis' parents and, therefore, there cannot be infinitely many of them. The same is true of its children. Third, any strict descendant of a hypothesis is also a (possibly non-strict) descendant of some child of that hypothesis.
**Theorem 1**: _If the hypothesis space is well partially ordered, then the minimal diagnosis exists and is defined by:_
\[\Delta_{\preceq}=\min_{\preceq}(\Delta)=\{h\in\Delta\mid\forall h^{\prime}\in \Delta,\ h^{\prime}\preceq h\Rightarrow h=h^{\prime}\}. \tag{2}\]
_Furthermore, \(\Delta_{\preceq}\) is finite._
**Proof:** We must show that \(\min_{\preceq}(\Delta)\) satisfies the condition of Definition 2 which states that \(\min_{\preceq}(\Delta)\) must cover the diagnosis.
Assume that the diagnosis is not covered by \(\min_{\preceq}(\Delta)\). Let \(\delta_{1}\) be a diagnosis candidate that is not covered: \(\nexists\delta^{\prime}\in\min_{\preceq}(\Delta)\) such that \(\delta^{\prime}\preceq\delta_{1}\). Then, because \(\delta_{1}\not\in\min_{\preceq}(\Delta)\), there exists another preferable candidate \(\delta_{2}\prec\delta_{1}\), and \(\delta_{2}\) is itself not covered (any element of \(\min_{\preceq}(\Delta)\) preferred to \(\delta_{2}\) would also be preferred to \(\delta_{1}\)). Applying the same reasoning, we end up with an infinite strictly decreasing sequence of hypotheses \(\delta_{1}\succ\delta_{2}\succ\ldots\), which contradicts the well partial orderedness of \(\preceq\). Finally, \(\min_{\preceq}(\Delta)\) is finite: this is trivial if \(\Delta\) is empty, and otherwise it follows directly from the definition of a well partial order. \(\Box\)
If the space is not well partially ordered, there is no such guarantee. For instance, in the CHS, if \(\Delta=\{h\in\mathbb{H}_{\text{CHS}}\mid h>0\}\), as in the example above, then \(\min_{\preceq}(\Delta)=\emptyset\) which does not satisfy the covering requirement of Definition 2. Thus, in this situation there exists no minimal diagnosis.
**Path, Depth and Distance** Finally, we define concepts that relate to a hypothesis' "position" in the hypothesis space, which we will use in Section 6 when proving termination of our diagnosis algorithms.
A _path_ from hypothesis \(h\) to \(h^{\prime}\) is a sequence of hypotheses \(h_{1}\prec\ldots\prec h_{k}\) such that \(h_{1}=h\) and \(h^{\prime}=h_{k}\). An _atomic path_ is a path \(h_{1}\prec\ldots\prec h_{k}\) such that each \(h_{i}\) is a parent of \(h_{i+1}\).
The _distance_ of hypothesis \(h\) (implicitly from \(h_{0}\)) is the minimal length of an atomic path from hypothesis \(h_{0}\) to hypothesis \(h\); if no such atomic path exists, hypothesis \(h\) is said to have an infinite distance. A hypothesis is said to be _finitely reachable_ if it has a finite distance.
The ordered multiset hypothesis space (OMHS) illustrates a situation with non-finitely reachable hypotheses. Assume two fault events, \(f_{1}\) and \(f_{2}\), where \(f_{1}\prec f_{2}\) (any number of occurrences of \(f_{2}\) is preferred to one occurrence of \(f_{1}\)), and consider hypothesis \(h=\{f_{1}\to 1,f_{2}\to 0\}\). Then \(h\) has no parent: indeed, all strict ancestors of \(h\) are hypotheses \(h_{i}\) with no occurrence of \(f_{1}\) and \(i\) occurrences of \(f_{2}\): \(h_{i}=\{f_{1}\to 0,f_{2}\to i\}\). Then for all \(i\) the property \(h_{i}\prec h_{i+1}\prec h\) holds, and \(h_{i}\) is not a parent of \(h\). Since \(h\) has no parent, no atomic path leads to \(h\).
The _depth_ of a hypothesis \(h\) is the maximal length of a path from \(h_{0}\) to \(h\). If there is no maximal length, the depth is said to be infinite.
The depth of a hypothesis is, by definition, larger than or equal to its distance, hence a hypothesis that is not finitely-reachable has an infinite depth. The converse may not hold however: there can be a finite atomic path \(h_{0}\prec h_{1}\prec h\) and, at the same time, an infinite number of paths \(h_{0}\prec h_{1}^{\prime}\prec\ldots\prec h_{k}^{\prime}\prec h\) for any \(k\). To find an example we have to look at some even more fine-grained preference order. For example, with reference to Figure 1, consider a system consisting of a component monitored by a sensor. The component can exhibit any number of temporary failures (represented by a natural number), while the sensor has two modes: nominal (\(N\)) and faulty (\(F\)). It is assumed that the component and the sensor both experiencing faults is infinitely more unlikely than any number of faults on the component. Consequently, \(h_{0}=\langle 0,N\rangle\) is the unique preferred hypothesis; \(h_{1}=\langle 0,F\rangle\) is a child of \(h_{0}\) (any \(h_{i}^{\prime}=\langle i,N\rangle\), \(i\geq 1\), is incomparable to \(h_{1}\)); \(h=\langle 1,F\rangle\) is a child of \(h_{1}\) (there is no hypothesis \(h^{\prime}\) such that \(h_{1}\prec h^{\prime}\prec h\)) hence \(h\)'s distance is \(2\) and \(h\) is finitely-reachable. On the other hand, we have \(h_{0}\prec h_{1}^{\prime}\prec h_{2}^{\prime}\prec\ldots\prec h\), i.e., \(h\) is infinitely deep.
### Abstraction of Hypothesis Spaces
In the previous section, we hinted that the diagnosis in some hypothesis spaces is more informative than in others. We now formalise this notion.
A hypothesis space \(\mathbb{H}\) (together with its preference relation \(\preceq\) and its function \(\mathit{hypo}:\mathit{Mod}\rightarrow\mathbb{H}\)) is a _refinement_ of hypothesis space \(\mathbb{H}^{\prime}\) (together with \(\preceq^{\prime}\) and \(\mathit{hypo}^{\prime}\)), and conversely \(\mathbb{H}^{\prime}\) is an _abstraction_ of \(\mathbb{H}\), if each hypothesis of \(\mathbb{H}^{\prime}\) corresponds exactly to a subset of hypotheses in \(\mathbb{H}\). Formally, there exists a function \(\alpha:\mathbb{H}\rightarrow\mathbb{H}^{\prime}\) that projects each hypothesis of \(\mathbb{H}\) on \(\mathbb{H}^{\prime}\) such that
* \(\forall\sigma\in\mbox{\it Mod.\ hypo}^{\prime}(\sigma)=\alpha(\mbox{\it hypo}( \sigma))\), i.e., \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), and
* \(\forall\{h_{1},h_{2}\}\subseteq\mathbb{H}.\ h_{1}\preceq h_{2}\Rightarrow\alpha (h_{1})\preceq^{\prime}\alpha(h_{2})\), i.e., the preference relation is maintained by the abstraction.
The projection is extended naturally to a set of hypotheses, i.e., \(\alpha(H)=\{h^{\prime}\in\mathbb{H}^{\prime}\ |\ \exists h\in H.\ h^{\prime}= \alpha(h)\}\).
For instance, the set hypothesis space is an abstraction of the multiset hypothesis space (over the same set of faults). Given a multiset hypothesis, i.e., a mapping \(F\rightarrow\mathbf{N}\), the abstraction function \(\alpha\) returns the subset of faults that are associated with a strictly positive number: \(\alpha(h)=\{f\in F\ |\ h(f)>0\}\). Furthermore, the preference relation is maintained: if \(h_{1}\preceq_{\rm MHS}h_{2}\), then \(h_{1}(f)\leq h_{2}(f)\) for all \(f\); consequently, \(\alpha(h_{1})\subseteq\alpha(h_{2})\) and \(\alpha(h_{1})\preceq_{\rm SHS}\alpha(h_{2})\).
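The MHS-to-SHS projection just described can be written down directly. The representations below (dicts for multiset hypotheses, frozensets for set hypotheses) match the earlier sketches and are assumptions made for illustration, not part of the formal framework.

```python
def alpha_mhs_to_shs(h):
    """Project a multiset hypothesis onto the set hypothesis space."""
    return frozenset(f for f, n in h.items() if n > 0)

def preceq_mhs(h, h2):
    faults = set(h) | set(h2)
    return all(h.get(f, 0) <= h2.get(f, 0) for f in faults)

# The second abstraction condition: preference is preserved by the projection.
h1, h2 = {"a": 1, "b": 0}, {"a": 2, "b": 1}
assert preceq_mhs(h1, h2)
assert alpha_mhs_to_shs(h1) <= alpha_mhs_to_shs(h2)    # subset = SHS preference
print(alpha_mhs_to_shs(h2))                            # frozenset({'a', 'b'})
```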
An abstraction/refinement relationship between two hypothesis spaces implies that the diagnoses (and minimal diagnoses) in those two spaces are also related. This is shown by the following two lemmas. Theorem 2 below states all abstraction relations (summarised in Figure 2) between the hypothesis spaces for discrete event systems (BHS, SHS, MC-SHS, AP-SHS, OMHS, MHS, and SqHS) described in the previous subsection.
**Lemma 1**: _If \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), the projection on \(\mathbb{H}^{\prime}\) of the diagnosis in \(\mathbb{H}\) is the diagnosis in \(\mathbb{H}^{\prime}\): \(\alpha(\Delta)=\Delta^{\prime}\)._
**Proof:** We prove that \(\Delta^{\prime}\) is exactly the set of hypotheses \(\delta^{\prime}=\alpha(\delta)\) for some candidate \(\delta\in\Delta\).
\[\begin{array}{rcl}\delta\in\Delta&\Rightarrow&\exists\sigma\in\mbox{\it Mod.\ }o(\sigma)\wedge\mbox{\it hypo}(\sigma)=\delta\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod.\ }o(\sigma)\wedge\mbox{\it hypo}^{ \prime}(\sigma)=\alpha(\delta)\\ &\Rightarrow&\alpha(\delta)\in\Delta^{\prime}\end{array}\]
Figure 1: Hypothesis space illustrating that the depth can be infinite while the distance is finite. An unbroken line indicates a parent/child relationship; a dashed one, an ancestor one. The distance between \(\langle 0,N\rangle\) and \(\langle 1,F\rangle\) is two; the depth is infinite.
Conversely,
\[\begin{array}{rcl}\delta^{\prime}\in\Delta^{\prime}&\Rightarrow&\exists\sigma\in \mbox{\it Mod. }o(\sigma)\wedge\mbox{\it hypo}^{\prime}(\sigma)=\delta^{\prime}\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod. }\mbox{\it hypo}(\sigma)\in\Delta\wedge\mbox{\it hypo }^{\prime}(\sigma)=\delta^{\prime}\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod. }\mbox{\it hypo}(\sigma)\in\Delta\wedge\alpha(\mbox{\it hypo }(\sigma))=\delta^{\prime}\\ &\Rightarrow&\exists\delta\in\Delta.\ \alpha(\delta)=\delta^{\prime}\end{array}\]
\(\Box\)
**Lemma 2**: _If \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), the projection on \(\mathbb{H}^{\prime}\) of the minimal diagnosis in \(\mathbb{H}\) is contained in the diagnosis in \(\mathbb{H}^{\prime}\) and contains the minimal diagnosis in \(\mathbb{H}^{\prime}\): \(\Delta^{\prime}_{\preceq^{\prime}}\subseteq\alpha(\Delta_{\preceq})\subseteq \Delta^{\prime}\)._
**Proof:** Since \(\Delta_{\preceq}\subseteq\Delta\) then clearly \(\alpha(\Delta_{\preceq})\subseteq\alpha(\Delta)=\Delta^{\prime}\).
Assume now that there exists a minimal candidate \(\delta^{\prime}_{1}\) in \(\mathbb{H}^{\prime}\) such that \(\delta^{\prime}_{1}\in\Delta^{\prime}_{\preceq^{\prime}}\setminus\alpha( \Delta_{\preceq})\). Then, by Lemma 1, there exists a candidate \(\delta_{1}\in\Delta\) such that \(\alpha(\delta_{1})=\delta^{\prime}_{1}\). Furthermore, since \(\delta^{\prime}_{1}\not\in\alpha(\Delta_{\preceq})\), \(\delta_{1}\not\in\Delta_{\preceq}\). Therefore, there must exist another candidate \(\delta_{2}\in\Delta_{\preceq}\) such that i) \(\delta_{2}\preceq\delta_{1}\) (which is why \(\delta_{1}\not\in\Delta_{\preceq}\)) and ii) \(\alpha(\delta_{2})=\delta^{\prime}_{2}\neq\delta^{\prime}_{1}\) (since \(\delta^{\prime}_{2}\in\alpha(\Delta_{\preceq})\) but \(\delta^{\prime}_{1}\not\in\alpha(\Delta_{\preceq})\)). However, by Lemma 1, \(\delta^{\prime}_{2}\) is a candidate, and by the second condition on \(\alpha\), \(\delta^{\prime}_{2}\preceq\delta^{\prime}_{1}\). Hence, \(\delta^{\prime}_{1}\) is not a minimal candidate, which contradicts its existence. \(\Box\)
In other words, the projection of the minimal diagnosis \(\Delta_{\preceq}\) in \(\mathbb{H}\) is a subset of (possibly equal to) the diagnosis in the more abstract space \(\mathbb{H}^{\prime}\), whose minimisation is the minimal diagnosis in \(\mathbb{H}^{\prime}\).
Returning to the example of the set and multiset hypothesis spaces, given a minimal diagnosis \(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq}=\{\{a\to 2,b\to 0\},\{a\to 1,b\to 1\},\{a\to 0,b\to 2\}\}\) in the multiset hypothesis space, its projection on the set hypothesis space is \(\alpha(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq})=\{\{a\},\{a,b\},\{b\}\}\). The minimal diagnosis in the set hypothesis space is \(\Delta^{\mbox{\scriptsize{SHS}}}_{\preceq}=\{\{a\},\{b\}\}\), which is the set of minimal elements of \(\alpha(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq})\).
This relation between the (minimal) diagnosis in a hypothesis space \(\mathbb{H}\) and an abstraction \(\mathbb{H}^{\prime}\) of \(\mathbb{H}\) has implications for the complexity of computing it: Since the (minimal) diagnosis in \(\mathbb{H}^{\prime}\) can be computed from the (minimal) diagnosis in \(\mathbb{H}\), in time polynomial in the size of the diagnosis, we can say that diagnosing in a more refined hypothesis space is at least as hard as diagnosing in the more abstract space.
**Theorem 2**: _The set of abstraction relations between hypothesis spaces shown in Figure 2 is correct and complete._
Figure 2: Abstraction relations between the hypothesis spaces of DES presented in Subsection 3.2; \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\) iff there is a directed path from \(\mathbb{H}\) to \(\mathbb{H}^{\prime}\).

**Proof:** (Sketch) The abstraction function from *SHS to BHS is \(\alpha(h)=\mbox{nominal iff }h=\emptyset\). The abstraction function from SHS to MC-SHS is the identity function, and the preference relation of SHS is indeed maintained in MC-SHS: \(h\subseteq h^{\prime}\Rightarrow\left(h=h^{\prime}\ \vee\ |h|<|h^{\prime}|\right)\). Similarly, the preference between two SHS hypotheses is maintained when these hypotheses are interpreted as AP-SHS thanks to the fact that each fault has an a-priori probability below \(0.5\), which implies that removing a fault from a hypothesis increases its a-priori probability. The abstraction function from MHS to SHS has already been described. The abstraction function from SqHS to MHS counts the number of occurrences of each faulty event in the sequence. The abstraction function from OMHS to BHS is \(\alpha(h)=\text{nominal iff }h(f)=0\) for all faulty events \(f\). The abstraction function from MHS to OMHS is the identity function; OMHS is an abstraction of MHS because its associated preference relation is more restrictive than that of MHS. Finally, the abstraction function from CHS to BHS is \(\alpha(h)=\text{nominal iff }h=0\).
There is no relation between SHS and OMHS since SHS does not mention the number of occurrences as OMHS does, while the mapping from OMHS to SHS does not maintain the preference relation: for instance, if \(a\prec b\), then \(\{a\to 0,b\to 1\}\prec_{\text{OMHS}}\{a\to 1,b\to 0\}\), while \(\{b\}\not\prec_{\text{SHS}}\{a\}\). \(\Box\)
## 4 Representing and Testing Sets of Hypotheses
The diagnosis approach developed in this paper is based on an operation called the _diagnosis test_. A test, defined in Subsection 4.1, decides whether a given set of hypotheses has a non-empty intersection with the diagnosis, that is, whether any hypothesis in the set is a candidate. The set of hypotheses to be tested is not enumerated but represented symbolically. To this end, we define in Subsection 4.2 _hypothesis properties_, which are atomic statements used to describe hypotheses. We show how to construct for any hypothesis space a matching property space that is "sufficient", in the sense that any set of hypotheses that we need to test has a representation using properties in this space. In Subsection 4.3 we discuss three specific types of tests, which we term "diagnosis questions", that together are sufficient to implement the exploration strategies we propose. The strategies themselves are described in Section 6.
Here, and in the remainder of the paper, we consider only well partially ordered hypothesis spaces. As shown earlier, this ensures that the minimal diagnosis exists and is finite, so that the diagnosis algorithm can output it in finite time.
### The Diagnosis Test
Our diagnosis algorithms are based on an operation called the _diagnosis test_. We assume the existence of an "oracle", called the _test solver_, that is able to perform such tests. We will describe several concrete implementations of test solvers, for DES and different hypothesis spaces, in Section 8.
A diagnosis test is the problem of deciding whether a given set \(H\subseteq\mathbb{H}\) contains a diagnosis candidate.
**Definition 3**: _A diagnosis test is a tuple \(\langle\text{Mod},o,H\rangle\) where Mod is a system model, \(o\) is an observation, and \(H\subseteq\mathbb{H}\) is a set of hypotheses._
_The result of a test is either a hypothesis \(\delta\in H\) such that \(\delta\in\Delta(\text{Mod},o,\mathbb{H})\), if any such \(\delta\) exists, or \(\bot\) otherwise (where \(\bot\notin\mathbb{H}\) is a distinct symbol)._
Later, in Section 6.3, we will amend this definition to allow the test solver to return a conflict instead of \(\bot\), but for now we limit ourselves to the simple version. Given a diagnosis problem \(\langle\text{Mod},o,\mathbb{H}\rangle\), a test is defined solely by the hypothesis set \(H\). If the test returns a candidate, we say it is successful; otherwise, we say it failed.
### Hypothesis Properties
Some of the sets of hypotheses we will need to test to compute the diagnosis can be very large, and some of them will even be infinite. Therefore, we represent such sets symbolically, by a finite set of _hypothesis properties_. These properties are atomic statements about hypotheses. A set of properties represents those hypotheses that satisfy all properties in the set.
Not all sets of hypotheses will be represented in this way. The minimal diagnosis returned by our algorithms is an explicitly enumerated set of candidates, as are some other sets manipulated by the algorithms during computation of the minimal diagnosis. However, all hypothesis sets given to the test solver to test are represented symbolically; that is, the test solver's input will be a set of properties, rather than a set of hypotheses. To distinguish the two types of sets, we will use \(H\) for sets of hypotheses represented symbolically and \(S\) for explicitly enumerated hypothesis sets.
**Definition 4**: _A hypothesis property (or simply, property) is an object \(p\) that implicitly represents a (possibly infinite) set of hypotheses \(\mbox{hypos}(p)\subseteq\mathbb{H}\). If hypothesis \(h\) belongs to \(\mbox{hypos}(p)\), we say that \(h\) exhibits property \(p\), or that \(p\) is a property of \(h\). For any property \(p\), we also use \(\neg p\) as a property, with the meaning \(\mbox{hypos}(\neg p)=\mathbb{H}\setminus\mbox{hypos}(p)\)._
_Given a hypothesis property space \(\mathbb{P}\), we write \(\mbox{props}(h)\subseteq\mathbb{P}\) for the set of properties of \(h\). A set \(P\subseteq\mathbb{P}\) of properties implicitly represents the set \(\mbox{hypos}(P)\) of hypotheses that exhibit all properties in \(P\): \(\mbox{hypos}(P)=\{h\in\mathbb{H}\mid P\subseteq\mbox{props}(h)\}=\bigcap_{p \in P}\mbox{hypos}(p)\)._
Simple examples of properties are that a given fault occurred; or did not; that it occurred at most once; or more than once; that one type of fault occurred before another; and so on. We give more examples of properties later in this subsection.
A priori, we can define properties to represent any set of hypotheses. Given a set \(H\) of hypotheses, we could define a property \(p_{H}\) such that \(\mbox{hypos}(p_{H})=H\). However, implementing support for such ad hoc properties in the test solver is not practical, and is also not very useful, since it does not help in the formation of informative conflicts. Useful properties are ones that allow the test solver to automatically infer information that can be generalised. For instance, the property that states that a specific fault did not occur is of this kind.
Next, we define the property space \(\mathbb{P}\) that we will use in the rest of this paper. \(\mathbb{P}\) is derived from the hypothesis space \(\mathbb{H}\) considered and its preference relation, and is therefore defined for any hypothesis space. For each hypothesis \(h\in\mathbb{H}\), \(\mathbb{P}\) contains the following two properties and their negations:
* \(p_{\mbox{desc}}(h)\) is the property of being a descendant of hypothesis \(h\), i.e., \(\mbox{hypos}(p_{\mbox{desc}}(h))=\{h^{\prime}\in\mathbb{H}\mid h\preceq h^{ \prime}\}\) and
* \(p_{\mbox{anc}}(h)\) is the property of being an ancestor of hypothesis \(h\), i.e., \(\mbox{hypos}(p_{\mbox{anc}}(h))=\{h^{\prime}\in\mathbb{H}\mid h^{\prime} \preceq h\}\).
These properties may appear somewhat abstract; their concrete meaning depends on the hypothesis space and preference order that underlies them. To give a more concrete example, let us look at the set hypothesis space (SHS): Let \(h=\{f_{1},f_{2}\}\subseteq F=\{f_{1},\ldots,f_{4}\}\) be the hypothesis that faults \(f_{1}\) and \(f_{2}\) took place, while the other two faults (\(f_{3}\) and \(f_{4}\)) did not. Then
* \(p_{\mbox{desc}}(h)\) is the property that \(f_{1}\) and \(f_{2}\) took place (not ruling out that other faults may also have happened);
* \(\neg p_{\rm desc}(h)\) is the property that not both \(f_{1}\) and \(f_{2}\) occurred;
* \(p_{\rm anc}(h)\) is the property that no fault other than \(f_{1}\) or \(f_{2}\) took place, i.e., neither \(f_{3}\) nor \(f_{4}\); and
* \(\neg p_{\rm anc}(h)\) is the property that some fault other than \(f_{1}\) and \(f_{2}\) took place, i.e., either \(f_{3}\) or \(f_{4}\) happened.
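As a sanity check, the four properties above can be spelled out as predicates over set hypotheses and evaluated against the full SHS over \(F\). The representation of hypotheses as frozensets and the helper `hypos`, which mirrors Definition 4 on an enumerated space, are illustrative assumptions.

```python
from itertools import combinations

F = frozenset({"f1", "f2", "f3", "f4"})
h = frozenset({"f1", "f2"})

p_desc = lambda h: (lambda g: h <= g)   # every fault of h occurred
p_anc  = lambda h: (lambda g: g <= h)   # no fault outside h occurred
neg    = lambda p: (lambda g: not p(g))

def hypos(properties, space):
    """Hypotheses of `space` that exhibit all the given properties."""
    return {g for g in space if all(p(g) for p in properties)}

# Enumerate the whole SHS over F and check the intended readings.
space = {frozenset(c) for r in range(len(F) + 1) for c in combinations(F, r)}
assert hypos({p_desc(h)}, space) == {g for g in space if "f1" in g and "f2" in g}
assert frozenset({"f1", "f3"}) in hypos({neg(p_anc(h))}, space)   # f3 is outside h
print(len(hypos({p_desc(h), p_anc(h)}, space)))   # 1: only h itself remains
```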
These properties are sufficient to represent all of the sets of hypotheses that we will need to test in any of our strategies for exploring the hypothesis space. In fact, we can give a more precise characterisation of the hypothesis sets that can be represented with conjunctions of properties in \(\mathbb{P}\). To do this, we first need to recall some standard terminology: Let \(\preceq\) be a partial order on some set \(\mathbb{S}\); a subset \(S\) of \(\mathbb{S}\) is _convex_ iff for any two distinct elements \(a,b\in S\), every element \(c\) such that \(a\preceq c\preceq b\) is also in \(S\).
**Theorem 3**: _Hypothesis set \(H\subseteq\mathbb{H}\) can be represented by a finite conjunction of properties over \(\mathbb{P}\) if and only if \(H\) is convex._
**Proof:** First, let \(H\) be a convex hypothesis set. If \(H=\emptyset\), the claim holds trivially, since the empty set can be represented by any contradictory set of properties, e.g., \(\{p_{\rm desc}(h),\neg p_{\rm desc}(h)\}\). Therefore, suppose \(H\) is non-empty.
Let \(H^{\prec}=\{h^{\prime}\not\in H\mid\exists h\in H:h^{\prime}\preceq h\}\), \(H^{\succ}=\{h^{\prime}\not\in H\mid\exists h\in H:h\preceq h^{\prime}\}\), and \(H^{\rm U}=\{h^{\prime}\not\in H\mid\forall h\in H:h^{\prime}\not\preceq h\ \mbox{and}\ h\not\preceq h^{\prime}\}\), that is, \(H^{\prec}\) is the set of ancestors of hypotheses in \(H\) that are not themselves in \(H\), \(H^{\succ}\) is the set of descendants of hypotheses in \(H\) that are not themselves in \(H\), and \(H^{\rm U}\) is the set of hypotheses that are not ordered with respect to any element in \(H\). Because \(H\) is convex, every hypothesis \(h^{\prime}\in\mathbb{H}\setminus H\) must belong to one of these three sets: if \(h^{\prime}\) is not unrelated to every hypothesis in \(H\), it must either be preferred to some \(h\in H\), or have some \(h\in H\) preferred to it; thus it belongs to either \(H^{\prec}\) or \(H^{\succ}\). Furthermore, it cannot belong to both: if it did, there would be some hypothesis \(h\in H\) such that \(h\preceq h^{\prime}\) and some hypothesis \(h^{\prime\prime}\in H\) such that \(h^{\prime}\preceq h^{\prime\prime}\); this contradicts the convexity of \(H\).
Construct the property set \(P=\{\neg p_{\rm anc}(h^{\prime})\mid h^{\prime}\in\max_{\prec}(H^{\prec})\} \cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in\min_{\preceq}(H^{\succ}) \}\cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in\min_{\preceq}(H^{\rm U })\}\). We claim that \(P\) is finite and that \(\mbox{\it hypos}(P)=H\).
That \(\min_{\preceq}(H^{\succ})\) and \(\min_{\preceq}(H^{\rm U})\) are finite follows directly from that \(\mathbb{H}\) is well partially ordered. For every hypothesis \(h^{\prime}\in H^{\prec}\) there is a \(h\in H\) such that \(h^{\prime}\preceq h\) (by construction) and such that \(h\) is minimal in \(H\). Hence, the maximal elements in \(H^{\prec}\) are exactly the minimal elements in the set of parents of the hypotheses in \(H\), and thus this set is also finite by the well partial orderedness of \(\mathbb{H}\). Since all three sets are finite, so is \(P\).
If \(h\) exhibits \(p_{\rm anc}(h^{\prime})\) for some \(h^{\prime}\in H^{\prec}\), then \(h\preceq h^{\prime}\prec h^{\prime\prime}\) for some \(h^{\prime\prime}\in H\). Since \(h^{\prime}\not\in H\), by convexity, \(h\) cannot be in \(H\) either. Thus, all \(h\in H\) exhibit \(\neg p_{\rm anc}(h^{\prime})\) for all \(h^{\prime}\in H^{\prec}\).
If \(h\) exhibits \(p_{\rm desc}(h^{\prime})\) for some \(h^{\prime}\in H^{\succ}\), then \(h^{\prime\prime}\prec h^{\prime}\preceq h\) for some \(h^{\prime\prime}\in H\). Analogously to the previous case, because \(h^{\prime}\not\in H\) and \(H\) is convex, \(h\) cannot be in \(H\). Thus, all \(h\in H\) exhibit \(\neg p_{\rm desc}(h^{\prime})\) for all \(h^{\prime}\in H^{\succ}\).
Finally, if \(h\) exhibits \(p_{\rm desc}(h^{\prime})\) for some \(h^{\prime}\in H^{\rm U}\), then \(h^{\prime}\preceq h\). \(h\) cannot belong to \(H\) because if it did, \(h^{\prime}\) would be related to some element in \(H\), contradicting the construction of \(H^{\rm U}\). Thus, all \(h\in H\) exhibit \(\neg p_{\rm desc}(h^{\prime})\) for all \(h^{\prime}\in H^{\rm U}\).
In summary, each hypothesis \(h\in H\) exhibits all properties in \(P\). Thus, \(H\subseteq\mbox{\it hypos}(P)\).
Now, let \(h^{\prime}\) be a hypothesis not in \(H\). We know that \(h^{\prime}\) belongs to at least one of \(H^{\prec}\), \(H^{\succ}\), or \(H^{\rm U}\). If \(h^{\prime}\in H^{\prec}\) then it is either maximal in \(H^{\prec}\) or the ancestor of a hypothesis that is maximal in \(H^{\prec}\); in either case, it exhibits \(p_{\rm anc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\prec}\). Likewise, if \(h^{\prime}\in H^{\succ}\) then it is either minimal in \(H^{\succ}\) or the descendant of a hypothesis that is minimal in \(H^{\succ}\), so it exhibits \(p_{\rm desc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\succ}\). Finally, if \(h^{\prime}\in H^{\rm U}\) then it is either minimal in \(H^{\rm U}\) or the descendant of a hypothesis that is minimal in \(H^{\rm U}\), so it exhibits \(p_{\rm desc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\rm U}\). In all three cases, \(h^{\prime}\) exhibits a property whose negation is in \(P\), and therefore \(h^{\prime}\not\in\mathit{hypos}(P)\). Hence \(\mathit{hypos}(P)\subseteq H\).
So far, we have shown that if \(H\) is convex, then it can be represented by a finite conjunction of properties in \(\mathbb{P}\). To show the converse (only if), let \(H\) be a non-convex set. This means there are three hypotheses, \(h_{a}\), \(h_{b}\) and \(h_{c}\), such that \(h_{a}\preceq h_{c}\preceq h_{b}\), \(h_{a},h_{b}\in H\) and \(h_{c}\not\in H\). (Since the three hypotheses are necessarily distinct, we have in fact \(h_{a}\prec h_{c}\prec h_{b}\).)
Suppose there is a property set \(P\) such that \(\mathit{hypos}(P)=H\): \(P\) must exclude \(h_{c}\), that is, there must be at least one property \(p\in P\) that \(h_{c}\) does not exhibit. There are only four ways to construct such a property:
(1) \(p=p_{\rm anc}(h)\) for some strict ancestor \(h\prec h_{c}\). But this property also excludes \(h_{b}\) from \(\mathit{hypos}(P)\), since \(h_{c}\preceq h_{b}\).
(2) \(p=p_{\rm desc}(h)\) for some strict descendant \(h_{c}\prec h\). This property excludes \(h_{a}\), since \(h_{a}\preceq h_{c}\).
(3) \(p=\neg p_{\rm anc}(h)\) for some descendant \(h_{c}\preceq h\). (Note that here, \(h\) may be equal to \(h_{c}\).) Again, this property excludes \(h_{a}\), since \(h_{a}\preceq h_{c}\).
(4) \(p=\neg p_{\rm desc}(h)\) for some ancestor \(h\preceq h_{c}\) (which may also equal \(h_{c}\)). This property excludes \(h_{b}\), since \(h_{c}\preceq h_{b}\).
Thus, it is not possible to exclude \(h_{c}\) from \(\mathit{hypos}(P)\) without also excluding either \(h_{a}\) or \(h_{b}\). Therefore, since \(H\) includes both \(h_{a}\) and \(h_{b}\) but not \(h_{c}\), \(\mathit{hypos}(P)\) cannot equal \(H\). \(\Box\)
### Diagnostic Questions and Their Representations
Next, we describe three different "diagnostic questions". Each question is a specific test that provides a piece of information about the diagnosis problem at hand. The strategies we present in Section 6 to explore the hypothesis space in search of the minimal diagnosis use these questions as their main primitives for interacting with the problem.
We show how each of the questions is formulated as sets of hypotheses to test, and how those hypothesis sets can be represented by (conjunctive) sets of properties. In most cases, the mapping from a question to a test and from a test to its representation is straightforward, but for some, there are alternative representations. Which is the best representation depends in part on the strategy for exploring the hypothesis space: For conflict-directed strategies (introduced in Subsection 6.3), the representation should produce conflicts that are as general as possible. In addition, for the preferred-first strategy (Subsection 6.2), those conflicts should generate as few successors as possible. Finally, the property set should facilitate the task of the test solver.
**Question 1**.: Is a given hypothesis \(h\) a diagnosis candidate? (candidate\((h)\))
* **Test hypothesis set**: \(H=\{h\}\).
* **Representation by properties**: \(\{p_{\rm desc}(h)\}\cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in{\rm children }(h)\}\).
* **Test result**: yes or no. The test solver returns \(h\) if successful, and \(\bot\) otherwise.
Note that this question could also be represented by the property set \(\{p_{\mathrm{desc}}(h),p_{\mathrm{anc}}(h)\}\) (since \(h\) is the only hypothesis that is both an ancestor and a descendant of \(h\)). However, the representation given above is the better one for the conflict-directed preferred-first strategy, and the basis of the one that we use. For particular hypothesis spaces, there can also be other, simpler but equivalent ways of representing them by properties. We discuss some alternatives in conjunction with the SAT-based implementation of a test solver for discrete event system diagnosis in Subsection 8.2.
**Question 2.** Is a given candidate \(\delta\) minimal? (minimal(\(\delta\)))
* **Test hypothesis set**: \(H=\{h\in\mathbb{H}\mid h\prec\delta\}\);
* **Representation by properties**: \(\{p_{\mathrm{anc}}(\delta),\neg p_{\mathrm{desc}}(\delta)\}\).
* **Test result**: Testing \(H\) above amounts to asking, "is there a candidate preferred to \(\delta\)?". Thus, the answer to the original question (\(\delta\) is minimal) is yes if the outcome of the test is \(\bot\). If \(\delta\) is not minimal, the test solver returns a strictly preferred candidate.
**Question 3.** Given a finite and explicitly enumerated set of hypotheses \(S\), does \(S\) cover the diagnosis? (covers(\(S\)))
* **Test hypothesis set**: \(H=\{h\in\mathbb{H}\mid\forall h^{\prime}\in S:h^{\prime}\not\preceq h\}\);
* **Representation by properties**: \(\{\neg p_{\mathrm{desc}}(h^{\prime})\in\mathbb{P}\mid h^{\prime}\in S\}\).
* **Test result**: As in Question 2, testing \(H\) asks the reverse of the question; thus, the answer is yes (\(S\) does cover the diagnosis) if the test solver returns \(\bot\), and if \(S\) does not cover the diagnosis, it returns a counter-example, in the form of a candidate not covered by \(S\).
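To summarise, the mapping from each question to its property-set representation can be sketched as follows. Properties are encoded here as opaque tokens to be handed to a test solver; the SHS-specific `children` function and the token encoding are assumptions made for this example.

```python
def p_desc(h): return ("desc", h)
def p_anc(h):  return ("anc", h)
def neg(p):    return ("not", p)

def children(h, faults):
    # in the set hypothesis space, the children of h add exactly one fault
    return [h | {f} for f in faults if f not in h]

def question_candidate(h, faults):      # Question 1: is h a candidate?
    return {p_desc(h)} | {neg(p_desc(c)) for c in children(h, faults)}

def question_minimal(delta):            # Question 2: is delta minimal?
    return {p_anc(delta), neg(p_desc(delta))}

def question_covers(S):                 # Question 3: does S cover the diagnosis?
    return {neg(p_desc(h)) for h in S}

F = frozenset({"f1", "f2"})
print(question_candidate(frozenset(), F))
# {('desc', frozenset()), ('not', ('desc', frozenset({'f1'}))), ...}
```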
It is possible to characterise the minimal diagnosis in terms of diagnosis questions.
**Theorem 4**: _A subset of hypothesis \(S\) is the minimal diagnosis if and only if it satisfies the following three conditions:_
* \(\forall h\in S.\;\mathrm{candidate}(h)\)_;_
* \(\forall h\in S.\;\mathrm{minimal}(h)\)_;_
* \(\mathrm{covers}(S)\)_._
**Proof:** That the minimal diagnosis satisfies these three conditions is a direct consequence of its definition (Definition 2).
Assume now that \(S\) satisfies the conditions of the theorem. We show that \(S=\min_{\preceq}(\Delta)\) which, by Theorem 1, concludes the proof. Assume that \(S\neq\min_{\preceq}(\Delta)\); this means that either \(S\setminus\min_{\preceq}(\Delta)\neq\emptyset\) or \(\min_{\preceq}(\Delta)\setminus S\neq\emptyset\).
i) Let \(h\) be a hypothesis of \(S\setminus\min_{\preceq}(\Delta)\); \(h\) is a candidate (by definition of \(S\)) but a non-minimal one. Consequently, there exists a minimal candidate \(\delta\in\min_{\preceq}(\Delta)\) such that \(\delta\prec h\). This contradicts the condition \(\mathrm{minimal}(h)\).
ii) Let \(\delta\) be a minimal candidate of \(\min_{\preceq}(\Delta)\setminus S\). Since \(S\) covers the diagnosis, it must contain a hypothesis \(h\preceq\delta\); furthermore, since \(\delta\not\in S\), \(h\prec\delta\). Because \(\delta\) is a minimal candidate, \(h\) is not a candidate. This contradicts the first condition that all hypotheses in \(S\) should be candidates. \(\Box\)
Some of our diagnosis procedures will not rely on the diagnosis question \(\mathrm{minimal}(h)\). For these procedures, we will rely on the following theorem instead.
**Theorem 5**: _A subset of hypotheses \(S\) is the minimal diagnosis if and only if it satisfies the following three conditions:_
* \(\forall h\in S,\ \mbox{\rm candidate}(h)\)_;_
* \(\forall h,h^{\prime}\in S,\ h^{\prime}\not\prec h\)_;_
* \(\mbox{\rm covers}(S)\)_._
**Proof:** The proof is essentially the same as that of Theorem 4. The difference lies in the part i). We reuse the same notation, i.e., \(h\in S\setminus\min_{\preceq}(\Delta)\) and \(\delta\in\min_{\preceq}(\Delta)\) is such that \(\delta\prec h\). From the third condition, we know that there is \(h^{\prime}\in S\) such that \(h^{\prime}\preceq\delta\) (actually, the stronger relation \(h^{\prime}\prec\delta\) holds since \(\delta\) is not an element of \(S\)). Therefore the two elements \(h\) and \(h^{\prime}\) from \(S\) satisfy \(h^{\prime}\prec h\), which contradicts the second condition of Theorem 5. \(\Box\)
## 5 Diagnostic Properties in Different Settings
In this section, we illustrate how the abstract definitions in the previous section are instantiated in two different modeling frameworks: static and dynamic (discrete event) systems. For the latter, we also show the instantiation of different hypothesis spaces.
### Static Systems
Static systems are systems whose state does not normally change over time (except for becoming faulty). A typical example of a static system is a Boolean circuit. Static systems diagnosis consists in identifying the set of faults the system exhibits at a given point in time; there is no notion of multiple occurrences of the same fault, nor of temporal order between fault occurrences. Hence, the diagnosis is normally defined over the set hypothesis space (power set of the set \(F\) of faults), with the preference order defined as the subset relation.
Static systems are typically modeled by a finite set of variables, each with their own domain of values. The set of possible system behaviours, which is a subset of all assignments of values to the variables, is defined by a set _Mod_ of constraints over the variables. (These can be expressed in propositional or first-order logic, or some other constraint formalism.) The observation is also defined by a set \(o\) of constraints on the values of certain variables of the model (for instance \(\mbox{\tt voltage}=\mbox{low}\)). Each possible fault \(f\in F\) is modeled by a Boolean variable \(v_{f}\in V_{F}\): this variable takes the value _true_ iff the fault is present. The hypothesis associated with a behaviour is then the subset of faults \(f\) whose corresponding variable \(v_{f}\) is _true_: \(\mbox{\it hypo}(\sigma)=\{f\in F\mid\sigma\to v_{f}\}\).
A hypothesis \(h\) can be represented by a propositional formula \(\Phi_{h}=\bigwedge_{f\in h}v_{f}\wedge\bigwedge_{f\in F\setminus h}\neg v_{f}\). A hypothesis \(h\subseteq F\) is a candidate if it is logically consistent with the model and observation, i.e., if
\[\mbox{\it Mod},o,\Phi_{h}\not\models\bot.\]
Performing a test is therefore equivalent to solving a constraint satisfaction problem. (In case the model is represented by a propositional logic formula, that means a propositional satisfiability problem).
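A toy version of this test can be run by brute force instead of a constraint solver. The single-gate circuit, the variable names, and the fault `stuck` below are all invented for illustration; the point is only that a hypothesis is accepted exactly when Mod, \(o\), and \(\Phi_{h}\) admit a common assignment.

```python
from itertools import product

variables = ["in1", "in2", "out", "v_stuck"]      # v_stuck encodes the fault

def model(a):         # an or-gate whose output is forced low when stuck
    expected = False if a["v_stuck"] else (a["in1"] or a["in2"])
    return a["out"] == expected

def observation(a):   # in1 high, in2 low, output (unexpectedly) low
    return a["in1"] and not a["in2"] and not a["out"]

def phi(h, a):        # Phi_h: the fault variable matches the hypothesis
    return a["v_stuck"] == ("stuck" in h)

def is_candidate(h):
    assignments = ({v: bits[i] for i, v in enumerate(variables)}
                   for bits in product([False, True], repeat=len(variables)))
    return any(model(a) and observation(a) and phi(h, a) for a in assignments)

print(is_candidate(frozenset()))           # False: nominal behaviour contradicts o
print(is_candidate(frozenset({"stuck"})))  # True: the fault explains the observation
```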
The property \(p_{H}\) corresponding to a hypothesis set \(H\subseteq\mathbb{H}\), i.e., such that \(\mbox{\it hypos}(p_{H})=H\), is the logical disjunction of the formulas of the hypotheses in \(H\): \(\Phi_{p_{H}}=\bigvee_{h\in H}\Phi_{h}\). Of course, \(\Phi_{p_{H}}\) can also be represented by any formula that is equivalent to this. It is easy to show that:
* \(\Phi_{\text{{\em dec}}(h)}\equiv\bigwedge_{f\in h}v_{f}\). That is, the descendants of hypothesis \(h\) (which are those hypotheses that \(h\) is preferred or equal to) are exactly those that include all faults that are present in \(h\), and possibly other faults as well.
* \(\Phi_{\text{{\em enc}}(h)}\equiv\bigwedge_{f\in F\setminus h}\neg v_{f}\). That is, the ancestors of hypothesis \(h\) (which are those hypotheses that are preferred or equal to \(h\)) are exactly those that do not include any fault not present in \(h\), and possibly exclude some of the faults that \(h\) has.
### Discrete Event Systems
Event-driven dynamic systems are characterised by transitions (be they discrete, timed or continuous) taking place over time. To simplify the discussion of this example, we will consider discrete untimed transitions, i.e., the classical discrete event system (DES) framework (Cassandras & Lafortune, 1999). However, the formulation below, and the diagnosis algorithms we present in the next section, generalise to other types of dynamic systems.
Let \(\Sigma\) be the set of events that can take place in the system. A behaviour \(\sigma\in\Sigma^{\star}\) of the system is a (finite) sequence of events. Thus, the system model is a language \(\mathit{Mod}\subseteq\Sigma^{\star}\). It is common to distinguish in \(\Sigma\) a subset of observable events (\(\Sigma_{o}\)), and to define the observable consequence of a behaviour as the projection \(\Pi_{\Sigma_{o}}(\sigma)\) of the event sequence \(\sigma\) on \(\Sigma_{o}\)(Sampath et al., 1995). Then, an observation, expressed as a predicate on behaviours, has the form \(\mathit{o}(\sigma)\equiv(\Pi_{\Sigma_{o}}(\sigma)=w)\), for some fixed \(w\in\Sigma_{o}^{\star}\). More general forms of observation, such as partially ordered or ambiguous occurrences of observable events, can be specified similarly. Whichever form it takes, we can say that the observation is another language \(\mathcal{L}_{\mathit{O}}\subseteq\Sigma^{\star}\) such that a behaviour \(\sigma\) is consistent with it iff \(\sigma\in\mathcal{L}_{\mathit{O}}\). The faults are modeled by a subset \(F\subseteq\Sigma\) of (unobservable) events. The set of behaviours that correspond to a hypothesis \(h\) is also a language: \(\mathcal{L}_{h}=\{\sigma\in\Sigma^{\star}\mid\mathit{hypo}(\sigma)=h\}\). The precise definition of \(\mathcal{L}_{h}\) depends on the type of hypothesis space.
In most cases, these languages are all regular, and hence representable by finite state machines. However, such a representation is normally too large to be feasible for computational purposes, so in practice an exponentially compact factored representation, such as a network of partially synchronised automata (Pencole & Cordier, 2005), Petri nets (Benveniste, Fabre, Haar, & Jard, 2003), or description in a modelling formalism like PDDL (Haslum & Grastien, 2011), is used instead. As we describe the hypotheses and properties for different spaces in the following, we will simply give them as (regular) languages.
For the set hypothesis space (SHS), a hypothesis \(h\subseteq F\) corresponds to the language \(\mathcal{L}_{h}=\bigcap_{f\in h}(\Sigma^{\star}\{f\}\Sigma^{\star})\ \cap\ \bigcap_{f\in F \setminus h}(\Sigma\setminus\{f\})^{\star}\). For the multiset hypothesis space (MHS), the language \(\mathcal{L}_{h}\) of hypothesis \(h\) is the intersection \(\bigcap_{f\in F}\mathcal{L}_{f}^{=h(f)}\), where for each \(f\), \(\mathcal{L}_{f}^{=h(f)}\) contains all event sequences that have exactly \(h(f)\) occurrences of \(f\). For instance \(\mathcal{L}_{f}^{=2}=(\Sigma\setminus\{f\})^{\star}\left\{f\right\}(\Sigma \setminus\{f\})^{\star}\left\{f\right\}(\Sigma\setminus\{f\})^{\star}\). For the sequence hypothesis space (SqHS), \(\mathcal{L}_{h}\) is the language of words whose projection over \(F\) is \(h\): if \(h=[f_{1},\ldots,f_{k}]\), then \(\mathcal{L}_{h}=(\Sigma\setminus F)^{\star}\left\{f_{1}\right\}(\Sigma \setminus F)^{\star}\ldots(\Sigma\setminus F)^{\star}\left\{f_{k}\right\}( \Sigma\setminus F)^{\star}\).
A hypothesis \(h\) is a candidate if the intersection \(\mathit{Mod}\cap\mathcal{L}_{\mathit{O}}\cap\mathcal{L}_{h}\) is non-empty. Essentially, any \(\sigma\) that belongs to this intersection is a possible behaviour of the system. Thus, a test can be seen as a discrete-state reachability problem. Given compact representations of the languages involved, tests can be carried out using, for example, model checking (Clarke, Grumberg, & Peled, 2000) or AI planning (Ghallab, Nau, & Traverso, 2004) tools.
The property \(p_{H}\) is also a language, specifically \(\mathcal{L}_{p_{H}}=\bigcup_{h\in H}\mathcal{L}_{h}\). It follows immediately from the definition that the language of a set of properties \(P\) is the intersection of the properties' languages: \(\mathcal{L}_{P}=\bigcap_{p\in P}\mathcal{L}_{p}\).
Likewise, the language of the negation of a property is the complement of its language, i.e., \(\mathcal{L}_{\neg p}=\Sigma^{\star}\setminus\mathcal{L}_{p}\). Using these, the languages of properties \(p_{\text{desc}}(h)\) and \(p_{\text{anc}}(h)\) can be built up according to their definitions.
However, just as in the case of static systems, we can also find simpler, and more intuitive, equivalent expressions for \(\mathcal{L}_{p_{\text{desc}}(h)}\) and \(\mathcal{L}_{p_{\text{anc}}(h)}\). For the set hypothesis space, these are:
* \(\mathcal{L}_{p_{\text{desc}}(h)}=\bigcap_{f\in h}(\Sigma^{\star}\{f\}\Sigma^{ \star})\). In other words, descendants of \(h\) are those event sequences that contain at least one occurrence of each fault \(f\in h\).
* \(\mathcal{L}_{p_{\text{anc}}(h)}=\bigcap_{f\in F\setminus h}(\Sigma\setminus\{ f\})^{\star}\). The ancestors of \(h\) are those event sequences that do not contain any occurrence of any fault event not in \(h\).
For the multiset hypothesis space, the languages of these properties can be written as follows:
* \(\mathcal{L}_{p_{\text{desc}}(h)}=\bigcap_{f\in F}\mathcal{L}_{f}^{\geq h(f)}\),
* \(\mathcal{L}_{p_{\text{anc}}(h)}=\bigcap_{f\in F}\mathcal{L}_{f}^{\leq h(f)}\),
where \(\mathcal{L}_{e}^{\geq x}\) is the language of event sequences in which \(e\) occurs at least \(x\) times and \(\mathcal{L}_{e}^{\leq x}\) the language of sequences where \(e\) occurs at most \(x\) times. The former can be written as \(\mathcal{L}_{e}^{\geq x}=\Sigma^{\star}\mathcal{L}_{e}^{=x}\Sigma^{\star}\), and the latter as \(\bigcup_{i=0,\dots,x}\mathcal{L}_{e}^{=i}\).
For the sequence hypothesis space, the properties can be written as follows. Let \(h=[f_{1},\dots,f_{k}]\):
* \(\mathcal{L}_{p_{\text{desc}}(h)}=\Sigma^{\star}\{f_{1}\}\Sigma^{\star}\dots \Sigma^{\star}\{f_{k}\}\Sigma^{\star}\). In other words, the descendants of \(h\) are all event sequences in which the sequence \(h=[f_{1},\dots,f_{k}]\) is "embedded".
* \(\mathcal{L}_{p_{\text{anc}}(h)}=(\Sigma\setminus F)^{\star}\left\{f_{1}\right\}^{0/1}(\Sigma\setminus F)^{\star}\dots(\Sigma\setminus F)^{\star}\left\{f_{k}\right\}^{0/1}(\Sigma\setminus F)^{\star}\). That is, the ancestors of \(h\) are all event sequences whose projection onto the fault events is a (possibly empty) subsequence of \(h\): they contain no fault event outside \(h\), and the fault events that do occur appear in the same relative order as in \(h\).
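Membership in these two languages can be checked directly on a finite event sequence, without constructing automata. The sketch below assumes sequences are tuples of event names and that \(F\) is given as a Python set.

```python
def is_subsequence(xs, ys):
    it = iter(ys)
    return all(x in it for x in xs)       # order-preserving embedding

def exhibits_p_desc(sigma, h):
    # sigma is a descendant behaviour: the sequence h is embedded in sigma
    return is_subsequence(h, sigma)

def exhibits_p_anc(sigma, h, F):
    # sigma is an ancestor behaviour: its fault projection is a subsequence of h
    projection = tuple(e for e in sigma if e in F)
    return is_subsequence(projection, h)

F, h = {"f1", "f2"}, ("f1", "f2")
print(exhibits_p_desc(("a", "f1", "b", "f2"), h))   # True
print(exhibits_p_anc(("a", "f2", "b"), h, F))       # True: projection is [f2]
print(exhibits_p_anc(("f2", "f1"), h, F))           # False: faults out of order
```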
## 6 Diagnosis Strategies
We have cast the diagnosis problem as a search for the minimal candidates in the space of hypotheses, and we have shown how this search can query the problem using symbolic tests. To instantiate the framework into a concrete diagnosis algorithm, we must also specify a strategy for the exploration of the hypothesis space, and an implementation of the test solver that is appropriate for the class of system models and the hypothesis space. We describe implementations of test solvers in Section 8.
In this section, we outline two broad types of exploration strategies: The first, which we call "preferred-last", maintains a set of candidates, which is iteratively extended until it covers the diagnosis. The second, which we call "preferred-first", searches in a top-down fashion, testing at each step the most preferred hypothesis that has not yet been rejected. In each case, we first present the basic strategy, followed by refined versions. In particular, we show how the preferred-first strategy can be enhanced through the use of _conflicts_, in a manner analogous to their use in consistency-based diagnosis (Reiter, 1987).
### The Preferred-Last Strategy
The preferred-last strategy (PLS) begins with an empty set \(S\) of candidates, and repeatedly tests whether this set covers the diagnosis. This test is an instance of Question 3, described in Subsection 4.3. If the answer to the question is negative, it leads to the discovery of a new candidate which is added to \(S\). When \(S\) covers the diagnosis we know that it is a superset of the minimal diagnosis, because it contains only candidates. The minimal diagnosis is then extracted from \(S\) by removing non-minimal elements, as required by Theorem 5. The strategy is summarised in Algorithm 1.
```
1: Input: Model Mod, observation \(o\), hypothesis space \(\mathbb{H}\)
2: \(S:=\emptyset\)
3: while \(\neg\)covers\((S)\) do
4:   Let \(\delta\) be the candidate found by the coverage test.
5:   \(S:=S\cup\{\delta\}\)
6: end while
7: return \(\min_{\preceq}(S)\)
```
**Algorithm 1** The preferred-last strategy (PLS)
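A runnable sketch of PLS for the set hypothesis space is given below. The oracle `covers_test`, which answers Question 3 against an explicitly known diagnosis `DELTA`, is a stand-in for the symbolic test solver of Section 4; everything about the toy instance is an assumption made for illustration.

```python
DELTA = {frozenset({"a"}), frozenset({"a", "b"}), frozenset({"b", "c"})}

def covers_test(S):
    """Return a candidate not covered by S, or None if S covers the diagnosis."""
    return next((d for d in DELTA if not any(h <= d for h in S)), None)

def minimal_elements(S):
    return {h for h in S if not any(g < h for g in S)}

def pls():
    S = set()
    while (delta := covers_test(S)) is not None:
        S.add(delta)
    return minimal_elements(S)

print(pls())   # the minimal diagnosis {frozenset({'a'}), frozenset({'b', 'c'})}
```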
**Theorem 6**: _PLS returns the minimal diagnosis. Furthermore, if the hypothesis space is well partially ordered, then PLS terminates._
**Proof:** Assume PLS terminates. We first show that the three conditions of Theorem 5 are satisfied by the returned set \(R=\min_{\preceq}(S)\). Observe that both \(S\) and \(R\subseteq S\) are finite since \(S\) is enumerated.
1) All hypotheses in \(R\) are candidates. 2) Since \(R\) is minimised, it contains no pair of hypotheses that are comparable. 3) Let \(\delta\in\Delta\) be a candidate. Since \(S\) covers \(\Delta\), there exists \(h_{1}\in S\) such that \(h_{1}\preceq\delta\). If \(h_{1}\in R\), then \(\delta\) is covered, but we need to consider the general case where \(h_{1}\not\in R\). Because \(h_{1}\) is in the set of non-minimal elements of \(S\) and \(S\) is finite, there is another hypothesis \(h_{2}\in S\) such that \(h_{2}\prec h_{1}\) holds. This hypothesis \(h_{2}\) may itself not belong to \(R\), in which case it is in turn covered by another hypothesis \(h_{3}\). This gives us a sequence of hypotheses \(h_{1}\succ h_{2}\succ\ldots\) that all belong to \(S\). Since \(S\) is finite, there is a minimal hypothesis \(h_{k}\) in this sequence, and this hypothesis belongs to \(\min_{\preceq}S\). Thus \(R\) covers the diagnosis.
Now, suppose that PLS does not terminate: This means PLS generates an infinite sequence of candidates, \(\delta_{1},\delta_{2},\ldots\) Because \(\delta_{j}\) is generated from a test of coverage of \(\{\delta_{1},\ldots,\delta_{j-1}\}\), we know that \(\delta_{i}\not\preceq\delta_{j}\) for all \(i<j\). Furthermore, since the preference order is well-founded, we know that any strictly descending subchain of this sequence is finite. Therefore, for any index \(i\), there exists at least one index \(k\geq i\) such that \(\delta_{k}\preceq\delta_{i}\) and \(\delta_{k}\) is minimal in the sequence. We write \(m(i)\) for the smallest such index \(k\). We note that for any index \(j>m(i)\), \(\delta_{m(i)}\) and \(\delta_{j}\) are incomparable (as \(\delta_{m(i)}\) is minimal in the sequence and \(\delta_{j}\) comes after \(\delta_{m(i)}\) in the sequence). We also note that \(m(i+1)>i\) for any index \(i\). Therefore, the set
\[S^{\prime}=\{\delta_{m(i)},\delta_{m(m(i)+1)},\delta_{m(m(m(i)+1)+1)},\ldots\}\]
contains infinitely many mutually-incomparable candidates (hence, all minimal in \(S^{\prime}\)), which contradicts the well partial orderedness of \(\preceq\). \(\Box\)
Although the PLS algorithm is guaranteed to eventually terminate, for infinite hypothesis spaces there is no worst-case bound on the number of iterations required before a covering set has been found (for finite hypothesis spaces it is of course bounded by the size of the space). Consider, for instance, the Sequence Hypothesis Space with only one fault \(f\) and write \(h_{i}=f^{i}\) (i.e., \(h_{i}\) indicates that \(f\) occurred precisely \(i\) times); assume that the diagnosis is \(\Delta=\{h_{0},h_{1},h_{2},\ldots\}=\mathbb{H}\) (any number of occurrences of \(f\) might have happened); then for any \(i\), PLS could generate this sequence of candidates: \(h_{i},h_{i-1},h_{i-2},\ldots,h_{0}\). All sequences will eventually end with \(h_{0}\), but there is no a-priori bound on their size until (in this instance) the first candidate is found.
PLS computes some candidates and then tries to improve them. Sometimes, however, instead of improving known candidates, it will go sideways and compute other irrelevant candidates. The following example illustrates this problem of slow convergence.
**Example 1**: _Consider a set hypothesis space over a large set of faults \(F\), and a diagnosis problem in which \(\Delta=\mathbb{H}\), i.e., all hypotheses are candidates (this would be the situation for example in a weak-fault model with nominal observations). The minimal diagnosis is then the singleton \(\Delta_{\preceq}=\{h_{0}\}\)._
_All candidates that involve \(\lfloor\frac{|F|}{2}\rfloor\) faults are mutually incomparable, which means the coverage test can iteratively generate all of them, leading to an exponential-time computation._
In order to speed up convergence of PLS, we add an extra step which "refines" each new candidate found into a minimal one. The intuition is that if minimal candidates are generated early, we can avoid exploring "redundant" options. For instance, in Example 1 above, the number of iterations will be at most \(|F|+1\).
The refinement of a candidate \(\delta\) is performed by testing whether \(\delta\) is minimal, i.e., asking Question 2. If \(\delta\) is not minimal, the test returns a preferred candidate; this is repeated until the current candidate is minimal. The revised algorithm, called PLS+r, is shown in Algorithm 2. Note that, in this algorithm, all elements inserted in \(S\) are guaranteed to be minimal. Thus, there is no need to remove non-minimal elements at the end.
```
1:Input: Model Mod, observation \(o\), hypothesis space \(\mathbb{H}\)
2:\(S:=\emptyset\)
3:while\(\neg\)covers(\(S\))do
4: Let \(\delta\) be the candidate found by the coverage test.
5:while\(\neg\)minimal(\(\delta\))do
6: Replace \(\delta\) with the candidate found by the minimality test.
7:endwhile
8:\(S:=S\cup\{\delta\}\)
9:endwhile
10:return\(S\)
```
**Algorithm 2** The preferred-last strategy with refinement (PLS+r)
**Theorem 7**: _PLS+r returns the minimal diagnosis. Furthermore, if the hypothesis space is well partially ordered, then PLS+r terminates._
**Proof:** Any candidate added to \(S\) by PLS that is not also added by PLS+r is non-minimal, and is therefore removed from the final set by PLS. Thus, PLS+r returns the same diagnosis. The refinement step effectively changes only the order in which candidates are generated. Since PLS terminates regardless of the order in which candidates are generated, PLS+r also terminates under the same condition.
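For concreteness, the following Python sketch shows the control flow of PLS+r. The two oracle functions, `coverage_counterexample` (which returns a candidate not covered by the current set, or `None` if the set covers the diagnosis) and `preferred_candidate` (which returns a candidate strictly preferred to its argument, or `None` if the argument is minimal), are illustrative names for the coverage and minimality tests answered by the test solver; they are not part of the formal framework.

```
def pls_r(coverage_counterexample, preferred_candidate):
    """Preferred-last strategy with refinement (a sketch of Algorithm 2)."""
    S = set()                               # minimal candidates found so far
    while True:
        delta = coverage_counterexample(S)
        if delta is None:                   # S covers the diagnosis
            return S
        # refine delta into a minimal candidate before storing it
        while True:
            better = preferred_candidate(delta)
            if better is None:
                break
            delta = better
        S.add(delta)
```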
### The Preferred-First Strategy
The preferred-first strategy is based on the following intuition: Because faults are rare events, it can be expected that minimal candidates have small depth. Therefore, a sensible approach to the hypothesis space exploration is to start by testing the most preferred hypotheses; if those hypotheses are proven to be candidates, then their descendants do not need to be explored, since we are only interested in the minimal diagnosis. The basic version of the preferred-first strategy (PFS) is presented in Algorithm 3.
```
1:Input: Model _Mod_, observation \(o\), hypothesis space \(\mathbb{H}\)
2:\(S_{\mathrm{R}}:=\emptyset\)// Will store the result
3:\(S_{\mathrm{O}}:=\min_{\preceq}(\mathbb{H})\)// i.e., \(\{h_{0}\}\)
4:while\(S_{\mathrm{O}}\neq\emptyset\)do
5:\(h:=\mathrm{pop}(S_{\mathrm{O}})\)
6:if\((\exists h^{\prime}\in S_{\mathrm{O}}\cup S_{\mathrm{R}}:h^{\prime}\preceq h)\)then
7:continue
8:endif
9:if\(\mathrm{candidate}(h)\)then
10:\(S_{\mathrm{R}}:=S_{\mathrm{R}}\cup\{h\}\)
11:else
12:\(S_{\mathrm{O}}:=S_{\mathrm{O}}\cup\mathrm{children}(h)\)
13:endif
14:endwhile
15:return\(S_{\mathrm{R}}\)
```
**Algorithm 3** The preferred-first strategy (PFS).
Both \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\) are enumerated sets of hypotheses and, because any hypothesis has only a finite set of children, both sets are guaranteed to be finite. The set \(S_{\mathrm{O}}\) contains all hypotheses that are "promising", in the sense that their parents have been ruled out as candidates but the hypotheses themselves have not yet been tested. Starting with the unique most preferred hypothesis \(h_{0}\), the algorithm selects a current hypothesis \(h\) to test and removes it from \(S_{\mathrm{O}}\); if \(h\) is a candidate, it is stored in \(S_{\mathrm{R}}\); otherwise, the children of \(h\) are added to \(S_{\mathrm{O}}\).
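The following Python sketch mirrors this loop; `preceq`, `is_candidate`, and `children` are assumed problem-specific callbacks (the preference order, the candidacy test, and the successor function), and hypotheses are assumed hashable. The names are illustrative only.

```
def pfs(h0, preceq, is_candidate, children):
    """Preferred-first strategy (a sketch of Algorithm 3)."""
    result = set()                 # S_R: minimal candidates found so far
    open_list = [h0]               # S_O: promising, untested hypotheses
    while open_list:
        h = open_list.pop()
        # Line 6: skip h if a preferred hypothesis is still open or already accepted
        if any(preceq(g, h) for g in open_list) or any(preceq(g, h) for g in result):
            continue
        if is_candidate(h):
            result.add(h)
        else:
            open_list.extend(children(h))
    return result
```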
PFS returns the correct diagnosis, but termination is only ensured if the hypothesis space is finite. To demonstrate these results, we first prove the following lemma:
**Lemma 3**: _Whenever the condition of the **while** loop in PFS is tested, the diagnosis is covered by \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\), i.e., \(\forall\delta\in\Delta,\ \exists h\in S_{\mathrm{O}}\cup S_{\mathrm{R}}:\ h\preceq\delta\)._
**Proof:** We prove the lemma by induction.
Initially, \(S_{\mathrm{O}}=\{h_{0}\}\) so the coverage property holds.
Assume that the coverage property is true for some \(S_{\mathrm{O}}\neq\emptyset\) and some \(S_{\mathrm{R}}\). We prove that the property still holds after a single execution of the loop body. Let \(h\in S_{\mathrm{O}}\) be the hypothesis chosen at Line 5. Consider a candidate \(\delta\): by induction, we know that there exists \(h^{\prime}\in S_{\mathrm{O}}\cup S_{\mathrm{R}}\) such that \(h^{\prime}\preceq\delta\). If \(h^{\prime}\neq h\), then the condition still holds in the next iteration, since \(h^{\prime}\) remains in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\). On the other hand, if \(h^{\prime}=h\), then there are three cases:
i) If the condition on Line 6 is true, then there exists \(h^{\prime\prime}\in(S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) such that \(h^{\prime\prime}\preceq h\preceq\delta\). Since \(h^{\prime\prime}\) remains in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) at the start of the next iteration, candidate \(\delta\) is covered.
ii) If the condition on Line 9 is true, \(h\) is simply moved from \(S_{\rm O}\) to \(S_{\rm R}\), so \(S_{\rm O}\cup S_{\rm R}\) remains unchanged and the coverage property holds by induction.
iii) If neither of these two conditions is satisfied, \(h\) will be removed from \(S_{\rm O}\) and its children added instead. In this case, \(h\) is not a candidate (it failed the test on Line 9) whereas \(\delta\) is, so \(h\) cannot be equal to \(\delta\); since \(h\preceq\delta\), we have \(h\prec\delta\). Hence, there exists at least one hypothesis \(h^{\prime\prime}\) such that \(h\prec h^{\prime\prime}\preceq\delta\), and any minimal such hypothesis is a child of \(h\). Hence candidate \(\delta\) is covered at the next iteration by at least one child \(h^{\prime\prime}\) of \(h\) that has been added to \(S_{\rm O}\). \(\Box\)
**Theorem 8**: _PFS returns the minimal diagnosis. Furthermore, if the hypothesis space is finite, then PFS terminates._
**Proof:** Let \(S_{\rm R}\) be the result of the algorithm (assuming it terminates). We prove that \(S_{\rm R}\subseteq\Delta_{\preceq}\), and then that \(\Delta_{\preceq}\subseteq S_{\rm R}\).
\(S_{\rm R}\) is initially empty and elements are added (Line 10) only when they are proved to be candidates: hence \(S_{\rm R}\subseteq\Delta\). Furthermore we know from Lemma 3 that \(S_{\rm R}\cup S_{\rm O}\) covers the diagnosis at all times. Assume the non-minimal diagnosis candidate \(h=\delta\) is added to \(S_{\rm R}\) in some iteration. This means that \(\delta\) is the hypothesis popped from \(S_{\rm O}\) in this iteration. Since \(\delta\) is non-minimal, there exists a preferred candidate \(\delta^{\prime}\prec\delta\), and this candidate is covered: \(\exists h^{\prime}\in S_{\rm R}\cup S_{\rm O}:\ h^{\prime}\preceq\delta^{\prime}\). This, however, means that \(h^{\prime}\preceq\delta\) with \(h^{\prime}\neq\delta\), so the condition at Line 6 would have held and \(\delta\) would have been skipped rather than added, a contradiction. Hence, \(S_{\rm R}\) contains only minimal candidates.
At the end of the algorithm, \(S_{\rm O}\) is empty, so \(S_{\rm R}\) alone covers the diagnosis. Hence, for any minimal candidate \(\delta\), there exists a hypothesis \(h\preceq\delta\) that appears in \(S_{\rm R}\). But \(S_{\rm R}\) contains only minimal candidates, and the only minimal candidate \(\delta^{\prime}\) that satisfies \(\delta^{\prime}\preceq\delta\) is \(\delta\) itself. Therefore, all minimal candidates appear in \(S_{\rm R}\).
To show termination, we prove that \(S_{\rm O}\) eventually becomes empty.
At each iteration, one hypothesis \(h\) is removed from \(S_{\rm O}\); under certain conditions, the children of \(h\) are also added to \(S_{\rm O}\). We show that when this happens, the hypothesis \(h\) that was just removed can never re-enter \(S_{\rm O}\) in any future iteration.
A hypothesis \(h\) can be added to \(S_{\rm O}\) only in the iteration in which one of its parents was removed from \(S_{\rm O}\). Thus, if no ancestor of \(h\) is currently in \(S_{\rm O}\) then \(h\) cannot be added to \(S_{\rm O}\) in any future iteration.
Consider a hypothesis \(h\), removed from \(S_{\rm O}\) in the current iteration, and suppose that the algorithm reaches Line 12, so that children of \(h\) are added to \(S_{\rm O}\). This means the condition on Line 6 does not hold, which means there is no ancestor of \(h\) in \(S_{\rm O}\) (or in \(S_{\rm R}\)). Hence, \(h\) can never re-enter \(S_{\rm O}\). \(\Box\)
In general, there is no guarantee that PFS will terminate when the hypothesis space is infinite.
This is illustrated by the two examples below. In the first, the lack of termination comes from _useless_ hypotheses, which have no candidates among their descendants. As the second example shows, even pruning those useless hypotheses is not sufficient to ensure termination.
**Example 2**: _Consider a SqHS with two faults \(f_{1}\) and \(f_{2}\), and suppose that the diagnosis is \(\Delta=\{[f_{1}]\}\). Then, PFS will never end. Table 1 shows a possible evolution of PFS. PFS is unaware of the fact that no descendant of \([f_{2},f_{2},\ldots,f_{2}]\) is a candidate, and will therefore explore this branch forever._
**Example 3**: _Consider again a SqHS with two faults \(f_{1}\) and \(f_{2}\), and consider that the diagnosis is \(\Delta=\{[f_{1}],[f_{1},f_{2}],[f_{1},f_{2},f_{2}],\ldots\}\), i.e., any hypothesis that starts with \(f_{1}\), followed by any number
of \(f_{2}\). Then, all hypotheses of the form \([f_{2},\ldots,f_{2}]\) have a child that is a candidate (the hypothesis with \(f_{1}\) added to the beginning of the sequence), and hence none of them are useless. This makes it possible for PFS to explore an infinite path in the hypothesis space without encountering any candidate, thus never terminating._
However, termination can be guaranteed by pruning a different type of hypothesis. We call _undesirable_ those hypotheses that are not ancestors of any minimal candidate (formally, descendants\((h)\cap\Delta_{\preceq}=\emptyset\)). Again, assuming that all hypotheses have finite depth, pruning undesirable hypotheses guarantees termination.
In fact, we use an even stronger pruning condition, which discards all undesirable hypotheses as well as some hypotheses that do not satisfy the undesirability condition but are redundant because the candidates they can lead to are covered by some other hypothesis. We call these hypotheses _non-essential_. Pruning non-essential hypotheses works better than pruning only the undesirable hypotheses for two reasons: First, because the undesirability condition cannot be directly tested during search, since the minimal diagnosis \(\Delta_{\preceq}\) is not known; the essentiality property, on the other hand, is straightforward to test. Second, pruning more hypotheses, as long as it does not compromise completeness of the returned diagnosis, is of course preferable since it leads to less search. Note that the part of the proof of Theorem 9 that establishes termination does not actually depend on pruning non-essential hypotheses; it continues to hold also if only undesirable hypotheses are pruned.
A hypothesis \(h\) is said to be non-essential, with respect to \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\), if all candidates \(\delta\) that are descendants of \(h\) are also descendants of some other hypothesis in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\). The proof of Theorem 8 relies on the coverage property which states that for every candidate \(\delta\) some \(h\preceq\delta\) (either \(\delta\) or one of its ancestors) appears in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) at the start of every iteration. Therefore, if \((S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) covers the diagnosis, then \(h\) can be safely discarded from \(S_{\mathrm{O}}\) without losing the coverage property. Because \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) always covers the diagnosis (by Lemma 3), \(h\) is non-essential exactly when \((S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) also covers the diagnosis. Note that an undesirable hypothesis \(h\) is always non-essential w.r.t. \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\) if \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) covers the minimal diagnosis. Therefore, any undesirable hypothesis will be pruned by skipping non-essential hypotheses in PFS. The non-essential test is shown in Algorithm 4. It is added to PFS between Lines 8 and 9. We call the resulting algorithm PFS+e.
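The following sketch illustrates where this check sits in the loop (it is not a reproduction of Algorithm 4): after \(h\) has been popped and has passed the check of Line 6, the algorithm asks whether the remaining open hypotheses together with the accepted candidates still cover the diagnosis; if so, \(h\) is non-essential and is silently dropped. The `covers` oracle stands for the coverage test answered by the test solver; all names are illustrative.

```
def pfs_e_body(h, open_list, result, covers, is_candidate, children):
    """One iteration body of PFS+e, after h has been popped and passed Line 6."""
    if covers(list(open_list) + list(result)):
        return                              # h is non-essential: discard it
    if is_candidate(h):
        result.add(h)
    else:
        open_list.extend(children(h))
```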
**Theorem 9**: _PFS+e returns the minimal diagnosis. Furthermore, if all hypotheses of the hypothesis space have finite depth, then PFS+e terminates._
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(S_{\mathrm{O}}\) & \(S_{\mathrm{R}}\) & next element popped \\ \hline \(\{[]\}\) & \(\{\}\) & \([]\) \\ \(\{[f_{1}],[f_{2}]\}\) & \(\{\}\) & \([f_{1}]\) \\ \(\{[f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2}]\) \\ \(\{[f_{1},f_{2}],[f_{2},f_{1}],[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{1},f_{2}]\) \\ \(\{[f_{2},f_{1}],[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2},f_{1}]\) \\ \(\{[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2},f_{2}]\) \\ \(\{[f_{1},f_{2},f_{2}],[f_{2},f_{1},f_{2}],[f_{2},f_{2},f_{1}],[f_{2},f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{1},f_{2},f_{2}]\) \\ \(\ldots\) & \(\ldots\) & \(\ldots\) \\ \hline \end{tabular}
\end{table}
Table 1: Possible evolution of PFS
**Proof:** That PFS+e returns the minimal diagnosis can be shown simply by proving that the coverage property (Lemma 3) still holds. We now have a fourth case in the induction step of the proof: if \(h\) is found to be non-essential, it is discarded without its children being added to \(S_{\mathrm{O}}\). However, this test checks the very property that we want to enforce, namely that \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) covers the diagnosis, so when it triggers and the algorithm returns to the start of the **while** loop, the coverage property still holds.
Next, we consider termination. Let \(S_{\mathrm{O}}@0,\ldots,S_{\mathrm{O}}@i,\ldots\) represent the content of the set \(S_{\mathrm{O}}\) at the start of each iteration of the **while** loop, when the loop condition is evaluated. We need to show that \(S_{\mathrm{O}}@i\) will eventually be empty. To do this, we make use of the following three facts which we then prove:
i) Let \(A=\{h\in\mathbb{H}\mid\exists\delta\in\Delta_{\preceq}.\)\(h\preceq\delta\}\): \(A\) is finite.
ii) \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@k\cap A\Rightarrow\forall j\in\{i, \ldots,k\}.\)\(S_{\mathrm{O}}@j\cap A=S_{\mathrm{O}}@i\cap A\).
iii) \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@(i+1)\cap A\Rightarrow S_{\mathrm{O}}@(i +1)\subset S_{\mathrm{O}}@i\).
Assume the sequence \(S_{\mathrm{O}}@i\) goes on forever. By (iii) and because \(S_{\mathrm{O}}\) is always finite, the intersection of \(S_{\mathrm{O}}\) and \(A\) changes infinitely often. Furthermore, by (i) there is only a finite number of intersections of \(S_{\mathrm{O}}\) and \(A\), which means that the same intersection must eventually reappear. This contradicts (ii).
It remains to prove claims (i) - (iii).
i) First, note that \(A=\bigcup_{\delta\in\Delta_{\preceq}}\mathrm{ancestors}(\delta)\). Because \(\Delta_{\preceq}\) is finite, \(A\) is finite iff the set of ancestors of every minimal candidate is finite. Consider a minimal candidate \(\delta\) and let \(d\) be its depth, which is finite by the assumption of the theorem; its ancestors all have depth \(d\) or less. We prove, by induction, that the set of hypotheses of depth \(d\) or less is finite, for any \(d\). This is true for \(d=1\), since only \(h_{0}\) has depth 1. Assume it is true for \(d-1\). By definition of depth, every hypothesis \(h\) of depth \(d\) is a child of some hypothesis \(h^{\prime}\) of depth \(d-1\). Since there is a finite number of hypotheses \(h^{\prime}\) at depth \(d-1\), by the inductive assumption, and each of them has a finite number of children (because the hypothesis space is well partially ordered), there can only be a finite number of hypotheses of depth \(d\). Thus, the number of hypotheses of depth \(d\) or less is also finite.
ii) Assume \(i<j<k\) such that \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@k\cap A\) and \(S_{\mathrm{O}}@i\cap A\neq S_{\mathrm{O}}@j\cap A\). Let \(A^{\prime}\subseteq A\) be the set of hypotheses \(h\) that are added at some point between iteration \(i\) and iteration \(k\), that is, \(A^{\prime}=\{h\in A\mid\exists\ell\in\{i,\ldots,k-1\}.\)\(h\not\in S_{\mathrm{O}}@\ell\wedge h\in S_{\mathrm{O}}@(\ell+1)\}\). Clearly \(A^{\prime}\) is not empty: since \(S_{\mathrm{O}}@i\cap A\neq S_{\mathrm{O}}@j\cap A\), some hypothesis has either been added between \(i\) and \(j\), or some hypothesis has been removed between \(i\) and \(j\), in which case it must be added again before iteration \(k\). Let \(h\) be a hypothesis that is minimal in the set \(A^{\prime}\). Since \(h\) is added to \(S_{\mathrm{O}}\) at some point between iteration \(i\) and iteration \(k\), a parent \(h^{\prime}\) of \(h\) must be removed at the same iteration (the only way to add an element to \(S_{\mathrm{O}}\) is through Line 12). However, if \(h^{\prime}\) is removed from \(S_{\mathrm{O}}\), it must be added again to \(S_{\mathrm{O}}\) at some later point, as otherwise \(S_{\mathrm{O}}@k\cap A\) could not equal \(S_{\mathrm{O}}@i\cap A\). This means \(h^{\prime}\) also belongs to \(A^{\prime}\), and since it is a parent of \(h\), this contradicts the choice of \(h\) as a minimal hypothesis in \(A^{\prime}\).
iii) Consider an iteration \(i\) such that \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@(i+1)\cap A\). Because \(S_{\mathrm{O}}\cap A\) is unchanged, the hypothesis \(h\) chosen at iteration \(i\) does not belong to \(A\). Any hypothesis not in \(A\) is, by definition,
undesirable, since \(A\) contains all ancestors of all minimal candidates. Thus, since \(S_{\mathrm{O}}@i\cup S_{\mathrm{R}}@i\) covers the minimal diagnosis (by Lemma 3), so does \((S_{\mathrm{O}}@i\cap A)\cup S_{\mathrm{R}}@i\), and consequently so does \((S_{\mathrm{O}}@i\setminus\{h\})\cup S_{\mathrm{R}}@i\). Thus, \(h\) fails the essentiality test in PFS+e, so no children of \(h\) are added to \(S_{\mathrm{O}}\) and we have \(S_{\mathrm{O}}@(i+1)=S_{\mathrm{O}}@i\setminus\{h\}\). \(\Box\)
### Conflict-Based Strategy
The conflict-based strategy is an improvement of PFS. The idea is to extract the core reason why hypothesis \(h\) is not a candidate, in order to reduce the number of successors of \(h\) that need to be inserted in the open list.
We define a conflict as an implicit representation of a set of hypotheses that are not candidates.
**Definition 5**: _A conflict \(C\) is an object that represents a set \(\mathit{hypos}(C)\) of hypotheses that does not intersect the diagnosis:_
\[\mathit{hypos}(C)\cap\Delta=\emptyset.\]
We now assume that the test solver is not only able to decide whether the diagnosis intersects the specified set of hypotheses, but also to return a conflict in case the test fails. The following definition of a test result extends Definition 3.
**Definition 6**: _The result of a test \(\langle\mathit{Mod},o,H\rangle\) is either a hypothesis \(h\in\Delta(\mathit{Mod},o,\mathbb{H})\cap H\) or a conflict \(C\) such that \(H\subseteq\mathit{hypos}(C)\)._
In the worst case, i.e., if the test solver is not able to provide useful information, the conflict can be defined such that \(\mathit{hypos}(C)=H\).
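In code, the extended test result of Definition 6 can be rendered as a simple tagged union; the class names below are illustrative only.

```
from dataclasses import dataclass
from typing import FrozenSet, Union

@dataclass(frozen=True)
class CandidateFound:
    hypothesis: object          # some candidate h, with h in the tested set H

@dataclass(frozen=True)
class ConflictFound:
    properties: FrozenSet       # a conflict C such that H is a subset of hypos(C)

TestResult = Union[CandidateFound, ConflictFound]
```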
Two problems need to be solved at this stage: i) how do we compute conflicts, and ii) how do we use conflicts for diagnosis. We first concentrate on the second issue.
#### 6.3.1 Using Conflicts for Diagnosis
A conflict can be useful in two different ways.
First, a conflict can be used to avoid certain tests. For instance, let \(h\) be a hypothesis, the candidacy of which we want to test, and let \(C\) be a conflict that was previously computed. If \(h\in\mathit{hypos}(C)\), then \(h\not\in\Delta\) (by definition of a conflict). Therefore, inclusion in a conflict can serve as an early detection that a hypothesis is not a candidate.
The second use of a conflict is to reduce the number of successors that need to be generated after a candidacy test failed. Again, let \(h\) be a hypothesis and let \(C\) be a conflict such that \(h\in\mathit{hypos}(C)\). Remember that the correctness of PFS relies on the fact that all diagnosis candidates are covered by a hypothesis from the open list or by a hypothesis from the already discovered minimal candidates. When \(h\) is proved to be a non-candidate, we no longer need to get \(h\) covered, but we need to cover the set \(S\) of all strict descendants of \(h\), which is the reason why Algorithm 3 includes all the minimal elements of \(S\) (the children of \(h\)) in the open list. Now however, not only do we know that \(h\) is not a candidate, but the same also applies to all the hypotheses of \(\mathit{hypos}(C)\). Therefore, we may include in the open list the minimal elements of \(S\setminus\mathit{hypos}(C)\). This is illustrated with Algorithm 5 where the conflict is used to compute the set of successors. We call PFS+ec (resp. PFS+c) the variant of PFS+e (resp. PFS) that uses conflicts.
```
if\(\text{candidate}(h)\)then \(S_{\text{R}}:=S_{\text{R}}\cup\{h\}\) else Let \(C\) be the conflict generated by the test. \(S_{\text{O}}:=S_{\text{O}}\cup\min_{\preceq}(\text{descendants}(h)\setminus \text{hypos}(C))\) endif
```
**Algorithm 5** Replacement of the If statement Lines 9-13 of Algorithm 3.
**Theorem 10**: _PFS+ec returns the minimal diagnosis. Furthermore if all hypotheses of the hypothesis space have finite depth, then PFS+ec terminates._
**Proof:** The correct outcome of the algorithm is again proved by updating the coverage property (Lemma 3). Item iii) of the proof needs to be updated as follows. Candidate \(\delta\) is covered by the hypothesis \(h\) that has been disproved (\(\delta\in\text{descendants}(h)\)). Because \(\delta\) is a candidate and \(C\) is a conflict, \(\delta\not\in\text{hypos}(C)\). Hence \(\delta\in\text{descendants}(h)\setminus\text{hypos}(C)\). Since the hypothesis space is well partially ordered, \(\min_{\preceq}(\text{descendants}(h)\setminus\text{hypos}(C))\) is not empty and therefore, when the hypotheses in this set are added to \(S_{\text{O}}\) (line 5), at least one of them will cover \(\delta\) at the next iteration of the algorithm.
The proof for termination of PFS+e also applies to PFS+ec. \(\Box\)
We now illustrate how PFS+ec can accelerate the diagnosis. First, it may reduce the number of successors that need to be generated.
**Example 4**: _Consider a SqHS with three fault events \(f_{1}\), \(f_{2}\), and \(f_{3}\). PFS+ec first tests the empty sequence \(h_{0}=[]\). Assuming \(h_{0}\) is not a candidate, PFS would generate three successors, \([f_{1}]\), \([f_{2}]\), and \([f_{3}]\). Assume now that the test solver finds the conflict \(C\) specifying that either fault \(f_{1}\) or fault \(f_{2}\) occurred. This conflict rejects all hypotheses that contain only \(f_{3}\) faults. It is not difficult to show that the minimal elements of \(\text{descendants}(h_{0})\setminus\text{hypos}(C)\) are \([f_{1}]\) and \([f_{2}]\). In other words, the conflict allowed us to discard the hypothesis \([f_{3}]\)._
But conflicts can also allow us to consider hypotheses that are "deeper" than the natural successors, thus skipping intermediate steps.
**Example 5**: _Consider the same example as before, but this time with the conflict \(C\) that excludes \(h_{0}\) and all hypotheses with a single fault. Then the successors of \(h_{0}\) become: \([f_{1},f_{1}]\), \([f_{1},f_{2}]\), \([f_{1},f_{3}]\), \([f_{2},f_{1}]\), \([f_{2},f_{2}]\), \([f_{2},f_{3}]\), \([f_{3},f_{1}]\), \([f_{3},f_{2}]\), and \([f_{3},f_{3}]\). PFS+ec therefore does not need to test any of \(h_{0}\)'s three children._
#### 6.3.2 Computing Conflicts and Successors
So far, we have merely characterised the set of successors rejected by a conflict, but have not explained how to compute conflicts and successors in practice. A key issue addressed by our approach below is that the set \((\text{descendants}(h)\setminus\text{hypos}(C))\) is infinite in general.
We first discuss the _computation of conflicts_. Whilst our approach restricts the type of conflicts computed, it makes it easy to test for inclusion in a conflict and compute successors. Conflicts
are represented symbolically, similarly to the tested hypotheses. A conflict is a set of hypothesis properties which, as explained in Definition 4, is an implicit representation of a set of hypotheses:
\[C\subseteq\mathbb{P}.\]
To see how conflicts are computed, remember that the test solver is given a set \(P\) of properties that represents exactly the set \(H\) of hypotheses to be tested (\(H=\mathit{hypos}(P)\)). The task of the test solver is essentially to find an "explanation" of the observation that satisfies all these properties. If no such explanation exists (and, consequently, the test fails), then the solver may be able to track all the properties \(P^{\prime}\subseteq P\) that it used to decide the failure. Clearly:
* no hypothesis that satisfies \(P^{\prime}\) is a candidate; hence \(P^{\prime}\) is a conflict;
* the set of hypotheses represented by \(P^{\prime}\) is a superset of the set of hypotheses represented by \(P\): \(P^{\prime}\subseteq P\Rightarrow\mathit{hypos}(P^{\prime})\supseteq\mathit{ hypos}(P)=H\).
Therefore, \(P^{\prime}\) is returned as the result of the diagnosis test.
Given this definition of conflict, we now discuss the efficient _computation of the successors_ of a hypothesis rejected by a conflict. First, observe that PFS searches using Question 1, which, as stated in Subsection 4.3, can be formulated via two properties of the form \(p_{\mathrm{desc}}(\cdot)\) and \(p_{\mathrm{anc}}(\cdot)\), or alternatively via a \(p_{\mathrm{desc}}(\cdot)\) property in conjunction with a set of \(\neg p_{\mathrm{desc}}(\cdot)\) properties. We choose the latter representation, as using more properties will enable the generation of a more general conflict and increase efficiency.
Second, the property of the form \(p_{\mathrm{desc}}(\cdot)\) can be ignored for the purpose of computing successors. This is because the successors of \(h\) (as defined in Algorithm 5) should contradict at least one property of the conflict but cannot contradict a \(p=p_{\mathrm{desc}}(h^{\prime})\) property: clearly if \(p\) is a property of \(h\) then \(h^{\prime}\preceq h\) and all descendants \(h^{\prime\prime}\) of \(h\) satisfy \(h^{\prime}\preceq h\preceq h^{\prime\prime}\), which means that \(p\) is also a property of \(h^{\prime\prime}\). Therefore, no successor of \(h\) will contradict \(p\) and, as a consequence, properties of the form \(p_{\mathrm{desc}}(h^{\prime})\) can be ignored to determine the successors. Formally, \(\mathrm{descendants}(h)\setminus\mathit{hypos}(C)=\mathrm{descendants}(h) \setminus\mathit{hypos}(C^{\prime})\) where \(C^{\prime}\) is the subset of properties of \(C\) that are of type \(\neg p_{\mathrm{desc}}(h^{\prime})\); notice that this does not imply that \(C^{\prime}\) is a conflict.
Now, let \(h\) and \(h^{\prime}\) be two hypotheses. We write \(h\otimes h^{\prime}\) for the set of least common descendants of \(h\) and \(h^{\prime}\), i.e., \(h\otimes h^{\prime}=\min_{\preceq}(\mathrm{descendants}(h)\cap\mathrm{descendants}(h^{ \prime}))\). The following result holds:
**Lemma 4**: _Let \(S\) be a set of hypotheses and let \(C_{S}=\{\neg p_{\mathit{desc}}(h^{\prime})\in\mathbb{P}\mid h^{\prime}\in S\}\) be a set of properties. Let \(h\) be a hypothesis. Then,_
\[\min_{\preceq}(\mathrm{descendants}(h)\setminus\mathit{hypos}(C_{S}))=\min_{ \preceq}(\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}).\]
**Proof:** This proof is in two parts: first, we prove that if \(S_{1}\) covers \(S_{2}\) (i.e., for all hypotheses of \(S_{2}\), there exists a preferred hypothesis in \(S_{1}\)) and conversely, then their minimal sets are equal; second, we prove that the two-way coverage holds for \(S_{1}=\mathrm{descendants}(h)\setminus\mathit{hypos}(C_{S})\) and for \(S_{2}=\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}\).
Let \(S_{1}\) and \(S_{2}\) be two sets of hypotheses such that \(\forall\{i,j\}=\{1,2\}\ \forall h_{i}\in S_{i}\ \exists h_{j}\in S_{j}\ h_{j}\preceq h_{i}\). Consider an element \(h_{i}\in\min_{\preceq}(S_{i})\); since \(h_{i}\in S_{i}\), there exists \(h_{j}\in S_{j}\) such that \(h_{j}\preceq h_{i}\). Furthermore, since \(h_{j}\in S_{j}\), there exists \(h^{\prime}_{i}\in S_{i}\) such that \(h^{\prime}_{i}\preceq h_{j}\). Hence \(h^{\prime}_{i}\preceq h_{i}\) and therefore \(h^{\prime}_{i}=h_{i}\) (if \(h^{\prime}_{i}\prec h_{i}\), then \(h_{i}\) would not be minimal). Consequently \(h^{\prime}_{i}\preceq h_{j}\preceq h_{i}\) and \(h^{\prime}_{i}=h_{i}\), which implies that \(h_{i}=h_{j}\). Thus \(\min_{\preceq}S_{1}=\min_{\preceq}S_{2}\).
Assume now \(S_{1}=\text{descendants}(h)\setminus\text{\it hypos}(C_{S})\) and \(S_{2}=\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}\). We prove that \(S_{1}\) covers \(S_{2}\), and in the next paragraph we prove that the converse holds as well. Let \(h_{2}\in S_{2}\) and let \(h^{\prime}\in S\) be a hypothesis such that \(h_{2}\in h\otimes h^{\prime}\); then \(h\preceq h_{2}\) and \(h^{\prime}\preceq h_{2}\); hence \(h_{2}\in\text{descendants}(h)\) and \(h_{2}\not\in\text{\it hypos}(C_{S})\) (since \(C_{S}\) includes the property \(\neg p_{\text{desc}}(h^{\prime})\), which \(h_{2}\) violates).
Let \(h_{1}\in S_{1}\) be a hypothesis. By definition \(h_{1}\) is a descendant of \(h\) and does not belong to \(\text{\it hypos}(C_{S})\); hence there exists \(h^{\prime}\in S\) such that \(h^{\prime}\preceq h_{1}\). Then \(h_{1}\) is a common descendant of \(h\) and \(h^{\prime}\), so by definition \(h\otimes h^{\prime}\subseteq S_{2}\) contains a hypothesis \(h_{2}\) such that \(h_{2}\preceq h_{1}\). \(\Box\)
Lemma 4 gives us a way to compute the set of successors. Indeed, it should be clear that \(h\otimes h^{\prime}\) is finite for any \(h\) and any \(h^{\prime}\) since the hypothesis space is a well partial order. Therefore, the union in Lemma 4 can be enumerated and the minimal elements found by pairwise hypothesis comparisons.
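A direct implementation of this computation is sketched below; `otimes(h1, h2)` is assumed to return the finite set of least common descendants of the two hypotheses, `preceq` the preference order, and `conflict_desc` the set \(S\) of hypotheses whose \(\neg p_{\mathit{desc}}(\cdot)\) properties make up the conflict \(C_{S}\). All names are illustrative.

```
def successors(h, conflict_desc, otimes, preceq):
    """Compute the minimal elements of descendants(h) minus hypos(C_S), as in Lemma 4."""
    pool = set()
    for h_prime in conflict_desc:
        pool |= set(otimes(h, h_prime))
    # keep only the minimal elements of the union
    return {x for x in pool
            if not any(y != x and preceq(y, x) for y in pool)}
```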
The implementation of operator \(\otimes\) is often simple. We now give concrete realisations for some of the hypothesis spaces we introduced.
In SHS, a hypothesis is a set of faults. The single hypothesis \(h^{\prime\prime}\) such that \(\{h^{\prime\prime}\}=h\otimes h^{\prime}\) is then \(h^{\prime\prime}=h\cup h^{\prime}\).
In MHS, a hypothesis associates each fault with a number of occurrences. Again, \(h\otimes h^{\prime}\) produces a single hypothesis \(h^{\prime\prime}\), which is defined by \(h^{\prime\prime}(f)=\max\{h(f),h^{\prime}(f)\}\) for all fault \(f\).
In SqHS, multiple hypotheses can be minimal common descendants of \(h\) and \(h^{\prime}\). Such hypotheses \(h^{\prime\prime}\) contain both \(h\) and \(h^{\prime}\) as subsequences. The set can be computed by progressing through \(h\), through \(h^{\prime}\), or through both at the same time (when the current fault is the same), until the end of both sequences is reached. Certain non-minimal hypotheses may still slip in, and must be removed. For instance, if \(h=[a,b]\) and \(h^{\prime}=[b,c]\), the procedure described above produces \(\{[a,b,b,c],[a,b,c,b],[a,b,c],[b,a,b,c],[b,a,c,b],[b,c,a,b]\}\), but the result is actually \(h\otimes h^{\prime}=\{[a,b,c],[b,a,c,b],[b,c,a,b]\}\).
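The following sketch implements \(\otimes\) for SqHS under the assumption that hypotheses are tuples of fault names and that \(h\preceq h^{\prime}\) means \(h\) is a subsequence of \(h^{\prime}\); it enumerates the merges described above and then discards the non-minimal ones, reproducing the \([a,b]\otimes[b,c]\) example.

```
def is_subsequence(h, g):
    it = iter(g)
    return all(f in it for f in h)           # 'in' advances the iterator

def merges(h1, h2):
    """All sequences obtained by progressing in h1, in h2, or in both at once."""
    if not h1:
        return {h2}
    if not h2:
        return {h1}
    out = {(h1[0],) + m for m in merges(h1[1:], h2)}
    out |= {(h2[0],) + m for m in merges(h1, h2[1:])}
    if h1[0] == h2[0]:                        # same fault: may progress in both
        out |= {(h1[0],) + m for m in merges(h1[1:], h2[1:])}
    return out

def otimes_sqhs(h1, h2):
    pool = merges(tuple(h1), tuple(h2))
    return {x for x in pool
            if not any(y != x and is_subsequence(y, x) for y in pool)}

# otimes_sqhs(('a', 'b'), ('b', 'c')) yields
# {('a','b','c'), ('b','a','c','b'), ('b','c','a','b')}
```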
## 7 Related Work
The AI and control communities have developed a wide spectrum of diagnosis approaches targeting static or dynamic, discrete event, continuous, or hybrid systems. Obviously, we cannot discuss all of these. For instance, we do not cover approaches in state estimation or probabilistic diagnosis whose goal is to compute a probability distribution on candidates (Thorsley & Teneketzis, 2005; Stern, Kalech, Rogov, & Feldman, 2015). Instead, we focus our discussion on the frameworks which ours generalises. This includes in particular the founding works of Reiter (Reiter, 1987), de Kleer and Williams (de Kleer & Williams, 1987), and approaches that employ related algorithmic frameworks (Feldman, Provan, & van Gemund, 2010).
### Connection with Reiter's Theory
Reiter's work (Reiter, 1987) is a key inspiration to the present theory. Similarly to Reiter's, our objective is a general theory of diagnosis from first principles, which determines the preferred diagnosis hypotheses solely from the available description of the system and of its observed behaviour, and which is independent from the way systems, hypotheses, and observations are represented.
Our work generalises Reiter's in two significant ways. First, Reiter only considers the set hypothesis space (SHS). This space has many properties (Staroswiecki, Commault, & Dion, 2012), which allowed Reiter to propose a more specific implementation of PFS+c (diagnose). SHS is finite, which means that termination is not an issue (by no means does this imply that Reiter and other researchers did not try to accelerate termination). It is also a lattice, i.e., any pair \(\{h,h^{\prime}\}\) of hypotheses has a unique least upper bound and a unique greatest lower bound; practically, this means that \(h\otimes h^{\prime}\) is always a singleton, which simplifies successor computation. Finally, and most importantly, each hypothesis can be defined as the intersection of sets of descendants or non-descendants of hypotheses of depth 1. For instance, if \(F=\{f_{1},f_{2},f_{3}\}\), then \(\{f_{1},f_{2}\}\) is the unique element of descendants\((\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\cap(\mathbb{H}\setminus \mbox{descendants}(\{f_{3}\}))\). Similarly, the set of descendants of any hypothesis is the intersection of descendants of hypotheses of depth 1: descendants\((\{f_{1},f_{2}\})=\mbox{descendants}(\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\). Practically, this means that there exists a specialised property space that can be used to uniformly represent all hypotheses and that leads to conflicts that generalise well across the hypothesis space. For all these reasons, Reiter did not have to introduce the more complex algorithmic machinery we use in this paper. However, our theory enables much richer hypothesis spaces to be considered.
This leads us to the second main difference with Reiter's work: whilst system-independence was one of Reiter's original aims, his theory was mainly applied to circuits and other static systems (Dague, 1994). Dynamic systems and in particular DESs were investigated using totally different approaches. In part, this can be explained by the immaturity of available consistency-checking tools for DESs (model checkers and AI planners) at the time. However, dynamic systems also naturally lend themselves to diagnostic abstractions richer than the set hypotheses space, such as considering sequences of fault events (Cordier & Thiebaux, 1994).
### Connection with de Kleer's Theory
Reiter's theory applies to weak-fault models, which model only the correct behavior of components. De Kleer and Williams (de Kleer & Williams, 1987) extended Reiter's work to strong-fault models, which incorporate information about faulty behavior. They also used a different computational strategy, exploiting an assumption-based truth maintenance system (ATMS) (de Kleer, 1986). Their approach however still assumes the set hypothesis space.
Strong-fault models bring additional challenges to the development of a general theory of diagnosis. Weak-fault models have a certain monotonicity property: if \(\delta\preceq h\) and \(\delta\) is a candidate, then \(h\) is also a candidate. This is one justification for returning the minimal diagnosis: it implicitly represents all diagnosis candidates. Such a representation however is no longer possible with strong-fault models, and instead, a new notion of "kernel diagnosis" was introduced (de Kleer, Mackworth, & Reiter, 1990). A kernel diagnosis is the conjunction of descendants and non-descendants of specified sets of hypotheses, e.g. descendants\((\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\cap(\mathbb{H}\setminus\mbox{ descendants}(\{f_{3}\}))\cap(\mathbb{H}\setminus\mbox{descendants}(\{f_{4}\}))\), and the diagnosis can be represented by a (finite) set of maximal kernel diagnoses. Note that although all minimal candidates belong to some kernel diagnosis, i) this kernel diagnosis is not solely defined by the minimal candidate and ii) not all kernel diagnoses contain a minimal candidate.
The generalisation of a kernel diagnosis to a richer hypothesis space than SHS is not trivial. For strong-fault models, the main benefits of representing the diagnosis as a set of kernel diagnoses over a set of minimal diagnoses are that: i) the candidates can be easily enumerated; and ii) verifying that a hypothesis is a candidate is easy. A kernel diagnosis represented by a set of properties as defined in the present article satisfies these two criteria. However the set of kernel diagnoses may become infinite. To see this, consider the following example over a multiset hypothesis space (MHS) with two fault events \(f_{1}\) and \(f_{2}\); for simplicity a hypothesis will be written \(h_{i,j}\) which means that fault \(f_{1}\) occurred \(i\) times, and fault \(f_{2}\)\(j\) times. We assume that \(\Delta=\{h_{0,j}\mid j\mbox{ mod }2=1\}\), i.e.,
\(f_{1}\) did not occur and \(f_{2}\) occurred an odd number of times. The kernel diagnoses are the following:
\[\text{descendants}(h_{0,1+2i})\setminus\text{descendants}(h_{1,1+2i})\setminus \text{descendants}(h_{0,2+2i}),\quad i\in\mathbf{N}.\]
Such a representation of the diagnosis is infinite, which is why we advocate the computation of the minimal diagnosis.
The second characteristic of the theory developed by de Kleer and Williams is the use of an ATMS to generate all the maximal conflicts before computing the diagnosis. ATMSs compute these conflicts by propagating the consequences of assumptions on the hypothesis properties. However, assuming, as is the case in this article, that the conflicts are convex sets of non-candidate hypotheses, the set of maximal conflicts may be infinite. Consider again the above example, and let \(h_{0,i}\neq h_{0,j}\) be two non-candidate hypotheses. Clearly both hypotheses cannot be in the same convex conflict (at least one hypothesis between them is a candidate). Thus, using an ATMS to pre-generate maximal convex conflicts is not feasible in the context of more general hypothesis spaces.
Furthermore, even when the conflict set is finite, it can be prohibitively large and include many conflicts that are not needed to solve the problem. For instance, many conflicts will discard hypotheses that would not be minimal, even if they were candidates. An example of this, from the example above, is a conflict \(C\) where \(\mathit{hypos}(C)=\{h_{0,2}\}\). Such conflicts are not necessary to compute the minimal diagnosis. In the PFS algorithm, as well as other algorithms for computing hitting sets, the incremental generation of "useful" conflicts is preferable.
To avoid computing a potentially exponentially long list of minimal candidates, Williams and Ragno (Williams & Ragno, 2007) proposed to compute a subset of candidates that optimises some utility function (for instance, maximising the probability given a priori probabilities on faults).
### PLS-like Systems
Bylander et al. proposed an approach that bears some similarities with PLS+r, in that it finds any diagnosis candidate and then searches for a candidate within its parent set (Bylander et al., 1991). It assumes the set hypothesis space, and that the problem has the monotonicity property (\(\delta\preceq h\ \wedge\ \delta\in\Delta\Rightarrow h\in\Delta\)), like weak-fault models do. This algorithm does not return all minimal candidates.
SAFARI (Feldman et al., 2010) is a variant of this approach. It too assumes the SHS and a weak-fault model. The goal of this algorithm is to avoid the memory requirements associated with computing all the conflicts, as done with the ATMS approach, or maintaining an open list of hypotheses, as in the diagnose algorithm.
SAFARI first computes a diagnostic candidate. It then checks whether any parent of the current candidate is a candidate, in which case it iteratively searches for a candidate parent. Because the model is weak-fault, this approach is guaranteed to return a minimal candidate. When a minimal candidate is found, a new search is started. This approach does not guarantee that all minimal candidates will be found. Furthermore, to speed up the implementation, not all parents are checked: the refinement is stopped as soon as two parent checks fail.
### Explanatory Diagnosis of Discrete-Event Systems
Recently, Bertoglio et al. (Bertoglio, Lamperti, Zanella, & Zhao, 2020a, 2020b; Lamperti, Trerotola, Zanella, & Zhao, 2023) proposed the _explanatory diagnosis_ of discrete event systems. They compute all possible sequences of faults that are consistent with the observations. The number of such
sequences can be infinite in general, but they use regular expressions to represent them compactly. This diagnosis is more informative than the diagnosis traditionally computed for DES (set of faults).
There are several important differences between their work and ours. First, they compute the complete diagnosis while we focus on computing the _minimal_ diagnosis; restricting ourselves to minimal diagnosis allows us to use more efficient algorithms, while Bertoglio et al. must explore all behaviours exhaustively. Second, they define diagnosis candidates as _sequences_ of faults while we allow for more definitions. This is not restrictive per se, as the sequences of faults form the most abstract space, but this, again, implies that we can use algorithmic improvements specific to our hypothesis space. Thirdly, we use an approach based on consistency tests while Bertoglio et al. compute all behaviours consistent with the observations. Finally, our approach is not limited to discrete event systems.
### Navigating the Space of Plans
The problem of navigating through the space of possible plans in AI planning is very similar to the problem of diagnosis of discrete event systems. In classical planning, the optimal plan is generally the least expensive one. However, the preference relation is sometimes more complex. One example is oversubscription planning, which requires finding a plan that satisfies all the hard goals and a maximal subset of soft goals. Because the planner does not know which combination of soft goals the user would rather see achieved, it should return all (cost optimal) plans that are non-dominated, i.e., such that no other plan achieves a superset of the goals.
Such problems can be formulated in our framework. The observations are the language of all plans that reach the hard goals. A "hypothesis" associated with a plan represents the subset of soft goals that this plan achieves. A hypothesis is preferable to another one if it is a superset of the latter. We can then use our search strategies to efficiently search for solutions. Eifler et al. (Eifler, Cashmore, Hoffmann, Magazzeni, & Steinmetz, 2020) propose techniques that are similar to the search over the hypothesis space performed in model-based diagnosis. However, our approach allows for more sophisticated definitions of classes of plans: rather than two plans belonging to the same class if they achieve the same soft goals, the user could also be interested in the order in which these goals are achieved (for instance, the order in which certain people are visited). This can be modelled in our framework as a variant of the Sequence Hypothesis Space in which an element appears only once.
### Generation of Conflicts
The theory of diagnosis from first principles relies heavily on the notion of conflicts to explore the hypothesis space efficiently. Junker presented an algorithm dubbed QuickXplain for computing minimal conflicts from a consistency checker that is ignorant of the underlying problem (Junker, 2004). QuickXplain isolates the subset of properties responsible for inconsistency by iteratively splitting the set of properties and testing them separately.
Shchekotykhin et al. improved this work to produce several conflicts in a single pass (Shchekotykhin, Jannach, & Schmidt, 2015).
The applications mentioned in the papers cited above considered the Set Hypothesis Space but these algorithms are applicable to any hypothesis space and can be used in our framework to generate conflicts.
In the context of heuristic search planning, Steinmetz and Hoffmann (Steinmetz & Hoffmann, 2017) presented a technique to find conflicts (that are not guaranteed minimal). A conflict is a conjunction of facts such that any state that satisfies it is a dead-end from which the problem goal cannot be reached. Their algorithm uses the critical path heuristic \(h^{C}\) (Haslum, 2012), which lower bounds the cost of reaching the goal, as a dead-end detector, i.e., when \(h^{C}(s)=\infty\), the state \(s\) is a dead-end. The algorithm incrementally learns the value of the parameter \(C\), a set of conjunctions of facts, adding new conjunctions when a dead-end unrecognised by \(h^{C}\) is found by the search. In our implementation of a test solver based on heuristic search below, we build on a different planning heuristic, namely LM-cut.
### Other Diagnosis Approaches
Besides test-based approaches to diagnosis, two different classes of approaches have been developed (Grastien, 2013).
The first, which bears some similarities with the test-based approach, consists in determining, off-line, a mapping between assumptions on the diagnosis and patterns satisfied by the observation. Implementations include indicators and ARR (Staroswiecki & Comtet-Varga, 2001), possible conflicts (Pulido & Alonso Gonzalez, 2004), chronicles (Cordier & Dousson, 2000), and, in an extreme interpretation of this class, the Sampath diagnoser (Sampath et al., 1995). The problem with approaches of this kind is the potentially large (exponential, or worse) number of observation patterns that need to be built off-line.
The second approach consists in computing the set of behaviours that are consistent with the model and the observation, and extracting the diagnosis information from these behaviours. The main issue here is finding a representation of the set of behaviours that is compact enough and allows fast extraction of the diagnostic information. In circuit diagnosis, this approach has been pioneered by Darwiche and co-authors, and led to a thorough study of model compilation (Darwiche & Marquis, 2002; Darwiche, 2011). For DES diagnosis, this approach has dominated the research landscape (Pencole & Cordier, 2005; Su & Wonham, 2005; Schumann, Pencole, & Thiebaux, 2007; Kan John & Grastien, 2008; Zanella & Lamperti, 2003). The present paper significantly departs from existing work on DES diagnosis by offering a generalised test-based theory that encompasses DES and other types of dynamic systems.
## 8 Implementations
The framework presented in this paper was initially developed for the diagnosis of discrete event systems. In the DES case, the task of the test solver is to decide if there exists a sequence \(\sigma\) of events that is allowed by the system model (\(\sigma\in\mathit{Mod}\)), consistent with the observation (\(o(\sigma)\) holds) and matching a hypothesis in the test set \(H\) (\(\mathit{hypo}(\sigma)=h\) for some \(h\in H\)). Realistically, we must assume that the model is given in some compact, factored representation, such as a network of partially synchronised automata (Pencole & Cordier, 2005), a Petri net (Benveniste et al., 2003) or a modelling formalism using state variables and actions (Haslum, Lipovetzky, Magazzeni, & Muise, 2019). Even if these representations are in theory equivalent to a single finite automaton, the exponential size of that automaton means it can never be fully constructed in practice. Thus, the test solver must work directly on the factored representation. This is the same problem that is faced in model checking (Clarke et al., 2000) and AI planning (Ghallab et al., 2004), and techniques
from those areas can be adapted to solve it.
In this section, we present two examples of how test solvers for DES diagnosis, over different hypothesis spaces, can be implemented. One implementation uses a reduction to propositional satisfiability (SAT), while the other uses heuristic state space search. To ground the discussion, we first introduce a simple, concrete factored DES representation.
### Representation of Large DES
The representation that we will use to describe the test solver implementations below is a network of partially synchronised automata. This is a commonly used representation for DES diagnosis (Zanella & Lamperti, 2003; Pencole & Cordier, 2005; Su & Wonham, 2005).
The DES is defined by a set of _components_, \(\mathcal{C}\), and a global alphabet of _events_, \(\mathcal{E}\). Each component \(c\) is a finite state machine: it has a set of local states \(S_{c}\) and a local transition relation \(T_{c}\subseteq S_{c}\times\mathcal{E}_{c}\times S_{c}\), where \(\mathcal{E}_{c}\) is the set of events that component \(c\) participates in. As usual, \((s,e,s^{\prime})\in T_{c}\) means the component can change from state \(s\) to \(s^{\prime}\) on event \(e\). The global state of the system is the tuple of component states, and a _global transition_ is a set of simultaneous component transitions. Synchronisation is partial: if \(e\not\in\mathcal{E}_{c}\) then \(c\) does not perform a transition when \(e\) occurs. More formally, given a global state \((s_{1},\ldots,s_{n})\), event \(e\) induces a global transition to a new state \((s^{\prime}_{1},\ldots,s^{\prime}_{n})\) iff for each component \(c_{i}\), either (i) \((s_{i},e,s^{\prime}_{i})\in T_{c_{i}}\), or (ii) \(e\not\in\mathcal{E}_{c_{i}}\) and \(s^{\prime}_{i}=s_{i}\).
A subset \(\mathcal{E}_{O}\) of events is _observable_. In the diagnosis problems we consider, the observation is a sequence of observable events: \(\,o=e^{1}_{o},\ldots,e^{k}_{o}\). Another subset \(\mathcal{F}\subseteq\mathcal{E}\) consists of the designated fault events.
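The following Python sketch renders this representation directly; the names (`Component`, `global_successors`) are illustrative only.

```
from dataclasses import dataclass
from itertools import product
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Component:
    states: FrozenSet[str]
    events: FrozenSet[str]                        # E_c: events c participates in
    transitions: FrozenSet[Tuple[str, str, str]]  # T_c: triples (s, e, s')
    initial: str

def global_successors(components, global_state, event):
    """Global states reachable from global_state when event occurs: every
    component listening to the event takes a local transition, the others
    keep their state (partial synchronisation)."""
    options = []
    for comp, s in zip(components, global_state):
        if event not in comp.events:
            options.append([s])                   # component not involved
        else:
            nxt = [s2 for (s1, e, s2) in comp.transitions
                   if s1 == s and e == event]
            if not nxt:                           # event blocked by this component
                return []
            options.append(nxt)
    return [tuple(choice) for choice in product(*options)]
```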
### Implementation of PFS+ec using SAT
Propositional satisfiability (SAT) is the problem of finding a satisfying assignment to a propositional logic formula in conjunctive normal form (CNF), or proving that the formula is inconsistent. SAT has many appealing characteristics as a basis for implementing a test solver: modern SAT solvers based on clause learning are very efficient, both when the answer to the question is positive and when it is negative, and they can easily be modified to return a conflict. Reductions to SAT have previously been used to solve discrete event reachability problems for diagnosis (Grastien et al., 2007; Grastien & Anbulagan, 2013), AI planning (Kautz & Selman, 1996) and model checking (Biere, Cimatti, Clarke, Strichman, & Zhu, 2003).
The main disadvantage of reducing the reachability problem to SAT is that it requires a bound on the "parallel length" \(n\) of the sequence \(\sigma\) that is sought, and the size of the SAT encoding grows proportionally to this parameter.2 For the benchmark problem that we consider in our experiments (described in Section 9.3.1) this is not problematic: the structure of this benchmark allows us to prove that the maximum number of local transitions that can take place in any component between two observable events is at most 7, and therefore the parallel length of the sequence is bounded by \(7\times|o|\), where \(|o|\) is the number of observed events. For diagnosis of DESs where such a bound cannot be proven, however, this can be an issue.
Footnote 2: The SAT encoding allows parallel execution of non-synchronised local transitions in separate components. The semantics of such parallel execution is simple: a parallel set of global transitions is permitted iff every linearisation of it would be. The purpose of allowing this form of parallelism is only to reduce the size of the encoding.
In order to represent a path of parallel length \(n\) (where we take \(n=7\times|o|\)), we define SAT variables that model the state of every component between every pair of consecutive transitions, as well as variables that model which event occurred on each transition. For each \(s\in S_{c}\) of some component and each "timestep" \(t\in\{0,\ldots,n\}\), the propositional variable \(s@t\) will evaluate to _true_ iff the state of component \(c\) is \(s\) after the \(t\)-th transition. Similarly, for every event \(e\in\mathcal{E}\) and every timestep \(t\in\{1,\ldots,n\}\), the propositional variable \(e@t\) will evaluate to _true_ iff event \(e\) occurred in the \(t\)-th transition. For simplicity, we also define the propositional variable \(tr@t\), which represents whether the (component) transition \(tr\) was triggered at timestep \(t\).
The SAT clauses are defined to ensure that any solution to the SAT problem represents a path that satisfies the following three constraints (Grastien et al., 2007): (i) it should be allowed by the model; (ii) it should be consistent with the observations; (iii) its corresponding hypothesis should belong to the specified set \(H\).
The translation of the first constraint into SAT is summarised in Table 2. The first two lines ensure that the origin and target states of each transition are satisfied. The third line encodes the frame axiom, which specifies that a component state changes only as an effect of a transition. The fourth line is a cardinality constraint (Marques Silva & Lynce, 2007) which indicates that a component can only be in one state at a time. The fifth and sixth lines ensure that the transitions and events match, and the seventh line is a cardinality constraint whereby only one event can take place at a time for each component. The last line defines the initial state (component \(c\) starts in state \(s_{c0}\)).
The second constraint (i.e., that the path matches the observation) is very easy to encode. Given that the \(i\)-th observed event took place at timestep \(7\times i\), we know which observable events occurred at which timestep. This information is simply recorded as unit clauses, i.e., if observable event \(e\) occurred at timestep \(t\), the clause \(e@t\) is created; otherwise the clause \(\overline{e@t}\) is created. More complex observations can be encoded in SAT, for instance if the order between the observed events is only partially known (Haslum & Grastien, 2011).
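As an illustration, the sketch below generates a fragment of this encoding as DIMACS-style integer clauses: the first two lines of Table 2, the transition/event link, and the observation unit clauses. The frame axiom and the cardinality constraints are omitted for brevity, and all names are illustrative rather than part of the actual implementation.

```
from itertools import count

class Encoder:
    def __init__(self):
        self._ids, self._fresh, self.clauses = {}, count(1), []

    def var(self, key):                  # key is e.g. ("state", c, s, t)
        if key not in self._ids:
            self._ids[key] = next(self._fresh)
        return self._ids[key]

    def clause(self, *lits):
        self.clauses.append(list(lits))

def encode_model(enc, local_transitions, n):
    """local_transitions: one set of (s, e, s') tuples per component."""
    for c, trans in enumerate(local_transitions):
        for (s, e, s2) in trans:
            for t in range(1, n + 1):
                tr = enc.var(("tr", c, s, e, s2, t))
                enc.clause(-tr, enc.var(("state", c, s2, t)))     # tr@t -> s'@t
                enc.clause(-tr, enc.var(("state", c, s, t - 1)))  # tr@t -> s@(t-1)
                enc.clause(-tr, enc.var(("event", e, t)))         # tr@t -> e@t

def encode_observation(enc, observable_events, obs, n, gap=7):
    """The i-th observed event is forced at timestep gap*i; all other
    occurrences of observable events are forbidden."""
    when = {gap * (i + 1): e for i, e in enumerate(obs)}
    for t in range(1, n + 1):
        for e in observable_events:
            v = enc.var(("event", e, t))
            enc.clause(v if when.get(t) == e else -v)
```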
Finally the last constraint is that the hypothesis associated with the path should belong to the specified set. Remember that the set is implicitly represented by a collection of hypothesis properties. We have shown in Section 5.2 how hypothesis properties can be seen as regular languages or intersections of such languages; these languages can be represented as finite state machines which in turn can be translated to SAT similarly to the translation to SAT of the model.
However, for a given hypothesis space, it is usually possible to find a simpler, more compact, yet logically equivalent encoding of hypothesis properties. Let us first consider the set hypothesis space: the property of being a descendant of \(h\subseteq F\) can be represented by the clauses
\[f@1\vee\ldots\lor f@n,\quad\forall f\in h,\]
\begin{table}
\begin{tabular}{|l l l|} \hline \(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to s^{\prime}@t\) \\ \(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to s@(t-1)\) \\ \(\forall c\in\mathcal{C}\). \(\forall s\in S_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \((s@t\wedge\overline{s@(t-1)})\rightarrow\bigvee_{tr\in T_{c}}tr@t\) \\ \(\forall c\in\mathcal{C}\). \(\forall t\in\{0,\ldots,n\}\) & \(=_{1}\{s@t\mid s\in S_{c}\}\) \\ \(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to e@t\) \\ \(\forall c\in\mathcal{C}\). \(\forall e\in\mathcal{E}_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(e@t\rightarrow\bigvee_{tr\in T_{c}}tr@t\) \\ \(\forall c\in\mathcal{C}\). \(\forall t\in\{1,\ldots,n\}\) & \(\leq_{1}\{e@t\mid e\in\mathcal{E}_{c}\}\) \\ \(\forall c\in\mathcal{C}\) & & \(s_{c0}@0\) \\ \hline \end{tabular}
\end{table}
Table 2: Ensuring that the SAT solutions represent paths accepted by the model
which state that the faults \(f\in h\) must occur in \(\sigma\). On the other hand, the property of being an ancestor of \(h\) can be represented by the unit clauses
\[\overline{f@t},\quad\forall f\in F\setminus h,\ t\in\{1,\ldots,n\},\]
which state that the faults \(f\not\in h\) should not occur in \(\sigma\). For the multiset hypothesis space, these properties can be represented in a similar way using cardinality constraints: \(\sigma\) corresponds to a descendant (resp. ancestor) of \(h\) iff for all fault \(f\), \(\sigma\) exhibits more (resp. less) than \(h(f)\) occurrences of \(f\).
The encoding for the sequence hypothesis space is more complex. Let \(h=[f_{1},\ldots,f_{k}]\) be a hypothesis for which the property \(p_{\mathrm{desc}}(h)\) must be encoded. We write \(\{h_{0},\ldots,h_{k}\}\) for the set of prefixes of \(h\) such that \(h_{k}=h\). Consider another hypothesis \(h^{\prime}\succeq h_{i}\) for some \(i\in\{0,\ldots,k-1\}\), and assume \(f\in F\) is appended to \(h^{\prime}\); then \(h^{\prime}f\succeq h_{i}\). Furthermore if \(f=f_{i+1}\) then \(h^{\prime}f\succeq h_{i+1}\). To model this, we introduce fresh SAT variables \(dh_{i}@t\) that evaluate to _true_ iff the trajectory \(\sigma\) until timestep \(t\) corresponds to a hypothesis that is a descendant of \(h_{i}\). Clearly, \(dh_{0}@t\) is _true_ for all \(t\); furthermore \(dh_{i}@0\) is _false_ for all \(i>0\). The value of \(dh_{i}@t\) (\(i>0\)) can be enforced by the following constraints:
\[dh_{i}@t\ \longleftrightarrow\ dh_{i}@(t-1)\vee(dh_{i-1}@(t-1)\wedge f_{i}@t)\,.\]
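The intended semantics of the \(dh_{i}@t\) variables can be checked by evaluating this recurrence directly on a concrete trajectory. The sketch below is not the CNF encoding itself, only a plain evaluation of the constraint; representing the trajectory as a list giving the fault emitted at each timestep (or `None`) is an assumption of the sketch.

```python
def descendant_table(h, fault_at):
    """dh[i][t] is True iff the faults emitted up to timestep t form a descendant of
    the prefix h_i = h[:i]; fault_at[t-1] is the fault emitted at timestep t, or None."""
    n, k = len(fault_at), len(h)
    dh = [[i == 0 for _ in range(n + 1)] for i in range(k + 1)]  # dh_0@t true; dh_i@0 false for i>0
    for i in range(1, k + 1):
        for t in range(1, n + 1):
            dh[i][t] = dh[i][t - 1] or (dh[i - 1][t - 1] and fault_at[t - 1] == h[i - 1])
    return dh

# hypo(sigma) is a descendant of h = [f1, f2] iff dh_k@n is True:
is_descendant = descendant_table(["f1", "f2"], [None, "f1", None, "f2", "f3"])[-1][-1]
```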
Encoding the ancestor property is more difficult. Consider a hypothesis \(h^{\prime}\preceq h_{j}\) for some \(j\in\{0,\ldots,k\}\), and assume \(f\in F\) is appended to \(h^{\prime}\); then \(h^{\prime}f\preceq h_{i}\) for any \(i\) such that \(f\) appears in \(\{f_{j+1},\ldots,f_{i}\}\). The negation of this expression is modelled as follows: \(h^{\prime}f\) is not an ancestor of \(h_{i}\) if \(h^{\prime}\) is not an ancestor of \(h_{i}\) or there exists a \(0\leq j<i\) such that \(f\notin\{f_{j+1},\ldots,f_{i}\}\) and \(h^{\prime}\) is not an ancestor of \(h_{j}\). As was the case for descendant properties, we create SAT variables \(ah_{i}@t\). For all \(i\), \(ah_{i}@0\) is _true_. The value of \(ah_{i}@t\) is then ensured by
\[\overline{ah_{i}@t}\ \longleftrightarrow\ \overline{ah_{i}@(t-1)}\vee\bigvee_{j<i}\left(\bigvee_{f\in F\setminus\{f_{j+1},\ldots,f_{i}\}}\left(\overline{ah_{j}@(t-1)}\wedge f@t\right)\right).\]
### Implementation using Heuristic State Space Search
State space exploration algorithms are widely used in model checking and AI planning. They construct part of the explicit representation on-the-fly, while searching for a state satisfying a given goal condition. The use of heuristic guidance enables these algorithms to focus the search towards the goal and explore only a very small fraction of the state space before the goal is found. Problem-independent heuristics are derived automatically from the factored problem representation (Bonet & Geffner, 2001).
To take advantage of the very effective heuristics and search algorithms that exist, we need to express the hypothesis test as a state reachability problem, i.e., as a condition on the goal state to be found. This is straightforward to do with the help of some auxiliary components.
The main goal is to find a sequence of events that generates the observation: Suppose, for simplicity, that the observation is a sequence of events, \(e^{1}_{o},\ldots,e^{n}_{o}\). We add new component \(c_{o}\) with states \(0,\ldots,n\), which tracks how far along the sequence we are. Its local transitions are \((i-1,e^{i}_{o},i)\); thus, any transition that emits an observable event will synchronise with \(c_{o}\), ensuring that these events match the observation. The goal condition is then to reach \(c_{o}=n\). Transitions that emit an observable event not in the sequence will never be applicable, and can simply be removed.
The formulation of the hypothesis, or set of hypotheses, to be tested is more complex. Unlike in the SAT encoding, we cannot just provide an encoding of each diagnosis property in isolation and specify the test by their conjunction. Instead, we provide encodings of two of the diagnostic questions described in Section 4.3 that are required by the pls and pfs algorithms. We will use the multiset hypothesis space as the illustrative example. Encodings of the set hypothesis space are also easy to define (they work exactly the same but consider only the presence/absence of each fault, rather than the number of times it occurred). Encoding tests in the sequence hypothesis space is much more complicated.
_Question 1:_ candidate(\(h\)). Recall that a multiset hypothesis is a mapping \(h:\mathcal{F}\rightarrow\mathbb{N}\). \(h\) is a candidate if there is an event sequence \(\sigma\) that includes each fault \(f\in\mathcal{F}\) exactly \(h(f)\) times. We can track the occurrences of fault events in the same way as the observation: For each fault \(f\), introduce a component \(c_{f}\) with states \(0,\ldots,h(f)\), and local transitions \((i-1,f,i)\). This construction ensures that the sequence contains no more than \(h(f)\) occurrences of each fault \(f\). Adding \(c_{f}=h(f)\), for each \(f\), to the goal condition also ensures that the sequence exhibits exactly the specified fault counts.
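Both the observation tracker \(c_{o}\) introduced above and the fault counters \(c_{f}\) are small chain automata; the sketch below constructs them explicitly. The dict-based component format and the representation of the goal as a mapping from component name to required final state are illustrative assumptions, not the encoding used by the actual planner-based test solver.

```python
def observation_tracker(obs_events):
    """Component c_o with states 0..n and transitions (i-1, e_o^i, i); its goal state is n."""
    n = len(obs_events)
    return {"name": "c_o", "states": list(range(n + 1)), "init": 0,
            "transitions": [(i - 1, obs_events[i - 1], i) for i in range(1, n + 1)]}

def fault_counter(fault, count):
    """Component c_f with states 0..count and transitions (i-1, f, i): the sequence can
    contain at most `count` occurrences of f, and state `count` means exactly that many."""
    return {"name": f"c_{fault}", "states": list(range(count + 1)), "init": 0,
            "transitions": [(i - 1, fault, i) for i in range(1, count + 1)]}

def candidate_test(h, obs_events):
    """Reachability formulation of candidate(h) for a multiset hypothesis h: F -> N."""
    comps = [observation_tracker(obs_events)] + [fault_counter(f, k) for f, k in h.items()]
    goal = {c["name"]: max(c["states"]) for c in comps}   # c_o = n and c_f = h(f) for every f
    return comps, goal
```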
_Generating conflicts._ A complete search on the formulation above will find an event sequence witnessing that \(h\) is a candidate if such a sequence exists. If it does not, however, the search will only return the answer "no", after exhausting the reachable fraction of the state space. To generate a conflict, we need a small modification to both the encoding and the search algorithm.
We extend each fault-counting component \(c_{f}\) with an extra state \(h(f)+1\), and the local transitions \((h(f),f,h(f)+1)\) and \((h(f)+1,f,h(f)+1)\). This allows event sequences that contain more occurrences of faults than \(h\) specifies. (We also remove \(c_{f}=h(f)\) from the goal.) But we also assign a cost to each transition: the cost is one for these transitions that correspond to additional faults, and zero for all other transitions. This means that instead of every sequence that reaches the goal being a witness for \(h\), every sequence with a total cost of zero is such a witness.
We then run an optimal A\({}^{\star}\) search (Hart, Nilsson, & Raphael, 1968) using the admissible LM-Cut heuristic (Helmert & Domshlak, 2009), but interrupt the search as soon as the optimal solution cost is proven to be greater than zero. At this point, every state on the search frontier (open list) is either reached by a non-zero cost transition (corresponding to an additional fault not accounted for by \(h\)), or has a heuristic estimate greater than zero, indicating that some additional fault transition must take place between the state and the goal. Here, the specific heuristic that we use becomes important: The LM-Cut heuristic solves a relaxed version of the problem and finds a collection of sets of transitions (with non-zero cost) such that at least one transition from every set in the collection must occur between the current state and the goal. Each such set is what is known as a _disjunctive action landmark_ in the planning literature. Thus, this heuristic tells us not only that some additional fault transition must take place, but gives us a (typically small) set of possible additional faults. Taking the union of these sets (or the singleton set of the fault transition already taken) over all states on the search frontier gives us a set \(F^{\prime}\) of faults such that any candidate descendant of \(h\) must include at least one fault in \(F^{\prime}\) in addition to those accounted for by \(h\), and that is our conflict.
_Question 3:_ covers(\(S\)). This question asks whether there is any candidate \(h^{\prime}\) such that \(h\not\preceq h^{\prime}\) for every \(h\in S\). For the multiset hypothesis space, this means finding an event sequence \(\sigma\) such that for each \(h\in S\) there is some fault \(f\) that occurs in \(\sigma\) strictly fewer times than \(h(f)\). As above, we introduce components \(c_{f}\) to count the number of fault occurrences in the sequence. We set the maximum count to \(n_{f}=\max_{h\in S}h(f)\), but add the local transition \((n_{f},f,n_{f})\), so that state \(n_{f}\) means "\(n_{f}\) or more occurrences of \(f\)". That \(h\not\preceq hypo(\sigma)\) can then be expressed by the disjunction \(\bigvee_{f\in\mathcal{F}}c_{f}<h(f)\). The goal condition, that \(h\not\preceq hypo(\sigma)\) for all \(h\in S\), is simply the conjunction of these conditions for all \(h\in S\).
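Continuing the same illustrative component format used for Question 1, the construction for covers(\(S\)) adds the self-loop at the maximal count and turns the goal into a conjunction, over \(h\in S\), of disjunctions of threshold conditions on the counters; the tuple representation of those conditions below is again an assumption of the sketch, not the planner's native goal language.

```python
def covering_counter(fault, n_f):
    """Counter with states 0..n_f; the self-loop makes state n_f mean 'n_f or more occurrences'."""
    trans = [(i - 1, fault, i) for i in range(1, n_f + 1)] + [(n_f, fault, n_f)]
    return {"name": f"c_{fault}", "states": list(range(n_f + 1)), "init": 0, "transitions": trans}

def covers_test(S, faults):
    """Components and goal for covers(S): the goal is a list of disjunctions (one per h in S),
    each disjunct requiring some counter c_f to end strictly below h(f)."""
    n = {f: max(h.get(f, 0) for h in S) for f in faults}
    comps = [covering_counter(f, n[f]) for f in faults if n[f] > 0]
    goal = [[(f"c_{f}", "<", h[f]) for f in faults if h.get(f, 0) > 0] for h in S]
    return comps, goal
```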
## 9 Experiments
In this section, we apply implementations of different diagnosis algorithms derived from our theoretical framework to two realistic diagnosis problems, and benchmark them against other algorithms from the literature.
### Competing Algorithms
We compare the sat-based and planning-based implementations of the algorithms presented in this paper with existing algorithms from the literature.
#### 9.1.1 Diagnoser
The seminal work on diagnosis of discrete event systems introduced the diagnoser (Sampath et al., 1995). The diagnoser is a deterministic finite automaton (DFA) whose transitions are labeled with observations and states are labeled with the diagnosis. Given a sequence of observations one simply needs to follow the single path labeled by this sequence and the diagnosis is the label of the state reached in this way. There are several issues with the diagnoser that prevented the use of this approach.
First its size (the number of states of the DFA) is exponential in the number of states of the model and double exponential in the number of faults (for the set hypothesis space) (Rintanen, 2007). For the power network that we use as a benchmark in 9.3, the average number of possible fault events per component is 9, and the average number of states per component is well over 100; the number of components is over 10,000. A Sampath diagnoser for this system will have over \(100^{10,000}\times 2^{(9*10,000)}\simeq 10^{50,000}\) states. This method is therefore inapplicable but for small systems or systems that can be strongly abstracted.
Second the diagnoser is originally designed for totally ordered observations. In our application many observations have the same time stamp, meaning that the order in which they were emitted is unknown. The diagnoser, as presented by Sampath et al., can certainly be adapted to account for bounded partially ordered observations, but this would increase the size of the diagnoser by additional orders of magnitude.
Third the approach is not applicable to infinite hypothesis spaces since the DFA would be infinitely large.
#### 9.1.2 Automata
Automata-based approaches consist in computing an implicit representation of the set of all sequences (traces) of events consistent with both the model and the observations. This representation is useful if it allows one to quickly infer some information about the actual system behaviour. For instance the implicit representation as a single finite state machine whose language is exactly said set makes it easy (in linear time) to decide whether a specific fault could have or definitely has occurred.
A significant part of the work in discrete event systems aims at finding such representations that are compact, that can be computed quickly, and that allow for fast inferences (Su & Wonham, 2005; Pencole & Cordier, 2005; Cordier & Grastien, 2007).
We chose an approach based on junction trees (Kan John & Grastien, 2008), the state-of-the-art in automata-based diagnosis of discrete event systems. This approach is based on the property that local consistency in tree structures is equivalent to global consistency, which result we explain now.
Consider a finite set \(S\) of automata that implicitly represents the automaton \(A\) obtained by standard synchronisation of the automata in \(S\). Each automaton \(A_{i}\in S\) of this set is characterised by a set of events \(E_{i}\). A property of this setting is that every trace obtained by projecting a trace of \(A\) on \(E_{i}\) is a trace of \(A_{i}\); intuitively this means that a sequence of events allowed by \(A\) is (by definition of the synchronisation) allowed by every \(A_{i}\). The converse, the property of global consistency, is generally not true: \(A_{i}\) could contain traces that are the synchronisation of no trace from \(A\). Global consistency is a very powerful property, because it allows us to answer many questions regarding \(A\) by only using \(S\) (typically, questions such as whether a given fault certainly/possibly occurred). In general global consistency can only be obtained by computing \(A\) and then projecting \(A\) on every set of events \(E_{i}\) (in case this type of operations is repeated several times, a minimisation operation is necessary to reduce space explosion issues); this is computationally infeasible outside trivial problems.
Local consistency is the property that every pair of automata in \(S\) is consistent, i.e., the property of consistency holds for the set \(\{A_{i},A_{j}\}\) and the synchronisation of \(A_{i}\) and \(A_{j}\). Local consistency does not imply global consistency. It is now possible to view the set \(S\) as a graph, where each node maps to an automaton and there is a path between two automata that share an event (\(E_{i}\cap E_{j}=E_{ij}\neq\emptyset\)) such that all automata on this path also share these events \(E_{ij}\). If this graph is a tree, then local consistency of \(S\) implies global consistency. In other words global consistency of \(S\) can be achieved without computing the automaton \(A\).
There remains the issue of making \(S\) represent a tree. A technique used to transform an arbitrary graph into a tree is to construct a hyper-graph where the hyper-nodes are subsets of nodes of the original graph: a junction tree (Jensen & Jensen, 1994), aka decomposition tree. Accordingly, a new set \(S^{\prime}\) of automata is defined whose automata \(A^{\prime}_{i}\) are defined as the synchronisation of subsets of \(S\). In order to reduce the cost of these synchronisations the junction tree should have hyper-nodes of minimal cardinality. The decision problem associated with finding the optimal junction tree is NP-hard but there exist polynomial algorithms that provide good trees (Kjaerulff, 1990).
We start of with \(S\) defined as the set of "local diagnoses" where each local diagnosis is the synchronisation of each component's model with its local observations. The local observations are not totally independent but are defined as batches of independent events. Therefore each batch is separated by a synchronisation tick that ensures that two ordered observations are indeed ordered.
From a tree-shaped locally consistent representation \(S^{\prime}\), one needs to extract the minimal diagnosis. Assuming that the hypothesis space is defined over a subset \(F\) of fault events (as is the case with SHS, MHS, and SqHS), one option would be to compute the language \(\mathcal{L}_{F}\), defined as the projection of the language of \(S^{\prime}\) onto \(F\), and then extract its minimal words, a problem similar to the _enumeration problem_(Ackerman & Shallit, 2009). How to perform it efficiently given our definition of minimality, and how to perform it without explicitly computing \(\mathcal{L}_{F}\) is an open question. For this reason, we only provide the runtime for computing \(S^{\prime}\) as it gives us a good estimate of the overall performance of this approach.
#### 9.1.3 Bdd
A different approach to diagnosis of discrete event system consists in i) embedding in the system state the diagnostic hypothesis associated with the paths that lead to this state and ii) computing the set of states ("belief state") that the system may be in after generating the observations. The diagnosis is the set of hypotheses that label some state of the final belief state.
The first point is rather easy to solve for some hypothesis spaces. For the set hypothesis space simply add a state variable \(v_{f}\) for each fault \(f\) that records the past occurrence of \(f\): \(v_{f}\) is false in the initial state and it switches to true whenever a transition labeled by \(f\) is encountered on the path. Other hypothesis spaces could be defined as easily, but the problem is that it requires an infinite number of state variables in general. It seems that there is no practical upper bound on this number but for trivial problems.
The second point can be described easily. The model can be rewritten as a function that associates every observable event \(o\) with a set \(T_{o}\) of pairs of states (the event \(o\) is generated only when the state changes from \(q\) to \(q^{\prime}\), where \(\langle q,q^{\prime}\rangle\in T\)) as well as a set \(T_{\epsilon}\) of pairs for unobservable transitions. Starting from a given set of states \(\mathcal{B}\), the set of states reached by any number of unobservable events, written \(\mathit{silent}(\mathcal{B})\), is the minimal set of states that satisfies \(\mathcal{B}\subseteq\mathit{silent}(\mathcal{B})\) and \(q\in\mathit{silent}(\mathcal{B})\ \land\ \langle q,q^{\prime}\rangle\in T_{ \epsilon}\Rightarrow q^{\prime}\in\mathit{silent}(\mathcal{B})\); this set can be easily obtained by adding to \(\mathcal{B}\) states \(q^{\prime}\) as defined above until the set remains stable. Starting from a set of states \(\mathcal{B}\), the set of states reached by a single observable event \(o\), written \(\mathit{next}_{o}(\mathcal{B})\), is the set of states defined by the relation \(T_{o}\): \(\{q^{\prime}\mid\exists q\in\mathcal{B}.\ \langle q,q^{\prime}\rangle\in T_{o}\}\).
We first assume that the observations are just a sequence \(o_{1},\ldots,o_{k}\) of observed events. The belief state at the end of the sequence of observations can be computed incrementally by alternating the two functions presented before: \(\mathcal{B}=\mathit{silent}\circ\mathit{next}_{o_{1}}\circ\mathit{silent} \cdots\mathit{silent}\circ\mathit{next}_{o_{k}}\circ\mathit{silent}(\mathcal{ B}_{0})\) where \(\mathcal{B}_{0}\) is the initial belief state.
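For illustration, the two operators and their composition can be written down directly with explicit sets of states. In the actual implementation the belief states and transition relations are propositional formulas represented as BDDs, so the set-based code below is only a sketch of the same computation; the argument names and data layout are assumptions.

```python
def silent(belief, T_eps):
    """Least fixpoint: all states reachable from `belief` via unobservable transitions."""
    closure, frontier = set(belief), set(belief)
    while frontier:
        step = {q2 for (q1, q2) in T_eps if q1 in frontier} - closure
        closure |= step
        frontier = step
    return closure

def next_obs(belief, T_o):
    """States reached from `belief` by one transition emitting the observable event o."""
    return {q2 for (q1, q2) in T_o if q1 in belief}

def final_belief(B0, obs_sequence, T_eps, T_obs):
    """B = silent o next_{o_k} o silent o ... o next_{o_1} o silent (B0)."""
    belief = silent(B0, T_eps)
    for o in obs_sequence:
        belief = silent(next_obs(belief, T_obs[o]), T_eps)
    return belief
```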
Our observations are not a single sequence of observed events: the order between some observation fragments are unknown. One way to solve this issue is by computing all possible sequences of observations and computing the union of the belief states obtained for each sequence. We use a more sophisticated approach. Because the observations are batches of unordered events, we compute, for each batch \(b_{j}\), all possible sequences; we compute then the belief state from \(\mathcal{B}_{j-1}\) for each sequence, and we obtain the belief state at the end of batch \(b_{j}\) as the union of the belief state for each sequence.
So far we have not described how the belief states are represented. Because a state is an assignment of state variables to Boolean values, a state can be seen as a formula in propositional logic. A set of states is also a formula and the union (resp. the intersection) of two sets are implemented as the logical disjunction (resp. the conjunction). Sets of pairs of states as \(T\) can also be represented as a formula, but this requires a copy \(v^{\prime}\) for each state variable \(v\). The set of states \(q^{\prime}\) associated with at least one state of a specified set \(Q\), formally \(\{q^{\prime}\mid\exists q\in Q.\ \langle q,q^{\prime}\rangle\in T\}\), can be represented by the propositional formula: \(\exists V.\ (\Phi_{T}\land\Phi_{Q})[V^{\prime}/V]\) where \(V^{\prime}\) is the list of copied variables, \(\Phi_{T}\) is the formula representing \(T\), \(\Phi_{Q}\) is the formula representing \(Q\), and \([V^{\prime}/V]\) is the operation that consists in renaming in a formula all variables \(v^{\prime}\) with \(v\).
Practically, for applications as model checking (Burch, Clarke, Long, McMillan, & Dill, 1994), classical planning (Kissman & Edelkamp, 2011), and diagnosis (Schumann et al., 2007), these formulas are represented using BDDs (Bryant, 1986).
Finally one important issue when using BDDs is that of variable order. We make sure that every state variable \(v\) is followed by its copy \(v^{\prime}\). Furthermore we define all variables of each component
in a single sequence.
### Setup
We benchmark six different implementations of algorithms derived from our framework. These are: pfs+ec using the SAT-based test solver, applied to the set, multiset and sequence hypothesis spaces; pfs+c using the test solver based on heuristic search, applied to the set hypothesis space; and pls using the heuristic search-based test solver, applied to the set and multiset hypothesis spaces. Recall that the basic version of pfs, without the essentiality test, is only guaranteed to terminate when used with the finite set hypothesis space. Code can be downloaded here: github.com/alban-grastien/diagfwork.
In addition, we compare the performance of these algorithms with two diagnosis methods presented in the previous subsection: the junction tree (JT) approach and the BDD-based (bdd) approach.
JT, bdd, and the pfs variants using the SAT-based test solver are implemented in Java. The SAT-based test solver itself is a version of minisat 2.0 (Een & Sorensson, 2003), modified to return conflicts, and is implemented in C. The pfs and pls variants using the heuristic search-based solver are implemented in Lisp; the test solver is based on the HSP* AI planner (Haslum, 2008), which is implemented in C++. A new test solver instance is invoked for each test, without reuse of information from previous tests. For the SAT-based solver, it is likely that using incremental SAT (Hooker, 1993) could improve the aggregate performance over multiple tests. Remember, as we discussed in Subsection 3.4, that computing the diagnosis in the set, multiset and sequence hypothesis spaces is increasingly harder.
### First Benchmark: Diagnosis of a Power Transmission Network
#### 9.3.1 The Diagnosis Problem
The problem we consider is that of intelligent alarm processing for a power transmission network, as introduced by Bauer et al. (Bauer et al., 2011). The observations are alarms, generated by equipment in the network such as protection devices, switchgear, voltage and current monitors, etc. The objective of intelligent alarm processing is to reduce the volume of alarms, which can get very high, particularly in severe fault situations, by determining which alarms are "secondary", meaning they can be explained as follow-on effects of others. This is not simply a function of the alarm itself, but depends on the context. As a simple example, if we can deduce that a power line has become isolated, then an alarm indicating low or zero voltage on that line is secondary (implied by the fact that the line is isolated); but in other circumstances, a low voltage alarm can be the primary indicator of a fault.
The power network is modelled, abstractly, as a discrete event system. The number of states in each component ranges between \(8\) and \(1,024\), with most components having well over a hundred states. The entire network has over \(10,000\) components, but for each problem instance (partially ordered set of alarms), only a subset of components are relevant to reasoning about that set of alarms; the number varies between \(2\) and \(104\) components in the benchmark problem set. The initial state is only partially known, and certain components have up to \(128\) initial states. There are \(129\) instances in the benchmark set, and the number of observations (alarms) in each ranges from \(2\) to \(146\).
#### 9.3.2 Results
A summary of results, in the form of runtime distributions, is shown in Figure 3.
The complexity of the benchmark instances varies significantly. Many problems are quite simple, but the complexity rises sharply in the larger instances. Thus, solving even a handful of additional instances is a substantial improvement.
JT solves only \(23\) out of the \(129\) instances. As soon as the problem includes a transmission line, the problem becomes too hard: the transmission line component has \(1,024\) states, and \(64\) possible initial states, which makes the automata determinisation required by JT too expensive.
Comparing all the diagnosers operating on the set hypothesis space (SHS), PFS, with both test solvers, solves more problems than bdd (\(4\) more with the heuristic search-based solver, \(12\) more with the SAT-based solver), which in turn solves \(9\) more problems than pls. However, it is worth noting that PFS and pls can return some diagnosis candidates even when they fail to complete within the given time limit. All candidates found by PFS are minimal, and so form a subset of the minimal diagnosis. The instances not solved by PFS+ec/SAT (SHS) are also not solved by any other diagnoser, so we cannot determine how much of the minimal diagnosis has been found. Concerning pls/H.S., in \(17\%\) of the instances that it does not solve but for which the minimal diagnosis is known (because they are solved by some other diagnoser), the candidate set found by pls/H.S. is in fact the minimal diagnosis; it is only the last test, proving that there is no uncovered candidate, that fails to finish. This can be attributed to the asymmetric performance of the heuristic search
Figure 3: Runtime distribution (number of problems solved vs. time limit) for all diagnosis algorithms compared in the experiment.
based test solver: heuristically guided state space search can be quite effective at finding a solution when one exists, but is generally no more efficient than blind search at proving that no solution exists.
It is also interesting to note that the performance of pfs+ec in the three different hypothesis spaces (SHS, MHS and SqHS) follows the expected hierarchy of problem hardness: fewer instances are solved in the sequence hypothesis space, which is a harder diagnosis problem, than in the easier multiset hypothesis space, and still more instances are solved in the easiest, the set hypothesis space, though the difference between MHS and SHS is only two problems. It turns out that most problem instances have the same number of minimal candidates for these hypothesis spaces. Only two instances solved by both diagnosers show different numbers: problem chunk-105, for example, has two minimal MHS candidates, \(\{\texttt{Line\_X9\_X10.fault}\to 1,\texttt{Breaker\_X1\_X2.fault}\to 1\}\) and \(\{\texttt{Breaker\_X1\_X2.fault}\to 2\}\), which lead to a single minimal SHS candidate, \(\{\texttt{Breaker\_X1\_X2.fault}\}\). Because the size of the minimal diagnoses are similar, the number of tests is also very similar and incurs only a small penalty for pfs (MHS). On the contrary, because MHS tests are more precise (specifying the exact number of faults), and because pfs does not use incremental solving, each individual MHS test may be easier to solve.
### Second Benchmark: The Labour Market Database
#### 9.4.1 The Diagnosis Problem
This diagnosis problem is based on the data cleansing problem that we already discussed in 2.1.
Specifically, we consider the database provided by (Boselli et al., 2014) that records the employment history in the Italian Labour Market. This history is subject to logical and legal constraints (e.g., a job can end only after it has started; a person cannot hold two full-time jobs at the same time). Data cleansing is the problem of correcting the database to restore its integrity.
Because the constraints apply to each person individually, we created one problem centered around each person whose history does not satisfy the constraints. For each problem, we considered all the relevant environment, in particular the list of employers mentioned in this person's records. For employers, employees, and jobs, we built generic automata modelling all histories consistent with the integrity rules. We further added faulty transitions modelling how the records could be incorrectly inserted into the database, e.g., transitions representing the fact that an employment cessation was filed for the wrong employer. A diagnosis is then a sequence of such incorrect operations.
We end up with 600 problems. The systems are fairly small: in the worst case, a worker was in contact with five different employers, which translates into six automata with no more than six states each, and up to 46 events per component.
#### 9.4.2 Results
A summary of results, in the form of runtime distributions, is shown in Figure 4.
The maximum number of minimal candidates in any of the solved instances is 450, and the maximum number of faults in such a candidate is 12. These numbers are very high, and suggest that the problem definition could be refined. For instance, in the Set Hypothesis Space, the preference relation could be enriched by saying that a hypothesis is preferred over another hypothesis that contains two more faults. This type of constraint can be easily handled by our framework.
Figure 4: Runtime distribution (number of problems solved vs. time limit) for all diagnosis algorithms compared in the experiment.
The profile of the algorithms' performance is very different from the first experiments. We believe that this is due to the features of the problems that differ significantly from the power network domain.
The Junction Tree algorithm is able to solve a large majority of the instances. This is due to the fairly small number of states in the diagnosed system. As a consequence, the necessary operations, such as the automata determinisations, are relatively quick. On the other side of the spectrum, the approach based on BDD is able to solve only a small number of instances; this is due to the large number of events and transitions, as well as the number of fault events, that makes each iteration very expensive. For most instances in which we let the BDD-based diagnoser run longer than 900s, the computer ran out of memory, which suggests that this approach will not be able to catch up with the other approaches beyond the time limit.
Comparing the different algorithms presented in this paper, we see that PFS is still better, in particular when combined with SAT. The performance of PFS and PLS is, however, similar for an oracle using heuristic search planning.
## 10 Conclusion and Future Work
Prior to our work, diagnosis of discrete event systems has followed its own path, distinct from that initiated by de Kleer, Reiter, and Williams for diagnosis for static systems (Reiter, 1987; de Kleer & Williams, 1987).
In this article, we extended the consistency-based theory of model based diagnosis to handle diagnosis of systems beyond these static ones. We showed how to apply the consistency-based approach to all types of systems, notably discrete event systems and hybrid dynamic ones. We showed that, for such systems, diagnosis can be computed via a series of consistency tests that each decide whether the model allows for a behaviour that i) satisfies certain specified assumptions and ii) agrees with the observations. We also showed how to perform these tests in practice, e.g., by using propositional SAT or classical planning.
Extending the consistency-based diagnosis approach to a larger class of systems, and, in particular, to dynamic systems prompted us to consider more elaborate definitions of diagnosis and minimal diagnosis. Some applications, for instance, require us to determine the number or order of fault occurrences. In other applications, certain faults are orders of magnitude less likely than others and should therefore be ignored whenever more likely behaviours exist, which leads us to unconventional definitions of preferred hypotheses. These diagnosis problems are not trivial to solve a priori as they now feature an infinite search space, but we showed that our theory can easily handle them as it only requires us to specify the assumptions appropriately in the consistency tests. Specifically, as we proved, each assumption should indicate that the diagnosis hypothesis of the behaviour that the test is looking for should be better, not better, worse, or not worse than a specified diagnosis hypothesis. We then just need the test solver to be able to express these assumptions.
We proposed several strategies to generate the diagnosis tests and showed properties and termination conditions for these strategies. We also extended the definition of conflict, a central concept in model based diagnosis, to align with our general theory.
Since the beginning of this work, we applied this theory to a range of applications. We used this theory in combination with SAT modulo theory (SMT) to diagnose hybrid systems (Grastien, 2014);
and with model checking to diagnose timed automata (Feng & Grastien, 2020). The consistency-based approach also allowed us to reason about the observations themselves and compute a subset of observations that are sufficient to derive the diagnosis (Christopher, Cordier, & Grastien, 2014).
There has been recently an increased interest in planning for richer problems involving constraints similar to the ones used in diagnosis tests. This is motivated by a range of applications:
**Top-\(k\) Planning**: computes several plans that need to be significantly different (Nguyen, Do, Gerevini, Serina, Srivastava, & Kambhampati, 2012; Katz & Sohrabi, 2020).
**Legibility**: asks for a plan whose purpose is clear for the observer (Chakraborti, Kulkarni, Sreedharan, Smith, & Kambhampati, 2019).
**Normative Constraint Signalling**: requires the plan to communicate to a (partial) observer that it follows some normative constraints (Grastien, Benn, & Thiebaux, 2021).
**Model Reconciliation**: assumes two agents with different models of the world, and searches for a minimal change to one model so that the agents agree on the optimal plan (Chakraborti, Sreedharan, Zhang, & Kambhampati, 2017).
**Goal Recognition**: is the problem of determining what an agent is trying to achieve (Pereira, Vered, Meneguzzi, & Ramirez, 2019).
**Plan Explanation**: provides an explanation alongside the plan that justifies why there is no better plan than the proposed one (Eifler et al., 2020).
All these problems require reasoning about plans with similar or different properties. The search strategies developed in this paper can be used to help solve some of these problems.
| モデルベースの診断は、人工知能、形式的処理、制御など、様々なコミュニティでアクティブな研究課題となっています。これは、異なるクラスのシステムに対処する様々なアプローチを導き出し、異なる診断の形態を求めています。本論文では、リターの理論を、システムの種類や診断に無条件に適用することで、そのような不整合性を解消します。この診断のより一般的な理論は、最小診断を、推測空間における好ましい診断候補の集合として定義しています。最小診断を計算するには、診断推測の空間を探索し、システムのモデルと観察と整合性のある推測セットをテストし、排除する衝突を生成します。比較的緩やかな仮定のもとでは、アルゴリズムは正しく、好ましい診断候補の集合を計算します。ここで困難な点は、推測空間がリターの理論におけるパワーセットではなく、その結果、多くの暗黙の特性(例:推測空間 |
2301.13802 | Armouring of a frictional interface by mechanical noise | A dry frictional interface loaded in shear often displays stick-slip. The
amplitude of this cycle depends on the probability that a slip event nucleates
into a rupture, and on the rate at which slip events are triggered. This rate
is determined by the distribution $P(x)$ of soft spots which yields if the
shear stress is increased by some amount $x$. In minimal models of a frictional
interface that include disorder, inertia and long-range elasticity, we
discovered an 'armouring' mechanism, by which the interface is greatly
stabilised after a large slip event: $P(x)$ then vanishes at small arguments,
as $P(x)\sim x^\theta$ [1]. The exponent $\theta>0$, which exists only in the
presence of inertia (otherwise $\theta=0$), was found to depend on the
statistics of the disorder in the model, a phenomenon that was not explained.
Here, we show that a single-particle toy model with inertia and disorder
captures the existence of a non-trivial exponent $\theta>0$, which we can
analytically relate to the statistics of the disorder. | Elisa El Sergany, Matthieu Wyart, Tom W. J. de Geus | 2023-01-31T17:42:54 | http://arxiv.org/abs/2301.13802v1 | # Armouring of a frictional interface by mechanical noise
###### Abstract
A dry frictional interface loaded in shear often displays stick-slip. The amplitude of this cycle depends on the probability that a slip event nucleates into a rupture, and on the rate at which slip events are triggered. This rate is determined by the distribution \(P(x)\) of soft spots which yields if the shear stress is increased by some amount \(x\). In minimal models of a frictional interface that include disorder, inertia and long-range elasticity, we discovered an 'armouring' mechanism, by which the interface is greatly stabilised after a large slip event: \(P(x)\) then vanishes at small arguments, as \(P(x)\sim x^{\theta}\)[1]. The exponent \(\theta>0\), which exists only in the presence of inertia (otherwise \(\theta=0\)), was found to depend on the statistics of the disorder in the model, a phenomenon that was not explained. Here, we show that a single-particle toy model with inertia and disorder captures the existence of a non-trivial exponent \(\theta>0\), which we can analytically relate to the statistics of the disorder.
## 1 Introduction
We study systems in which disorder and elasticity compete, leading to intermittent, avalanche-type response under loading. Examples include an elastic line being pulled over a disordered pinning potential, or frictional interfaces [2, 3, 4]. When subject to an external load \(f\), such systems are pinned by disorder when the load is below a critical value \(f_{c}\). At \(f>f_{c}\), the system moves forward at a finite rate. At \(f=f_{c}\) the system displays a crackling-type response described by avalanches whose sizes and durations are distributed according to powerlaws.
A key aspect of such systems is the distribution of soft spots [5]. If we define \(x\) as the force increase needed to trigger an instability locally, then increasing the remotely applied force by \(\Delta f\) will trigger \(n_{a}\propto\int_{0}^{\Delta f}P(x)dx\) avalanches, with \(P(x)\) the probability density of \(x\). The relevant behaviour of \(P(x)\) therefore is that at small \(x\). Let us assume that \(P(x)\sim x^{\theta}\) at small \(x\), such that \(n_{a}\propto(\Delta f)^{\theta+1}\).
Classical models used to study the depinning transition consider an over-damped dynamics [2]. In that case, it can be shown that \(\theta=0\)[2]. This result is not true for certain phenomena, including the plasticity of amorphous solids or mean-field spin glasses. In these cases, due to the fact that elastic interactions are long-range and can vary in sign (which is not the case for the depinning transition, where a region that is plastically rearranged can only destabilise other regions), one can prove that \(\theta>0\), as reviewed in [5, 6].
Recently, we studied simple models of a dry frictional interface [1, 7]. We considered disorder and long-range elastic interactions along the interface. These interactions are strictly positive as in the usual class of the depinning transition. However, we studied the role of inertia, which turns out to have dramatic effects. Inertia causes transient overshoots and undershoots of the stress resulting from a local plastic event. It thus generates a mechanical noise that lasts until damping ultimately takes place. Remarkably, we found that right after system-spanning slip events, \(\theta>0\)[1] in the presence of inertia. Intuitively, such an 'armouring' mechanism results from the mechanical noise stemming from inertial effects, which destabilises spots close to an instability (i.e. small \(x\)), thus depleting \(P(x)\) at small argument. This property is consequential: the number of avalanches of plastic events triggered after a system-spanning rupture is very small. As a consequence, the interface can increase its load when driven quasistatically in a finite system, without much danger of triggering large slip events. The interface therefore presents larger stick-slip cycles due to this effect, as sketched in Fig. 1. Thus, one of the central quantities governing the stick-slip amplitude is \(\theta\)[1].
Our previous model [1] divided the interface into blocks whose mechanical response was given by a potential energy landscape that, as a function of slip, comprised a sequence of parabolic wells with equal curvature. We drew the widths \(w\) of each well randomly from a Weibull distribution, such that its distribution \(P_{w}(w)\sim w^{k}\) at small \(w\). We empirically found \(\theta\simeq 2.5\) for \(k=1\) and \(\theta\simeq 1.4\) for \(k=0.2\).
Here we present a toy model for a region of space that stops moving at the end of a large slip event. In the most idealised view, we describe this region as a single particle that moves over a disordered potential energy landscape, and that slows down due to dissipation. We model this potential energy landscape by a sequence of parabolic potentials that have equal curvature \(\kappa\) but different widths taken from \(P_{w}(w)\), with \(w\) the width of a parabola. In this model, \(x=\kappa w/2\) and is thus proportional to the width of the well in which the particle stops. Below we prove that for such a model, \(P(x)\sim x^{k+2}\) if \(P_{w}(w)\sim w^{k}\). This result explains both why \(\theta>0\) and why this exponent is non-universal, as it depends on \(k\), which characterises the disorder. Although this prediction does not quantitatively match our previous observations, the agreement is already noticeable for such a simple model. We support our argument with analytical proofs, and verify our conclusion numerically. The generality of our argument suggests that the presence of a non-trivial exponent \(\theta\) may hold in other depinning systems, as long as inertia is present.
## 2 Model
During a big slip event, all regions in space are moving but eventually slow down and stop. We model this by considering a single region in space in which a particle of finite mass is thrown into the potential energy landscape at a finite velocity. In the simplest case, this particle is "free", such that it experiences no external driving and stops due to dissipation, see Fig. 2. This corresponds to the Prandtl-Tomlinson [8, 9, 10] model that describes the dynamics of one (driven) particle in a potential energy landscape. The equation of motion of the "free" particle reads
\[m\ddot{r}=f_{e}(r)-\eta\dot{r}. \tag{1}\]
with \(r\) the particle's position, \(m\) its mass, and \(\eta\) a damping coefficient. \(f_{e}(r)\) is the restoring force due to the potential energy landscape. We consider a potential energy landscape that consists of a sequence of finite-sized, symmetric, quadratic wells, such that the potential energy inside a well \(i\) is given by \(U(r)=(\kappa/2)(r-r_{\rm min}^{i})^{2}+U_{0}^{i}\) for \(r_{y}^{i}<r\leq r_{y}^{i+1}\), with \(w_{i}\equiv r_{y}^{i+1}-r_{y}^{i}\) the width of the well, \(\kappa\) the elastic constant, \(r_{\rm min}^{i}\equiv(r_{y}^{i}+r_{y}^{i+1})/2\) the position of the center of the well, and \(U_{0}^{i}=\kappa(w^{i})^{2}/8\) an unimportant offset. The elastic force deriving from this potential energy is \(f_{e}(r)\equiv-\partial_{r}U(r)=\kappa(r_{\rm min}^{i}-r)\). With \(\kappa\) constant, the landscape is parameterised by the distance between two subsequent cusps \(w_{i}\), which we assume independent and identically distributed (iid) according to a distribution \(P_{w}(w)\). We consider underdamped dynamics corresponding to \(\eta^{2}<4m\kappa\). Within a well, the dynamics is simply that of an underdamped oscillator, as recalled in Appendix A.
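For concreteness, the dynamics of Eq. (1) over such a landscape can be integrated numerically as in the sketch below. The paper instead advances the particle well-by-well using the analytical within-well solution recalled in Appendix A; the small-timestep integrator, the trapping criterion, and the parameter values used here are assumptions of the sketch.

```python
import numpy as np

def stopping_well_width(widths, m=1.0, kappa=1.0, eta=0.1, v0=100.0, dt=1e-4):
    """Integrate m r'' = kappa (r_min - r) - eta r' over a piecewise-parabolic landscape
    with the given well widths and return the width of the well in which the particle stops."""
    edges = np.concatenate(([0.0], np.cumsum(widths)))   # cusp positions r_y^i
    r, v, i = 0.0, v0, 0                                 # enter the first well at its left cusp
    while True:
        w = widths[i]
        r_min = 0.5 * (edges[i] + edges[i + 1])          # centre of the current well
        v += dt * (kappa * (r_min - r) - eta * v) / m    # semi-implicit Euler step
        r += dt * v
        if r >= edges[i + 1]:                            # crossed the cusp into the next well
            i += 1
            if i == len(widths):
                raise RuntimeError("landscape exhausted before the particle stopped")
            continue
        # trapped: total energy relative to the well minimum is below the barrier kappa*(w/2)^2/2,
        # so neither cusp can be reached any more and the particle stops in this well
        if 0.5 * m * v**2 + 0.5 * kappa * (r - r_min) ** 2 < 0.5 * kappa * (w / 2) ** 2:
            return w
```

Widths with \(P_{w}(w)\sim w^{k}\) at small \(w\) can be drawn, for instance, from a Weibull distribution of shape \(k+1\) (e.g. `np.random.weibull(k + 1, size=...)`).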
Figure 1: (a) Sketch of stick-slip response: “slip” events punctuate periods in which the interface is macroscopically stuck, but microscopic events (“avalanches”) do occur. The number of avalanches \(n_{a}\propto(\Delta f)^{\theta+1}\), which can be linked to (b) the distribution of soft spots. \(x\) is thereby the amount of force needed to trigger an instability locally. Right after a large slip event, its distribution empirically scales like \(P(x)\sim x^{\theta}\) at small \(x\) as indicated (log-scale implied).
Figure 2: Evolution of the kinetic energy \(E\) as a function of position \(r\) (in red) of the “free” particle ‘thrown’ into a potential energy landscape (shown in the inset). Every entry into a new well is indicated using a marker. A thin green line shows the evolution of the total energy (with the definition of the inset, it has the local minimum of the last well as arbitrary offset).
## 3 Stopping well
Distribution.We are interested in the width of the well in which the particle eventually stops. Suppose that a particle enters a well of width \(w\) with a kinetic energy \(\mathcal{E}\). The particle stops in that well if \(\mathcal{E}<E_{c}(w)\), with \(E_{c}\) the minimum kinetic energy with which the particle needs to enter a well of width \(w\) to be able to exit. The distribution of wells in which particles stop in that case is
\[P_{s}(w)\sim P_{w}(w)P(\mathcal{E}<E_{c}(w)), \tag{2}\]
with \(P_{w}(w)\) the probability density of well widths, and \(P_{s}(w)\) the probability density of well widths in which the particle stops. Within one well, the particle is simply a damped harmonic oscillator as has been studied abundantly. In the limit of a weakly damped system, the amount of kinetic energy lost during one cycle is \(\Delta E=\kappa w^{2}(1-\exp(-2\pi/Q))/8\) with the quality factor \(Q=\sqrt{4m\kappa/\eta^{2}-1}\). The minimal kinetic energy with which the particle needs to enter the well in order to be able to exit is thus \(E_{c}=\Delta E\propto w^{2}\) (see Appendix B for the exact calculation of \(E_{c}\)).
\[P(\mathcal{E}<E_{c}(w))=\int_{0}^{E_{c}}P(\mathcal{E})\mathrm{d}\mathcal{E} \sim E_{c}(w). \tag{3}\]
Therefore, the particle stops in a well whose width is distributed as
\[P_{s}(w)\sim w^{2}P_{w}(w). \tag{4}\]
Central result.Once stopped, the force, \(x\), by which we need to tilt the well in which the particle stopped, in order for it to exit again is \(x=\kappa w/2\)1, such that our central result is that
Footnote 1: Without external forces, the particle ends in the local minimum – the center of the well.
\[P(x)\sim x^{2}P_{w}(x). \tag{5}\]
For example, if \(P_{w}(w)\sim w^{k}\) at small \(w\), we predict that
\[P(x)\sim x^{2+k}. \tag{6}\]
Energy at entry.We will now argue that the density of kinetic energy with which the particle enters the final well, \(P(\mathcal{E})\), is finite at small \(\mathcal{E}\). For one realisation, \(\mathcal{E}\) results from passing many wells with random widths. If its kinetic energy is much larger than the potential energy of the typical wells, it will not stop. We thus consider that the particle energy has decreased up to some typical kinetic energy \(E_{0}\) of the order of the typical potential energy \(\kappa\langle w^{2}\rangle/8\). If the particle exits the next well, at exit it will have a kinetic energy \(\mathcal{K}=E_{0}-\Delta E(E_{0},w)\). For a given \(E_{0}\) and distributed \(w\), we have:
\[P(\mathcal{E})=\int dw\,P_{w}(w)\,\delta(\mathcal{K}(E_{0},w)-\mathcal{E}). \tag{7}\]
It thus implies that:
\[P(\mathcal{E}=0)=P_{w}(w^{*})/\left|\partial_{w}\mathcal{K}\right|_{w=w^{*}} \tag{8}\]
\(w^{*}\) is the well width for which the particle reaches the end of the well with zero velocity, i.e. \(E_{0}=E_{c}(w^{*})\). By assumption, \(P_{w}(w^{*})>0\). Furthermore we prove in Appendix C that \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\kappa w^{*}/2>0\). Overall, this implies that \(P(\mathcal{E}=0)>0\), i.e. \(P(\mathcal{E})\) does not vanish as \(\mathcal{E}\to 0\), from which our conclusions follow.
Here we give a simple argument for \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\kappa w^{*}/2>0\). Given \(E_{0}\), but an infinitesimally smaller well of width \(w^{*}-\delta w\), the particle will enter the next well. Because the velocity is negligible in the vicinity of \(w^{*}\), the damping is negligible. Therefore, \(\delta\mathcal{K}\) is of the order of the difference in potential energy on a scale \(\delta w\), \(\delta U=U(w^{*})-U(w^{*}-\delta w)\approx\kappa w^{*}\delta w/2\), as we illustrate in Fig. 3. We thus find that \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\lim_{\delta w\to 0}\delta K/\delta w= \kappa w^{*}/2\).
Figure 3: Evolution of the kinetic energy \(E\) (red), potential energy \(U\) (black), and total energy \(E+U\) (green) for a particle that has entered a well of width \(w^{*}\) with a kinetic energy \(E_{0}=E_{c}(w^{*})\) such that it only just stops. Consequently, \(\partial_{r}(E+U)|_{w^{*}/2}=0\), which can be decomposed into \(\partial_{r}U|_{w^{*}/2}=\kappa w^{*}/2\) such that \(\partial_{r}E|_{w^{*}/2}=-\kappa w^{*}/2\), as indicated using thin lines.
## 4 Numerical support
Objective.We now numerically verify our prediction that \(P(x)\sim x^{k+2}\) (Eq. (6)). We simulate a large number of realisations of a potential energy landscape constructed from randomly drawn widths (considering different distributions \(P_{w}(w)\)) and constant curvature. We study the distribution of stopping wells if a "free" particle is 'thrown' into the landscape at a high initial velocity (much larger than \(v_{c}(\langle w\rangle)\) such that the particle traverses many wells before stopping).
Map.We find an analytical solution for Eq. (1) in the form of a map. In particular, we derive the evolution of the position in a well based on an initial position \(-w/2\) and velocity in Appendix A. This maps the velocity with which the particle enters a well (at position \(-w/2\)) to its velocity at position \(w/2\), i.e. the exit velocity, which corresponds to the entry velocity of the next well, etc.
Stopping well.We record the width of the stopping well, \(x\), and the velocity \(\mathcal{V}\) with which the particle enters the final well. We find clear evidence for the scaling \(P(x)\sim x^{k+2}\) in Fig. 4. Perturbing the evolution with random force kicks2 does not change our observations, as included in Fig. 4 (see caption). We, furthermore, show that the probability density of the kinetic energy with which the particle enters the final well, \(P(\mathcal{E})\), is constant at small argument in Fig. 5.
Footnote 2: Such that each well is tilted by a random force that we take independent and identically distributed (iid) according to a normal distribution with zero mean.
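A minimal driver for the simulation sketch given in the Model section is shown below; it mirrors the parameters quoted in the caption of Fig. 4 (\(m=\kappa=1\), \(\eta=0.1\), \(v_{0}=\mathcal{N}(100,10)\), \(\langle w\rangle\approx 1\)), while the number of realisations, the binning, and the reuse of `stopping_well_width` from that earlier sketch are assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 1.0                 # P_w(w) ~ w^k at small w corresponds to Weibull shape parameter k + 1
realisations = 2000     # illustrative; the statistics improve with more samples

stopped = []
for _ in range(realisations):
    widths = rng.weibull(k + 1.0, size=5000)      # <w> ~ 0.9 for shape 2
    v0 = rng.normal(100.0, 10.0)
    stopped.append(stopping_well_width(widths, v0=v0))

stopped = np.asarray(stopped)
bins = np.logspace(np.log10(stopped.min()), np.log10(stopped.max()), 20)
density, edges = np.histogram(stopped, bins=bins, density=True)
centres = np.sqrt(edges[1:] * edges[:-1])
# at small x the density should follow P(x) ~ x^(k+2), i.e. a slope of k+2 on a log-log plot
```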
## 5 Concluding remarks
Our central result is that \(P(x)\sim x^{2}P_{w}(x)\) in our toy model. For a disorder \(P_{w}(w)\sim w^{k}\) we thus find \(P(x)\sim x^{k+2}\). We expect this result to qualitatively apply to generic depinning systems in the presence of inertia. In particular they are qualitatively (but not quantitatively) consistent with our previous empirical observations \(\theta\simeq 2.5\) for \(k=1\)[1] and \(\theta\simeq 1.4\) for \(k=0.2\). A plausible limitation of our approach is underlined by the following additional observation: in Ref. [1], it was found that for \(x\) to be small, the stopping well was typically small (by definition), but also that the next well had to be small. Such correlations can exist only if the degree of freedom considered had visited the next well, before coming back and stopping. This scenario cannot occur in our simple description where the particle only moves forward, except when it oscillates in its final well.
Figure 4: Width of the stopping well, \(x\), for different \(P_{w}(w)\): a uniform, a Weibull, and a power-law distribution, which scale as \(P_{w}(w)\sim w^{k}\) at small \(w\), as indicated in the legend (the bottom row for each distribution corresponds to perturbing the dynamics with random force kicks, tilting individual wells by a force \(F=\mathcal{N}(0,0.1)\), with \(\mathcal{N}\) the normal distribution; the top row corresponds to \(F=0\)). To emphasise the scaling, the distributions have been rescaled by a fit of the prefactors: \(P(x)=c_{x}x^{k+2}\). Furthermore, we use \(m=\kappa=1\), \(\eta=0.1\), \(v_{0}=\mathcal{N}(100,10)\), and \(\langle w\rangle\approx 1\).
Figure 5: The kinetic energy with which the particle enters the well in which it stops for different realisations, \(P(\mathcal{E})\), normalised by its prefactor \(c_{e}\) (that is here simply the density of the first bin). See Fig. 4 for legend. | 摩擦的な界面に干渉する乾いた領域は、滑り-滑り現象を示すことがよくあります。このサイクルの大きさは、滑りの発生が破断へと発生する確率、および滑りの発生がトリガーされる速度に依存しています。この速度は、Shearストレスが特定の量$x$増加すると、軟弱な領域の分布$P(x)$によって決定されます。最小モデルにおいて、不秩序を含んだ摩擦的界面では、滑りの発生を伴う大きな応力変化の後に界面が大きく安定化する「鎧」のメカニズムを発見しました。$P(x)$は小さな値のときに消失し、$P(x)\sim x^\theta$ [1] となります。存在する$\theta>0$の指数は、慣性だけが存在する場合のみ、$\theta=0$ になります。この指数は、モデルの不秩序の統計に依存し、これは説明されませんでした |
2302.14383 | Linear Spaces of Meanings: Compositional Structures in Vision-Language
Models | We investigate compositional structures in data embeddings from pre-trained
vision-language models (VLMs). Traditionally, compositionality has been
associated with algebraic operations on embeddings of words from a pre-existing
vocabulary. In contrast, we seek to approximate representations from an encoder
as combinations of a smaller set of vectors in the embedding space. These
vectors can be seen as "ideal words" for generating concepts directly within
the embedding space of the model. We first present a framework for
understanding compositional structures from a geometric perspective. We then
explain what these compositional structures entail probabilistically in the
case of VLM embeddings, providing intuitions for why they arise in practice.
Finally, we empirically explore these structures in CLIP's embeddings and we
evaluate their usefulness for solving different vision-language tasks such as
classification, debiasing, and retrieval. Our results show that simple linear
algebraic operations on embedding vectors can be used as compositional and
interpretable methods for regulating the behavior of VLMs. | Matthew Trager, Pramuditha Perera, Luca Zancato, Alessandro Achille, Parminder Bhatia, Stefano Soatto | 2023-02-28T08:11:56 | http://arxiv.org/abs/2302.14383v3 | # Linear Spaces of Meanings: Compositional Structures in Vision-Language Models
###### Abstract
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs). Traditionally, compositionality has been associated with algebraic operations on embeddings of words from a pre-existing vocabulary. In contrast, we seek to approximate representations from an encoder as combinations of a smaller set of vectors in the embedding space. These vectors can be seen as "ideal words" for generating concepts directly within the embedding space of the model. We first present a framework for understanding compositional structures from a geometric perspective. We then explain what these compositional structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice. Finally, we empirically explore these structures in CLIP's embeddings and we evaluate their usefulness for solving different vision-language tasks such as classification, debiasing, and retrieval. Our results show that simple linear algebraic operations on embedding vectors can be used as compositional and interpretable methods for regulating the behavior of VLMs.
## 1 Introduction
In natural language, few primitive concepts or words can be used compositionally to generate a large number of complex meanings. For example, Figure 1 shows a simple example of composed phrases \(\{\)rainy, sunny\(\}\times\{\)morning, evening\(\}\), to which one could add more factors in the form of adjectives or attributes. The hidden representations provided by a neural model, on the other hand, a priori _do not_ have a similar compositional structure. In contextual text embeddings, in particular, the representation of a string of text is jointly affected by all of its tokens simultaneously, which means that there is no simple relationship between the representations of the entire text and the words that appear in it.
In this paper, we investigate the existence of latent compositional structures in the embedding space. That is, we aim to decompose composite concepts as linear combinations of embedding vectors associated with different factors, as illustrated in Figure 1. If such vectors exist, they can be treated as _ideal words_ for composing new concepts directly within the representation space of the model. The first
Figure 1: Words and concepts in natural language can be composed to generate complex meanings efficiently. Embeddings from transformer-based models a priori do not have a similar structure. In this paper, we argue that representations of composite concepts admit a linear decomposition based on embedding vectors that can be viewed as “ideal words.”
application that we envision is for vision-language models (_e.g_., CLIP [40]) where embeddings of text labels are often used for image classification or retrieval. In this setting, linear compositionality would imply that we could classify an image with \(n_{1}\ldots n_{k}\) composite labels--where \(n_{i}\) indicates the number of options for each factor--by comparing each image with only \(n_{1}+\ldots+n_{k}\) ideal words, since by linearity the inner product of an image with a composed label is the sum of the product with the corresponding ideal words. Moreover, linear decompositions can be used for "post-hoc" manipulations of pre-trained data representations (_e.g_., amplifying or reducing the importance of certain factors), which can be helpful to control the behavior of neural models.
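As an illustration of this counting argument, the sketch below estimates one vector per factor value by averaging embeddings of the composite labels, and then scores an image against all composite labels using only \(n_{1}+\ldots+n_{k}\) inner products. The averaging construction is one simple way to obtain such vectors, and `text_embed` stands in for any CLIP-like text encoder; both are assumptions of the sketch rather than the exact procedure evaluated in the experiments.

```python
from itertools import product
import numpy as np

def ideal_words(text_embed, factors):
    """Approximate z(y1,...,yk) ~ mean + sum_i u[i][y_i] by averaging the embeddings of the
    composite labels over all factors other than the one of interest."""
    combos = list(product(*factors))
    Z = np.stack([text_embed(" ".join(c)) for c in combos])          # (n1*...*nk, d)
    mean = Z.mean(axis=0)
    u = []
    for axis, values in enumerate(factors):
        u.append({v: Z[[c[axis] == v for c in combos]].mean(axis=0) - mean for v in values})
    return mean, u

def compositional_scores(image_emb, mean, u):
    """<image, z(y1,...,yk)> ~ <image, mean> + sum_i <image, u[i][y_i]>: n1+...+nk products."""
    base = float(image_emb @ mean)
    per_factor = [{v: float(image_emb @ vec) for v, vec in axis.items()} for axis in u]
    best = tuple(max(axis, key=axis.get) for axis in per_factor)      # additive scores decouple
    return base, per_factor, best
```

Since the approximate score is additive across factors, the best composite label is obtained by maximising each factor independently, which is what makes the \(n_{1}+\ldots+n_{k}\) comparisons sufficient.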
In general, the meaning of words in language is always _contextual_, in the sense that their interpretation depends on any text that surrounds them. However, language would be completely impractical if words did not also have some stability in their meaning. The main benefit of the usage of words is, in fact, that meaning can be mostly inferred compositionally by combining meanings of words or phrases. There is, therefore, _a natural tension between compositionality and contextuality_: the former requires some amount of independence from context, while the latter allows for general dependencies. In a sense, our goal in this work is to consider representations of meanings that were originally learned as contextual, and to later approximate them as needed with compositional ones based on ideal words. This combines the flexibility and expressiveness of contextuality with the structural efficiency of compositionality. Our main contributions can be summarized as follows:
* We describe compositional linear structures from a geometric perspective and explain how these structures can be approximately recovered from arbitrary collections of vectors associated with a product of "factors." We also relate these structures with previous definitions of disentangled representations that were based on mathematical representation theory [26] (Section 3).
* We consider embeddings arising from visual-language models (VLMs) and show that the existence of linearly factored embeddings is equivalent to the conditional independence of the factors for the probability defined by the model. We also discuss some relaxations of this result that illustrate how linear structures may emerge even when the true data distribution satisfies weaker "disentanglement" conditions (Section 4).
* We empirically show that embeddings of composite concepts can often be well-approximated as linear compositional structures, and that this leads to simple but effective strategies for solving classification and retrieval problems in a compositional setting. We also visualize manipulations of factored embeddings using a CLIP-guided diffusion model (Stable Diffusion [41]).
## 2 Related Work
Compositionality has long been recognized to be a fundamental principle in cognition [20]. It has been a central theme in Gestalt psychology [16], cognitive sciences [19], and pattern theory [24]. The main benefit of compositional representations is that they avoid the combinatorial explosion that occurs if all composed concepts are considered to be completely distinct. This property is of course a characteristic feature of natural languages, which use a fixed vocabulary for all representations, making "infinite use of finite means" (von Humboldt) [10]. However, while there is a large body of work in NLP devoted to learning compositional representations of language (_e.g_., [36, 12, 5, 22, 13]), modern text representations based on transformer architectures [46] are a priori _not_ compositional in any way. Some works have studied whether compositionality is implicitly present in neural networks, for example by evaluating the ability of these models to generalize beyond the training data [27]. More relevant to our purposes, [3] proposed a framework for evaluating the compositionality of a network's internal representations, by searching for representational primitives; however, finding such compositional primitives requires solving an optimization problem. In a broad sense, compositionality can be seen as a particular way of exploiting or imposing _structure_ in the inner representations of a network. It has also been argued that data representations should be concentrated in low-dimensional linear spaces [33, 9], or even be "disentangled" with respect to factors of variation in the data [26, 8, 1]. Our perspective on compositional representations is closely related to the definition of disentanglement given in [26]. As argued above, compositionality of text representations is naturally in tension with _contextuality_. Since their introduction in NLP around 2018 [39, 15], contextual text embeddings have been extremely successful, and are part of modern transformer-based architectures. The amount of contextuality in these word embeddings has been quantified using different metrics in [17].
Linear compositionality for embeddings is often associated with popular "vector analogies" that are known to roughly hold for (non-contextual) word embeddings such as word2vec [35] and GloVe [38]. Several works have proposed theoretical justifications for this property [29, 4, 25, 2, 18, 44]. To our knowledge, however, similar properties for contextual embeddings of language models have not been considered, although [45] has evaluated the performance of transformer-based models on analogy tasks. Various limitations of linear analogies have also been pointed out [30, 7].
In the context of image generation, compositional approaches for controlling the output of diffusion models have been recently proposed in [31, 47]. In particular, [47] introduced a "concept algebra" that is formally similar to our factored representations; however, their notion of "concept" is based on score representations (gradients of log-probabilities), rather than on embedding vectors, which leads to a different probabilistic characterization of compositionality. Finally, [11] introduced a method for removing biases and spurious correlations from pre-trained VLM embeddings for both discriminative and generative tasks; since their proposed approach consists in applying certain linear projections to textual embeddings (with some calibration adjustments), it can be seen as conceptually similar to an application of our ideal word decompositions.
## 3 Linearly Factored Embeddings
We begin by discussing from a purely geometric perspective what we mean by "linear compositionality." We consider a finite set \(\mathcal{Z}=\mathcal{Z}_{1}\times\ldots\times\mathcal{Z}_{k}\) that we view as representing a factored set of "concepts." For example, the set \(\mathcal{Z}\) may be a collection of strings of text organized in a structured way, _e.g_., according to attribute-object-context. We then consider an arbitrary embedding map \(r:\mathcal{Z}\to V\) of \(\mathcal{Z}\) into a vector space \(V\).
**Definition 1** (Linearly factored embeddings).: A collection of vectors \(r(\mathcal{Z})=\{\mathbf{u}_{z}\colon z\in\mathcal{Z}\}\subset V\) parameterized by \(\mathcal{Z}=\mathcal{Z}_{1}\times\ldots\times\mathcal{Z}_{k}\) is _linearly factored_ if there exist vectors \(\mathbf{u}_{z_{i}}\in V\) for all \(z_{i}\in\mathcal{Z}_{i}\) (\(i=1,\ldots,k\)) such that
\[\mathbf{u}_{z}=\mathbf{u}_{z_{1}}+\ldots+\mathbf{u}_{z_{k}}, \tag{1}\]
for all \(z=(z_{1},\ldots,z_{k})\).
This notion is very intuitive and can be seen as a generalization of the additive compositionality that has been considered for (pairwise) analogies and word embeddings [35].
**Lemma 2**.: _1) A collection of vectors \(r(\mathcal{Z})\) is linearly factored if and only if the vector difference \(\mathbf{u}_{z}-\mathbf{u}_{z^{\prime}}\) does not depend on the components that \(z,z^{\prime}\in\mathcal{Z}\) share in common. 2) If \(|\mathcal{Z}_{i}|=n_{i}\), then the dimension of \(Span(r(\mathcal{Z}))\) is at most \(1+\sum_{i=1}^{k}(n_{i}-1)\)._
It is easy to realize that if a collection of vectors \(r(\mathcal{Z})\) is linearly factored, then the vectors appearing on the right of equation 1 are _never_ uniquely determined. In particular, even though each \(\mathbf{u}_{z_{i}}\) is associated with a value of a factor \(z_{i}\in\mathcal{Z}_{i}\), that vector cannot carry any "semantic" content. However, we can recover uniqueness in the components by simply turning to a "centered" decomposition.
**Lemma 3** (Centered decomposition).: _If a collection of vectors \(r(\mathcal{Z})\) is linearly factored, then there exist unique vectors \(\mathbf{u}_{0}\in V\) and \(\mathbf{u}_{z_{i}}\in V\) for all \(z_{i}\in\mathcal{Z}_{i}\) (\(i=1,\ldots,k\)) such that \(\sum_{z_{i}\in\mathcal{Z}_{i}}\mathbf{u}_{z_{i}}=0\) for all \(i\) and_
\[\mathbf{u}_{z}=\mathbf{u}_{0}+\mathbf{u}_{z_{1}}+\ldots+\mathbf{u}_{z_{k}}, \tag{2}\]
_for all \(z=(z_{1},\ldots,z_{k})\)._
In the previous decomposition, the vectors \(\mathbf{u}_{z_{i}}\) are now uniquely associated with the value of a factor \(z_{i}\in\mathcal{Z}_{i}\), but are _relative_ to the other values in \(\mathcal{Z}_{i}\) (since they sum to zero). Similarly, the vector spaces \(V_{\mathcal{Z}_{i}}:=Span(\mathbf{u}_{z_{i}}\colon z_{i}\in\mathcal{Z}_{i})\) are uniquely associated with each factor \(\mathcal{Z}_{i}\). In our applications, we will refer to \(\mathbf{u}_{z_{i}}\) as the _ideal words_ of the linear factorization and to each \(V_{\mathcal{Z}_{i}}\) as the _semantic space_ associated with \(\mathcal{Z}_{i}\). Despite its simplicity, we believe that the decomposition in Lemma 3 paints an interesting intuitive picture of linear models of "meaning." In this setting, the origin is not a universally meaningful point; for example, the origin of text embeddings does not correspond to the null string. Thus, meanings might be best viewed as an _affine space_, where the origin is only chosen as a particular reference that may depend on context. Ideal words, on the other hand, provide _relative meanings_ with respect to the context.
From Lemma 2, it follows that factored representations must be very low-dimensional and, in particular, "generic" embeddings will _not_ be factored. However, it is very easy to recover the nearest factored approximation for any given set of vectors \(\mathbf{u}_{z},z\in\mathcal{Z}\).
**Proposition 4**.: _Let \(\alpha_{z_{i}}\), \(z_{i}\in\mathcal{Z}_{i}\), be arbitrary positive weights such that \(\sum_{z_{i}\in\mathcal{Z}_{i}}\alpha_{z_{i}}=1\), and define \(\beta_{z}:=\prod_{i}\alpha_{z_{i}}\) for all \(z=(z_{1},\ldots,z_{k})\). Then, for any norm \(\|\cdot\|\) induced by an inner product on \(V\), we have that_
\[\arg\min_{\tilde{\mathbf{u}}_{z}} \sum_{z\in\mathcal{Z}}\beta_{z}\|\mathbf{u}_{z}-\tilde{\mathbf{u}}_{z}\|^ {2}, \tag{3}\] \[s.t. \{\tilde{\mathbf{u}}_{z}\}\text{ is linearly factored},\]
_is given by \(\tilde{\mathbf{u}}_{z}=\mathbf{u}_{0}+\mathbf{u}_{z_{1}}+\ldots+\mathbf{u}_{z_{k}}\) where_
\[\mathbf{u}_{0}:=\sum_{z}\beta_{z}\mathbf{u}_{z},\ \mathbf{u}_{z_{i}}:=\frac{1}{\alpha_{z_{i}}} \sum_{\begin{subarray}{c}z^{\prime}=(z^{\prime}_{1},\ldots,z^{\prime}_{k})\\ z^{\prime}_{i}=z_{i}\end{subarray}}\beta_{z^{\prime}}\mathbf{u}_{z^{\prime}}-\mathbf{u}_{0}. \tag{4}\]
This fact shows that computing linearly factored approximations amounts to performing simple weighted averages of the original vectors. In many cases, we will consider \(\alpha_{z_{i}}=\frac{1}{n_{i}}\) and \(\beta_{z}=\prod\frac{1}{n_{i}}\), however it can be useful to allow for additional "knobs," as the following example illustrates.
**Example 5**.: One of our main motivations to consider linearly factored structures is to approximate (pre-trained) contextual text embeddings to obtain representations that are _interpretable_ and _compositional_. More concretely, assume that each factor \(\mathcal{Z}_{i}\) represents a finite collection of
strings and that the representation \(r:\mathcal{Z}_{1}\times\ldots\times\mathcal{Z}_{k}\to V\) is defined by concatenating strings and then embedding the result using a contextual language encoder. For a very simple example, consider
\[\mathcal{Z}=\{\text{a blue, a red, a green}\}\times\{\text{bike, house}\},\]
which leads to six possible strings and six distinct embedding vectors. Using Proposition 4, we can easily find a factored approximation \(\boldsymbol{u}_{(col,obj)}\approx\boldsymbol{u}_{0}+\boldsymbol{u}_{col}+ \boldsymbol{u}_{obj}\), where \(\boldsymbol{u}_{col}\) and \(\boldsymbol{u}_{obj}\) are the ideal words representing a particular color and object from \(\mathcal{Z}\). As we will see, these vectors can be used for semantic manipulations of embeddings. Note that ideal words are not the same as the encodings of the original words or substrings. In fact, quite intuitively, the meaning of ideal word vectors is determined entirely by the way in which the corresponding string interacts with other factors. For example, we have \(\boldsymbol{u}_{\text{green}}=\alpha_{bike}\boldsymbol{u}_{(\text{green\,bike})}+\alpha_{house}\boldsymbol{u}_{(\text{green\,house})}-\boldsymbol{u}_{0}\) where \(\boldsymbol{u}_{0}\) is the mean of all six embeddings. In this particular example, "green house" has distinct contextual meaning, but this can be controlled by using appropriate weights, if desired. See Section 5 and Figure 3 for more discussions on similar examples.
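To make this concrete, the following is a minimal sketch of how the six embeddings and the corresponding ideal words could be computed in practice. The Hugging Face `transformers` checkpoint and the exact prompt template are our own illustrative assumptions, not part of the example above.

```python
# Sketch: ideal words for the 3 x 2 concepts of Example 5 (uniform weights).
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")      # assumed checkpoint
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

colours, objects = ["blue", "red", "green"], ["bike", "house"]
prompts = [f"a photo of a {c} {o}" for c in colours for o in objects]
with torch.no_grad():
    tok = tokenizer(prompts, padding=True, return_tensors="pt")
    U = model.get_text_features(**tok)                 # (6, d) embeddings u_{(col, obj)}
U = U.reshape(len(colours), len(objects), -1)

u0 = U.mean(dim=(0, 1))                                # mean vector u_0
u_col = U.mean(dim=1) - u0                             # ideal words for colours, (3, d)
u_obj = U.mean(dim=0) - u0                             # ideal words for objects, (2, d)
residual = (U - (u0 + u_col[:, None] + u_obj[None, :])).norm() / U.norm()
print(f"relative error of the linearly factored approximation: {residual:.3f}")
```

With uniform weights this is exactly the weighted-average construction of Proposition 4; non-uniform weights (e.g. to down-weight the contextual reading of "green house") only change the averages.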
We conclude this section by pointing out a connection between linearly factored embeddings and a notion of "disentangled representations" proposed in [26]. We refer to the Appendix for a short summary of the relevant mathematical background and for additional discussions. In a broad sense, we can say that an embedding map \(r:\mathcal{Z}\to V\) into a vector space \(V\) is "linearly compositional" with respect to some group of transformations \(G\) if 1) \(G\) acts on the set \(\mathcal{Z}\), 2) \(G\) acts on \(V\) as invertible linear transformations, and 3) \(r\) is a \(G\)-morphism, that is, if \(r(g\cdot z)=g\cdot r(z)\). In our case of interest, the set \(\mathcal{Z}=\mathcal{Z}_{1}\times\ldots\times\mathcal{Z}_{k}\) is a finite set of composite concepts (_e.g._, \(\{\text{rainy, sunny}\}\times\{\text{morning, evening}\}\)) and \(G=\mathfrak{S}_{n_{1}}\times\ldots\times\mathfrak{S}_{n_{k}}\) is a product of symmetric groups that acts on \(\mathcal{Z}\) by varying each component separately (_e.g._, swapping "rainy" \(\leftrightarrow\) "sunny" and "morning" \(\leftrightarrow\) "evening," independently). Following [26], we say that the action of \(G\) on \(V\) is "linearly disentangled" if there exists a decomposition \(V=V_{1}\oplus\ldots\oplus V_{k}\) such that \(g\cdot v=(g_{1}\cdot v_{1},\ldots,g_{k}\cdot v_{k})\) for all \(v=(v_{1},\ldots,v_{k})\in V\) and \(g=(g_{1},\ldots,g_{k})\in G\). Intuitively, this means that we can permute the different factors independently by acting with linear transformations on the embedding space. With these definitions in place we have that linear factorizations of embeddings are intimately related to disentangled compositional representations.
**Proposition 6**.: _Let \(r(\mathcal{Z})\) be a set of linearly factored vectors of maximal dimension. Then \(r\) is compositional for some disentangled action of \(G=\mathfrak{S}_{n_{1}}\times\ldots\times\mathfrak{S}_{n_{k}}\) on \(V\). Conversely, if \(r\) is compositional for a disentangled action of \(G\), then the vectors \(r(\mathcal{Z})\) are linearly factored._
## 4 Linearly Factored Embeddings in Visual Language Models
In this section, we discuss linear factorizations from a probabilistic viewpoint in the context of vision-language models (VLMs). A priori, it may not be clear why the geometric notion of factored embeddings should be relevant in practice--for example, in the case of CLIP's normalized embeddings, it may seem that non-linear spherical geometry should come into play. In this section, however, we argue that vector factorizations have simple probabilistic interpretations, and in particular, we should expect these structures to be present in real data embeddings.
In the following, we write \(\mathcal{X}\) for a set of texts and \(\mathcal{Y}\) for a set of images (for simplicity, we consider a finite set of text and images, which will always be the case in practice). We consider a VLM that uses parametric encoders of texts \(x\mapsto\boldsymbol{u}_{x}\) and of images \(y\mapsto\boldsymbol{v}_{y}\) into \(V=\mathbb{R}^{d}\) to model the conditional log-probabilities of \(x\) given \(y\) and \(y\) given \(x\) in a bilinear fashion:
\[p(x\,|\,y)=\frac{\exp\boldsymbol{u}_{x}^{\top}\boldsymbol{v}_{y}}{\sum_{x^{ \prime}}\exp\boldsymbol{u}_{x^{\prime}}^{\top}\boldsymbol{v}_{y}},\quad p(y\,| \,x)=\frac{\exp\boldsymbol{u}_{x}^{\top}\boldsymbol{v}_{y}}{\sum_{y^{\prime}} \exp\boldsymbol{u}_{x}^{\top}\boldsymbol{v}_{y^{\prime}}}. \tag{5}\]
For example, CLIP [40] uses both expressions in equation 5 to optimize a symmetric cross-entropy. This setup is similar to the one used in NLP for context-based embeddings [35] and also in transformer-based language modeling [46], the main difference being that in those cases only one of the two expressions in equation 5 is used (to model words based on context). Much of the discussion that follows can be applied to these cases as well, but we focus on VLMs for clarity.
For any given pair of embeddings \(\boldsymbol{u}_{x},\boldsymbol{u}_{y}\) there exists a unique probability \(p(x,y)\) on \(\mathcal{X}\times\mathcal{Y}\) compatible with these embeddings which satisfies
\[\log p(x,y)=\boldsymbol{u}_{x}^{\top}\boldsymbol{v}_{y}+c,\quad c\in\mathbb{R}. \tag{6}\]
In the following, we consider the distribution on \(\mathcal{X}\times\mathcal{Y}\) expressed by a model and defined by equation 6. After the learning stage, this distribution should reflect a "true" distribution on the same space. We remark, however, that the embedding dimension \(d\) is in practice much smaller than the number of images or texts used in training, which means that we are actually imposing a _low-rank constraint_ on the joint probability distribution. In NLP, this effect has been referred to as the "softmax bottleneck" [48].
We now consider a set of factors \(\mathcal{Z}=\mathcal{Z}_{1}\times\ldots\times\mathcal{Z}_{k}\) and assume that each \(z\in\mathcal{Z}\) is represented by a string \(x(z)\in\mathcal{X}\). Note that formally we could have associated factors with images rather than texts, however it is more natural to express discrete concepts as text. The factors can correspond to combinations of particular tokens (_e.g._, attributes and objects) but the association with strings could potentially be
more complex (_e.g_., ("royal", "man") \(\mapsto\) "king"). The VLM model now provides an embedding of \(\mathcal{Z}\) via \(z\mapsto\boldsymbol{u}_{x(z)}\).
**Proposition 7**.: _In the setting described above, and assuming that \(Span(\boldsymbol{v}_{y},y\in\mathcal{Y})=\mathbb{R}^{d}\), the embedding \(z\mapsto\boldsymbol{u}_{x(z)}\) of \(\mathcal{Z}\) is linearly factored in the sense of Definition 1 if and only if there exists functions \(q_{0},\ldots,q_{k}\) such that_
\[p(x(z),y)=q_{0}(y)q_{1}(z_{1},y)\ldots q_{k}(z_{k},y), \tag{7}\]
_for all \(z=(z_{1},\ldots,z_{k})\in\mathcal{Z}\) and \(y\in\mathcal{Y}\)._
**Corollary 8**.: _Under the assumptions of Proposition 7, an embedding \(z\mapsto\boldsymbol{u}_{x(z)}\) of \(\mathcal{Z}\) is linearly factored if and only if the factors \(z_{i}\) are conditionally independent given any image \(y\)._
It is perhaps not surprising that the log-linear form of the model translates multiplicative decompositions into additive ones. It may be counterintuitive, however, that the conditional probabilities \(p(z_{i}|y)\) as \(y\) varies actually depend on _all_ of the ideal word vectors \(\boldsymbol{u}_{z_{i}}\), since normalizing constants can change with \(y\). Indeed we have that
\[p(z_{i}\,|\,y)=\exp(\boldsymbol{u}_{z_{i}}^{\top}\boldsymbol{v}_{y})h( \mathcal{Z}_{j\neq i},y), \tag{8}\]
where \(h(\mathcal{Z}_{j\neq i},y)\) is a function that depends on \(y\) and all vectors corresponding to \(\mathcal{Z}_{j}\) with \(j\neq i\). In this sense, the geometric perspective of factorization is simpler since it disregards this dependence as \(y\) varies.
The conditional independence from Proposition 7 may seem like a strict requirement and may not be obviously true in the real world. For this reason, we discuss some relaxed conditions and explain what they imply in terms of linearly factored structures. First, given an image \(y\in\mathcal{Y}\), we say that the probability \(p(x(z),y)\) is _mode-disentangled_ (for the factor \(\mathcal{Z}_{i}\)) if
\[\operatorname*{arg\,max}_{z_{i}\in\mathcal{Z}_{i}}p(x(z_{i},z_{-i}),y)= \operatorname*{arg\,max}_{z_{i}\in\mathcal{Z}_{i}}p(x(z_{i},z_{-i}^{\prime}),y), \tag{9}\]
for all \(z_{-i}:=(z_{1},\ldots,z_{i-1},z_{i+1},\ldots,z_{k})\) and \(z_{-i}^{\prime}:=(z_{1}^{\prime},\ldots,z_{i-1}^{\prime},z_{i+1}^{\prime}, \ldots,z_{k}^{\prime})\). Intuitively, this simply means that it is possible to determine the most likely value of the factor \(\mathcal{Z}_{i}\) by disregarding all of the remaining factors. Similarly, we say that \(p(x(z),y)\) is _order-disentangled_ (for the factor \(\mathcal{Z}_{i}\)) if
\[\begin{split}& p(x(z_{i},z_{-i}),y)\geq p(x(z_{i}^{\prime},z_{-i}),y) \\ &\qquad\Longleftrightarrow p(x(z_{i},z_{-i}^{\prime}),y)\geq p (x(z_{i}^{\prime},z_{-i}^{\prime}),y).\end{split} \tag{10}\]
for all \(z_{-i}\) and \(z_{-i}^{\prime}\). This now means that it is possible to _rank_ the values of the factor \(\mathcal{Z}_{i}\) by disregarding all of the remaining factors. It is easy to see that conditional independence implies order-disentanglement which in turn implies mode-disentanglement. If \(|\mathcal{Z}_{i}|\leq 2\), then mode-disentanglement and order-disentanglement are equivalent.
**Proposition 9** (Relaxed feasibility of linear factorizations).: _1) If \(y\in\mathcal{Y}\) is such that \(p(x(z),y)\) is mode-disentangled, then one can replace the embedding vectors \(\boldsymbol{u}_{x(z)}\) with their linearly factored approximations \(\tilde{\boldsymbol{u}}_{x(z)}\) from Proposition 4 (for any choice of weights) and obtain the same prediction for \(z\) given \(y\); 2) If \(p(x(z),y)\) is order-disentangled for all images \(y\) sampled from a distribution with full support over the unit sphere, then the vectors \(\boldsymbol{u}_{x(z)}\) are necessarily linearly factored._
The second part of this statement means that, roughly speaking, we should expect that imposing order-disentanglement for an increasing number of images would gradually lead to linearly factored embeddings.
**Example 10**.: Let \(\mathcal{Z}\) be of the form \(\{o_{1},o_{2}\}\times\{c_{1},c_{2}\}\) (objects, contexts) and let \(x(z)\) be the corresponding collection of strings (_e.g_., \(x(o_{i},c_{j})=\)"a photo of a [\(o_{i}\)] in [\(c_{j}\)]"). Then mode and order disentanglement are equivalent and mean that
\[\begin{split}& p(x(o_{1},c_{1})|y)>p(x(o_{2},c_{1})|y)\\ &\qquad\Leftrightarrow p(x(o_{1},c_{2})|y)>p(x(o_{2},c_{2})|y), \\ &\qquad p(x(o_{1},c_{1})|y)>p(x(o_{1},c_{2})|y)\\ &\qquad\Leftrightarrow p(x(o_{2},c_{1})|y)>p(x(o_{2},c_{2})|y). \end{split} \tag{11}\]
These are reasonable conditions on the probability \(p(x(z),y)\) since it is normally possible to discriminate object and context in an image independently. If \(p(x(z),y)\) and \(y\) satisfy equation 11, then the first part of Proposition 9 means that we can use two (approximate) "ideal word" vectors \(\boldsymbol{u}_{o_{1}}=-\boldsymbol{u}_{o_{2}}\) and \(\boldsymbol{u}_{c_{1}}=-\boldsymbol{u}_{c_{2}}\) instead of the four original vectors \(\boldsymbol{u}_{x(o_{i},c_{j})}\) to assign the correct label to \(y\). The second part of Proposition 9 means that if equation 11 holds for "all" images \(y\) (_i.e_., vectors covering the unit sphere), then the original vectors \(\boldsymbol{u}_{x(o_{i},c_{j})}\) are actually linearly factored.
## 5 Experiments
We now empirically investigate the presence and usefulness of linearly factored structures in real VLM embeddings. In all of our experiments, we use a pre-trained CLIP encoder [40]. We use different datasets that have a compositional nature: MIT-states [28] and UTZappos [49], which are image classification datasets where labels are attribute-object pairs; CelebA [32] and Waterbirds [43], in which images have a label and a spurious attribute; and DeepFashion2 [23] with PerVL annotations from [14], where the goal is to retrieve object instances from different contexts. We also include a visualization of ideal words using a CLIP-guided diffusion model (Stable Diffusion 2.1) [42]. We
emphasize that our goal is not to achieve state-of-the-art results, although we will see that linear manipulations can be surprisingly effective and sometimes outperform significantly more complex methods. Rather, we aim to show that linear factored structures in embedding spaces provide a useful conceptual and practical framework for _understanding_ and _controlling_ the behavior of pre-trained VLMs.
Visualization of embeddings. Figure 2 shows some examples of embeddings of composite strings, visualized in 3D using PCA. In the top row, we show examples of manually constructed strings. In order: "a photo of a {red, blue, pink} \(\times\) {car, house}"; "a photo of a {big, small} \(\times\) {cat, dog} \(\times\) {eating, drinking}"; "{a photo of a, a picture of a} \(\times\) {place, object, person}"; "king, queen, man, woman, boy, girl" (where one factor would correspond to male-female and the other to a generic context). In the bottom row, we present strings of the type "an image of a [a] [o]" for randomly chosen attributes and objects from MIT-states [28] and UTZappos [49] (first using two attributes and three objects, and then using three attributes and two objects). Here we always use either \(2\times 3\) or \(2\times 2\times 2\) concepts since these factored structures have expected linear dimension 4, or affine dimension 3. The presence of roughly parallel edges and faces in these figures indicates that embeddings are approximately linearly factored. We note that in many of these examples the factorization of the concepts is already reflected in the _syntax_ of the strings, _i.e_., in the presence of repeated substrings in prompts with similar meaning. However, factorized vectors also encode semantic aspects, as can be seen in the last two examples from the first row. In the fourth example, the encoded strings have no repeated substrings, so the structure is "emergent"; in the third example, the factor corresponding to {a photo of a, a picture of a} results in an ideal word vector with a smaller norm compared to the other directions (resulting in a "squashed" triangular prism), as one might expect since this factor is not semantically significant. We refer to the Appendix for a more in-depth discussion.
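The projections used for these visualizations can be reproduced with a few lines of linear algebra. The sketch below is ours (the paper does not give plotting code); `U` stands for any (n, d) array of prompt embeddings.

```python
import numpy as np

def project_3d(U):
    """PCA projection of prompt embeddings U (n, d) to 3D coordinates."""
    X = U - U.mean(axis=0)                       # center the embeddings
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T                          # (n, 3) coordinates

# e.g. pass the six embeddings of "a photo of a {red, blue, pink} {car, house}":
# near-parallel edges of the resulting prism indicate an approximately
# linearly factored set of vectors.
```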
Compositional classification. We evaluate the usefulness of linearly factored approximations for object-attribute labels of the MIT-states [28] and UTZappos [49] datasets. The default strategy for applying CLIP in a zero-shot fashion on these datasets is to use text captions such as \(x(a,o)\)="an image of a [\(a\)] [\(o\)]." This results in \(n_{obj}\times n_{attr}\) captions that each image must be compared with. We want to explore whether the embedding vectors \(\mathbf{u}_{x(a,o)}\) can be approximated with a linearly factored set \(\tilde{\mathbf{u}}_{x(a,o)}=\mathbf{u}_{0}+\mathbf{u}_{a}+\mathbf{u}_{o}\), so that inference can be performed using only \(n_{obj}+n_{attr}\) embedding vectors. The intuitive choice for such vectors would be to use the representations of captions such as "image of a [\(a\)] object" and "image of a [\(o\)]." We compare this choice with using the "ideal words" associated with the original captions, where the representation of an object \(o\) is simply given by \(\mathbf{u}_{o}:=\frac{1}{n_{attr}}\sum_{a}\mathbf{u}_{x(a,o)}\), and similarly for attributes, as in Proposition 4 (in this setting, there is no need to remove the mean vector \(\mathbf{u}_{0}\) since it is multiplied with every image vector). The resulting disjoint representations for objects and attributes (\(\mathbf{u}_{o}\) and \(\mathbf{u}_{a}\)) are "contextualized," in the sense that they optimally approximate the original pairwise embeddings. In Table 1, "pair" refers to using the original pairwise labels, "real words" uses the embeddings of words corresponding to objects and attributes using "image of a [\(a\)] object" and "image of a [\(o\)]", while "ideal words" uses the ideal word vectors of the factorization. We see that ideal words clearly outperform the _real words_ baseline, and often even surpass the accuracy of _pair_. For MIT-States, using factored labels translates into using 360 vs. 28175 class vectors (78\(\times\) gain), and inference on 12995 test samples goes from 614ms to 9ms (63\(\times\) gain).
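The inference procedure described above can be summarized in a short sketch (ours, assuming precomputed caption embeddings `U_pair` and an image embedding `v` from CLIP); the pairwise and the ideal-word classifiers differ only in how the class vectors are built.

```python
import numpy as np

def pairwise_prediction(U_pair, v):
    """U_pair: (n_attr, n_obj, d) embeddings of captions "an image of a [a] [o]";
    v: (d,) image embedding.  Returns the best (attribute, object) index pair."""
    scores = U_pair @ v                            # (n_attr, n_obj)
    return np.unravel_index(scores.argmax(), scores.shape)

def ideal_word_prediction(U_pair, v):
    """Same decision using only n_attr + n_obj ideal-word vectors (averages as in
    Prop. 4).  The shared mean u0 adds the same constant to every pair and can be
    dropped for the argmax."""
    u_attr = U_pair.mean(axis=1)                   # (n_attr, d)
    u_obj = U_pair.mean(axis=0)                    # (n_obj, d)
    scores = (u_attr @ v)[:, None] + (u_obj @ v)[None, :]
    return np.unravel_index(scores.argmax(), scores.shape)

# toy usage with random stand-ins for CLIP embeddings
rng = np.random.default_rng(1)
U_pair = rng.normal(size=(5, 7, 16))               # 5 attributes x 7 objects
v = rng.normal(size=16)
print(pairwise_prediction(U_pair, v), ideal_word_prediction(U_pair, v))
```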
Debiasing.We can apply the decomposition into ideal words as a baseline strategy to remove contexts or biases from embeddings. The debiasing task can be formalized using the group robustness framework proposed in [43]. In this setting, we are given a collection of labels \(\mathcal{Y}\) and spurious attributes \(\mathcal{A}\), and we define a "group" as a pair \(g\in\mathcal{Y}\times\mathcal{A}\). Assuming that each group corresponds to a probability \(P_{g}\) on an input space \(\mathcal{X}\), the goal is to find a classifier \(f:\mathcal{X}\rightarrow\mathcal{Y}\) that leads to a small gap between worst-group error and average error:
\[\max_{g}\mathbb{E}_{x\sim P_{g}}\ell(f(x),y)-\mathbb{E}_{x\sim P}\ell(f(x),y). \tag{12}\]
In a zero-shot setting with CLIP, classifiers are prompts that inherit biases from the dataset used in pre-training, so group robustness is not guaranteed. To address this problem, the authors of [11] propose a method for debiasing prompts that
Figure 2: **Visualization of embeddings.**_Top_: projected embeddings of manually constructed strings associated with factored concepts. _Bottom:_ projected embeddings for strings of the type “an image of a [a] [o]” for randomly chosen attributes and objects from MIT-states [28] and UTZappos [49]. Symmetric structures indicate that embeddings are approximately linearly factored. See text for details.
finds a projection map that makes spurious prompts irrelevant (following [6]) and then additionally regularizes the projection map to ensure that certain prompts are mapped near each other in embedding space. Here we note that a much simpler baseline would be to use ideal words to leverage the joint label-attribute representation provided by the pre-trained VL model and "average out" spurious attributes. More precisely, starting from a set of embeddings \(\mathbf{u}_{(y,a)}\) corresponding to prompts representing each group \(g=(y,a)\), ideal words suggest defining the encoding of each label \(y\) to be \(\mathbf{u}_{y}:=\frac{1}{|\mathcal{A}|}\sum_{a\in\mathcal{A}}\mathbf{u}_{(y,a)}.\) Once again, this is the same as the (shifted) ideal word corresponding to \(y\), obtained by approximating pairwise embeddings of labels and attributes in a linearly factored way. Following [11], we evaluate group robustness of unbiased prompts on the Waterbirds [43] and CelebA [32] datasets. For the Waterbirds dataset, the labels are "landbird" and "waterbird," and the confounding factor is water/land background. For the CelebA dataset, the labels are "blond" and "dark" hair and the confounding factor is the binary gender. For our simple debiasing method, we prepend prompts associated with labels with prompts associated with spurious attributes, and then average over all the spurious prompts. In both datasets, we consider exactly the same prompts for spurious attributes and labels used in [11] (see the Appendix for a description). Our results are shown in Table 2. On the CelebA dataset, our simple averaging strategy achieves a much smaller gap between average and worst group accuracy than the method proposed in [11] (1.6 vs 10.1). For the Waterbirds dataset, the gap is larger but comparable, and average accuracy is higher.
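A sketch of this averaging baseline is given below. The prompt template and the `encode` function (standing in for a CLIP text encoder) are illustrative assumptions; the averaging over spurious prompts is the ideal-word construction described above.

```python
import numpy as np

def debiased_label_embeddings(encode, label_prompts, spurious_prompts):
    """For each label prompt, embed it prepended with every spurious-attribute
    prompt and average the results; the average is the (shifted) ideal word of
    the label, with the spurious factor averaged out.  `encode` maps a list of
    strings to an (n, d) array."""
    U = []
    for y in label_prompts:
        group = [f"{s} {y}" for s in spurious_prompts]   # hypothetical template
        U.append(encode(group).mean(axis=0))
    return np.stack(U)

# hypothetical usage (Waterbirds-style):
#   label_prompts    = ["a photo of a landbird", "a photo of a waterbird"]
#   spurious_prompts = ["a photo of a forest.", "a photo of water."]
#   W = debiased_label_embeddings(clip_encode_text, label_prompts, spurious_prompts)
#   prediction = (W @ image_embedding).argmax()
```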
Composing concepts and contexts.We perform experiments using the DeepFashion2 dataset [23] with the captions provided in PerVL [14]. This dataset contains images of 100 unique fashion items ("concepts") with textual descriptions. The task is to retrieve an image given a text query that includes a personalized concept that is specified using a small number of examples (5 samples). An example of a text query is "The [CONCEPT] is facing a glass store display." In [14], the authors propose a method called PALAVRA that trains new CLIP tokens to be associated with the custom concept; the learned tokens can then be used within natural language for retrieving images. The authors compare their method with a baseline approach dubbed "AvgIm+Text" which consists in averaging the CLIP embedding of the concept support images and of the embedded text query. This strategy is presented as the second best approach after PALAVRA. Inspired by our linear factorization of concepts and contexts, we propose to use a modification of AvgIm+Text where instead of averaging text and image embeddings, we add to the text embedding the _difference_ between mean image embeddings of the specialized concept ("my shirt") and the mean embeddings of the general (coarse-grained) concept images (all images of shirts in the dataset). For a concrete example, if [CONCEPT] is a particular instance of a shirt, then the AvgIm+Text approach would be as follows:
\[\textbf{AvgIm+Text}:\] \[\mathbf{u}(\text{``A person wearing [CONCEPT] sitting on a couch''})\] \[\approx\mathbf{u}(\text{``A person wearing a shirt sitting on a couch''})\] \[+\mathrm{Norm}(\mathrm{Mean}\{\mathbf{v}(\mathrm{CONCEPT})\}),\]
where \(\mathbf{u}\) is the text embedding and \(\mathbf{v}\) is the image embedding, \(\mathrm{Mean}\) means the mean over supporting samples, and \(\mathrm{Norm}\) means normalization. In contrast, we propose to use
| | **Method** | **Pair Acc** | **Attr Acc** | **Obj Acc** |
| --- | --- | --- | --- | --- |
| MIT-states [28] | pair | 7.7% | 16.2% | 47.8% |
| | real words | 10.0% | 19.3% | 49.3% |
| | ideal words | **11.5%** | **21.4%** | **50.8%** |
| UT Zappos [49] | pair | **12.4%** | 17.1% | **55.7%** |
| | real words | 8.4% | 10.3% | 51.0% |
| | ideal words | 10.8% | **19.2%** | 55.3% |
Table 1: **Zero-shot image classification results on compositional datasets.** Here “pair” refers to using all attribute-object pairs as candidate labels; “real words” refers to using labels corresponding to real words (_i.e._, separate attribute and object labels); “ideal words” refers to using compositional labels based on ideal words. Ideal words always lead to better accuracy than real words and often even outperform pairwise labels.
| | Text Only | AvgImg+Text | PALAVRA [14] | IW |
| --- | --- | --- | --- | --- |
| DeepFashion2 [23] | 17.6 ± 0.0 | 21.7 ± 2.4 | 28.4 ± 0.7* | **37.0** ± 1.1 |

| | IW w/o mean removal | IW with Norm on mean | IW |
| --- | --- | --- | --- |
| DeepFashion2 [23] | 22.1 ± 2.4 | 36.5 ± 1.4 | **37.0** ± 1.1 |
Table 3: **Concept retrieval results.** Mean Reciprocal Rank retrieval metric on the DeepFashion2 [23] with annotations from PerVL [14]. Numbers with * are taken from [14].
the following strategy:
**Ideal Words** :
\[\mathbf{u}(\text{``A person wearing [CONCEPT] sitting on a couch''})\] \[\approx\mathbf{u}(\text{``A person wearing a shirt sitting on a couch''})\] \[-\operatorname{Mean}\{\mathbf{v}(\text{shirt})\}+\operatorname{Mean} \{\mathbf{v}(\text{CONCEPT})\}.\]
Our results are shown in Table 3. Remarkably, this simple strategy that uses CLIP embeddings and _does not require any training_ outperforms PALAVRA by a large margin (in our experiments, we used the implementation and evaluation code provided in [14] with only minimal changes). This modified approach can be interpreted from the perspective of linearly factored embeddings, since we are assuming that \(\mathbf{u}(\text{context},\text{CONCEPT})-\mathbf{u}(\text{context},\text{shirt})\) does not significantly depend on the context and can be approximated as the difference of the mean vectors representing the specific CONCEPT and the generic shirt. Table 3 also includes ablations for the two modifications we made w.r.t. AvgIm+Text proposed in [14] (_i.e_. skipping the normalization step and removing the mean of the coarse-grained concept).
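The ideal-words retrieval strategy amounts to a simple shift of the query embedding; the sketch below (ours, assuming precomputed CLIP text and image embeddings) spells this out.

```python
import numpy as np

def ideal_word_query(text_emb_generic, coarse_img_embs, concept_img_embs):
    """Shift the generic-text query embedding by the difference between the mean
    embedding of the concept's support images and the mean embedding of images of
    the coarse class (e.g. all shirts)."""
    shift = concept_img_embs.mean(axis=0) - coarse_img_embs.mean(axis=0)
    return text_emb_generic + shift

def retrieve(query, gallery):
    """Rank gallery image embeddings (n, d) by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(G @ q))

# toy usage with random stand-ins for CLIP embeddings
rng = np.random.default_rng(2)
q = ideal_word_query(rng.normal(size=64), rng.normal(size=(100, 64)), rng.normal(size=(5, 64)))
print(retrieve(q, rng.normal(size=(20, 64)))[:5])
```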
Visualizing ideal words. We propose to visualize the effect of linear-algebraic operations with ideal words using a CLIP-guided diffusion model (Stable Diffusion 2.1). In this setting, we compute ideal words of factored strings in the same way as before (as in Proposition 4 and Example 5), with the only difference that we now consider the encoded representation of the entire string before the final projection layer of the text encoder (treating the concatenated token representations as a long vector), since this is required for conditioning the diffusion model. An illustrative example is shown in Figure 3. We mention that [47, 31] have also proposed algebraic manipulations to control visual generation in a compositional way; however, both of those works perform operations on score functions rather than on embedding vectors, which means that their approach requires modifying the diffusion process. In contrast, similar to the prompt debiasing method from [11], we simply modify the prompt embeddings that condition the generation. In this paper, we use generative models as a qualitative proof of the validity of ideal words as approximations for embeddings; we leave a detailed exploration of applying these decompositions for controlling image generation to future work.
## 6 Conclusion
We have investigated compositional structures in VLM embeddings and argued that contextual text embeddings are often well-approximated by linear combinations of smaller sets of vectors. Optimal choices for these vectors are not embeddings of actual words, but rather "ideal words" that can be easily obtained as weighted averages of embeddings of longer strings of text. We showed that this simple idea can be used to design effective baseline methods for different visual language tasks (compositional classification/retrieval, debiasing, and image generation) and to control the behavior of VLMs.
In the future, we will focus on practical applications of ideal word decompositions such as compositional image generation. Furthermore, we would like to find ways of customizing ideal words using training data, for example by incorporating linear factorizations in fine-tuning strategies, or by introducing kernelized versions of these decompositions that have learnable parameters.
| **We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs). Traditionally, compositionality has been expressed through algebraic operations on embeddings of words. In contrast, we aim to approximate the representations produced by an encoder as combinations of a smaller set of vectors in the embedding space. These vectors can be viewed as "ideal words" for generating concepts directly within the model's embedding space. We first present a framework for understanding this structure from a geometric perspective. We then explain probabilistically what this structure entails for VLM embeddings and why it arises in practice. Finally, we carry out experiments that explore these structures in CLIP's embeddings and use them to solve vision-language tasks such as classification, debiasing, and retrieval. Results
2309.07933 | A Lean-Congruence Format for EP-Bisimilarity | Enabling preserving bisimilarity is a refinement of strong bisimilarity that
preserves safety as well as liveness properties. To define it properly,
labelled transition systems needed to be upgraded with a successor relation,
capturing concurrency between transitions enabled in the same state. We enrich
the well-known De Simone format to handle inductive definitions of this
successor relation. We then establish that ep-bisimilarity is a congruence for
the operators, as well as lean congruence for recursion, for all (enriched) De
Simone languages. | Rob van Glabbeek, Peter Höfner, Weiyou Wang | 2023-09-13T20:51:32 | http://arxiv.org/abs/2309.07933v1 | # A Lean-Congruence Format for EP-Bisimilarity
###### Abstract
Enabling preserving bisimilarity is a refinement of strong bisimilarity that preserves safety as well as liveness properties. To define it properly, labelled transition systems needed to be upgraded with a successor relation, capturing concurrency between transitions enabled in the same state. We enrich the well-known De Simone format to handle inductive definitions of this successor relation. We then establish that ep-bisimilarity is a congruence for the operators, as well as lean congruence for recursion, for all (enriched) De Simone languages.
## 1 Introduction
Recently, we introduced a finer alternative to strong bisimilarity, called enabling preserving bisimilarity. The motivation behind this concept was to preserve liveness properties, which are _not_ always preserved by classical semantic equivalences, including strong bisimilarity.
**Example 1.1** ([14]): Consider the following two programs, and assume that all variables are initialised to 0.
**while**(true) **do**
  **choose**
    **if** true **then** y := y+1;
    **if** x = 0 **then** x := 1;
  **end**
**end**

**while**(true) **do**
  y := y+1;
**end**
∥
x := 1

[MISSING_PAGE_POST]
for each pair of related states \(p\) and \(q\) a relation \(R\) between the transitions enabled in \(p\) and \(q\), and this relation should be preserved when matching related transitions in the bisimulation game. When formalising this, we need transition systems upgraded with a _successor relation_ that matches each transition \(t\) enabled in a state \(p\) to a transition \(t^{\prime}\) enabled in \(p^{\prime}\), when performing a transition from \(p\) to \(p^{\prime}\) that does not affect \(t\). Intuitively, \(t^{\prime}\) describes the same system behaviour as \(t\), but the two transitions could be formally different as they may have different sources. It is this successor relation that distinguishes the transition systems in the example above.
In [14], we showed that ep-bisimilarity is a congruence for all operators of Milner's Calculus of Communication Systems (CCS), enriched with a successor relation. We extended this result to the Algebra of Broadcast Communication with discards and Emissions (ABCdE), an extension of CCS with broadcast communication, discard actions and signal emission. ABCdE subsumes many standard process algebras found in the literature.
In this paper, we introduce a new congruence format for structural operational semantics, which is based on the well-known De Simone Format and respects the successor relation. This format allows us to generalise the results of [14] in two ways: first, we prove that ep-bisimilarity is a congruence for all operators of _any_ process algebras that can be formalised in the De Simone format with successors. Applicable languages include CCS and ABCdE. Second, we show that ep-bisimilarity is a lean congruence for recursion [10]. Here, a lean congruence preserves equivalence when replacing closed subexpressions of a process by equivalent alternatives.
## 2 Enabling Preserving Bisimilarity
To build our abstract theory of De Simone languages and De Simone formats, we briefly recapitulate the definitions of labelled transition systems with successors, and ep-bisimulation. A detailed description can be found in [14].
A _labelled transition system (LTS)_ is a tuple \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell)\) with \(S\) and \(\mathit{Tr}\) sets of _states_ and \(\mathit{transitions}\), \(\mathit{source},\mathit{target}:\mathit{Tr}\to S\) and \(\ell:\mathit{Tr}\to\mathcal{L}\), for some set \(\mathcal{L}\) of transition labels. A transition \(t\in\mathit{Tr}\) of an LTS is _enabled_ in a state \(p\in S\) if \(\mathit{source}(t)=p\). The set of transitions enabled in \(p\) is \(\mathit{en}(p)\).
**Definition 2.1** (LTSS [14]): A _labelled transition system with successors (LTSS)_ is a tuple \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell,\leadsto)\) with \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell)\) an LTS and \(\leadsto\subseteq\mathit{Tr}\times\mathit{Tr}\times\mathit{Tr}\) the _successor relation_ such that if \((t,u,v)\in\leadsto\) (also denoted by \(t\leadsto_{u}v\)) then \(\mathit{source}(t)=\mathit{source}(u)\) and \(\mathit{source}(v)=\mathit{target}(u)\).
**Example 2.2**: Remember that the 'classical' LTSs of Example 1.1 are identical. Let \(t_{1}\) and \(t_{2}\) be the two transitions corresponding to y:=y+1 in the first and second state, respectively, and let \(u\) be the transition for assignment x:=1. The assignments of x and y in the right-hand program are independent, hence \(t_{1}\leadsto_{u}t_{2}\) and \(u\leadsto_{t_{1}}u\). For the other program, the situation is different: as the instructions correspond to a single component (program), all transitions affect each other, i.e. \(\leadsto=\emptyset\).
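As a concrete illustration (not taken from [14]), the LTSS of the right-hand program can be written down explicitly; the state and transition names below are ours.

```python
# Minimal encoding of the LTSS for the right-hand program of Example 1.1, with
# transitions named as in Example 2.2: t1, t2 perform y := y+1 (label 'a') and
# u performs x := 1 (label 'b').
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    name: str
    source: str
    target: str
    label: str

t1 = Transition("t1", "s0", "s0", "a")   # y := y+1 before x was set
u  = Transition("u",  "s0", "s1", "b")   # x := 1
t2 = Transition("t2", "s1", "s1", "a")   # y := y+1 after x was set

states = {"s0", "s1"}
transitions = {t1, u, t2}
# successor relation: (t, u, v) means  t ~>_u v
successors = {(t1, u, t2), (u, t1, u)}

# sanity checks from Definition 2.1
for (t, w, v) in successors:
    assert t.source == w.source and v.source == w.target
```

For the left-hand program the same states and transitions would be used, but with `successors = set()`, which is exactly the difference that ep-bisimilarity detects.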
**Definition 2.3** (Ep-bisimilarity [14]): Let \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell,\leadsto)\) be an LTSS. An _enabling preserving bisimulation (ep-bisimulation)_ is a relation \(\mathcal{R}\subseteq S\times S\times\mathcal{P}(\mathit{Tr}\times\mathit{Tr})\) satisfying
1. if \((p,q,R)\in\mathcal{R}\) then \(R\subseteq\mathit{en}(p)\times\mathit{en}(q)\) such that
   (a) \(\forall t\in\mathit{en}(p)\). \(\exists\,u\in\mathit{en}(q)\). \(t\ R\ u\),
   (b) \(\forall u\in\mathit{en}(q)\). \(\exists\,t\in\mathit{en}(p)\). \(t\ R\ u\), and
   (c) if \(t\ R\ u\) then \(\ell(t)=\ell(u)\); and
2. if \((p,q,R)\in\mathcal{R}\) and \(v\ R\ w\), then \((\mathit{target}(v),\mathit{target}(w),R^{\prime})\in\mathcal{R}\) for some \(R^{\prime}\) such that
   (a) if \(t\ R\ u\) and \(t\leadsto_{v}t^{\prime}\) then \(\exists\,u^{\prime}\). \(u\leadsto_{w}u^{\prime}\wedge t^{\prime}\ R^{\prime}\ u^{\prime}\), and
   (b) if \(t\ R\ u\) and \(u\leadsto_{w}u^{\prime}\) then \(\exists\,t^{\prime}\). \(t\leadsto_{v}t^{\prime}\wedge t^{\prime}\ R^{\prime}\ u^{\prime}\).
Two states \(p\) and \(q\) in an LTSS are _enabling preserving bisimilar (ep-bisimilar)_, denoted as \(p\leftrightarroweq_{ep}q\), if there is an enabling preserving bisimulation \(\mathcal{R}\) such that \((p,q,R)\in\mathcal{R}\) for some \(R\).
Without Items 2.a and 2.b, the above is nothing else than a reformulation of the classical definition of strong bisimilarity. An ep-bisimulation additionally maintains for each pair of related states \(p\) and \(q\) a relation \(R\) between the transitions enabled in \(p\) and \(q\). Items 2.a and 2.b strengthen the condition on related target states by requiring that the successors of related transitions are again related relative to these target states. It is this requirement which distinguishes the transition systems for Example 1.1. [14]
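For finite systems, the conditions of Definition 2.3 can be checked mechanically. The following sketch is ours (it assumes at most one relation \(R\) per pair of states and hashable transition objects, such as those of the encoding above); it verifies whether a candidate relation is an ep-bisimulation.

```python
def is_ep_bisimulation(calR, en, label, target, succ):
    """calR  : iterable of triples (p, q, R), R a set of transition pairs;
    en    : state -> set of enabled transitions;
    label, target : transition -> label / state;
    succ  : set of triples (t, u, v) meaning t ~>_u v."""
    triples = {(p, q): R for (p, q, R) in calR}          # assumes one R per state pair
    for (p, q, R) in calR:
        # condition 1: label-preserving relation covering en(p) and en(q)
        if not all(t in en(p) and w in en(q) and label(t) == label(w) for (t, w) in R):
            return False
        if {t for (t, _) in R} != en(p) or {w for (_, w) in R} != en(q):
            return False
        # condition 2: matching transitions lead to related states, preserving successors
        for (v, w) in R:
            Rp = triples.get((target(v), target(w)))
            if Rp is None:
                return False
            for (t, u) in R:
                succ_t = {tp for (a, b, tp) in succ if a == t and b == v}
                succ_u = {up for (a, b, up) in succ if a == u and b == w}
                if any(all((tp, up) not in Rp for up in succ_u) for tp in succ_t):
                    return False   # condition 2.a violated
                if any(all((tp, up) not in Rp for tp in succ_t) for up in succ_u):
                    return False   # condition 2.b violated
    return True
```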
**Lemma 2.4**: [Proposition 10 of [14]]\(\leftrightarroweq_{ep}\) is an equivalence relation.
## 3 An Introductory Example: CCS with Successors
Before starting to introduce the concepts formally, we want to present some motivation in the form of the well-known Calculus of Communicating Systems (CCS) [18]. In this paper we use a proper recursion construct instead of agent identifiers with defining equations. As in [4], we write \(\langle X|S\rangle\) for the \(X\)-component of a solution of the set of recursive equations \(S\).
CCS is parametrised with a set \(\mathcal{C}\) of _handshake communication names_. \(\tilde{\mathcal{C}}\coloneqq\{\tilde{c}\mid c\in\mathcal{C}\}\) is the set of _handshake communication co-names_. \(Act_{CCS}\coloneqq\mathcal{C}\cup\tilde{\mathcal{C}}\cup\{\tau\}\) is the set of _actions_, where \(\tau\) is a special _internal action_. Complementation extends to \(\mathcal{C}\cup\tilde{\mathcal{C}}\) by \(\tilde{\tilde{c}}\coloneqq c\).
Below, \(c\) ranges over \(\mathcal{C}\cup\tilde{\mathcal{C}}\) and \(\alpha\), \(\ell\), \(\eta\) over \(Act_{CCS}\). A _relabelling_ is a function \(f:\mathcal{C}\rightarrow\mathcal{C}\); it extends to \(Act_{CCS}\) by \(f(\tilde{c})\coloneqq\widetilde{f(c)}\) and \(f(\tau)\coloneqq\tau\).
The process signature \(\Sigma\) of CCS features binary infix-written operators \(+\) and \(|\), denoting _choice_ and _parallel composition_, a constant \(\mathbf{0}\) denoting _inaction_, a unary _action prefixing_ operator \(\alpha.\_\) for each action \(\alpha\in Act_{CCS}\), a unary _restriction_ operator \(\_\backslash L\) for each set \(L\subseteq\mathcal{C}\), and a unary _relabelling_ operator \(\_[f]\) for each relabelling \(f:\mathcal{C}\rightarrow\mathcal{C}\).
The semantics of CCS is given by the set \(\mathcal{R}\) of _transition rules_, shown in Table 1. Here \(\overline{L}\coloneqq\{\tilde{c}\mid c\in L\}\). Each rule has a unique name, displayed in blue.2 The rules are displayed as templates, following the standard convention of labelling transitions with _label variables_ \(c\), \(\alpha\), \(\ell\), etc. and may be accompanied by side conditions in green, so that each of those templates corresponds to a set of (concrete) transition rules where label variables are "instantiated" to labels in certain ranges and all side conditions are met. The rule names are also schematic and may contain variables. For example, all instances of the transition rule template \(+_{\mathrm{L}}\) are named \(+_{\mathrm{L}}\), whereas there is one rule name \(\stackrel{{\alpha}}{{\rightarrow}}\) for each action \(\alpha\in Act_{CCS}\).
Footnote 2: Our colourings are for readability only.
\[\frac{}{\alpha.x\xrightarrow{\alpha}x}\;\xrightarrow{\alpha}\qquad\frac{x\xrightarrow{\alpha}x^{\prime}}{x+y\xrightarrow{\alpha}x^{\prime}}\;+_{\mathrm{L}}\qquad\frac{y\xrightarrow{\alpha}y^{\prime}}{x+y\xrightarrow{\alpha}y^{\prime}}\;+_{\mathrm{R}}\]
\[\frac{x\xrightarrow{\eta}x^{\prime}}{x|y\xrightarrow{\eta}x^{\prime}|y}\;|_{\mathrm{L}}\qquad\frac{x\xrightarrow{c}x^{\prime}\quad y\xrightarrow{\tilde{c}}y^{\prime}}{x|y\xrightarrow{\tau}x^{\prime}|y^{\prime}}\;|_{\mathrm{C}}\qquad\frac{y\xrightarrow{\eta}y^{\prime}}{x|y\xrightarrow{\eta}x|y^{\prime}}\;|_{\mathrm{R}}\]
\[\frac{x\xrightarrow{\ell}x^{\prime}}{x\backslash L\xrightarrow{\ell}x^{\prime}\backslash L}\;\backslash L\;\;(\ell\notin L\cup\overline{L})\qquad\frac{x\xrightarrow{\ell}x^{\prime}}{x[f]\xrightarrow{f(\ell)}x^{\prime}[f]}\;[f]\qquad\frac{\langle S_{X}|S\rangle\xrightarrow{\alpha}y}{\langle X|S\rangle\xrightarrow{\alpha}y}\;rec_{Act}\]
Table 1: Structural operational semantics of CCS
The transition system specification \((\Sigma,\mathcal{R})\) is in De Simone format [23], a special rule format that guarantees properties of the process algebra (for free), such as strong bisimulation being a congruence for all operators. Following [14], we leave out the infinite sum \(\sum_{i\in I}x_{i}\) of CCS [18], as it is strictly speaking not in De Simone format.
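To illustrate how the rules of Table 1 generate transitions, here is a small interpreter for a fragment of CCS (prefixing, choice and parallel composition with handshake communication). It is a sketch of ours, not an implementation accompanying the paper; restriction, relabelling and recursion are omitted for brevity.

```python
# Processes are tuples; a transition is a (label, target) pair.
def co(a):                      # complementation on names and co-names
    return a[1:] if a.startswith("~") else "~" + a

def transitions(p):
    kind = p[0]
    if kind == "0":
        return []
    if kind == "prefix":                       # action prefixing rule
        _, alpha, q = p
        return [(alpha, q)]
    if kind == "+":                            # the two choice rules
        _, l, r = p
        return transitions(l) + transitions(r)
    if kind == "|":                            # the parallel and communication rules
        _, l, r = p
        out = [(a, ("|", l2, r)) for (a, l2) in transitions(l)]
        out += [(a, ("|", l, r2)) for (a, r2) in transitions(r)]
        out += [("tau", ("|", l2, r2))
                for (a, l2) in transitions(l)
                for (b, r2) in transitions(r)
                if a != "tau" and b == co(a)]
        return out
    raise ValueError(kind)

# a.0 | ~a.0 has three transitions: one labelled a, one ~a, and a synchronisation tau
P = ("|", ("prefix", "a", ("0",)), ("prefix", "~a", ("0",)))
print(transitions(P))
```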
In this paper, we will extend the De Simone format to also guarantee properties for ep-bisimulation. As seen, ep-bisimulation requires that the structural operational semantics is equipped with a successor relation \(\leadsto\). The meaning of \(\chi\leadsto_{\zeta}\chi^{\prime}\) is that transition \(\chi\) is unaffected by \(\zeta\) - denoted \(\chi\leadsto\zeta\) - and that when doing \(\zeta\) instead of \(\chi\), afterwards a variant \(\chi^{\prime}\) of \(\chi\) is still enabled. Table 2 shows the _successor rules_ for CCS, which allow the relation \(\leadsto\) to be derived inductively. It uses the following syntax for transitions \(\chi\), which will be formally introduced in Section 6. The expression \(t+_{\mathrm{L}}Q\) refers to the transition that is derived by rule \(+_{\mathrm{L}}\) of Table 1, with \(t\) referring to the transition used in the unique premise of this rule, and \(Q\) referring to the process in the inactive argument of the \(+\)-operator. The syntax for the other transitions is analogous. A small deviation from this scheme occurs for recursion: \(rec_{Act}(X,S,t)\) refers to the transition derived by rule \(rec_{Act}\) out of the premise \(t\), when deriving a transition of a recursive call \(\langle X|S\rangle\).
In Table 2 each rule is named, in orange, after the number of the clause of Definition 20 in [14], where it was introduced.
The primary source of concurrency between transitions \(\chi\) and \(\zeta\) is when they stem from opposite sides of a parallel composition. This is expressed by Rules 7a and 7b. We require all obtained successor statements \(\chi\leadsto_{\zeta}\chi^{\prime}\) to satisfy the conditions of Definition 2.1 - this yields \(Q^{\prime}=target(w)\) and \(P^{\prime}=target(v)\); in [14] \(Q^{\prime}\) and \(P^{\prime}\) were written this way.
In all other cases, successors of \(\chi\) are inherited from successors of its building blocks.
When \(\zeta\) stems from the left side of a \(+\) via rule \(+_{\mathrm{L}}\) of Table 1, then any transition \(\chi\) stemming from the right is discarded by \(\zeta\), so \(\chi\not\leadsto\zeta\). Thus, if \(\chi\leadsto\zeta\) then these transitions have the form \(\chi=t+_{\mathrm{L}}Q\) and \(\zeta=v+_{\mathrm{L}}Q\), and we must have \(t\leadsto v\). So \(t\leadsto_{v}t^{\prime}\) for some transition \(t^{\prime}\). As the execution of \(\zeta\) discards the summand \(Q\), we also obtain \(\chi\leadsto_{\zeta}t^{\prime}\). This motivates Rule 3a. Rule 4a follows by symmetry.
In a similar way, Rule 8a covers the case that \(\chi\) and \(\zeta\) both stem from the left component of a parallel composition. It can also happen that \(\chi\) stems from the left component, whereas \(\zeta\) is a synchronisation involving both components. Thus \(\chi=t|_{\mathrm{L}}Q\) and \(\zeta=v|_{\mathrm{C}}w\). For \(\chi\smile\zeta\) to hold, it must be that \(t\smile v\), whereas the \(w\)-part of \(\zeta\) cannot interfere with \(t\). This yields Rule 8b. Rule 8c is explained in a similar way, from the possibility that \(\zeta\) stems from the left while \(\chi\) is a synchronisation of both components. Rule 9 follows by symmetry. In case both \(\chi\) and \(\zeta\) are synchronisations involving both components, i.e., \(\chi=t|_{\mathrm{C}}u\) and \(\zeta=v|_{\mathrm{C}}w\), it must be that \(t\smile v\) and \(u\smile w\). Now the resulting variant \(\chi^{\prime}\) of \(\chi\) after \(\zeta\) is simply \(t^{\prime}|_{\mathrm{C}}u^{\prime}\), where \(t\leadsto_{v}t^{\prime}\) and \(u\leadsto_{w}u^{\prime}\). This underpins Rule 10.
If the common source \(O\) of \(\chi\) and \(\zeta\) has the form \(P[f]\), \(\chi\) and \(\zeta\) must have the form \(t[f]\) and \(v[f]\). Whether \(t\) and \(v\) are concurrent is not influenced by the renaming, so \(\chi\smile\zeta\) iff \(t\smile v\). The variant of \(t\) that remains after doing \(v\) is also not affected by the renaming, so if \(t\leadsto_{v}t^{\prime}\) then \(\chi\leadsto_{\zeta}t^{\prime}[f]\). The case that \(O=P\backslash L\) is equally trivial. This yields Rules 11a and 11b.
In case \(O=\langle X|S\rangle\), \(\chi\) must have the form \(rec_{Act}(X,S,t)\), and \(\zeta\) has the form \(rec_{Act}(X,S,v)\), where \(t\) and \(v\) are enabled in \(\langle S_{X}|S\rangle\). Now \(\chi\smile\zeta\) only if \(t\smile v\), so \(t\leadsto_{v}t^{\prime}\) for some transition \(t^{\prime}\). The recursive call disappears upon executing \(\zeta\), and we obtain \(\chi\leadsto_{\zeta}t^{\prime}\). This yields Rule 11c.
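The rules just discussed can be read as a recursive procedure on the derivations of transitions. The following sketch is illustrative only: the constructors `parL`, `parC`, etc. are our ad-hoc stand-ins for the naming scheme above, and the cases for restriction, relabelling and recursion are omitted. Given two transitions of a common source, it computes the variant of the first that survives the second, returning `None` when they are not concurrent.

```python
# Illustrative sketch (not from the paper): transitions represented by the
# rule deriving them, mirroring the naming scheme used in the text:
#   ("act", a, p)      the axiom a.p --a--> p
#   ("plusL", t, Q)    rule +_L applied to t, with inactive summand Q
#   ("plusR", P, u)    rule +_R applied to u, with inactive summand P
#   ("parL", t, Q)     rule |_L: left component moves, right component is Q
#   ("parR", P, u)     rule |_R: right component moves, left component is P
#   ("parC", t, u)     rule |_C: synchronisation of t and u
# Processes are tuples as well; ("par", p, q) stands for p | q.

def target(t):
    """Target process of a transition, computed from its derivation."""
    k = t[0]
    if k == "act":
        return t[2]
    if k in ("plusL", "plusR"):             # a choice is resolved by the move
        return target(t[1] if k == "plusL" else t[2])
    if k == "parL":
        return ("par", target(t[1]), t[2])
    if k == "parR":
        return ("par", t[1], target(t[2]))
    if k == "parC":
        return ("par", target(t[1]), target(t[2]))
    raise ValueError(k)

def successor(chi, zeta):
    """Variant of chi left after performing the concurrent transition zeta,
    following a representative subset of the rules of Table 2; returns None
    when chi and zeta are not concurrent."""
    c, z = chi[0], zeta[0]
    # Rules 7a/7b: opposite sides of a parallel composition are concurrent.
    if c == "parL" and z == "parR":
        return ("parL", chi[1], target(zeta[2]))
    if c == "parR" and z == "parL":
        return ("parR", target(zeta[1]), chi[2])
    # Rules 3a/4a: both from the same side of a choice; the summand is discarded.
    if c == z == "plusL":
        return successor(chi[1], zeta[1])
    if c == z == "plusR":
        return successor(chi[2], zeta[2])
    # Rules 8a-8c, 9a-9c and 10: parallel composition, componentwise.
    if c == "parL" and z == "parL":
        t2 = successor(chi[1], zeta[1])
        return None if t2 is None else ("parL", t2, chi[2])
    if c == "parL" and z == "parC":
        t2 = successor(chi[1], zeta[1])
        return None if t2 is None else ("parL", t2, target(zeta[2]))
    if c == "parC" and z == "parL":
        t2 = successor(chi[1], zeta[1])
        return None if t2 is None else ("parC", t2, chi[2])
    if c == "parR" and z == "parR":
        u2 = successor(chi[2], zeta[2])
        return None if u2 is None else ("parR", chi[1], u2)
    if c == "parR" and z == "parC":
        u2 = successor(chi[2], zeta[2])
        return None if u2 is None else ("parR", target(zeta[1]), u2)
    if c == "parC" and z == "parR":
        u2 = successor(chi[2], zeta[2])
        return None if u2 is None else ("parC", chi[1], u2)
    if c == "parC" and z == "parC":
        t2, u2 = successor(chi[1], zeta[1]), successor(chi[2], zeta[2])
        return None if t2 is None or u2 is None else ("parC", t2, u2)
    return None              # not concurrent, e.g. opposite sides of a choice

# Example: in a.0 | b.0 the two prefixes are concurrent.
t = ("parL", ("act", "a", ("nil",)), ("pre", "b", ("nil",)))
u = ("parR", ("pre", "a", ("nil",)), ("act", "b", ("nil",)))
print(successor(t, u))   # ("parL", ("act", "a", ("nil",)), ("nil",))
```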
**Example 3.1**: The programs from Example 1.1 could be represented in CCS as \(P\!:=\langle X|S\rangle\) where \(S=\left\{\begin{array}{l}X=a.X+b.Y\\ Y=a.Y\end{array}\right\}\) and \(Q:=\langle Z|\{Z=a.Z\}\rangle|b.\mathbf{0}\). Here \(a,b\in Act_{CCS}\) are the atomic actions incrementing \(y\) and \(x\). The relation matching \(P\) with \(Q\) and \(\langle Y|S\rangle\) with \(\langle Z|\{Z=a.Z\}\rangle|\mathbf{0}\) is a strong bisimulation. Yet, \(P\) and \(Q\) are not ep-bisimilar, as the rules of Table 2 derive \(u\leadsto_{t_{1}}u\) (cf. Example 2.2)
where \(u=\langle Z|\{Z=a.Z\}\rangle\,|_{\mbox{\tiny{R}}}\overset{b}{\to}\mathbf{0}\) and \(t_{1}=rec_{Act}(Z,\{Z=a.Z\},\overset{a}{\to}\langle Z|\{Z=a.Z\}\rangle)\,|_{\mbox{\tiny{L}}}\,b.\mathbf{0}\). This cannot be matched by \(P\), thus violating condition 2.b. of Definition 2.3.
In this paper we will introduce a new De Simone format for transition systems with successors (TSSS). We will show that \(\trianglelefteq_{ep}\) is a congruence for all operators (as well as a lean congruence for recursion) in any language that fits this format. Since the rules of Table 2 fit this new De Simone format, it follows that \(\trianglelefteq_{ep}\) is a congruence for the operators of CCS.
Informally, the conclusion of a successor rule in this extension of the De Simone format must have the form \(\zeta\leadsto_{\xi}\zeta^{\prime}\) where \(\zeta\), \(\xi\) and \(\zeta^{\prime}\) are _open transitions_, denoted by _transition expressions_ with variables, formally introduced in Section 6. Both \(\zeta\) and \(\xi\) must have a leading operator R and S of the same type, and the same number of arguments. These leading operators must be rule names of the same type. Their arguments are either process variables \(P,Q,...\) or transition variables \(t,u,...\), as determined by the trigger sets \(I_{\mbox{\tiny{R}}}\) and \(I_{\mbox{\tiny{S}}}\) of R and S. These are the sets of indices listing the arguments for which rules R and S have a premise. If the \(i^{\mbox{\tiny{th}}}\) arguments of R and S are both process variables, they must be the same, but for the rest all these variables are different. For a subset \(I\) of \(I_{\mbox{\tiny{R}}}\cap I_{\mbox{\tiny{S}}}\), the rule has premises \(t_{i}\leadsto_{u_{i}}t_{i}^{\prime}\) for \(i\in I\), where \(t_{i}\) and \(u_{i}\) are the \(i^{\mbox{\tiny{th}}}\) arguments of R and S, and \(t_{i}^{\prime}\) is a fresh variable. Finally, the right-hand side of the conclusion may be an arbitrary univariate transition expression, containing no other variables than:
* the \(t_{i}^{\prime}\) for \(i\in I\),
* a \(t_{i}\) occurring in \(\zeta\), with \(i\notin I_{\mbox{\tiny{S}}}\),
* a fresh process variable \(P_{i}^{\prime}\) that must match the target of the transition \(u_{i}\) for \(i\in I_{\mbox{\tiny{S}}}\setminus I\),
* _or_ a fresh transition variable whose source matches the target of \(u_{i}\) for \(i\in I_{\mbox{\tiny{S}}}\setminus I\), and
* any \(P\) occurring in both \(\zeta\) and \(\xi\), _or_ any fresh transition variable whose source must be \(P\).
The rules of Table 2 only feature the first three possibilities; the others occur in the successor relation of ABCdE - see Section 8.
## 4 Structural Operational Semantics
Both the De Simone format and our forthcoming extension are based on the syntactic form of the operational rules. In this section, we recapitulate foundational definitions needed later on. Let \(\mathcal{V}_{\mathcal{P}}\) be an infinite set of _process variables_, ranged over by \(X,Y,x,y,x_{i}\), etc.
\begin{table}
\[
\frac{t\leadsto_{v}t^{\prime}}{t+_{\mathrm{L}}Q\leadsto_{v+_{\mathrm{L}}Q}t^{\prime}}\;3a
\qquad
\frac{u\leadsto_{w}u^{\prime}}{P+_{\mathrm{R}}u\leadsto_{P+_{\mathrm{R}}w}u^{\prime}}\;4a
\qquad
\frac{}{t\,|_{\mathrm{L}}\,Q\leadsto_{P|_{\mathrm{R}}w}t\,|_{\mathrm{L}}\,Q^{\prime}}\;7a
\qquad
\frac{}{P\,|_{\mathrm{R}}\,u\leadsto_{v|_{\mathrm{L}}Q}P^{\prime}\,|_{\mathrm{R}}\,u}\;7b
\]
\[
\frac{t\leadsto_{v}t^{\prime}}{t\,|_{\mathrm{L}}\,Q\leadsto_{v|_{\mathrm{L}}Q}t^{\prime}\,|_{\mathrm{L}}\,Q}\;8a
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t\,|_{\mathrm{L}}\,Q\leadsto_{v|_{\mathrm{C}}w}t^{\prime}\,|_{\mathrm{L}}\,Q^{\prime}}\;8b
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t\,|_{\mathrm{C}}\,u\leadsto_{v|_{\mathrm{L}}Q}t^{\prime}\,|_{\mathrm{C}}\,u}\;8c
\]
\[
\frac{u\leadsto_{w}u^{\prime}}{P\,|_{\mathrm{R}}\,u\leadsto_{P|_{\mathrm{R}}w}P\,|_{\mathrm{R}}\,u^{\prime}}\;9a
\qquad
\frac{u\leadsto_{w}u^{\prime}}{P\,|_{\mathrm{R}}\,u\leadsto_{v|_{\mathrm{C}}w}P^{\prime}\,|_{\mathrm{R}}\,u^{\prime}}\;9b
\qquad
\frac{u\leadsto_{w}u^{\prime}}{t\,|_{\mathrm{C}}\,u\leadsto_{P|_{\mathrm{R}}w}t\,|_{\mathrm{C}}\,u^{\prime}}\;9c
\qquad
\frac{t\leadsto_{v}t^{\prime}\quad u\leadsto_{w}u^{\prime}}{t\,|_{\mathrm{C}}\,u\leadsto_{v|_{\mathrm{C}}w}t^{\prime}\,|_{\mathrm{C}}\,u^{\prime}}\;10
\]
\[
\frac{t\leadsto_{v}t^{\prime}}{t\backslash L\leadsto_{v\backslash L}t^{\prime}\backslash L}\;11a
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t[f]\leadsto_{v[f]}t^{\prime}[f]}\;11b
\qquad
\frac{t\leadsto_{v}t^{\prime}}{rec_{Act}(X,S,t)\leadsto_{rec_{Act}(X,S,v)}t^{\prime}}\;11c
\]
Here \(P^{\prime}=target(v)\) and \(Q^{\prime}=target(w)\).
\end{table}
Table 2: Successor rules for CCS
**Definition 4.1** (Process Expressions [9]): An _operator declaration_ is a pair \((Op,n)\) of an _operator symbol_\(Op\notin\mathcal{V}_{\mathcal{P}}\) and an _arity_\(n\in\mathbb{N}\). An operator declaration \((c,0)\) is also called a _constant declaration_. A _process signature_ is a set of operator declarations. The set \(\mathbb{P}^{\,r}(\Sigma)\) of _process expressions_ over a process signature \(\Sigma\) is defined inductively by:
* \(\mathcal{V}_{\mathcal{P}}\subseteq\mathbb{P}^{\,r}(\Sigma)\),
* if \((Op,n)\in\Sigma\) and \(p_{1},\ldots,p_{n}\in\mathbb{P}^{\,r}(\Sigma)\) then \(Op(p_{1},\ldots,p_{n})\in\mathbb{P}^{\,r}(\Sigma)\), and
* if \(V_{S}\subseteq\mathcal{V}_{\mathcal{P}}\), \(S:V_{S}\rightarrow\mathbb{P}^{\,r}(\Sigma)\) and \(X\in V_{S}\), then \(\langle X|S\rangle\in\mathbb{P}^{\,r}(\Sigma)\).
A process expression \(c()\) is abbreviated as \(c\) and is also called a _constant_. An expression \(\langle X|S\rangle\) as appears in the last clause is called a _recursive call_, and the function \(S\) therein is called a _recursive specification_. It is often displayed as \(\{X=S_{X}\mid X\in V_{S}\}\). Therefore, for a recursive specification \(S\), \(V_{S}\) denotes the domain of \(S\) and \(S_{X}\) represents \(S(X)\) when \(X\in V_{S}\). Each expression \(S_{Y}\) for \(Y\in V_{S}\) counts as a subexpression of \(\langle X|S\rangle\). An occurrence of a process variable \(y\) in an expression \(p\) is _free_ if it does not occur in a subexpression of the form \(\langle X|S\rangle\) with \(y\in V_{S}\). For an expression \(p\), \(\mathit{var}(p)\) denotes the set of process variables having at least one free occurrence in \(p\). An expression is _closed_ if it contains no free occurrences of variables. Let \(\mathbb{P}^{\,r}(\Sigma)\) be the set of closed process expressions over \(\Sigma\).
**Definition 4.2** (Substitution): A _\(\Sigma\)-substitution_\(\sigma\) is a partial function from \(\mathcal{V}_{\mathcal{P}}\) to \(\mathbb{P}^{\,r}(\Sigma)\). It is _closed_ if it is a total function from \(\mathcal{V}_{\mathcal{P}}\) to \(\mathbb{P}^{\,r}(\Sigma)\).
If \(p\in\mathbb{P}^{\,r}(\Sigma)\) and \(\sigma\) a \(\Sigma\)-substitution, then \(p[\sigma]\) denotes the expression obtained from \(p\) by replacing, for \(x\) in the domain of \(\sigma\), every free occurrence of \(x\) in \(p\) by \(\sigma(x)\), while renaming bound process variables if necessary to prevent name-clashes. In that case \(p[\sigma]\) is called a _substitution instance_ of \(p\). A substitution instance \(p[\sigma]\) where \(\sigma\) is given by \(\sigma(x_{i})=q_{i}\) for \(i\in I\) is denoted as \(p[q_{i}/x_{i}]_{i\in I}\), and for \(S\) a recursive specification \(\langle p|S\rangle\) abbreviates \(p[\langle Y|S\rangle/Y]_{Y\in V_{S}}\).
These notions, including "free" and "closed", extend to syntactic objects containing expressions, with the understanding that such an object is a substitution instance of another one if the same substitution has been applied to each of its constituent expressions.
We assume fixed but arbitrary sets \(\mathcal{L}\) and \(\mathcal{N}\) of _transition labels_ and _rule names_.
**Definition 4.3** (Transition System Specification [17]): Let \(\Sigma\) be a process signature. A _\(\Sigma\)-(transition) literal_ is an expression \(p\stackrel{{ a}}{{\longrightarrow}}q\) with \(p,q\in\mathbb{P}^{\,r}(\Sigma)\) and \(a\!\in\!\mathcal{L}\). A _transition rule_ over \(\Sigma\) is an expression of the form \(\frac{H}{\lambda}\) with \(H\) a finite list of \(\Sigma\)-literals (the _premises_ of the transition rule) and \(\lambda\) a \(\Sigma\)-literal (the _conclusion_). A _transition system specification (TSS)_ is a tuple \((\Sigma,\mathcal{R},\textsc{N})\) with \(\mathcal{R}\) a set of transition rules over \(\Sigma\), and \(\textsc{N}:\mathcal{R}\rightarrow\mathcal{N}\) a (not necessarily injective) _rule-naming function_, that provides each rule \(r\in\mathcal{R}\) with a name \(\textsc{N}(r)\).
**Definition 4.4** (Proof): Assume literals, rules, substitution instances and rule-naming. A _proof_ of a literal \(\lambda\) from a set \(\mathcal{R}\) of rules is a well-founded, upwardly branching, ordered tree where nodes are labelled by pairs \((\mu,\mathbb{r})\) of a literal \(\mu\) and a rule name \(\mathbb{R}\), such that
* the root is labelled by a pair \((\lambda,\mathbb{s})\), and
* if \((\mu,\mathbb{r})\) is the label of a node and \((\mu_{1},\mathbb{r}_{1}),\ldots,(\mu_{n},\mathbb{r}_{n})\) is the list of labels of this node's children then \(\frac{\mu_{1},\ldots,\mu_{n}}{\mu}\) is a substitution instance of a rule in \(\mathcal{R}\) with name \(\mathbb{R}\).
**Definition 4.5** (Associated LTS [13]): The _associated LTS_ of a TSS \((\Sigma,\mathcal{R},\mathbb{N})\) is the LTS \((S,\mathit{Tr},\mathit{source},\)\(\mathit{target},\ell)\) with \(S\coloneqq\mathbb{P}^{\,r}(\Sigma)\) and \(\mathit{Tr}\) the collection of proofs \(\pi\) of closed \(\Sigma\)-literals \(p\stackrel{{ a}}{{\longrightarrow}}q\) from \(\mathcal{R}\), where \(\mathit{source}(\pi)=p\), \(\ell(\pi)=a\) and \(\mathit{target}(\pi)=q\).
Above we deviate from the standard treatment of structural operational semantics [17, 9] on four counts. Here we employ CCS to motivate those design decisions.
In Definition 4.5, the transitions \(\mathit{Tr}\) are taken to be proofs of closed literals \(p\stackrel{{ a}}{{\longrightarrow}}q\) rather than such literals themselves. This is because there can be multiple \(a\)-transitions from \(p\) to \(q\) that need to be distinguished when taking the concurrency relation between transitions into account. For example, if \(p:=\langle X|\{X=a.X+c.X\}\rangle\) and \(q:=\langle Y|\{Y=a.Y\}\rangle\) then \(p|q\) has three outgoing transitions:
\[
\frac{\dfrac{\dfrac{\dfrac{}{a.p\xrightarrow{a}p}}{a.p+c.p\xrightarrow{a}p}}{p\xrightarrow{a}p}}{p|q\xrightarrow{a}p|q}
\qquad
\frac{\dfrac{\dfrac{\dfrac{}{c.p\xrightarrow{c}p}}{a.p+c.p\xrightarrow{c}p}}{p\xrightarrow{c}p}}{p|q\xrightarrow{c}p|q}
\qquad
\frac{\dfrac{\dfrac{}{a.q\xrightarrow{a}q}}{q\xrightarrow{a}q}}{p|q\xrightarrow{a}p|q}
\]
The first and the third of these proofs establish the same literal \(p|q\xrightarrow{a}p|q\), yet they count as different transitions.
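Written as data, the three proofs are clearly distinct objects even where they prove the same literal. In an ad-hoc tuple notation (the constructor names are ours, and the recursive-specification argument of \(rec_{Act}\) is suppressed for brevity), they could be encoded as:

```python
# The three derivations above as nested terms (sketch only; "p" and "q"
# abbreviate the two recursive calls, S is suppressed in the rec constructor).
t1 = ("parL", ("rec", "X", ("plusL", ("act", "a", "p"), "c.p")), "q")  # p|q --a--> p|q, from the left
t2 = ("parL", ("rec", "X", ("plusR", "a.p", ("act", "c", "p"))), "q")  # p|q --c--> p|q, from the left
t3 = ("parR", "p", ("rec", "Y", ("act", "a", "q")))                    # p|q --a--> p|q, from the right
assert t1 != t3   # same literal, different transitions
```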
**Definition 5.1** (De Simone Format): A TSS \((\Sigma,\mathcal{R},\textsc{n})\) is in _De Simone format_ if for every recursive call \(\langle X|S\rangle\) and every \(\alpha\in Act\) and \(\ell\in\mathcal{L}\backslash Act\), it has transition rules
\[\frac{\langle S_{X}|S\rangle\stackrel{{\alpha}}{{ \longrightarrow}}y}{\langle X|S\rangle\stackrel{{\alpha}}{{ \longrightarrow}}y}\ rec_{Act}\qquad\text{and}\qquad\frac{\langle S_{X}|S \rangle\stackrel{{\ell}}{{\longrightarrow}}y}{\langle X|S \rangle\stackrel{{\ell}}{{\longrightarrow}}\langle X|S\rangle}\ rec_{In}\quad \text{for some}\quad y\notin var(\langle S_{X}|S\rangle),\]
and each of its other transition rules (_De Simone rules_) has the form
\[\frac{\{x_{i}\stackrel{{ a_{i}}}{{\longrightarrow}}y_{i}\mid i \in I\}}{Op(x_{1},\ldots,x_{n})\stackrel{{ a}}{{\longrightarrow}}q}\]
where \((Op,n)\in\Sigma\), \(I\subseteq\{1,\ldots,n\}\), \(a,a_{i}\in\mathcal{L}\), \(x_{i}\) (for \(1\leq i\leq n\)) and \(y_{i}\) (for \(i\in I\)) are pairwise distinct process variables, and \(q\) is a univariate process expression containing no other free process variables than \(x_{i}\) (\(1\leq i\leq n\wedge i\notin I\)) and \(y_{i}\) (\(i\in I\)), having the properties that
* each subexpression of the form \(\langle X|S\rangle\) is closed, and
* if \(a\in\mathcal{L}\backslash Act\) then \(a_{i}\in\mathcal{L}\backslash Act\) (\(i\in I\)) and \(q=Op(z_{1},\ldots,z_{n})\), where \(z_{i}:=\begin{cases}y_{i}&\text{if }i\in I\\ x_{i}&\text{otherwise}.\end{cases}\)
Here _univariate_ means that each variable has at most one free occurrence in it. The last clause above guarantees that for any indicator transition \(t\), one with \(\ell(t)\in\mathcal{L}\backslash Act\), we have \(target(t)=source(t)\). For a De Simone rule of the above form, \(n\) is the _arity_, \((Op,n)\) is the _type_, \(a\) is the _label_, \(q\) is the _target_, \(I\) is the _trigger set_ and the tuple \((\ell_{1},\ldots,\ell_{n})\) with \(\ell_{i}=a_{i}\) if \(i\in I\) and \(\ell_{i}=*\) otherwise, is the _trigger_. Transition rules in the first two clauses are called _recursion rules_.
We also require that if \(\textsc{n}(r)=\textsc{n}(r^{\prime})\) for two different De Simone rules \(r,r^{\prime}\in\mathcal{R}\), then \(r,r^{\prime}\) have the same type, target and trigger set, but different triggers. The names of the recursion rules are as indicated in blue above, and differ from the names of any De Simone rules.
Many process description languages encountered in the literature, including CCS [18] as presented in Section 3, SCCS [19], ACP [4] and Meije [3], are De Simone languages.
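The core conditions of Definition 5.1 on a non-recursion rule are mechanical to check. The sketch below is an illustration under our own ad-hoc encoding of rules, not the paper's formalism: it only verifies that the target is univariate and mentions \(x_i\) solely for inactive arguments and \(y_i\) solely for arguments carrying a premise; the clauses on recursion rules and on labels outside \(Act\) are not covered.

```python
from dataclasses import dataclass

# Ad-hoc representation of a De Simone rule (sketch): the conclusion is
# Op(x_1,...,x_n) --label--> target, with one premise x_i --a_i--> y_i
# for every i in the trigger set.
@dataclass
class Rule:
    op: str
    arity: int
    trigger: dict          # i -> premise label a_i, for i in the trigger set
    label: str
    target: object         # process expression with variables ("x", i) / ("y", i)

def variables(q):
    """List of variable occurrences in a target expression."""
    if isinstance(q, tuple) and q and q[0] in ("x", "y"):
        return [q]
    if isinstance(q, tuple):
        return [v for part in q[1:] for v in variables(part)]
    return []

def is_de_simone(r: Rule) -> bool:
    occ = variables(r.target)
    if len(occ) != len(set(occ)):                 # univariate: each variable at most once
        return False
    for kind, i in occ:
        if not (1 <= i <= r.arity):
            return False
        if kind == "x" and i in r.trigger:        # x_i only for inactive arguments
            return False
        if kind == "y" and i not in r.trigger:    # y_i only for arguments with a premise
            return False
    return True

# Rule |_L of Table 1:  x_1 --eta--> y_1  /  x_1|x_2 --eta--> y_1|x_2
par_left = Rule(op="|", arity=2, trigger={1: "eta"}, label="eta",
                target=("par", ("y", 1), ("x", 2)))
print(is_de_simone(par_left))   # True
```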
## 6 Transition System Specifications with Successors
In Section 4, a _process_ is denoted by a closed process expression; an open process expression may contain variables, which stand for as-of-yet unspecified subprocesses. Here we will do the same for transition expressions with variables. However, in this paper a transition is defined as a proof of a literal \(p\stackrel{{ a}}{{\longrightarrow}}q\) from the operational rules of a language. Elsewhere, a transition is often defined as a provable literal \(p\stackrel{{ a}}{{\longrightarrow}}q\), but here we need to distinguish transitions based on these proofs, as this influences whether two transitions are concurrent.
It turns out to be convenient to introduce an _open proof_ of a literal as the semantic interpretation of an open transition expression. It is simply a proof in which certain subproofs are replaced by proof variables.
**Definition 6.1** (Open Proof): Given definitions of literals, rules and substitution instances, and a rule-naming function n, an _open proof_ of a literal \(\lambda\) from a set \(\mathcal{R}\) of rules using a set \(\mathcal{V}\) of _(proof) variables_ is a well-founded, upwardly branching, ordered tree of which the nodes are labelled either by pairs \((\mu,\textsc{r})\) of a literal \(\mu\) and a rule name r, or by pairs \((\mu,px)\) of a literal \(\mu\) and a variable \(px\in\mathcal{V}\) such that
* the root is labelled by a pair \((\lambda,\chi)\),
* if \((\mu,px)\) is the label of a node then this node has no children,
* if two nodes are labelled by \((\mu,px)\) and \((\mu^{\prime},px)\) separately then \(\mu=\mu^{\prime}\), and
* if \((\mu,\textsc{R})\) is the label of a node and \((\mu_{1},\chi_{1}),\ldots,(\mu_{n},\chi_{n})\) is the list of labels of this node's children then \(\frac{\mu_{1},\ldots,\mu_{n}}{\mu}\) is a substitution instance of a rule named R.
Let \(\mathcal{V}_{\mathcal{T}}\) be an infinite set of _transition variables_, disjoint from \(\mathcal{V}_{\mathcal{P}}\). We will use \(tx,ux,vx,ty,tx_{i}\), etc. to range over \(\mathcal{V}_{\mathcal{T}}\).
**Definition 6.2** (Open Transition): Fix a TSS \((\Sigma,\mathcal{R},\textsc{N})\). An _open transition_ is an open proof of a \(\Sigma\)-literal from \(\mathcal{R}\) using \(\mathcal{V}_{\mathcal{T}}\). For an open transition \(\hat{t}\), \(var_{\mathcal{T}}(\hat{t})\) denotes the set of transition variables occurring in \(\hat{t}\); if its root is labelled by \((p\xrightarrow{a}q,\chi)\) then \(src_{\circ}(\hat{t})=p\), \(\ell_{\circ}(\hat{t})=a\) and \(tar_{\circ}(\hat{t})=q\). The _binding function_\(\beta_{\hat{t}}\) of \(\hat{t}\) from \(var_{\mathcal{T}}(\hat{t})\) to \(\Sigma\)-literals is defined by \(\beta_{\hat{t}}(tx)=\mu\) if \(tx\in var_{\mathcal{T}}(\hat{t})\) and \((\mu,tx)\) is the label of a node in \(\hat{t}\). Given an open transition, we refer to the subproofs obtained by deleting the root node as its _direct subtransitions_.
All occurrences of transition variables are considered _free_. Let \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})\) be the set of open transitions in the TSS \((\Sigma,\mathcal{R},\textsc{N})\) and \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})\) the set of closed open transitions. We have \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})=\textit{Tr}\).
Let \(en_{\circ}(p)\) denote \(\{\hat{t}\mid src_{\circ}(\hat{t})=p\}\).
**Definition 6.3** (Transition Expression): A _transition declaration_ is a tuple \((\textsc{R},n,I)\) of a _transition constructor_ R, an arity \(n\in\mathbb{N}\) and a trigger set \(I\subseteq\{1,\ldots,n\}\). A _transition signature_ is a set of transition declarations. The set \(\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\) of _transition expressions_ over a process signature \(\Sigma_{\mathcal{P}}\) and a transition signature \(\Sigma_{\mathcal{T}}\) is defined inductively as follows.
* if \(tx\in\mathcal{V}_{\mathcal{T}}\) and \(\mu\) is a \(\Sigma\)-literal then \((tx::\mu)\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\),
* if \(E\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\), \(S:\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}(\Sigma_{\mathcal{P}})\) and \(X\in\mathrm{dom}(S)\) then \(rec_{Act}(X,S,E),rec_{In}(X,S,E)\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P }},\Sigma_{\mathcal{T}})\), and
* if \((\textsc{R},n,I)\in\Sigma_{\mathcal{T}}\), \(E_{i}\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\) for each \(i\in I\), and \(E_{i}\in\mathbb{P}^{r}(\Sigma_{\mathcal{P}})\) for each \(i\in\{1,\ldots,n\}\setminus I\), then \(\textsc{R}(E_{1},\ldots,E_{n})\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P }},\Sigma_{\mathcal{T}})\).
Given a TSS \((\Sigma,\mathcal{R},\textsc{N})\) in De Simone format, each open transition \(\hat{t}\in\mathbb{T}^{r}(\Sigma,\mathcal{R})\) is named by a unique transition expression in \(\mathbb{T}\mathbb{E}^{r}(\Sigma,\Sigma_{\mathcal{T}})\); here \(\Sigma_{\mathcal{T}}=\{(\textsc{N}(r),n,I)\mid r\in\mathcal{R}\text{ is a De Simone rule, }n\text{ is its arity and }I\text{ is its trigger set}\}\):
* if the root of \(\hat{t}\) is labelled by \((\mu,tx)\) where \(tx\in\mathcal{V}_{\mathcal{T}}\) then \(\hat{t}\) is named \((tx::\mu)\),
* if the root of \(\hat{t}\) is labelled by \((\langle X|S\rangle\xrightarrow{a}q,\textsc{R})\) where \(a\in Act\) then \(\hat{t}\) is named \(rec_{Act}(X,S,E)\) where \(E\) is the name of the direct subtransition of \(\hat{t}\),
* if the root of \(\hat{t}\) is labelled by \((\langle X|S\rangle\xrightarrow{\ell}\langle X|S\rangle,\textsc{R})\) where \(\ell\in\mathcal{L}\backslash Act\) then \(\hat{t}\) is named \(rec_{In}(X,S,E)\) where \(E\) is the name of the direct subtransition of \(\hat{t}\), and
* if the root of \(\hat{t}\) is labelled by \((Op(p_{1},\ldots,p_{n})\xrightarrow{a}q,\textsc{R})\) then \(\hat{t}\) is named \(\textsc{R}(E_{1},\ldots,E_{n})\) where, letting \(n\) and \(I\) be the arity and the trigger set of the rules named R, \(E_{i}\) for each \(i\in I\) is the name of the direct subtransitions of \(\hat{t}\) corresponding to the index \(i\), and \(E_{i}=p_{i}\) for each \(i\in\{1,\ldots,n\}\setminus I\).
We now see that the first requirement for the rule-naming function in Definition 5.1 ensures that every open transition is uniquely identified by its name.
**Definition 6.4** (Transition Substitution): Let \((\Sigma,\mathcal{R},\textsc{N})\) be a TSS. A \((\Sigma,\mathcal{R})\)_-substitution_ is a partial function \(\sigma_{\mathcal{T}}:(\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}( \Sigma))\cup(\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{T}^{r}(\Sigma, \mathcal{R}))\). It is _closed_ if it is a total function \(\sigma_{\mathcal{T}}:(\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}( \Sigma))\cup(\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{T}^{r}(\Sigma, \mathcal{R}))\). A \((\Sigma,\mathcal{R})\)-substitution \(\sigma_{\mathcal{T}}\)_matches_ all process expressions. It matches an open transition \(\hat{t}\) whose binding function is \(\beta_{\hat{t}}\) if for all \((tx,\mu)\in\beta_{\hat{t}}\), \(\sigma_{\mathcal{T}}(tx)\) being defined and \(\mu=(p\xrightarrow{a}q)\) implies \(\ell_{\circ}(\sigma_{\mathcal{T}}(tx))=a\) and \(src_{\circ}(\sigma_{\mathcal{T}}(tx)),tar_{\circ}(\sigma_{\mathcal{T}}(tx))\) being the substitution instances of \(p,q\) respectively by applying \(\sigma_{\mathcal{T}}\!\mid\!\mathcal{V}_{\mathcal{P}}\).
If \(E\in\mathbb{P}^{r}(\Sigma)\cup\mathbb{T}^{r}(\Sigma,\mathcal{R})\) and \(\sigma_{\mathcal{T}}\) is a \((\Sigma,\mathcal{R})\)-substitution matching \(E\), then \(E[\sigma_{\mathcal{T}}]\) denotes the expression obtained from \(E\) by replacing, for \(tx\in\mathcal{V}_{\mathcal{T}}\) in the domain of \(\sigma_{\mathcal{T}}\), every subexpression of the form
\((tx::\mu)\) by \(\sigma_{\mathcal{T}}(tx)\), and, for \(x\in\mathcal{V}_{\mathcal{P}}\) in the domain of \(\sigma_{\mathcal{T}}\), every free occurrence of \(x\) by \(\sigma_{\mathcal{T}}(x)\).
* if \(i\in I\) then \(xe_{i}=(tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{\prime})\) and \(ye_{i}=(ty_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}y_{i}^{ \prime})\),
* if \(i\notin I\) then \(xe_{i}\) is either \(x_{i}\) or \((tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{ \prime})\), and \(ye_{i}\) is either \(x_{i}\) or \((ty_{i}::x_{i}\stackrel{{ya_{i}}}{{\longrightarrow}}y_{i}^{ \prime})\),
* R and S are \(n\)-ary transition constructors such that the open transitions \(\textsc{R}(xe_{1},\ldots,xe_{n})\), \(\textsc{S}(ye_{1},\ldots,ye_{n})\) and \(\hat{v}\) satisfy \[src_{\circ}(\textsc{R}(xe_{1},\ldots,xe_{n}))=src_{\circ}(\textsc{S}(ye_{1}, \ldots,ye_{n}))\] and \(src_{\circ}(\hat{v})=tar_{\circ}(\textsc{S}(ye_{1},\ldots,ye_{n}))\),
* \(\hat{v}\) is univariate and contains no other variable expressions than
* \(x_{i}\) or \((tz_{i}::x_{i}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^{ \prime})\) (\(1\leq i\leq n\wedge xe_{i}=ye_{i}=x_{i}\)),
* \((tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{ \prime})\) (\(1\leq i\leq n\wedge xe_{i}\neq x_{i}\wedge ye_{i}=x_{i}\)),
* \(y_{i}^{\prime}\) or \((tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^ {\prime})\) (\(1\leq i\leq n\wedge i\notin I\wedge ye_{i}\neq x_{i}\)),
* \((tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^ {\prime})\) (\(i\in I\)), and
* if \(\ell_{\circ}(\textsc{S}(ye_{1},\ldots,ye_{n}))\in\mathscr{L}\backslash Act\) then for \(i\in I\), \(ya_{i}\in\mathscr{L}\backslash Act\); for \(i\notin I\), either \(xe_{i}=x_{i}\) or \(ye_{i}=x_{i}\); and \(\hat{v}=\textsc{R}(ze_{1},\ldots,ze_{n})\), where \[ze_{i}:=\begin{cases}(tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{ \longrightarrow}}z_{i}^{\prime})&\text{ if }i\in I\\ &\text{ }xe_{i}&\text{ if }i\notin I\text{ and }ye_{i}=x_{i}\\ &\text{ }y_{i}^{\prime}&\text{ otherwise.}\end{cases}\] The last clause above is simply to ensure that if \(t\leadsto_{u}v\) for an indicator transition \(u\), that is, with \(\ell(u)\notin Act\), then \(v=t\).
The other conditions of Definition 7.1 are illustrated by the Venn diagram of Figure 1. The outer circle depicts the indices \(1,\ldots,n\) numbering the arguments of the operator \(Op\) that is the common type of the De Simone rules named R and S; \(I_{\text{R}}\) and \(I_{\text{S}}\) are the trigger sets of R and S, respectively. In line with Definition 6.3, \(xe_{i}=x_{i}\) for \(i\notin I_{\text{R}}\), and \(xe_{i}=(tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{\prime})\) for \(i\in I_{\text{R}}\). Likewise, \(ye_{i}=x_{i}\) for \(i\notin I_{\text{S}}\), and \(ye_{i}=(ty_{i}::x_{i}\stackrel{{ya_{i}}}{{\longrightarrow}}y_{i}^{\prime})\) for \(i\in I_{\text{S}}\). So the premises of any rule named S are \(\{x_{i}\stackrel{{ya_{i}}}{{\longrightarrow}}y_{i}^{\prime}\mid i\in I_{\text{S}}\}\). By Definition 5.1 the target of such a rule is a univariate process expression \(q\) with no other variables than \(z_{1},\ldots,z_{n}\), where \(z_{i}:=y_{i}^{\prime}\) for \(i\in I_{\text{S}}\) and \(z_{i}:=x_{i}\) for \(i\notin I_{\text{S}}\). Since \(src_{\circ}(\hat{v})=q\), the transition expression \(\hat{v}\) must be univariate, and have no variables other than \(ze_{i}\) for \(i=1,\ldots,n\), where \(ze_{i}\) is either the process variable \(z_{i}\) or a transition variable expression \((tz_{i}::z_{i}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^{\prime})\).
Figure 1: Inclusion between index sets \(I,I_{\text{R}},I_{\text{S}},I_{\text{T}},I_{G}\subseteq\{1,..,n\}\). One has \((I_{\text{R}}\cap I_{G})\backslash I_{\text{S}}\subseteq I_{\text{T}}\). The annotations \(n_{i}\) show the location of index \(i\) (suppressed for unary operators) of rule \(n\).
\(I\) is the set of indices \(i\) for which the above successor rule has a premise. Since this premise involves the transition variables \(tx_{i}\) and \(ty_{i}\), necessarily \(I\subseteq I_{\mathbb{R}}\cap I_{\mathbb{S}}\). Let \(I_{G}\) be the set of indices for which \(ze_{i}\) occurs in \(\hat{v}\), and \(I_{\mathbb{T}}\subseteq I_{G}\) be the subset where \(ze_{i}\) is a transition variable. The conditions on \(\hat{v}\) in Definition 7.1 say that \(I\cap I_{G}\subseteq I_{\mathbb{T}}\) and \((I_{\mathbb{R}}\cap I_{G})\backslash I_{\mathbb{S}}\subseteq I_{\mathbb{T}}\). For \(i\in I\cap I_{G}\), the transition variable \(tz_{i}\) is inherited from the premises of the rule, and for \(i\in(I_{\mathbb{R}}\cap I_{G})\backslash I_{\mathbb{S}}\) the transition variable \(tz_{i}\) is inherited from its source.
In order to show that most classes of indices allowed by our format are indeed populated, we indicated the positions of the indices of the rules of CCS and (the forthcoming) ABCdE from Tables 2 and 5.
Any De Simone language, including CCS, SCCS, ACP and Meije, can trivially be extended to a language with successors, e.g. by setting \(\mathcal{U}=\emptyset\). This would formalise the assumption that the parallel composition operator of these languages is governed by a _scheduler_, scheduling actions from different components in a nondeterministic way. The choice of \(\mathcal{U}\) from Table 2 instead formalises the assumption that parallel components act independently, up to synchronisations between them.
We now present the main theorem of this paper, namely that ep-bisimulation is a lean congruence for all languages that can be presented in De Simone format with successors. A lean congruence preserves equivalence when replacing closed subexpressions of a process by equivalent alternatives. Being a lean congruence implies being a congruence for all operators of the language, but also covers the recursion construct.
**Theorem 7.2** (Lean Congruence): Ep-bisimulation is a lean congruence for all De Simone languages with successors. Formally, fix a TSSS \((\Sigma,\mathcal{R},\aleph,\mathcal{U})\) in De Simone format. If \(p\in\mathbb{P}^{\,r}(\Sigma)\) and \(\rho,\nu\) are two closed \(\Sigma\)-substitutions with \(\forall x\in\mathcal{V}_{\mathcal{P}}\). \(\rho(x)\leftrightarroweq_{ep}\nu(x)\) then \(p[\rho]\leftrightarroweq_{ep}p[\nu]\).
The proof can be found in Appendix A of the full version of this paper [16].
In contrast to a lean congruence, a full congruence would also allow replacement within a recursive specification of subexpressions that may contain recursion variables bound outside of these subexpressions. As our proof is already sophisticated, we consider the proof of full congruence to be beyond the scope of the paper. In fact we are only aware of two papers that provide a proof of full congruence via a rule format [22, 10].
We carefully designed our De Simone format with successors and can state the following conjecture.
**Conjecture 7.3**: Ep-bisimulation is a full congruence for all De Simone languages with successors.
## 8 A Larger Case Study: The Process Algebra ABCdE
The _Algebra of Broadcast Communication with discards and Emissions_ (ABCdE) stems from [14]. It combines CCS [18], its extension with broadcast communication [21, 12, 11], and its extension with signals [5, 7, 8, 11]. Here, we extend CCS as presented in Section 3.
ABCdE is parametrised with sets \(\mathcal{C}\) of _handshake communication names_ as used in CCS, \(\mathcal{B}\) of _broadcast communication names_ and \(\mathcal{S}\) of _signals_. \(\bar{\mathcal{S}}\coloneqq\{\bar{s}\mid s\in\mathcal{S}\}\) is the set of signal emissions. The collections \(\mathcal{B}!\), \(\mathcal{B}?\) and \(\mathcal{B}{:}\) of _broadcast_, _receive_, and _discard_ actions are given by \(\mathcal{B}\sharp\coloneqq\{b\sharp\mid b\in\mathcal{B}\}\) for \(\sharp\in\{!,?,:\}\). \(Act\coloneqq\mathcal{C}\cup\bar{\mathcal{C}}\cup\{\tau\}\cup\mathcal{B}!\cup\mathcal{B}?\cup\mathcal{S}\) is the set of _actions_, with \(\tau\) the _internal action_, and \(\mathcal{L}\coloneqq Act\cup\mathcal{B}{:}\cup\bar{\mathcal{S}}\) is the set of _transition labels_. Complementation extends to \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\mathcal{S}\cup\bar{\mathcal{S}}\) by \(\bar{\bar{c}}\coloneqq c\).
Below, \(c\) ranges over \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\mathcal{S}\cup\bar{\mathcal{S}}\), \(\eta\) over \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\{\tau\}\cup\mathcal{S}\cup\bar{\mathcal{S}}\), \(\alpha\) over \(Act\), \(\ell\) over \(\mathcal{L}\), \(\gamma\) over \(In\coloneqq\mathcal{L}\backslash Act\), \(b\) over \(\mathcal{B}\), \(\sharp,\sharp_{1},\sharp_{2}\) over \(\{!,?,:\}\), \(s\) over \(\mathcal{S}\), \(S\) over recursive specifications and \(X\) over \(V_{S}\). A _relabelling_ is a function \(f:(\mathcal{C}\to\mathcal{C})\cup(\mathcal{B}\to\mathcal{B})\cup(\mathcal{S} \to\mathcal{S})\); it extends to \(\mathcal{L}\) by \(f(\bar{c})=\overline{f(c)}\), \(f(\tau)\coloneqq\tau\) and \(f(b\sharp)=f(b)\sharp\).
Next to the constant and operators of CCS, the process signature \(\Sigma\) of ABCdE features a unary _signalling_ operator \(\_\,\hat{}\,s\) for each signal \(s\in\mathcal{S}\).
The semantics of ABCdE is given by the transition rule templates displayed in Tables 1 and 3. The latter augments CCS with mechanisms for broadcast communication and signalling. The rule \(|_{\mathbb{C}}\) presents the core of broadcast communication [21], where any broadcast-action \(b!\) performed by a component in a parallel composition needs to synchronise with either a receive action \(b?\) or a discard action \(b\): of any other component. In order to ensure associativity of the parallel composition, rule \(|_{\mathbb{C}}\) also allows receipt actions of both components (\(\sharp_{1}=\sharp_{2}=?\)), or a receipt and a discard, to be combined into a receipt action.
A transition \(p\stackrel{{ b:}}{{\longrightarrow}}q\) is derivable only if \(q=p\). It indicates that the process \(p\) is unable to receive a broadcast communication \(b!\) on channel \(b\). The rule \(b{:}\mathbf{0}\) allows the nil process (inaction) to discard any incoming message; in the same spirit \(b{:}\alpha.\) allows a message to be discarded by a process that cannot receive it. A process offering a choice can only perform a discard-action if both choice-options can discard the corresponding broadcast (Rule \(+_{\mathbb{C}}\)). Finally, by rule \(rec_{In}\), a recursively defined process \(\langle X|S\rangle\) can discard a broadcast iff \(\langle S_{X}|S\rangle\) can discard it. The variant \(rec_{In}\) of \(rec_{Act}\) is introduced to maintain the property that \(target(\theta)=source(\theta)\) for any indicator transition \(\theta\).
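The combinations admitted by rule \(|_{\mathbb{C}}\) for broadcast labels form a small partial composition on \(\{!,?,:\}\). Assuming the discipline just described (a broadcast synchronises with a receive or a discard and remains a broadcast; receives absorb receives and discards; two discards remain a discard; two broadcasts never synchronise), it can be tabulated as a tiny function whose name, `broadcast_sync`, is ours:

```python
# Sketch of the broadcast synchronisation discipline of rule |_C:
# given the suffixes of the two components' labels (on the same channel b),
# return the suffix of the composed label, or None if no synchronisation.
def broadcast_sync(s1: str, s2: str):
    table = {
        ("!", "?"): "!", ("!", ":"): "!", ("?", "!"): "!", (":", "!"): "!",
        ("?", "?"): "?", ("?", ":"): "?", (":", "?"): "?",
        (":", ":"): ":",
    }
    return table.get((s1, s2))   # ("!", "!") is deliberately absent

assert broadcast_sync("!", ":") == "!"
assert broadcast_sync(":", ":") == ":"
assert broadcast_sync("!", "!") is None
```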
A signalling process \(p\,\hat{}\,s\) emits the signal \(s\) to be read by another process. A typical example is a traffic light being red. Signal emission is modelled as an indicator transition, which does not change the state of the emitting process. The emission axiom for \(\_\,\hat{}\,s\) models the emission \(\bar{s}\) of signal \(s\) to the environment. The environment (processes running in parallel) can read the signal by performing a read action \(s\). This action synchronises with the emission \(\bar{s}\), via rule \(|_{\mathbb{C}}\) of Table 1. Reading a signal does not change the state of the emitter.
Rules \(+_{\mathbb{L}}^{\varepsilon}\) and \(+_{\mathbb{R}}^{\varepsilon}\) describe the interaction between signal emission and choice. Restriction and relabelling are handled by rules \(\backslash L\) and \([f]\) of Table 1, respectively. These operators do not prevent the emission of a signal, and emitting signals never changes the state of the emitting process. Signal emission \(p\,\hat{}\,s\) does not block other transitions of \(p\).
It is trivial to check that the TSS of ABCdE is in De Simone format.
The transition signature of ABCdE (Table 4) is completely determined by the set of transition rule templates in Tables 1 and 3. We have united the rules for handshaking and broadcast communication by assigning the same name \(|_{\mathbb{C}}\) to all their instances. When expressing transitions in ABCdE as expressions, we use infix notation for the binary transition constructors, and prefix or postfix notation for unary ones. For example, the transition \(b{:}\mathbf{0}()\) is shortened to \(b{:}\mathbf{0}\), \(\stackrel{{\alpha}}{{\rightarrow}}(p)\) to \(\stackrel{{\alpha}}{{\rightarrow}}p\), \(\backslash L(t)\) to \(t\backslash L\), and \(|_{\mathbb{L}}(t,p)\) to \(t|_{\mathbb{L}}p\).
\begin{table}
\[
\frac{}{\mathbf{0}\xrightarrow{b:}\mathbf{0}}\;b{:}\mathbf{0}
\qquad
\frac{\alpha\neq b?}{\alpha.x\xrightarrow{b:}\alpha.x}\;b{:}\alpha.
\qquad
\frac{x\xrightarrow{b:}x^{\prime},\ y\xrightarrow{b:}y^{\prime}}{x+y\xrightarrow{b:}x^{\prime}+y^{\prime}}\;+_{\mathrm{C}}
\qquad
\frac{}{x\,\hat{}\,s\xrightarrow{\bar{s}}x\,\hat{}\,s}\;\hat{~}s
\]
\[
\frac{x\xrightarrow{b\sharp_{1}}x^{\prime},\ y\xrightarrow{b\sharp_{2}}y^{\prime}}{x|y\xrightarrow{b\sharp}x^{\prime}|y^{\prime}}\;|_{\mathrm{C}}
\qquad\text{with }\sharp=\sharp_{1}\circ\sharp_{2},\text{ where }{!}\circ{?}={!}\circ{:}={?}\circ{!}={:}\circ{!}={!},\ \ {?}\circ{?}={?}\circ{:}={:}\circ{?}={?}\ \text{ and }\ {:}\circ{:}={:}
\]
\end{table}
Table 3: Structural operational semantics of ABCdE, additional to Table 1
Table 5 extends the successor relation of CCS (Table 2) to ABCdE. \(P,Q\) are process variables, \(t,v\) transition variables enabled at \(P\), \(u,w\) transition variables enabled at \(Q\), \(P^{\prime},Q^{\prime}\) the targets of \(v,w\), respectively and \(t^{\prime},u^{\prime}\) transitions enabled at \(P^{\prime},Q^{\prime}\), respectively. To express those rules in the same way as Definition 7.1, we replace the metavariables \(P\), \(Q\), \(t\), \(u\), etc. with variable expressions as indicated on the right. Here \(xa_{i}\), \(ya_{i}\), \(za_{i}\) are label variables that should be instantiated to match the trigger of the rules and side conditions. As ABCdE does not feature operators of arity \(>\)2, the index \(i\) from Definition 7.1 can be 1 or 2 only.
To save duplication of rules 8b, 8c, 9b, 9c and 10 we have assigned the same name \(|_{\mathrm{c}}\) to the rules for handshaking and broadcast communication. The intuition of the rules of Table 5 is explained in detail in [14].
In the naming convention for transitions from [14] the sub- and superscripts of the transition constructors \(+\), \(|\) and \(\hat{s}\), and of the recursion construct, were suppressed. In most cases that yields no ambiguity, as the difference between \(|_{\mathrm{L}}\) and \(|_{\mathrm{R}}\), for instance, can be detected by checking which of its two arguments are of type transition versus process. Moreover, it avoids the duplication in rules 3a, 4a, 5, 6, 11c and 11d. The ambiguity between \(+_{\mathrm{L}}\) and \(+_{\mathrm{L}}^{\varepsilon}\) (or \(+_{\mathrm{R}}\) and \(+_{\mathrm{R}}^{\varepsilon}\)) was in [14] resolved by adorning rules 3-6 with a side condition \(\ell(v)\notin\mathcal{S}\) or \(\ell(w)\notin\mathcal{S}\), and the ambiguity between \(rec_{Act}\) and \(rec_{In}\) (or \(\hat{s}_{Act}\) and \(\hat{s}_{In}\)) by adorning rules 11c and 11d with a side condition \(\ell(v)\in Act\); this is not needed here.
It is easy to check that all rules are in the newly introduced De Simone format, except Rule 1. However, this rule can be converted into a collection of De Simone rules by substituting \(R(xe_{1},\ldots,xe_{n})\) for \(\chi\) and \(S(ye_{1},\ldots,ye_{n})\) for \(\zeta\), adding a premise of the form \(xe_{i}\leadsto_{ye_{i}}(tz_{i}::y_{i}^{\prime}\xrightarrow{za_{i}}z_{i}^{\prime})\) if \(i\in I_{\mathrm{R}}\cap I_{\mathrm{S}}\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Constructor & \(\xrightarrow{\alpha}\) & \((\rightarrow^{s})\) & \(b{:}\mathbf{0}\) & \(b{:}\alpha\) & \(+_{\mathrm{L}}\) & \(+_{\mathrm{R}}\) & \(+_{\mathrm{C}}\) & \(+_{\mathrm{L}}^{\varepsilon}\) & \(+_{\mathrm{R}}^{\varepsilon}\) & \(|_{\mathrm{L}}\) & \(|_{\mathrm{C}}\) & \(|_{\mathrm{R}}\) & \(\backslash L\) & \([f]\) & \(\hat{s}_{Act}\) & \(\hat{s}_{In}\) \\ \hline Arity & 1 & 1 & 0 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 \\ \hline Trigger Set & \(\emptyset\) & \(\emptyset\) & \(\emptyset\) & \(\emptyset\) & \(\{1\}\) & \(\{2\}\) & \(\{1,2\}\) & \(\{1\}\) & \(\{2\}\) & \(\{1\}\) & \(\{1,2\}\) & \(\{2\}\) & \(\{1\}\) & \(\{1\}\) & \(\{1\}\) & \(\{1\}\) \\ \hline \end{tabular}
\end{table}
Table 4: Transition signature of ABCdE
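In the sense of Definition 6.3, Table 4 is just a transition signature: a finite map from constructors to arities and trigger sets. A hypothetical ASCII encoding (the key names are ours) reads:

```python
# Illustrative only: Table 4 as a finite map constructor -> (arity, trigger set),
# with ASCII stand-ins for the constructor symbols.
ABCDE_SIGNATURE = {
    "->alpha": (1, set()),  "emit_s":   (1, set()),
    "b:0":     (0, set()),  "b:alpha.": (1, set()),
    "+L":      (2, {1}),    "+R":  (2, {2}),    "+C":  (2, {1, 2}),
    "+L_eps":  (2, {1}),    "+R_eps": (2, {2}),
    "|L":      (2, {1}),    "|C":  (2, {1, 2}), "|R":  (2, {2}),
    "\\L":     (1, {1}),    "[f]": (1, {1}),
    "^s_Act":  (1, {1}),    "^s_In": (1, {1}),
}
```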
\begin{table}
\begin{tabular}{|c|c|} \hline Meta & Variable Expression \\ \hline \(P\) & \(x_{1}\) \\ \(Q\) & \(x_{2}\) \\ \(P^{\prime}\) & \(y_{1}^{\prime}\) \\ \(Q^{\prime}\) & \(y_{2}^{\prime}\) \\ \(t\) & \((tx_{1}::x_{1}\xrightarrow{xa_{1}}x_{1}^{\prime})\) \\ \(u\) & \((tx_{2}::x_{2}\xrightarrow{xa_{2}}x_{2}^{\prime})\) \\ \(v\) & \((ty_{1}::x_{1}\xrightarrow{ya_{1}}y_{1}^{\prime})\) \\ \(w\) & \((ty_{2}::x_{2}\xrightarrow{ya_{2}}y_{2}^{\prime})\) \\ \(t^{\prime}\) & \((tz_{1}::y_{1}^{\prime}\xrightarrow{za_{1}}z_{1}^{\prime})\) \\ \(u^{\prime}\) & \((tz_{2}::y_{2}^{\prime}\xrightarrow{za_{2}}z_{2}^{\prime})\) \\ \hline \end{tabular}
\end{table}
for each pair of rules of the same type named r and s.3 The various occurrences of 1 in Figure 1 refer to these substitution instances. It follows that \(\trianglelefteq_{ep}\) is a congruence for the operators of ABCdE, as well as a lean congruence for recursion.
Footnote 3: This yields \(1^{2}+2\cdot 1+5\cdot 3+3\cdot 2+2\cdot 1=26\) rules of types \((\mathbf{0},0)\), \((\alpha.\_,1)\), \((+,2)\), \((\_\,\hat{}\,s,1)\) and \(\langle X|S\rangle\) not included in Tables 2 and 5.
## 9 Related Work & Conclusion
In this paper we have added a successor relation to the well-known De Simone format. This has allowed us to prove the general result that enabling preserving bisimilarity - a finer semantic equivalence relation than strong bisimulation - is a lean congruence for all languages with a structural operational semantics within this format. We do not cover full congruence yet, as proofs for general recursions are incredibly hard and usually excluded from work justifying semantic equivalences.
There is ample work on congruence formats in the literature. Good overview papers are [2, 20]. For system description languages that do not capture time, probability or other useful extensions to standard process algebras, all congruence formats target strong bisimilarity, or some semantic equivalence or preorder that is strictly coarser than strong bisimilarity. As far as we know, the present paper is the first to define a congruence format for a semantic equivalence that is finer than strong bisimilarity.
Our congruence format also ensures a lean congruence for recursion. So far, the only papers that provide a rule format yielding a congruence property for recursion are [22] and [10], and both of them target strong bisimilarity.
In Sections 3 and 8, we have applied our format to show lean congruence of ep-bisimilarity for the process algebra CCS and ABCdE, respectively. This latter process algebra features broadcast communication and signalling. These two features are representative for issues that may arise elsewhere, and help to ensure that our results are as general as possible. Our congruence format can effortlessly be applied to other calculi like CSP [6] or ACP [4].
In order to evaluate ep-bisimilarity on process algebras like CCS, CSP, ACP or ABCdE, their semantics needs to be given in terms of labelled transition systems extended with a successor relation \(\leadsto\). This relation models concurrency between transitions enabled in the same state, and also tells what happens to a transition if a concurrent transition is executed first. Without this extra component, labelled transition systems lack the necessary information to capture liveness properties in the sense explained in the introduction.
In a previous paper [14] we already gave such a semantics to ABCdE. The rules for the successor relation presented in [14], displayed in Tables 2 and 5, are now seen to fit our congruence format. We can now also conclude that ep-bisimulation is a lean congruence for ABCdE. In [15, Appendix B] we contemplate a very different approach for defining the relation \(\leadsto\). Following [11], we understand each transition as the synchronisation of a number of elementary particles called _synchrons_. Then relations on synchrons are proposed in terms of which the \(\leadsto\)-relation is defined. It is shown that this leads to the same \(\leadsto\)-relation as the operational approach from [14] and Tables 2 and 5. | BISIMILARITYの保存を可能にすることは、強 bisimilarity の一つで、安全性を維持しつつ also livenessの性質を保持します。これを正しく定義するために、ラベル付き遷移体系に、同一状態での遷移を有効にする共存を捉えるための後継関係を更新する必要があります。この後継関係を扱うために、有名なデシモネ形式を拡張しました。そして、この後継関係の誘導的な定義を処理するために、デシモネ形式を拡張しました。これにより、ep-bisimilarity は演算子に対して、また ricorsioneに対して、すべての拡張されたデシモネ言語において、等価性となります。 |
2305.20081 | Efficient Diffusion Policies for Offline Reinforcement Learning | Offline reinforcement learning (RL) aims to learn optimal policies from
offline datasets, where the parameterization of policies is crucial but often
overlooked. Recently, Diffusion-QL significantly boosts the performance of
offline RL by representing a policy with a diffusion model, whose success
relies on a parametrized Markov Chain with hundreds of steps for sampling.
However, Diffusion-QL suffers from two critical limitations. 1) It is
computationally inefficient to forward and backward through the whole Markov
chain during training. 2) It is incompatible with maximum likelihood-based RL
algorithms (e.g., policy gradient methods) as the likelihood of diffusion
models is intractable. Therefore, we propose efficient diffusion policy (EDP)
to overcome these two challenges. EDP approximately constructs actions from
corrupted ones at training to avoid running the sampling chain. We conduct
extensive experiments on the D4RL benchmark. The results show that EDP can
reduce the diffusion policy training time from 5 days to 5 hours on
gym-locomotion tasks. Moreover, we show that EDP is compatible with various
offline RL algorithms (TD3, CRR, and IQL) and achieves new state-of-the-art on
D4RL by large margins over previous methods. Our code is available at
https://github.com/sail-sg/edp. | Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, Shuicheng Yan | 2023-05-31T17:55:21 | http://arxiv.org/abs/2305.20081v2 | # Efficient Diffusion Policies for Offline Reinforcement Learning
###### Abstract
Offline reinforcement learning (RL) aims to learn optimal policies from offline datasets, where the parameterization of policies is crucial but often overlooked. Recently, Diffusion-QL [35] significantly boosts the performance of offline RL by representing a policy with a diffusion model, whose success relies on a parametrized Markov Chain with hundreds of steps for sampling. However, Diffusion-QL suffers from two critical limitations. 1) It is computationally inefficient to forward and backward through the whole Markov chain during training. 2) It is incompatible with maximum likelihood-based RL algorithms (_e.g._, policy gradient methods) as the likelihood of diffusion models is intractable. Therefore, we propose _efficient diffusion policy_ (EDP) to overcome these two challenges. EDP approximately constructs actions from corrupted ones at training to avoid running the sampling chain. We conduct extensive experiments on the D4RL benchmark. The results show that EDP can reduce the diffusion policy training time **from 5 days to 5 hours** on gym-locomotion tasks. Moreover, we show that EDP is compatible with various offline RL algorithms (TD3, CRR, and IQL) and achieves new state-of-the-art on D4RL by large margins over previous methods. Our code is available at [https://github.com/sail-sg/edp](https://github.com/sail-sg/edp).
## 1 Introduction
Offline reinforcement learning (RL) is much desired in real-world applications as it can extract knowledge from previous experiences, thus avoiding costly or risky online interactions. Extending online RL algorithms to the offline domain faces the distributional shift [9] problem. Existing methods mainly focus on addressing this issue by either constraining a policy to stay close to the data-collecting policy [6; 37] or making conservative updates for Q-networks [16; 14; 38]. However, Offline RL can also be viewed as a state-conditional generative modeling problem of actions, where the parameterization of the policy network is important but largely overlooked. Most offline RL works follow the convention of parameterizing the policy as a diagonal Gaussian distribution with the learned mean and variance. This scheme might become inferior when the data distribution is complex, especially when offline data are collected
Figure 1: Efficiency and Generality. D-QL is Diffusion-QL. _Left_: The training time on the locomotion tasks in D4RL. _Right_: the performance of EDP and previous SOTA on each domain in D4RL. EDP is trained with TD3 on locomotion and IQL on the other three domains. (Best viewed in color.)
from various sources and present strong multi-modalities [30]. Therefore, more expressive models for the policy are strongly desired.
Recently, Diffusion-QL [35] made a successful attempt by replacing the diagonal Gaussian policy with a diffusion model, significantly boosting the performance of the TD3+BC [6] algorithm. Diffusion models [32; 10] have achieved the new state-of-the-art (SOTA) in image generation tasks [24; 5], demonstrating a superior ability to capture complex data distributions.
Despite the impressive improvement that Diffusion-QL has achieved, it has two critical drawbacks preventing it from practical applications. _First_, training a diffusion policy with offline RL is computationally inefficient. Consider a parameterized diffusion policy \(\pi_{\theta}(\mathbf{a}|\mathbf{s})\), Diffusion-QL optimizes it by maximizing the Q value \(Q(\mathbf{s},\mathbf{a}_{\theta})\) of a state \(\mathbf{s}\) given a policy-generated action \(\mathbf{a}_{\theta}\sim\pi_{\theta}(\mathbf{a}\mid\mathbf{s})\). However, sampling from a diffusion model relies on a long parameterized Markov chain (_e.g._, 1,000 steps), whose forward inference and gradient backpropagation are unaffordably expensive. _Second_, diffusion policy is not a generic policy class as it is restricted to TD3-style algorithms. As computing the sample likelihood \(\pi_{\theta}(a\mid s)\) is intractable in diffusion models [33], diffusion policy is incompatible with a large family of policy gradient algorithms (_e.g._, V-Trace [20], AWR [26], IQL [14]), which require a tractable and differentiable log-likelihood \(\log\pi_{\theta}(\mathbf{a}|\mathbf{s})\) for policy improvement.
In this work, we propose _efficient diffusion policy_ (EDP) to address the above two limitations of diffusion policies. Specifically, we base EDP on the denoising diffusion probabilistic model (DDPM) [10], which learns a noise-prediction network to predict the noise used to corrupt an example. In the forward diffusion process, a corrupted sample follows a predefined Gaussian distribution when the clean example and timestep are given. In turn, given a corrupted sample and predicted noise, we can approximate its clean version by leveraging the reparametrization trick. Based on this observation, to avoid the tedious sampling process, we propose _action approximation_ to build an action from a corrupted one, which can be easily constructed from the dataset. In this way, each training step only needs to pass through the noise-prediction network once, thus substantially reducing the training time. As experimented, by simply adding action approximation, we obtain **2x** speed-up without performance loss. Moreover, we apply DPM-Solver [19], a faster ODE-based sampler, to further accelerate both the training and sampling process. Finally, to support likelihood-based RL algorithms, we leverage the evidence lower bound for the likelihood developed in DDPM and approximate the policy likelihood from a constructed Gaussian distribution with variance fixed and mean obtained from action approximation.
We evaluate the efficiency and generality of our method on the popular D4RL benchmarking, as shown in Fig. 1. We first benchmark the efficiency of EDP on gym-locomotion tasks. By replacing the diffusion policy in Diffusion-QL with our EDP, the training time of Diffusion-QL is reduced substantially **from five days to five hours** (compared to their official code). Meanwhile, we observe slight to clear performance improvements on different tasks as the improved efficiency enables training DDPM with more timesteps than before. Moreover, we plug EDP into three different offline algorithms (including TD3+BC, CRR, and IQL), and the results justify its superiority over standard diagonal Gaussian policies. As a result, EDP set up new state-of-the-art on all four domains in D4RL.
## 2 Related Work
**Offline RL** Distributional shift between the learned and behavior policies is offline RL's biggest challenge. Existing research mitigates this problem by making modifications to policy evaluation [22; 16; 14] or policy improvement [9; 37; 6; 31; 36]. For example, conservative Q-learning (CQL) [16] penalizes out-of-distribution actions for having higher Q-values, proving that this is equivalent to optimizing a lower bound of Q-values. Onestep RL [1] conducts policy evaluation on in-distribution data to avoid querying unseen actions. IQL [14] introduces expectile regression [13] to approximate dynamic programming with the Bellman optimality function. TD3+BC explicitly constrains the learned policy by adding a behavior cloning loss to mimic the behavior policy. Instead, CRR and AWR impose an implicit policy regularization by performing policy gradient-style policy updates. Despite their effectiveness, they ignore that the capacity of a policy representation plays a vital role in fitting the data distribution. This paper instead focuses on an orthogonal aspect (_i.e._, policy parameterization) from which all the above methods can benefit. Another line of work is trying to cast offline RL as a sequence-to-sequence translation model [3; 11], which is beyond the scope of this work.
**Policy Parametrization** Different RL algorithms may pose different requirements for parameterizing a policy distribution. There are mainly two categories of requirements: 1) The sampling process
is differentiable, such as the deterministic policy in DDPG [17] and TD3 [7]. 2) The log-likelihood of samples is tractable. For example, policy gradient methods [28; 29; 36; 26] optimize a policy based on maximum likelihood estimation (MLE). Therefore, most works represent policy with a diagonal Gaussian distribution with mean and variance parameterized with a multi-layer perceptron (MLP). On the other hand, BCQ [9] and BEAR [15] choose to model policy with a conditional variational autoencoder (CVAE). Recently, Diffusion-QL [35] introduced diffusion models into offline RL and demonstrated that diffusion models are superior at modeling complex action distributions than CVAE and diagonal Gaussian. However, it takes tens to hundreds more time to train a diffusion policy than a diagonal Gaussian one. Moreover, diffusion policy only satisfies the first requirement, which means many other offline RL algorithms can not use it, including the current SOTA IQL.
Our method is motivated to solve the above two limitations. We first propose a more efficient way to train diffusion policies, which reduces training time to the level of a Gaussian policy. Then, we generalize the diffusion policy to be compatible with MLE-based RL methods.
## 3 Preliminaries
### Offline Reinforcement Learning
A decision-making problem in reinforcement learning is usually represented by a Markov Decision Process (MDP): \(\mathcal{M}=\{\mathcal{S},\mathcal{A},P,R,\gamma\}\). \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces respectively, \(P(\boldsymbol{s}^{\prime}|\boldsymbol{s},\boldsymbol{a})\) measures the transition probability from state \(\boldsymbol{s}\) to state \(\boldsymbol{s}^{\prime}\) after taking action \(\boldsymbol{a}\) while \(R(\boldsymbol{s},\boldsymbol{a},\boldsymbol{s}^{\prime})\) gives the reward for the corresponding transition, and \(\gamma\in[0,1)\) is the discount factor. A policy \(\pi(\boldsymbol{a}|\boldsymbol{s})\) describes how an agent interacts with the environment. The optimal policy \(\pi^{*}(\boldsymbol{a}|\boldsymbol{s})\) is the one that achieves maximal cumulative discounted returns: \(\pi^{*}=\arg\max_{\pi}\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(\boldsymbol{s}_{t},\boldsymbol{a}_{t})\right]\). Reinforcement learning algorithms frequently rely on the definition of the value function \(V(\boldsymbol{s})=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(\boldsymbol{s}_{t},\boldsymbol{a}_{t})|\boldsymbol{s}_{0}=\boldsymbol{s}\right]\), and the action value (Q) function \(Q(\boldsymbol{s},\boldsymbol{a})=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(\boldsymbol{s}_{t},\boldsymbol{a}_{t})|\boldsymbol{s}_{0}=\boldsymbol{s},\boldsymbol{a}_{0}=\boldsymbol{a}\right]\), which represents the expected cumulative discounted return of a policy \(\pi\) given the initial state \(\boldsymbol{s}\) or state-action pair \((\boldsymbol{s},\boldsymbol{a})\).
In the offline RL setting, instead of learning from interactions with the environment, agents focus on learning an optimal policy from a previously collected dataset of transitions: \(\mathcal{D}=\{(\boldsymbol{s}_{t},\boldsymbol{a}_{t},\boldsymbol{s}_{t+1},r _{t})\}\). Offline RL algorithms for continuous control are usually based on an actor-critic framework that alternates between policy evaluation and policy improvement. During policy evaluation, a parameterized Q network \(Q_{\phi}(\boldsymbol{s},\boldsymbol{a})\) is optimized based on approximate dynamic programming to minimize the following temporal difference (TD) error \(L_{\text{TD}}(\phi)\): \(\mathbb{E}_{(\boldsymbol{s},\boldsymbol{a},\boldsymbol{s}^{\prime})\sim \mathcal{D}}\left[\left(r(\boldsymbol{s},\boldsymbol{a})+\gamma\max_{ \boldsymbol{a}^{\prime}}Q_{\hat{\phi}}(\boldsymbol{s}^{\prime},\boldsymbol{a}^ {\prime})-Q_{\phi}(\boldsymbol{s},\boldsymbol{a})\right)^{2}\right],\) where \(Q_{\hat{\phi}}(\boldsymbol{s},\boldsymbol{a})\) denotes a target network. Then at the policy improvement step, knowledge in the Q network is distilled into the policy network in various ways. Offline RL methods address the distributional shift [9] problem induced by the offline dataset \(\mathcal{D}\) by either modifying the policy evaluation step to regularize Q learning or constraining the policy improvement directly. In the following, we will show that our diffusion policy design is compatible with any offline algorithms and can speed up policy evaluation and improvement.
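To make the policy-evaluation step concrete, the short Python sketch below computes the squared TD error for a single transition; the handles q_phi, q_target, and policy_sample are placeholders for the Q network, its target copy \(Q_{\hat{\phi}}\), and the policy's action sampler, and the discount value is an illustrative assumption rather than a setting taken from the paper.

```
# A minimal sketch of the TD error used in policy evaluation. q_phi, q_target,
# and policy_sample are placeholders; gamma = 0.99 is an assumption.
def td_error(s, a, r, s_next, q_phi, q_target, policy_sample, gamma=0.99):
    a_next = policy_sample(s_next)                 # a' drawn from the current policy
    target = r + gamma * q_target(s_next, a_next)  # bootstrapped TD target
    return (target - q_phi(s, a)) ** 2             # squared TD error for one transition
```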
### Diffusion Models
Consider a real data distribution \(q(\boldsymbol{x})\) and a sample \(\boldsymbol{x}^{0}\sim q(\boldsymbol{x})\) drawn from it. The (forward) diffusion process fixed to a Markov chain gradually adds Gaussian noise to the sample in \(K\) steps, producing a sequence of noisy samples \(\boldsymbol{x}^{1},\ldots\boldsymbol{x}^{K}\). Note that we use superscript \(k\) to denote diffusion timestep to avoid conflicting with the RL timestep. The noise is controlled by a variance schedule \(\beta^{1},\ldots,\beta^{K}\):
\[q(\boldsymbol{x}^{k}|\boldsymbol{x}^{k-1})=\mathcal{N}(\boldsymbol{x}^{k}; \sqrt{1-\beta^{k}}\boldsymbol{x}^{k-1},\beta^{k}\boldsymbol{I}),\quad q( \boldsymbol{x}^{1:K}|\boldsymbol{x}^{0})=\prod_{k=1}^{K}q(\boldsymbol{x}^{k}| \boldsymbol{x}^{k-1}). \tag{1}\]
When \(K\rightarrow\infty\), \(\boldsymbol{x}^{K}\) is distributed as an isotropic Gaussian. Diffusion models learn a conditional distribution \(p_{\theta}(\boldsymbol{x}^{k-1}|\boldsymbol{x}^{k})\) and generate new samples by reversing the above process:
\[p_{\theta}(\boldsymbol{x}^{0:K})=p(\boldsymbol{x}^{K})\prod_{k=1}^{K}p_{\theta }(\boldsymbol{x}^{k-1}|\boldsymbol{x}^{k}),\quad p_{\theta}(\boldsymbol{x}^{k-1 }|\boldsymbol{x}^{k})=\mathcal{N}(\boldsymbol{x}^{k-1};\boldsymbol{\mu}_{ \theta}(\boldsymbol{x}^{k},k),\boldsymbol{\Sigma}_{\theta}(\boldsymbol{x}^{k},k )), \tag{2}\]
where \(p(\mathbf{x}^{K})=\mathcal{N}(\mathbf{0},\mathbf{I})\) under the condition that \(\prod_{k=1}^{K}(1-\beta^{k})\approx 0\). The training is performed by maximizing the evidence lower bound (ELBO): \(\mathbb{E}_{\mathbf{x}_{0}}[\log p_{\theta}(\mathbf{x}^{0})]\geq\mathbb{E}_{q}\left[ \log\frac{p_{\theta}(\mathbf{x}^{0:K})}{q(\mathbf{x}^{1:K}|\mathbf{x}^{0})}\right]\).
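As a concrete illustration of the forward process in Eqn. (1), the NumPy sketch below corrupts a clean sample using the standard closed-form marginal of the forward chain (also used in Sec. 4.2); the linear variance schedule and \(K=1000\) are illustrative assumptions rather than prescribed settings.

```
# A sketch of the forward diffusion process; the linear beta schedule is an
# illustrative assumption, not a choice stated in the text.
import numpy as np

K = 1000
betas = np.linspace(1e-4, 2e-2, K)        # beta^1, ..., beta^K
alphas = 1.0 - betas                      # alpha^k = 1 - beta^k
alpha_bars = np.cumprod(alphas)           # \bar{alpha}^k = prod_{s<=k} alpha^s

def q_sample(x0, k, rng):
    """Draw x^k ~ q(x^k | x^0) = N(sqrt(abar^k) x^0, (1 - abar^k) I)."""
    eps = rng.standard_normal(x0.shape)
    xk = np.sqrt(alpha_bars[k - 1]) * x0 + np.sqrt(1.0 - alpha_bars[k - 1]) * eps
    return xk, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(6)               # a clean sample (e.g., an action vector)
xK, _ = q_sample(x0, K, rng)              # for large K this is close to N(0, I)
```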
## 4 Efficient Diffusion Policy
In this section, we detail the design of our efficient diffusion policy (EDP). First, we formulate an RL policy with a diffusion model. Second, we present a novel algorithm that can train a diffusion policy efficiently, termed Reinforcement-Guided Diffusion Policy Learning (RGDPL). Then, we generalize the diffusion policy to work with arbitrary offline RL algorithms and compare our EDP with Diffusion-QL to highlight its superiority in efficiency and generality. Finally, we discuss several methods to sample from the diffusion policy during evaluation.
### Diffusion Policy
Following [35], we use the reverse process of a conditional diffusion model as a parametric policy:
\[\pi_{\theta}(\mathbf{a}|\mathbf{s})=p_{\theta}(\mathbf{a}^{0:K}|\mathbf{s})=p(\mathbf{a}^{K}) \prod_{k=1}^{K}p_{\theta}(\mathbf{a}^{k-1}|\mathbf{a}^{k},\mathbf{s}), \tag{3}\]
where \(\mathbf{a}^{K}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). We choose to parameterize \(\pi_{\theta}\) based on Denoising Diffusion Probabilistic Models (DDPM) [10], which sets \(\mathbf{\Sigma}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})=\beta^{k}\mathbf{I}\) to fixed time-dependent constants, and constructs the mean \(\mathbf{\mu}_{\theta}\) from a noise prediction model as: \(\mathbf{\mu}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})=\frac{1}{\sqrt{\alpha^{k}}}\left(\mathbf{a}^{k}-\frac{\beta^{k}}{\sqrt{1-\bar{\alpha}^{k}}}\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})\right)\), where \(\alpha^{k}=1-\beta^{k}\), \(\bar{\alpha}^{k}=\prod_{s=1}^{k}\alpha^{s}\), and \(\mathbf{\epsilon}_{\theta}\) is a parametric model.
To obtain an action from DDPM, we need to draw samples from \(K\) different Gaussian distributions sequentially, as illustrated in Eqn. (2)-(3). The sampling process can be reformulated as
\[\mathbf{a}^{k-1}=\frac{1}{\sqrt{\alpha^{k}}}\left(\mathbf{a}^{k}-\frac{\beta^{k}}{ \sqrt{1-\bar{\alpha}^{k}}}\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})\right)+ \sqrt{\beta^{k}}\mathbf{\epsilon}, \tag{4}\]
with the reparametrization trick, where \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), \(k\) is the reverse timestep from \(K\) to \(0\).
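A sketch of this reverse sampling loop is given below; it reuses the NumPy schedule from the earlier forward-process sketch, and eps_theta is a placeholder for the learned noise predictor rather than the authors' network. Skipping the added noise at \(k=1\) is a common implementation convention, not something stated in the text.

```
# A sketch of DDPM action sampling via Eqn. (4); reuses np, K, betas, alphas,
# and alpha_bars from the forward-process sketch above.
def ddpm_step(a_k, k, s, eps_theta, rng):
    eps_hat = eps_theta(a_k, k, s)
    mean = (a_k - betas[k - 1] / np.sqrt(1.0 - alpha_bars[k - 1]) * eps_hat) / np.sqrt(alphas[k - 1])
    noise = rng.standard_normal(a_k.shape) if k > 1 else 0.0
    return mean + np.sqrt(betas[k - 1]) * noise

def sample_action(s, eps_theta, act_dim, rng):
    a = rng.standard_normal(act_dim)          # a^K ~ N(0, I)
    for k in range(K, 0, -1):                 # K reverse denoising steps
        a = ddpm_step(a, k, s, eps_theta, rng)
    return a
```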
Similar to DDPM, plugging in the conditional Gaussian distributions, the ELBO in Sec. 3.2 can be simplified to the following training objective \(L_{\text{diff}}(\theta)\):
\[\mathbb{E}_{k,\mathbf{\epsilon},(\mathbf{a}^{0},\mathbf{s})}\left[\left\|\mathbf{\epsilon}- \mathbf{\epsilon}_{\theta}\left(\sqrt{\bar{\alpha}^{k}}\mathbf{a}^{0}+\sqrt{1-\bar{ \alpha}^{k}}\mathbf{\epsilon},k;\mathbf{s}\right)\right\|^{2}\right], \tag{5}\]
where \(k\) follows a uniform distribution over the discrete set \(\{1,\dots,K\}\). It means the expectation is taken over all diffusion steps from clean action to pure noise. Moreover, \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), and \((\mathbf{a}^{0},\mathbf{s})\in\mathcal{D}\) are state-action pairs drawn from the offline dataset. Given a dataset, we can easily and efficiently train a diffusion policy in a behavior-cloning manner as we only need to forward and backward through the network once per iteration. As shown in Diffusion-QL [35], diffusion policies can greatly boost the performance when trained with TD3-based Q learning. However, the diffusion policy still faces two main drawbacks that limit its real-world application: 1) It is inefficient in sampling and training; 2) It is not generalizable to other strong offline reinforcement learning algorithms.
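The behavior-cloning objective of Eqn. (5) can be sketched in a few lines, again reusing the schedule and the placeholder eps_theta from the earlier sketches.

```
# A sketch of the diffusion behavior-cloning objective L_diff in Eqn. (5):
# sample k uniformly, corrupt the dataset action, and regress the noise.
def diffusion_bc_loss(a0, s, eps_theta, rng):
    k = int(rng.integers(1, K + 1))           # k ~ Uniform{1, ..., K}
    eps = rng.standard_normal(a0.shape)
    a_k = np.sqrt(alpha_bars[k - 1]) * a0 + np.sqrt(1.0 - alpha_bars[k - 1]) * eps
    return np.mean((eps - eps_theta(a_k, k, s)) ** 2)   # squared noise-prediction error
```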
### Reinforcement-Guided Diffusion Policy Learning
To understand how a parametric policy \(\pi_{\theta}\) is trained with offline RL algorithms, we start with a typical Q-learning actor-critic framework for continuous control, which iterates between policy evaluation and policy improvement. Policy evaluation learns a Q network by minimizing the TD error \(L_{\text{TD}}(\phi)\):
\[\mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim\mathcal{D}}\left[\left(r(\mathbf{s },\mathbf{a})+\gamma Q_{\hat{\phi}}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})-Q_{\phi}(\mathbf{ s},\mathbf{a})\right)^{2}\right], \tag{6}\]
where the next action \(\mathbf{a}^{\prime}\sim\pi_{\theta}(\cdot|\mathbf{s}^{\prime})\). The policy is optimized to maximize the expected Q values :
\[\max_{\theta}\mathbb{E}_{\mathbf{s}\sim\mathcal{D},\mathbf{a}\sim\pi_{\theta}(\mathbf{a}| \mathbf{s})}\left[Q_{\phi}(\mathbf{s},\mathbf{a})\right]. \tag{7}\]
It is straightforward to optimize this objective when a Gaussian policy is used, but things get much more difficult when a diffusion policy is considered due to its complicated sampling process. Instead, we propose to view the offline RL problem from the perspective of generative modeling, where a diffusion policy can be easily learned in a supervised manner from a given dataset. However, unlike in computer vision, where the training data are usually perfect, offline RL datasets often contain suboptimal state-action pairs. Suppose we have a well-trained Q network \(Q_{\phi}\); the question becomes how to efficiently use \(Q_{\phi}\) to guide the diffusion policy training procedure. We now show that this can be achieved without sampling actions from diffusion policies.
Let's revisit the forward diffusion process in Eqn. 1. A notable property of it is that the distribution of noisy action \(\mathbf{a}^{k}\) at any step \(k\) can be written in closed form: \(q(\mathbf{a}^{k}|\mathbf{a}^{0})=\mathcal{N}(\mathbf{a}^{k};\sqrt{\bar{\alpha}^{k}}\mathbf{a}^ {0},(1-\bar{\alpha}^{k})\mathbf{I})\). Using the reparametrization trick, we are able to connect \(\mathbf{a}^{k}\), \(\mathbf{a}^{0}\) and \(\epsilon\) by:
\[\mathbf{a}^{k}=\sqrt{\bar{\alpha}^{k}}\mathbf{a}^{0}+\sqrt{1-\bar{\alpha}^{k}}\mathbf{ \epsilon},\quad\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{8}\]
Recall that our diffusion policy is parameterized to predict \(\mathbf{\epsilon}\) with \(\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})\). By replacing \(\mathbf{\epsilon}\) with \(\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})\) and rearranging Eqn. (8), we obtain the approximated action:
\[\hat{\mathbf{a}}^{0}=\frac{1}{\sqrt{\bar{\alpha}^{k}}}\mathbf{a}^{k}-\frac{\sqrt{1- \bar{\alpha}^{k}}}{\sqrt{\bar{\alpha}^{k}}}\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k}, k;\mathbf{s}). \tag{9}\]
In this way, instead of running the reverse diffusion process to sample an action \(\mathbf{a}^{0}\), we can cheaply construct \(\hat{\mathbf{a}}^{0}\) from a state-action pair \((\mathbf{s},\mathbf{a})\) in the dataset by first corrupting the action \(\mathbf{a}\) to \(\mathbf{a}^{k}\) then performing one-step denoising to it. We will refer to this technique as _action approximation_ in the following. Accordingly, the policy improvement for diffusion policies is modified as follows:
\[L_{\pi}(\theta)=-\mathbb{E}_{\mathbf{s}\sim\mathcal{D},\hat{\mathbf{a}}^{0}}\left[Q_{ \phi}(\mathbf{s},\hat{\mathbf{a}}^{0})\right]. \tag{10}\]
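The following sketch illustrates the action approximation of Eqn. (9) and the resulting policy-improvement loss of Eqn. (10); it reuses the earlier NumPy schedule, and eps_theta and q_phi are placeholders for the noise predictor and the critic.

```
# A sketch of action approximation (Eqn. (9)) and the policy loss (Eqn. (10)):
# one corruption and one call to eps_theta per dataset action, with no
# reverse sampling chain.
def approximate_action(a0, s, eps_theta, rng):
    k = int(rng.integers(1, K + 1))
    eps = rng.standard_normal(a0.shape)
    a_k = np.sqrt(alpha_bars[k - 1]) * a0 + np.sqrt(1.0 - alpha_bars[k - 1]) * eps
    return (a_k - np.sqrt(1.0 - alpha_bars[k - 1]) * eps_theta(a_k, k, s)) / np.sqrt(alpha_bars[k - 1])

def edp_policy_loss(states, actions, eps_theta, q_phi, rng):
    """Negative mean Q value of approximated actions over a mini-batch."""
    q_vals = [q_phi(s, approximate_action(a0, s, eps_theta, rng))
              for s, a0 in zip(states, actions)]
    return -float(np.mean(q_vals))
```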
To improve the efficiency of policy evaluation, we propose to replace the DDPM sampling in Eqn. (4) with DPM-Solver [19], which is an ODE-based sampler. The algorithm is deferred to the appendix.
### Generalization to Various RL algorithms
There are mainly two types of approaches to realize the objective in Eqn. 7 for policy improvement.
_Direct policy optimization:_ It maximizes Q values and directly backpropagate the gradients from Q network to policy network, _i.e._, \(\nabla_{\theta}L_{\pi}(\theta)=-\frac{\partial Q_{\phi}(\mathbf{s},\mathbf{a})}{ \partial\mathbf{a}}\frac{\partial\mathbf{a}}{\partial\theta}\). This is only applicable to cases where \(\frac{\partial\mathbf{a}}{\partial\theta}\) is tractable, _e.g._, when a deterministic policy \(\mathbf{a}=\pi_{\theta}(\mathbf{s})\) is used or when the sampling process can be reparameterized. Sample algorithms belonging to this category include TD3 [7], TD3+BC [6], and CQL [16]. One can easily verify that both the expensive DDPM sampling in Eqn. (4) and our efficient approximation in Eqn. (9) can be used for direct policy optimization.
_Likelihood-based policy optimization:_ It tries to distill the knowledge from the Q network into the policy network indirectly by performing weighted regression or weighted maximum likelihood:
\[\max_{\theta}\quad\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim\mathcal{D}}\left[f(Q_{\phi} (\mathbf{s},\mathbf{a}))\log\pi_{\theta}(\mathbf{a}|\mathbf{s})\right], \tag{11}\]
where \(f(Q_{\phi}(\mathbf{s},\mathbf{a}))\) is a monotonically increasing function that assigns a weight to each state-action pair in the dataset. This objective requires the log-likelihood of the policy to be tractable and differentiable. AWR [26], CRR [36], and IQL [14] fall into this category but each has a unique design in terms of the weighting function \(f\). Since the likelihood of samples in Diffusion models is intractable, we propose the following two variants for realizing Eqn. 11.
First, instead of computing the likelihood, we turn to a lower bound for \(\log\pi_{\theta}(\mathbf{a}|\mathbf{s})\) introduced in DDPM [10]. By discarding the constant term that does not depend on \(\theta\), we can have the objective:
\[\mathbb{E}_{k,\mathbf{\epsilon},(\mathbf{a},\mathbf{s})}\left[\frac{\beta^{k}\cdot f(Q_{ \phi}(\mathbf{s},\mathbf{a}))}{2\alpha^{k}(1-\bar{\alpha}^{k-1})}\left\|\mathbf{\epsilon} -\mathbf{\epsilon}_{\theta}\left(\mathbf{a}^{k},k;\mathbf{s}\right)\right\|^{2}\right]. \tag{12}\]
Second, instead of directly optimizing \(\log\pi_{\theta}(\mathbf{a}|\mathbf{s})\), we propose to replace it with an approximated policy \(\hat{\pi}_{\theta}(\mathbf{a}|\mathbf{s})\triangleq\mathcal{N}(\hat{\mathbf{a}}^{0},\mathbf{ I})\), where \(\hat{\mathbf{a}}^{0}\) is from Eqn. (9). Then, we get the following objective:
\[\mathbb{E}_{k,\mathbf{\epsilon},(\mathbf{a},\mathbf{s})}\left[f(Q_{\phi}(\mathbf{s},\mathbf{a})) \left\|\mathbf{a}-\hat{\mathbf{a}}^{0}\right\|^{2}\right]. \tag{13}\]
Empirically, we find these two choices perform similarly, but the latter is easier to implement. So we will report results mainly based on the second realization. In our experiments, we consider two offline RL algorithms under this category, _i.e._, CRR and IQL. They use two weighting schemes: \(f_{\text{CRR}}=\exp\big[\big(Q_{\phi}(s,a)-\mathbb{E}_{a^{\prime}\sim\hat{\pi}(\cdot|s)}Q(s,a^{\prime})\big)\big/\tau_{\text{CRR}}\big]\) and \(f_{\text{IQL}}=\exp\big[(Q_{\phi}(s,a)-V_{\psi}(s))\big/\tau_{\text{IQL}}\big]\), where \(\tau\) refers to the temperature parameter and \(V_{\psi}(s)\) is an additional value network parameterized by \(\psi\). We defer the details of these two algorithms to Appendix A.
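For the second realization, a sketch of the weighted-regression loss in Eqn. (13) with an IQL-style weight is shown below; the value-network handle v_psi, the temperature, and the weight clipping constant are illustrative assumptions, not settings taken from the paper.

```
# A sketch of the likelihood-based variant (Eqn. (13)): weight the squared
# distance between the dataset action and its one-step approximation by
# f(Q(s, a)). The IQL-style weight and the clipping constant are assumptions.
def weighted_regression_loss(s, a0, eps_theta, q_phi, v_psi, tau, rng):
    a0_hat = approximate_action(a0, s, eps_theta, rng)
    weight = min(np.exp((q_phi(s, a0) - v_psi(s)) / tau), 100.0)   # f_IQL, clipped
    return weight * float(np.mean((a0 - a0_hat) ** 2))
```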
### Comparison to Diffusion-QL
Now we are ready to compare our method and Diffusion-QL comprehensively. Though our EDP shares the same policy parametrization as Diffusion-QL, it differs from Diffusion-QL significantly in the training algorithm. As a result, the computational efficiency and generality of diffusion policies have been improved substantially.
**Efficiency** The diffusion policy affects both policy evaluation (Eqn. (6)) and policy improvement (Eqn. (7)). First, calculating \(L_{\text{TD}}(\phi)\) in policy evaluation requires drawing the next action from it. Diffusion-QL uses DDPM sampling while EDP employs a DPM-Solver, which can reduce the sampling steps from 1000 to 15, thus accelerating the training. Second, in policy improvement, Diffusion-QL again applies DDPM for sampling. Then, it calculates the loss function based on sampled actions and backpropagates through the sampling process for network update. This means it needs to forward and backward a neural network for \(K\) times each training iteration. As a result, Diffusion-QL can only work with small \(K\), _e.g._, \(5\sim 100\). In comparison, our training scheme only passes through the network once an iteration, no matter how big \(K\) is. This enables EDP to use a larger \(K\) (1000 in our experiments) to train diffusion policy on the more fine-grained scale. The results in Tab. 1 also show a larger \(K\) can give better performance.
**Generality** Diffusion-QL can only work with direct policy optimization, which contains only a small portion of algorithms. Moreover, thanks to their flexibility and high performance, the likelihood-based algorithms are preferred for some tasks (_e.g._, Antmaze). Our method successfully makes diffusion trainable with any RL algorithm.
### Controlled Sampling from Diffusion Policies
Traditionally, a continuous policy is represented with a state-conditional Gaussian distribution. During evaluation time, a policy executes deterministically to reduce variance by outputting the distribution mean as an action. However, with diffusion policies, we can only randomly draw a sample from the underlying distribution without access to its statistics. As a result, the sampling process is noisy, and the evaluation is of high variance. We consider the following method to reduce variance.
**Energy-based Action Selection (EAS)** Recall that the goal of (offline) RL is to learn a policy that can maximize the cumulative return or values. Though the policy \(\pi_{\theta}\) is stochastic, the learned \(Q_{\phi}\) provides a deterministic critic for action evaluation. We can sample a few actions randomly, then use \(Q_{\phi}\) for selection among them to eliminate randomness. EAS first samples \(N\) actions from \(\pi_{\theta}\) by using any samplers (_i.e._, DPM-Solver), then sample one of them with weights proportional to \(e^{Q(\mathbf{s},\mathbf{a})}\). This procedure can be understood as sampling from an improved policy \(p(\mathbf{a}|\mathbf{s})\propto e^{Q(\mathbf{s},\mathbf{a})}\pi_{\theta}(\mathbf{a}|\mathbf{s})\). All results will be reported based on EAS. See Appendix. C.4 for the other two methods.
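A sketch of EAS is shown below, built on the sample_action routine from the earlier sketch; any sampler, such as a DPM-Solver, could be substituted, and the default of 10 candidate actions mirrors the choice reported later in Sec. 5.

```
# A sketch of energy-based action selection: draw N candidate actions from the
# diffusion policy and pick one with probability proportional to exp(Q(s, a)).
def eas_select(s, eps_theta, q_phi, act_dim, rng, n_actions=10):
    candidates = [sample_action(s, eps_theta, act_dim, rng) for _ in range(n_actions)]
    q_vals = np.array([q_phi(s, a) for a in candidates])
    weights = np.exp(q_vals - q_vals.max())        # numerically stabilized softmax weights
    weights /= weights.sum()
    return candidates[rng.choice(n_actions, p=weights)]
```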
## 5 Experiments
We conduct extensive experiments on the D4RL benchmark [2] to verify the following assumptions: 1) Our diffusion policy is much more efficient than the previous one regarding training and evaluation costs. 2) Our diffusion policy is a generic policy class that can be learned through direct and likelihood-based policy learning methods. We also provide various ablation studies on the critical components for better understanding.
**Baselines** We evaluate our method on four domains in D4RL, including Gym-locomotion, AntMaze, Adroit, and Kitchen. For each domain, we consider extensive baselines to provide a thorough evaluation. The simplest methods are the classic behavior cloning (BC) baseline and 10% BC, which performs behavior cloning on the best 10% of the data. TD3+BC [6] combines off-policy reinforcement
learning algorithms with BC. OneStepRL [1] first conducts policy evaluation to obtain the Q-value of the behavior policy from the offline dataset, then use it for policy improvement. AWAC [23], AWR [26], and CRR [36] improve policy improvement by adding advantage-based weights to policy loss functions. CQL [16] and IQL [14] constrain the policy evaluation process by making conservative Q updates or replacing the max operator with expectile regression. We also consider the Decision Transformer (DT) [4] baseline that maps offline RL as a sequence-to-sequence translation problem.
**Experimental Setup** We keep the backbone network architecture the same for all tasks and algorithms, which is a 3-layer MLP (hidden size 256) with Mish [21] activation function following Diffusion-QL [35]. For the noise prediction network \(\mathbf{\epsilon}_{\theta}(\mathbf{a}^{k},k;\mathbf{s})\) in diffusion policy, we first encode timestep \(k\) with sinusoidal embedding [34], then concatenate it with the noisy action \(\mathbf{a}^{k}\) and the conditional state \(\mathbf{s}\). We use Adam [12] to optimize both the diffusion policy and the Q networks. The models are trained for 2000 epochs on Gym-locomotion and 1000 epochs on the other three domains. Each epoch consists of 1000 iterations of policy updates with batch size 256. For DPM-Solver [19], we use the third-order version and set the model call steps to 15. We reimplement DQL strictly following the official PyTorch code [25] for fair comparison and we refer to DQL (JAX) for all sample efficiency comparisons. We defer the complete list of all hyperparameters to the appendix due to space limits. Throughout this paper, the results are reported by averaging over 5 random seeds.
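As an illustration of the timestep conditioning described above, the sketch below computes a sinusoidal embedding of \(k\) in the spirit of [34]; the embedding dimension is an assumption rather than the paper's exact configuration, and it reuses np from the earlier sketches.

```
# A sketch of the sinusoidal embedding of the diffusion timestep k; dim = 16
# is an assumption. The embedding is concatenated with the noisy action a^k
# and the state s before being fed to the 3-layer MLP.
def sinusoidal_embedding(k, dim=16):
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = k * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```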
**Evaluation Protocol** We consider two evaluation metrics in this paper. First, _online model selection_ (OMS), proposed by Diffusion-QL [35], selects the best-performing model throughout the whole training process. However, though OMS can reflect an algorithm's capacity, it is cheating, especially when the training procedure is volatile on some of the tasks. Therefore, we propose another metric to focus on the training stability and quality, which is _running average at training_ (RAT). RAT calculates the running average of evaluation performance for ten consecutive checkpoints during training and reports the last score as the final performance.
### Efficiency and Reproducibility
In this section, we focus on the training and evaluation efficiency of our efficient diffusion policy. We choose the OMS evaluation metric to make a fair comparison with the baseline method Diffusion-QL [35]. We consider four variants of EDP to understand how each component contributes to the high efficiency of our method. 1) EDP is the complete version of our method. It uses the action approximation technique in Eqn. (9) for policy training and uses DPM-Solver for sampling. 2) _EDP w/o DPM_ modifies EDP by replacing DPM-Solver with the original DDPM sampling method in Eqn. 4. 3) _EDP w/o AP_ removes the action approximation technique. 4) DQL (JAX) is our Jax implementation of Diffusion-QL.
We first benchmark and compare the training/evaluation speed of the above three variants and Diffusion-QL. We choose walker2d-medium-expert-v2 as the testbed. For training speed, we run each algorithm for 10,000 iterations of policy updates and calculate the corresponding iterations-per-second (IPS). Similarly, we sample 10,000 transitions by interacting with the environment and calculate the corresponding steps-per-second (SPS) for evaluation speed. Based on the visualization in Fig. 2, by taking DQL (JAX) as the baseline, we are able to attribute the performance boost to specific techniques proposed. Specifically, we can observe that action approximation makes 2.3x training and 3.3x sampling faster, while using the DPM-Solver adds an additional 2.3x training speedup. We can observe that DQL (JAX) is \(5\times\) faster than Diffusion-QL, which means our Jax implementation is more computationally efficient than Diffusion-QL's PyTorch code. This demonstrates that both the action approximation technique and DPM-Solver play a critical role in making the training of diffusion policy efficient. However, this technique does not affect the sampling procedure; thus, _EDP w/o DPM_ and DQL (JAX) are on par with each other regarding sampling speed.
Figure 2: Training and evaluation speed comparison. The training IPS are: \(4.66,22.30,50.94,38.4\), and \(116.21\). The sampling SPS are: \(18.67,123.70\), \(123.06\), \(411.0\), \(411.79\).
To show that EDP does not hurt performance, we compare the normalized scores of all tasks in Tab. 1. The results for Diffusion-QL are directly copied from the paper, where each task is carefully tuned with its own hyperparameters. Instead, DQL (JAX) and EDP use the same hyperparameters for tasks belonging to the same domain. Moreover, since EDP is more computationally efficient, we can train a better score model with large \(K=1000\), which is larger than the one (5\(\sim\)100) used in Diffusion-QL. Note that training diffusion policies with large K is impossible in Diffusion-QL, as it needs to forward and backward the neural network K times. Finally, we can observe that EDP and DQL (JAX) are comparable with Diffusion-QL on gym-locomotion tasks but much better on the other three domains. Hence, efficient diffusion policy can boost sample efficiency and improve performance by enabling diffusion training in fine-grained noise prediction.
### Generality and Overall Results
This section aims to evaluate whether EDP is a general policy class. To this end, we train it with direct policy optimization (TD3) and likelihood-based policy optimization (CRR, IQL) methods. Then, we compare them with their feed-forward counterparts and other baseline methods in Table 2. All scores for diffusion policies are reported using the RAT metric, while other scores are directly quoted from their papers. It shows that EDP can beat the standard Gaussian policy parameterized with an MLP on all domains and for all three algorithms considered. On the gym-locomotion domain, EDP + TD3 gives the best performance (average score 85.5), while likelihood-based policy learning methods are slightly worse. However, on the other three domains (Kitchen, Adroit, and Antmaze), EDP + IQL beats all the other methods by a large margin (more than 10 average scores). Therefore, we conclude that EDP can serve as a plug-in policy class for different RL methods.
**Evaluation Metrics** To show that the evaluation metric matters, we train EDP with the TD3 algorithm on three selected environments: walker2d-medium-expert-v2, hopper-medium-expert-v2, and antmaze-medium-diverse-v0. We then compare the scores for OMS (best) and RAT (average) by plotting the training curves in Fig. 3. On walker2d, the training is stable; thus, both OMS and RAT scores steadily grow and result in close final scores. A similar trend can be observed on the hopper but with a more significant gap between these two metrics. However, these two metrics diverge significantly when the training succeeds and then crashes on antmaze. Therefore, OMS is misleading and cannot give a reliable evaluation of algorithms, which explains the necessity of using RAT in Sec. 5.2.
**Energy-Based Action Selection** We notice that energy-based action selection (EAS) is a general method and can also be used for arbitrary policies. We apply EAS to normal TD3+BC and find no improvement, which shows EAS is only necessary for diffusion sampling. The results are deferred to the Appendix. Moreover, we vary the number of actions used in EAS from 1 to 200 and report the performance on gym-locomotion tasks in Fig. 4. It shows the normalized score monotonically grows as the number of actions increases on 8 out of 9 tasks. In our main experiments, we set the number of actions to 10 by trading off the performance and computation efficiency.
**DPM-Solver** We use DPM-Solver to speed up the sampling process of diffusion policies. The number of model calls in DPM-Solver is an important hyper-parameter that affects sampling efficiency and quality. We vary this number from 3 to 30 and compare the performance on gym-locomotion tasks in Fig. 5. We can observe that the performance increases as more model calls are used. The performance gradually plateaus after 15 model calls. Therefore, we use 15 in our main experiments.
## 6 Conclusion
Diffusion policy has emerged as an expressive policy class for offline reinforcement learning. Despite its effectiveness, diffusion policy is limited by two drawbacks, hindering it from wider applications. First, training a diffusion policy requires forward and backward passes through a long parameterized Markov chain, which is computationally expensive. Second, the diffusion policy is a restricted policy class that cannot work with likelihood-based RL algorithms, which are preferred in many scenarios. We propose efficient diffusion policy (EDP) to address these limitations and make diffusion policies faster, better, and more general. EDP relies on an action approximation to construct actions from corrupted ones, thus avoiding running the Markov chain for action sampling at training. Our benchmarking shows that EDP achieves 25\(\times\) speedup over Diffusion-QL at training time on the gym-locomotion tasks in D4RL. We conducted extensive experiments by training EDP with various offline RL algorithms, including TD3, CRR, and IQL; the results clearly justify the superiority of diffusion policies over Gaussian policies. As a result, EDP sets a new state-of-the-art on all four domains in D4RL.
Figure 4: Performance of different number of actions used in EAS. The experiments are conducted on the nine locomotion tasks.
Figure 5: Performance of DPM-Solver with varying steps. The experiments are conducted on the nine locomotion tasks.
Figure 3: Training curves for EDP + TD3 on three representative environments. Average represents RAT; Best represents OMS. | Offline reinforcement learning (RL) aims to learn optimal policies from offline datasets, where the parameterization of policies is crucial but often overlooked. Recently, Diffusion-QL significantly boosted the performance of offline RL by representing a policy with a diffusion model, whose success relies on a Markov chain with hundreds of sampling steps. However, Diffusion-QL comes with two critical limitations. 1) It is computationally inefficient to forward and backward through the whole Markov chain during training. 2) It is incompatible with maximum likelihood-based RL algorithms (e.g., policy gradient methods). Therefore, we propose efficient diffusion policy (EDP) as a solution. EDP approximately constructs actions from corrupted ones at training to avoid running the sampling chain.
2309.11189 | Increasing Ticketing Allocative Efficiency Using Marginal Price Auction
Theory | Most modern ticketing systems rely on a first-come-first-serve or randomized
allocation system to determine the allocation of tickets. Such systems have
received considerable backlash in recent years due to their inequitable allotment
and allocative inefficiency. We analyze a ticketing protocol based on a
variation of the marginal price auction system. Users submit bids to the
protocol based on their own utilities. The protocol awards tickets to the
highest bidders and determines the final ticket price paid by all bidders using
the lowest winning submitted bid. Game theoretic proof is provided to ensure
the protocol more efficiently allocates the tickets to the bidders with the
highest utilities. We also prove that the protocol extracts more economic rents
for the event organizers and the non-optimality of ticket scalping under
time-invariant bidder utilities. | Boxiang Fu | 2023-09-20T10:23:39 | http://arxiv.org/abs/2309.11189v1 | # Increasing Ticketing Allocative Efficiency Using Marginal Price Auction Theory
###### Abstract
Most modern ticketing systems rely on a first-come-first-serve or randomized allocation system to determine the allocation of tickets. Such systems have received considerable backlash in recent years due to their inequitable allotment and allocative inefficiency. We analyze a ticketing protocol based on a variation of the marginal price auction system. Users submit bids to the protocol based on their own utilities. The protocol awards tickets to the highest bidders and determines the final ticket price paid by all bidders using the lowest winning submitted bid. Game theoretic proof is provided to ensure the protocol more efficiently allocates the tickets to the bidders with the highest utilities. We also prove that the protocol extracts more economic rents for the event organizers and the non-optimality of ticket scalping under time-invariant bidder utilities.
## 1 Introduction
Current ticket allocation systems used by most major ticketing websites operate on a first-come-first-serve or randomized allocation basis. Such a system has caused considerable backlash over recent years due to its opaque criteria for allocation and the need to compete for who can refresh the ticketing webpage the fastest in the milliseconds after tickets are released for sale (see Ref. [1]). Economically, current systems are also largely inefficient in allocating the tickets to the consumers with the highest utility for the tickets, thereby resulting in a loss in total allocative efficiency.
We propose a ticketing protocol based on the marginal price auction system. The protocol allocates the tickets to the bidders with the highest bids and the price paid by all bidders is the lowest winning submitted bid. The protocol provably increases the total allocative efficiency compared to current allocation systems by assigning the tickets to the group of consumers with the highest utility. We also prove that the proposed system increases the economic rents extracted for the seller as well as offering a partial solution to the ticket scalping problem by proving that rational bidders with time-invariant utilities will refrain from buying scalped tickets.
Protocol Description
We begin by briefly summarizing ticketing systems based on a first-come-first-serve protocol (see Ref. [2]). Prior to the tickets going on sale, the seller publicly announces a time at which the bulk of the tickets are available for purchase. Users typically enter into the ticketing webpage prior to the tickets going on sale and compete on refreshing the webpage immediately after the ticket sale time commences. Users are then served based on their chronological time-stamp registered with the ticketing webpage. The tickets are progressively sold until the allotment has been exhausted or until all users wishing to purchase has been served. Fig. 1 briefly outlines the timeline of a first-come-first-serve ticketing system.
Such a system is inefficient both in terms of time and allocation. Most first-come-first-serve systems require the user to be physically on the webpage waiting in the queue to be able to participate in the allocation, with queuing time possibly taking hours for large events (see Ref. [1] for the case of Taylor Swift's 2023 Australian tour). Economically, the system is also not allocative efficient in most cases. In the common case where demand exceeds supply, the first-come-first-serve system allocates tickets based on chronological ordering, and potentially leaves many buyers with higher utility without an allocation (see Fig. 5 and Example 1).
We propose an alternative system for ticket allocation based on the marginal price auction system. The system is a multi-unit generalization of the Vickrey auction system (see Ref. [3]). In a marginal price auction, a fixed number of units of a homogeneous commodity is put forward for auction. Bidders submit bids for the units via a (usually) sealed-bid auction. The auctioneer allocates the units to the bidders with the highest bids until the allocation is exhausted. The price paid on each unit for all bidders is the lowest winning submitted bid (see Fig. 2). The marginal price auction system has some particularly useful game theoretic properties that are explored in the next section. For now, we outline our proposed ticket allocation mechanism.
The timeline of our proposed marginal price ticket allocation system is outlined in Fig. 3. Instead of publicly announcing a ticket sale commencement time, the seller instead announces a time window for bid submission. During this window, bidders are free to submit bids for one or more tickets. Collateral may be taken to ensure the bid is genuine. A price floor may also be optionally implemented by the seller so that only bids exceeding the floor are accepted. Once the time window elapses, bidding is closed and all outstanding bids are entered into the auction. A marginal price auction system ranks the bids according to their monetary amount and allocates tickets to the highest bids until the allocation is exhausted. The price paid is determined by the
Figure 1: Timeline of Key Steps in a First-Come-First-Serve Ticketing System
lowest winning submitted bid. Tickets are then released to the successful bidders with a requirement to pay the ticket price within a set timeframe and any excess collateral or rebates is released back to the bidders.
The protocol for the marginal price allocation mechanism is summarized in Fig. 4. After the bidding window is opened, users are first required to validate their identities if they have not done so prior. This entails signing up to the protocol so that an unique identifier can be attributed to the user (see Ref. [4]). For users wishing to bid multiple units, multiple identifiers should be provided by the user. These should ideally be the identities of the individuals hoping to attend the event. Such identification is crucial to allow us to associate a user submitting multiple bids as a proxy for multiple natural persons submitting multiple one unit bids. This allows us to ensure the validity of Theorem 1 and also reduce potential malicious activity such as intentionally bidding in large quantities by ticket scalpers to reduce overall available supply.
Once user identification is validated, bids may be submitted through the protocol and bids exceeding the price floor are entered into the central database. Ideally, collateral equalling 100% of
Figure 3: Timeline of Key Steps in a Marginal Price Ticketing System
Figure 2: Ticket Allocation and Pricing in a Marginal Price Auction System
the bid amount should also be posted concurrently with the bid to ensure the bid is genuine. This may be relaxed to cover less than 100% if additional guarantees can be put in place to ensure the bid is honest (e.g. the number of times the user has bid, the number of verified identities associated with the user, etc). This step can also provide useful information to the event organizers to gauge the popularity of the event. If the number of submitted bids greatly exceeds capacity, it could allow organizers to schedule additional shows to increase supply.
Next, the event organizers may optionally choose to disclose an indicative final price prior to the end of the bidding window to stimulate bidding. This could be as rudimentary as determining the lowest winning bid of all the submitted bids up until this time. However, since the auction is no longer sealed-bid, its dynamics may be affected and the optimal bidding strategy may not be the one proven in Theorem 1.
Once the bidding window elapses, the bidding webpage closes and the protocol no longer accepts incoming bids. The protocol then initiates a marginal price auction on all outstanding bids (see Algorithm 1). Bids are ranked in descending price order and tickets are allocated to the highest bids until the ticket allocation is exhausted, and the price of all tickets is determined by the lowest winning bid. In the case of multiple bids at the lowest winning bid price, a randomized lottery or
Figure 4: Description of Steps in a Marginal Price Ticketing Protocol
the chronological order of the bids may be used to allocate the remaining tickets.
After the auction is executed, the tickets are released to the successful bidders and any excess collateral is released. If the collateral amount is less than the final ticket price, the bidder may be required to pay the remaining amount within a predetermined settlement period. Optionally, a rebate (both monetary and/or non-monetary) could be distributed to the winning bidders after the auction should the final settlement price greatly exceed the original price floor ticket price. Its rationale is explained in the next section.
```
Require: \(\mathbf{b}=(b_{1},b_{2},\ldots b_{i},\ldots b_{N})\)   \(\triangleright\) Submitted bids in chronological order
Require: \(m\)   \(\triangleright\) Price floor
Require: \(K\)   \(\triangleright\) Number of available tickets
if \(N\leq K\) then
    return User identifiers of \(\mathbf{b}\) and ticket price \(m\)
else if \(N>K\) then
    \(\mathbf{c}\leftarrow\) DescendingSort(\(\mathbf{b}\))
    return User identifiers of \(c_{i}\) with \(i\leq K\) and ticket price \(c_{K}\)
end if
```
**Algorithm 1** Marginal Price Ticket Auction
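A minimal Python sketch of Algorithm 1 is given below. It assumes each submitted bid is a (user identifier, amount) pair that already meets the price floor, and it breaks ties at the final price by chronological order via a stable sort; the lottery variant mentioned above is omitted.

```
# A sketch of Algorithm 1. Bids are (user_id, amount) pairs in chronological
# order and are assumed to already meet the price floor m.
def marginal_price_auction(bids, price_floor, num_tickets):
    if len(bids) <= num_tickets:                                  # N <= K: all bids win at the floor
        return [uid for uid, _ in bids], price_floor
    ranked = sorted(bids, key=lambda bid: bid[1], reverse=True)   # DescendingSort(b), stable
    winners = ranked[:num_tickets]                                # top-K bids win
    return [uid for uid, _ in winners], winners[-1][1]            # price = lowest winning bid c_K
```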
## 3 Properties and Proofs
A marginal price auction system has a number of nice game theoretic properties that allow the system to more efficiently allocate tickets based on the users' individual valuations. In essence, the marginal price auction system allocates the tickets to the group with the highest utility for the tickets, as opposed to a first-come-first-serve allocation in conventional ticketing systems. First, we prove that for rational bidders with demand for only one ticket, the optimal strategy for each bidder is to bid their true value of the item. From this, we show that the marginal price auction system extracts economic rents for the seller that are greater than or equal to the rents extracted from the first-come-first-serve system. We also show that the total valuation of successful bidders from the marginal price auction system is greater than or equal to the total valuation of the successful bidders from the first-come-first-serve system. This increases allocative efficiency and allots the limited number of tickets available to the group of bidders with the highest valuations. Finally, we show that the system offers a partial solution to the ticket scalping problem by proving that it is not optimal to buy from scalpers in the case of time-invariant bidder valuations.
The first theorem is a standard result of marginal price auction systems found in most auction theory textbooks. The exposition used here is based on a variation of the proof found in Ref. [5]. Throughout this section we assume that each bidder has demand for one ticket only. This is a valid assumption in the case of event ticketing problems as one person can only maximally enjoy one unit of the ticket by being physically present at the event. We relax this one ticket assumption in the protocol implementation description by introducing an identity verification mechanism so that a user submitting multiple bids can be regarded as a proxy for multiple natural persons submitting multiple one unit bids. For ease of exposition we regard users that bid at exactly the final price as losing the bid (i.e. they are left without a ticket). For physical implementation purposes, a randomization
procedure may be used so that all bidders who bid at exactly the final price are entered into a lottery and a subset is randomly chosen to be allocated the remaining tickets.
**Theorem 1**.: _In a marginal price auction with single-unit bidder demand, the optimal strategy for all bidders is to bid their own true valuation._
Proof.: Let \(N\) denote the number of bidders in the auction and \(K\) denote the number of available units with \(N>K\). Also, let \(v_{i}\) denote bidder \(i\)'s valuation for one unit of the item, \(b_{i}\) denote bidder \(i\)'s submitted single-unit bid for the item, and let \(\mathbf{c}=(c_{1},c_{2},\ldots c_{i},\ldots c_{N})\) denote the \(N\)-vector of submitted bids by the \(N\) bidders arranged in descending price order (similarly \(\mathbf{c}^{-i}\) is the descending order bid vector without bid \(i\)).
The final price set by the marginal price auction is given by the lowest winning bid at
\[p=c_{K}\]
The payoff to bidder \(i\) is given by the payoff function
\[P_{i}(v_{i},\mathbf{c})=\begin{cases}v_{i}-p&\text{if }b_{i}>p\\ 0&\text{otherwise}\end{cases}\]
We claim that \(b_{i}=v_{i}\). Suppose by contradiction that \(b_{i}>v_{i}\). We have the following cases:
_Case 1_: \(p\geq b_{i}>v_{i}\). Bidder \(i\) loses the auction and receives a payoff of 0 regardless of their action.
_Case 2_: \(b_{i}>p\geq v_{i}\). The payoff to bidder \(i\) is \(P_{i}(v_{i},\mathbf{c})=v_{i}-p\leq 0\) and is weakly dominated by the alternate strategy \(\tilde{b_{i}}=v_{i}\) with payoff \(P_{i}(v_{i},\mathbf{c}^{-i},\tilde{b_{i}})=0\).
_Case 3_: \(b_{i}>v_{i}>p\). Since both \(b_{i}>p=c_{K}\) and \(v_{i}>p=c_{K}\), it makes no difference bidding at \(b_{i}\) or \(v_{i}\) as it only permutes the location of bidder \(i\)'s bid in the first \(K-1\) places of vector \(\mathbf{c}\). So bidder \(i\) wins the bid regardless and pays the same price \(p=c_{K}\).
The three exhaustive cases shows that the strategy \(b_{i}>v_{i}\) is weakly dominated by the strategy \(\tilde{b_{i}}=v_{i}\). Next, suppose that \(b_{i}<v_{i}\). We have the following cases:
_Case 1_: \(p\geq v_{i}>b_{i}\). Bidder \(i\) loses the auction and receives a payoff of 0 regardless of their action.
_Case 2_: \(v_{i}>p\geq b_{i}\). The payoff to bidder \(i\) is \(P_{i}(v_{i},\mathbf{c})=0\) and is weakly dominated by the alternate strategy \(\tilde{b_{i}}=v_{i}\) with payoff
\[P_{i}(v_{i},\mathbf{c}^{-i},\tilde{b_{i}})=\begin{cases}v_{i}-\tilde{p}&\text {if }v_{i}>c_{K-1}\\ 0&\text{otherwise}\end{cases}\]
where \(\tilde{p}=c_{K-1}\) is now the lowest winning bid due to the insertion of bid \(\tilde{b_{i}}\) into the first \(K-1\) slots of \(\mathbf{c}\).
_Case 3_: \(v_{i}>b_{i}>p\). As with the previous _Case 3_, bidder \(i\) wins the bid regardless and pays the same price \(p=c_{K}\).
Thus, both strategies \(b_{i}>v_{i}\) and \(b_{i}<v_{i}\) are weakly dominated by \(\tilde{b}_{i}=v_{i}\). We conclude that the optimal bidding strategy for bidder \(i\) is to bid their own true valuation.
The theorem above is not true in general if bidders have demand for more than one unit (see Ref. [5]). Hence, an identity verification mechanism is needed so that we regard a user submitting multiple bids as a proxy for multiple natural persons. The mechanism effectively allows the seller to circumvent determining the pricing of the tickets based on imperfect information and instead rely on the marginal price auction mechanism to allow bidders to reveal their own reservation price through the bidding process. The theorem above guarantees that rational bidders will reveal their own willingness-to-pay during the bidding process and disclose this information to the seller. The mechanism also allows the seller to extract more economic rents than the first-come-first-serve system, which we will prove below. We also impose a price floor that bids must exceed to be allocated a ticket. This is typical in most modern ticketing systems (it is just the ticket price in first-come-first-serve systems).
**Theorem 2**.: _In a marginal price auction with single-unit bidder demand and price floor, the economic rents extracted is greater than or equal to the economic rents extracted from a first-come-first-serve system._
Proof.: Let \(\mathbf{c}=(c_{1},c_{2},\ldots c_{i},\ldots c_{N})\) denote the \(N\)-vector of submitted bids by the \(N\) bidders arranged in descending price order. Let the price floor be denoted by \(\$m\) and \(K\) denote the number of units available to bid with \(N>K\). We have the following cases:
_Case 1_: \(c_{K}\geq m\). There is enough demand above the price floor to exhaust the supply of \(K\) units available to bid. The economic rents obtained by the first-come-first-serve system are given by \(mK\) (allocated to the first \(K\) bidders with bids exceeding the price floor in chronological order), while the economic rents obtained by the marginal price auction are given by \(c_{K}K\). Since \(c_{K}\geq m\), we have \(c_{K}K\geq mK\).
_Case 2_: \(c_{K}<m\). There is not enough demand above the price floor to exhaust the supply of \(K\) units available to bid. The price floor ensures that only the \(k<K\) bidders with \(c_{1},c_{2},\ldots c_{k}\geq m\) are allocated at price \(\$m\) and \(K-k\) units are left unallocated. The economic rents extracted is \(mk\) for both systems.
From the two cases, we conclude that a marginal price auction extracts economic rents that are greater than or equal to those extracted from a first-come-first-serve system.
Below we provide two simple examples of the different economic rents extracted by the two systems.
**Example 1**.: Let the number of bidders be \(N=6\) and the number of units available to bid be \(K=3\). Let the price floor be \(m=20\) with the chronological bid vector \(\mathbf{b}=(35,15,40,20,25,20)\). The descending price vector is then \(\mathbf{c}=(40,35,25,20,20,15)\).
The first-come-first-serve system sets the ticket price at \(m=20\) and the successful bidders are the 1st, 3rd, and 4th entries in the chronological bid vector \(\mathbf{b}\). The economic rents extracted for the seller is \(3\times 20=60\).
The marginal price auction system sets the ticket price at \(c_{3}=25\) and the successful bidders entered into the auction in the chronological order of 1st, 3rd, and 5th. The economic rents extracted for the seller is \(3\times 25=75\). The excess economic rents extracted amounts to $15 and the 4th chronologically-ordered bidder would no longer be successful in the auction.
**Example 2**.: Let the number of bidders be \(N=6\) and the number of units available to bid be \(K=3\). Let the price floor be \(m=30\) with the chronological bid vector \(\mathbf{b}=(35,15,40,20,25,20)\). The descending price vector is then \(\mathbf{c}=(40,35,25,20,20,15)\).
Both systems set the ticket price at the price floor \(m=30\) and the successful bidders are the 1st and 3rd entries in the chronological bid vector \(\mathbf{b}\). The economic rents extracted for both systems is \(2\times 30=60\). In this scenario, the seller may consider lowering the price floor prior to the bidding window closing to allow enough bids to exceed the price floor so that all units are allocated.
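To make the allocation rule concrete, the following is a minimal Python sketch (not part of the original protocol description; the function and variable names are illustrative) of marginal price allocation with a price floor. Run on the bid vector used in Examples 1 and 2, it reproduces the prices and rents computed above.

```python
def marginal_price_allocation(bids, K, floor):
    """Allocate K units among chronologically ordered `bids` with a price floor.

    Only bids >= floor are eligible. If at least K bids are eligible, the
    settlement price is the K-th highest eligible bid; otherwise it is the floor.
    Returns (price, winner_indices, economic_rents).
    """
    eligible = [(b, i) for i, b in enumerate(bids) if b >= floor]
    eligible.sort(key=lambda t: t[0], reverse=True)   # descending price order
    winners = eligible[:K]
    price = winners[-1][0] if len(winners) == K else floor
    return price, [i for _, i in winners], price * len(winners)

bids = [35, 15, 40, 20, 25, 20]
print(marginal_price_allocation(bids, K=3, floor=20))  # price 25, rents 75 (Example 1)
print(marginal_price_allocation(bids, K=3, floor=30))  # price 30, rents 60 (Example 2)
```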
The next theorem shows that the marginal price auction system has higher allocative efficiency compared to the first-come-first-serve system (see Fig. 5).
**Theorem 3**.: _Assuming single-unit bidder demand and price floor, the sum of the valuations of the successful bidders in a marginal price auction system is greater than or equal to the sum of the valuations of successful bidders in a first-come-first-serve system._
Figure 5: Ticket Allocation of a Marginal Price Auction System (L) and a First-Come-First-Serve System (R)
Proof.: Let \(N\) denote the number of bidders in the auction. Let the price floor be denoted by \(\$m\) and \(v_{i}\) denote bidder \(i\)'s valuation for one unit of the item. Let \(\mathbf{b}^{b_{i}\geq m}=(b_{1},b_{2},\ldots b_{i},\ldots b_{k})\) denote the \(k\)-vector of submitted bids that exceed the price floor arranged in chronological order with \(k\leq N\), and let \(\mathbf{c}^{b_{i}\geq m}=(c_{1},c_{2},\ldots c_{i},\ldots c_{k})\) be the sorted \(\mathbf{b}^{b_{i}\geq m}\) vector in descending price order.
The marginal price auction system allocates the units based on the leading entries of vector \(\mathbf{c}^{b_{i}\geq m}\) while the first-come-first-serve system allocates units based on the leading entries of vector \(\mathbf{b}^{b_{i}\geq m}\). Since \(\mathbf{c}^{b_{i}\geq m}\) is sorted based on descending price order, the sum of its leading entries is greater than or equal to the sum of the leading entries of \(\mathbf{b}^{b_{i}\geq m}\). From Theorem 1, we know that the optimal bidding strategy is \(b_{i}=v_{i}\). Hence the sum of the bids is equal to the sum of the valuations. Thus, the sum of the valuations of the successful bidders in a marginal price auction system is greater than or equal to the sum of the valuations of successful bidders in a first-come-first-serve system.
While the marginal price auction system does improve overall allocative efficiency, it nevertheless erodes consumer surplus and redistributes the surplus to the sellers (see Fig. 6). To ensure that consumers still enjoy some benefits of switching to the marginal price ticketing system, a welfare transfer in the form of a rebate and/or excess collateral return mechanism may be implemented if the final settlement price greatly exceeds the original price floor ticket price (see Fig. 7). Non-monetary rebates (e.g. merchandise) may also be distributed if there is perceived value by the bidders. It is important to note that this rebate must be done after the auction has taken place, and should not occur so frequently as to change the expectations of the bidders. Changing expectations will result in deviations in the optimal strategy of bidders, and could render Theorem 1 invalid.
Finally, we prove that it is not optimal to buy from ticket scalpers in the case of time-invariant bidder valuations.
**Theorem 4**.: _If individual valuations are time-invariant, then it is not optimal for bidders to buy from ticket scalpers after an unsuccessful bid._
Proof.: Let \(p\) be the final price set by the marginal price auction and let \(v_{i}\) denote bidder \(i\)'s valuation for one ticket. If bidder \(i\) is unsuccessful in the auction, by Theorem 1, the individual valuation is less than the final price (\(v_{i}<p\)). For economically rational ticket scalpers, the scalping price \(\tilde{p}\) is given by \(\tilde{p}\geq p\). Assuming individual valuations are time-invariant, we have \(v_{i}<p\leq\tilde{p}\). So bidder \(i\)'s valuation of the ticket is below the scalping price, and the bidder is better off not buying the ticket from the scalper.
Theorem 4 is particularly relevant for event ticketing purposes as it partially solves the ticket scalping problem. Although not necessarily a negative externality in the economics sense as ticket scalpers do serve a purpose to equilibrate limited supply with demand, it is nevertheless regarded as socially unacceptable and banned in most countries due to the erosion of consumer surplus (for the case of Australia, see Ref. [6]). The marginal price auction mechanism partially solves this as bidders with time-invariant valuations will refrain from purchasing tickets from scalpers. Therefore, individuals that could potentially buy from scalpers are restricted to the subset of bidders that have
time-varying valuations, new bidders that did not participate in the original auction and/or bidders who may wish to obtain better seating for the event.
## 4 Simulation
We provide a simple simulation of the marginal price auction system summarized in Table 1. The simulation assumes three scenarios covering small, medium, and large events with capacity \(K=100\), \(1000\), and \(10000\) respectively. We also assume a price floor of \(m=100\), with the number of bidders equal to \(1.5\times K\) and valuations drawn from the normal distribution N(\(\mu=125\), \(\sigma=25\)) (see Ref. [7]). The emphasis here is not on these assumptions, whose calibration is best left to the econometricians. Here we focus on key distinctive features of the marginal price ticket allocation system.
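For reference, a minimal Python sketch of this simulation is given below. It is an assumption about the implementation (the simulation code itself is not shown in the text): valuations are drawn from N(125, 25), bids are truthful as implied by Theorem 1, and chronological order is taken to be the order in which valuations are drawn.

```python
import numpy as np

def simulate(K, floor=100.0, mu=125.0, sigma=25.0, seed=0):
    rng = np.random.default_rng(seed)
    bids = rng.normal(mu, sigma, size=int(1.5 * K))    # truthful bids, chronological order
    eligible = bids[bids >= floor]
    # marginal price auction: top-K eligible bids, price = lowest winning bid (or floor)
    if eligible.size >= K:
        winners_mpa = np.sort(eligible)[::-1][:K]
        price = winners_mpa[-1]
    else:
        winners_mpa, price = eligible, floor
    # first-come-first-serve: first K eligible bids in chronological order, price = floor
    winners_fcfs = eligible[:K]
    return {"mpa_rents": price * winners_mpa.size,
            "mpa_valuation": winners_mpa.sum(),
            "fcfs_rents": floor * winners_fcfs.size,
            "fcfs_valuation": winners_fcfs.sum()}

for K in (100, 1000, 10000):
    print(K, simulate(K))
```

In line with Theorems 2 and 3, the marginal price rents and the total winner valuation produced by this sketch are never below their first-come-first-serve counterparts.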
The simulation substantiates the proofs of Theorem 2 and Theorem 3. We see an increase in both economic rents extracted and total bidder valuation from the marginal price auction system as compared to the first-come-first-serve system. However, we also see an erosion of consumer surplus due to the need to pay a higher ticket price. It may become socially unacceptable for the ticket price to be substantially above the price floor. In such cases, a rebate mechanism should be used to redistribute the surplus back to the consumers. Overall, the simulation shows that total allocative efficiency is increased by using the marginal price auction system.
Figure 6: Consumer and Seller Surplus of a Marginal Price Auction System (L) and a First-Come-First-Serve System (R)
## 5 Conclusion
Through this paper, we have analyzed a ticketing protocol based on the marginal price auction system. During the bidding window, bidders can submit bids for the tickets and post collateral. The protocol allocates the tickets to the highest bids and the ticket price is determined by the lowest winning bid. Tickets are then released to the successful bidders with a requirement to pay within a specified timeframe, and collateral is given back to all bidders. We also proved that the mechanism allows for a more allocatively efficient ticketing system. Additionally, more economic rents can be obtained by the event organizers, and we showed that it is not optimal for bidders to buy from ticket scalpers under time-invariant valuations. Finally, we provide a simple simulation to substantiate our proofs.
Figure 7: Welfare Transfer from Sellers to Consumers from Rebate | 現代のチケットシステムは、チケットの配分を決定する際に、先着順の分配またはランダムな分配システムを採用しています。近年、このようなシステムには、不公平な分配と分配効率の悪さという批判が多く寄せられています。このシステムに基づいたチケットプロトコルを分析します。ユーザーは、独自の利潤に基づいて、プロトコルに bid を提出します。プロトコルは、最も高い Bid を受け払い、すべての Bidder が支払う最終的なチケット価格を決定します。ゲーム理論的な証明は、プロトコルがより効率的にチケットを Bidder に配分するのを助けます。また、このプロトコルは、イベント主催者と非最適なチケット売買を証明します。 |
2305.19600 | Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous
Federated Learning | Federated Learning (FL) is a machine learning paradigm that enables clients
to jointly train a global model by aggregating the locally trained models
without sharing any local training data. In practice, there can often be
substantial heterogeneity (e.g., class imbalance) across the local data
distributions observed by each of these clients. Under such non-iid data
distributions across clients, FL suffers from the 'client-drift' problem where
every client drifts to its own local optimum. This results in slower
convergence and poor performance of the aggregated model. To address this
limitation, we propose a novel regularization technique based on adaptive
self-distillation (ASD) for training models on the client side. Our
regularization scheme adaptively adjusts to the client's training data based on
the global model entropy and the client's label distribution. The proposed
regularization can be easily integrated atop existing, state-of-the-art FL
algorithms, leading to a further boost in the performance of these
off-the-shelf methods. We theoretically explain how ASD reduces client-drift
and also explain its generalization ability. We demonstrate the efficacy of our
approach through extensive experiments on multiple real-world benchmarks and
show substantial gains in performance over state-of-the-art methods. | M. Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty | 2023-05-31T07:00:42 | http://arxiv.org/abs/2305.19600v3 | # Federated Learning on Heterogeneous Data via Adaptive Self-Distillation
###### Abstract
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients. Under such non-iid data distributions across clients, FL suffers from the 'client-drift' problem where every client converges to its own local optimum. This results in slower convergence and poor performance of the aggregated model. To address this limitation, we propose a novel regularization technique based on adaptive self-distillation (_ASD_) for training models on the client side. Our regularization scheme adaptively adjusts to the client's training data based on: (i) the closeness of the local model's predictions with that of the global model, and (ii) the client's label distribution. The proposed regularization can be easily integrated atop existing, state-of-the-art FL algorithms leading to a further boost in the performance of these off-the-shelf methods. We demonstrate the efficacy of our proposed FL approach through extensive experiments on multiple real-world benchmarks (including datasets with common corruptions and perturbations) and show substantial gains in performance over the state-of-the-art methods. The code is provided in the supplementary.
Federated Learning, Knowledge Distillation, Heterogeneity & Generalization
## I Introduction
Federated Learning (FL) is a machine learning method where the clients learn a shared model under the orchestration of the server without sharing the client's data. Due to its privacy-preserving nature, it has many applications in smartphones [1], the Internet of Things (IoT), healthcare and insurance organizations [2], where training data is generated at edge devices or from privacy-sensitive domains. As originally introduced in [3], FL involves model training across an architecture consisting of one server and multiple clients. In traditional FL, each client securely holds its training data due to privacy concerns as well as to avoid large communication overheads while transmitting the same. At the same time, these clients aim to collaboratively train a generalized model that can leverage the entirety of the training data disjointly distributed across all these clients and thereby attain accuracy comparable to a centrally trained model.
One way to solve this problem is FedSGD, proposed in [3]. The problem with this approach is the communication cost, as it takes a large number of rounds to converge. To minimize the communication cost and ensure faster convergence, they introduce the Federated Averaging (FedAvg) algorithm [3]. FedAvg shows excellent convergence when the data is independent and identically distributed (iid) or non-heterogeneous across the clients. But it has slower convergence and poor performance in non-iid or heterogeneous settings.
Data generated at the edge/client devices are often highly heterogeneous, as a consequence of the data generation process. They can differ in terms of quantity imbalance (the number of samples with each client), label imbalance (empirical label distribution across the clients), and feature imbalance (features of the data across the clients are non-iid). When there exists a label or feature imbalance, the objective for every client becomes different as the local minimum for every client will be different. In such settings, during the local training, the client's model starts to drift towards its own local minimum and farther away from the global objective. This is undesirable as the goal of the FL is to converge to a global model that generalizes well across all the clients. This phenomenon, known as 'client-drift' is introduced and explored in earlier works [4, 5].
One popular way of mitigating this challenge of non-iid data distribution owing to label imbalance across clients is via client-side regularization. Here, the client models during local training are explicitly regularized with the global model parameters to minimize client drift. Algorithms such as FedProx [6], SCAFFOLD [4] and FedDyn [5] use regularization at the client-side in the parameter space. But they ignore the representations of the global model, which can be useful and are explored in distillation-based works in recent literature. The authors of [7] introduce the class-wise adaptive weighting scheme FedCAD at the server side. FedCAD relies on the server and the presence of auxiliary data for computing the class-wise weights. Another recent work [8] proposes FedNTD by distilling only the non-true class predictions.
The primary motivation behind the distillation and regularization works in the context of FL is that the global model will have better representations than the local models. By introducing regularization on the client-side, we make the client models remain in proximity to the global model. This leads to faster convergence, i.e., obtaining the desired accuracy with minimum communication rounds. This is particularly helpful in FL where edge devices transmit and receive model parameters in each round, and this causes communication costs on the constrained edge devices [9].
Motivated by the utility of client model regularization in mitigating client drift we propose an efficient Adaptive Self
Distillation (ASD) strategy for Federated learning. This is a distillation-based regularizer where the regularizer is adaptively adjusted based on the Kullback Leibler (KL) divergence between the global and local model predictions and the empirical label distribution of the client's data. This novel design of our proposed ASD method can be easily integrated atop any existing FL methods to result in substantial performance gain, which makes it an attractive and compelling solution to the federated learning problem. To the best of our knowledge, this is the first work where the adaptive weights are used for the distillation loss in the FL framework without requiring access to auxiliary data and without the assistance of the server. As a validation, we combine our proposed regularization scheme with the algorithms such as FedAvg [3], FedProx [6], FedDyn [5] and FedNTD [8], and refer to the enhanced methods as FedAvg+ASD, FedProx+ASD, FedDyn+ASD and FedNTD+ASD respectively.
The key contributions of this work are as follows.
* We introduce a novel regularization method, ASD, in the context of Federated Learning that alleviates the client drift problem by adaptively weighting the regularization loss for each sample based on two key factors: (i) closeness of predictions between the global model and the local model, and (ii) empirical label distribution of client data.
* Unlike prior works where regularization schemes are typically combined with the FedAvg algorithm, our proposed approach can be suitably combined with any state-of-the-art aggregation methods that further yields improvement in performance.
* We demonstrate the efficiency of our method by extensive experiments on CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets to improve the accuracy and reduce the communication cost.
* We present the theoretical analysis of the client-drift and also the empirical evidence for better generalization ability of ASD.
## II Related Work
### _Federated Learning (FL)_
In recent times, addressing non-iid problems in Federated Learning has become an active area of research, and the field is developing rapidly. For brevity, we discuss a few related works here. In FedAvg [3], the two main challenges explored are reducing communication costs with fewer rounds and ensuring privacy by avoiding having to share the data. There are some studies based on gradient inversion [10] raising privacy concerns owing to gradient sharing, while other studies have argued in defense of sharing the gradients [11, 12].
FedAvg [3] is the generalization of local SGD [13] by increasing the number of local updates, significantly reducing communication costs for an iid setting, but does not give similar improvements for non-iid data. FedProx [6] introduced a proximal term by penalizing the weights if they are far from the global initialized model. SCAFFOLD [4] viewed this problem as one of objective inconsistency and introduced a gradient correction term which acts as a regularizer. Later, FedDyn [5] improved upon this by introducing the dynamic regularization term. [14] attempts to solve this problem by enabling all device participation or none, as described by [5].
There are several papers that perform an SGD type analysis which involves the full device participation, and this breaks the important constraint in FL setup of partial device participation. Some of these attempt to compress the models to reduce the communication cost [15]. A few works include regularization methods on the client side [16], and one shot methods where clients send the condensed data and the server trains on the condensed data [17]. In [18], an adaptive weighting scheme is considered on task-specific loss to minimize the learning from samples whose representation is negligible.
### _Federated Learning using Knowledge Distillation_
Knowledge Distillation (KD), introduced by [19], is a technique to transfer the knowledge from a pre-trained teacher model to a student model by matching the predicted probabilities. Adaptive distillation was used in [20]. Server-side KD methods such as FedGen [21] use KD to train a generator at the server, and the generator is broadcast to the clients in the subsequent round. The clients use the generator to generate data that provides an inductive bias. This method incurs extra communication of the generator parameters along with the model, and training the generator is in general difficult. In FedDF [22], KD is used at the server and relies on external data. The KD is performed on an ensemble of client models; specifically, the client models act as multiple teacher models and their knowledge is distilled into a single student model, which is the global model. In FedNTD [8] the non-true class logits are used for distillation. This method gives uniform weight to all the samples. In FedCAD [7] and FedSSD [23] the client-drift problem is posed as a forgetting problem, and a weighting scheme has been proposed. Importantly, the weights are assisted by the server under the assumption that the server has access to auxiliary data. One shortcoming of this method is that auxiliary data may not be available at the server. Unlike all of these approaches, we propose a novel adaptive distillation strategy that aims to mitigate the challenge of client drift due to non-iid data without relying on the server or access to any form of auxiliary data to compute the adaptive weights.
## III Method
We first describe the federated optimization problem in general, then explain the proposed method of adaptive self-distillation (ASD) in section III-B. We provide the theoretical and empirical analysis in the sections III-C and III-D respectively.
### _Problem Setup_
We assume there is a single server/cloud and \(K\) clients/edge devices. We further assume that client \(k\) has the dataset \(\mathcal{D}_{k}\) with \(n_{k}\) training samples drawn iid from the data distribution \(\mathbb{P}_{k}(x,y)\). The data distributions \(\{\mathbb{P}_{k}(x,y)\}_{k=1}^{K}\) across the
clients are assumed to be non-iid. In this setup, we perform the following optimization [5], [3]:
\[\underset{\mathbf{w}\in\mathbb{R}^{d}}{\arg\min}\ \left(f(\mathbf{w})\triangleq\frac{1}{K}\sum_{k\in[K]}f_{k}(\mathbf{w})\right) \tag{1}\]
where \(f_{k}(\mathbf{w})\) is the client-specific objective function and \(\mathbf{w}\) denotes the model parameters. The overall FL framework is described in detail in Figure 1.
### _Adaptive Self Distillation (ASD) in FL_
We now describe the proposed method, where each client \(k\) minimizes \(f_{k}(\mathbf{w})\) as defined in Eq. (2) below.
\[f_{k}(\mathbf{w})\triangleq L_{k}(\mathbf{w})+\lambda L_{k}^{\,asd}(\mathbf{w}) \tag{2}\]
\(L_{k}(\mathbf{w})\) is given below.
\[L_{k}(\mathbf{w})=\underset{(x,y)\sim\mathbb{P}_{k}(x,y)}{E}[l_{k}(\mathbf{w};(x,y))] \tag{3}\]
Here, \(l_{k}\) is the cross-entropy loss. The expectation is computed over training samples drawn from \(\mathbb{P}_{k}(x,y)\) of a client \(k\). This is approximated as the empirical average over the samples of the dataset \(\mathcal{D}_{k}\).
\(L_{k}^{asd}(\mathbf{w})\) in Eq. 2 denotes the proposed Adaptive Self Distillation (ASD) loss term, which accounts for label imbalance and quantifies how easily the predictions of the local model can drift from the global model. The ASD loss is designed so that client models learn from the local data while not drifting too far from the global model. We define the ASD loss as follows.
\[L_{k}^{\,asd}(\mathbf{w})\triangleq\mathbb{E}[\alpha_{k}(x,y)\mathcal{D}_{ \text{KL}}(q_{g}(x,\mathbf{w}^{t})||q_{k}(x,\mathbf{w}))] \tag{4}\]
In Eq. 4 above, \(\mathbf{w}^{t}\) represents the global model at round \(t\) and \(\mathbf{w}\) represents the trainable model parameters of client \(k\). \(\alpha_{k}(x,y)\) denotes the weight for the sample \(x\) with ground-truth label \(y\). For simplicity, we denote the global model softmax predictions \(q_{g}(x,\mathbf{w}^{t})\) as \(q_{g}(x)\) and the client model softmax predictions \(q_{k}(x,\mathbf{w})\) as \(q_{k}(x)\). \(\mathcal{D}_{\text{KL}}\) is the KL divergence. Eq. 4 can be approximated over a mini-batch by the equation below.
\[L_{k}^{\,asd}(\mathbf{w})=\frac{1}{B}\sum_{i\in[B]}\alpha_{k}(x^{i},y^{i}) \mathcal{D}_{\text{KL}}(q_{g}(x^{i})||q_{k}(x^{i})) \tag{5}\]
where \(B\) is the batch size, \((x^{i},y^{i})\in\mathcal{D}_{k}\) and \(q_{g}\) and \(q_{k}\) are softmax probabilities on the temperature (\(\tau\)) scaled logits of the global model and client model \(k\) respectively. They are given in Eq. 6 and Eq. 7.
\[q_{g}^{c}(x^{i})=\frac{\exp\left(z_{g}^{c}(x^{i})/\tau\right)}{\sum_{m\in C}\exp\left(z_{g}^{m}(x^{i})/\tau\right)} \tag{6}\]
\[q_{k}^{c}(x^{i})=\frac{\exp\left(z_{k}^{c}(x^{i})/\tau\right)}{\sum_{m\in C}\exp\left(z_{k}^{m}(x^{i})/\tau\right)} \tag{7}\]
where \(z_{g}(x^{i})\), \(z_{k}(x^{i})\) are the logits predicted on the input \(x^{i}\) by the global model and client model \(k\) respectively. The index \(i\) denotes the \(i^{th}\) sample of the batch. The \(\mathcal{D}_{\text{KL}}(q_{g}(x^{i})||q_{k}(x^{i}))\) is given in Eq. (8).
\[\mathcal{D}_{\text{KL}}(q_{g}(x^{i})||q_{k}(x^{i}))=\sum_{c=1}^{C}q_{g}^{c}(x ^{i})log(q_{g}^{c}(x^{i})/q_{k}^{c}(x^{i})) \tag{8}\]
where \(C\) is the number of classes. We use the simplified notation \(\alpha_{k}^{i}\) for the distillation weights \(\alpha_{k}(x^{i},y^{i})\), given in Eq. 9 below.
Fig. 1: Federated Learning with Adaptive Self Distillation: The figure gives an overview of the proposed approach based on adaptive distillation. In **Step 1**, the server broadcasts the model parameters. In **Step 2**, clients train their models by minimizing both the cross-entropy loss and the KL divergence between the predicted class probability distributions of the global model and the client model; the importance of each sample in the batch is decided by the proposed adaptive scheme as a function of the label distribution and the KL term. The server model is fixed while training the client. In **Step 3**, the server aggregates the client models based on FedAvg aggregation. The process repeats till convergence.
\[\alpha_{k}^{i}=\frac{\hat{\alpha_{k}}^{i}}{\sum_{i\in B}\hat{\alpha_{k}}^{i}} \tag{9}\]
and \(\hat{\alpha_{k}}^{i}\) is defined in Eq. 10 below:
\[\hat{\alpha_{k}}^{i}\triangleq\exp(\beta^{kl}\mathcal{D}_{\text{KL}}(q_{g}(x^{i})||q_{k}(x^{i}))+\beta I(p_{k}^{y^{i}})) \tag{10}\]
where \(\mathcal{D}_{\text{KL}}(q_{g}(x^{i})||q_{k}(x^{i}))\) is given by (8). The \(\mathcal{D}_{\text{KL}}\) term captures how close the local model's predictions are to those of the global model. If the KL term is high, then the sample is easy to drift. Our weighting scheme increases the weights when the KL term is high, forcing the easy-to-drift samples to stay close to the global model. The presence of the KL term thus weights every sample differently depending on how close it is to the global model.
The term \(I(p_{k}^{y^{i}})\) is defined as below.
\[I(p_{k}^{y^{i}})\triangleq-log(p_{k}^{y^{i}}) \tag{11}\]
where \(p_{k}^{y^{i}}\) is the empirical label distribution, computed as in Eq. 12.
\[p_{k}^{y^{i}=c}=\frac{\sum_{i=1}^{|\mathcal{D}_{k}|}\mathbb{I}_{y^{i}=c}}{|\mathcal{D}_{k}|} \tag{12}\]
where \(\mathbb{I}_{y^{i}=c}\) denotes the indicator function, whose value is \(1\) if the label \(y^{i}\) of the \(i^{th}\) training sample is class \(c\) and \(0\) otherwise. The term \(I(p_{k}^{y^{i}})\) captures the label imbalance. Its value is higher when the probability of the label is low; when \(I(p_{k}^{y^{i}})\) is higher, the values of \(\alpha_{k}^{i}\) are also higher, giving more weight to the distillation loss. This term is specific to each client, as the label distribution is unique to each client. It ensures that the client model predictions stay close to the global model if a sample belongs to a class whose probability of occurrence is very low. To simplify notation, we use \(p_{k}^{c}\) for \(p_{k}^{y^{i}=c}\) as it depends only on the class. \(\beta\) and \(\beta^{kl}\) are hyper-parameters: \(\beta\) captures how much importance is given to label imbalance, and \(\beta^{kl}\) captures the importance given to the KL term. Finally, we use Eq. 8, Eq. 9 and Eq. 11 to compute \(L_{k}^{asd}(\mathbf{w})\) as defined in Eq. 5. The algorithm combining ASD with FedAvg is given in Algorithm 1.
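As a complement to Algorithm 1, Eqs. 5-12 translate directly into a per-batch loss computation. The following PyTorch-style sketch is one possible implementation (the function name, signature, and the choice to treat the weights \(\alpha_{k}^{i}\) as constants via `detach` are illustrative assumptions, not taken from the supplementary code).

```python
import torch
import torch.nn.functional as F

def asd_loss(global_logits, local_logits, labels, label_dist,
             tau=2.0, beta=1.0, beta_kl=1.0):
    """Adaptive self-distillation loss of Eq. (5) for one mini-batch.

    global_logits, local_logits: (B, C) logits of the frozen global model and
        the client model.  labels: (B,) ground-truth class indices.
    label_dist: (C,) empirical label distribution p_k of Eq. (12).
    """
    q_g = F.softmax(global_logits.detach() / tau, dim=1)           # Eq. (6)
    log_q_k = F.log_softmax(local_logits / tau, dim=1)             # Eq. (7)
    kl = (q_g * (torch.log(q_g + 1e-12) - log_q_k)).sum(dim=1)     # Eq. (8), per sample
    info = -torch.log(label_dist[labels] + 1e-12)                  # Eq. (11)
    alpha_hat = torch.exp(beta_kl * kl.detach() + beta * info)     # Eq. (10)
    alpha = alpha_hat / alpha_hat.sum()                            # Eq. (9)
    return (alpha * kl).sum() / kl.size(0)                         # Eq. (5)
```

The client objective of Eq. (2) is then the cross-entropy loss plus \(\lambda\) times this term.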
```
1:Server Executes
2:Initialize \(\mathbf{w}^{t}\)
3:for every communication round t in \(T\)do
4: sample random subset \(S\) of clients, \(S\subset[K]\)
5:for every client \(k\) in \(S\) in parallel do
6:\(\mathbf{w}_{k}^{t}\) = ClientUpdate(\(k\),\(\mathbf{w}^{t-1}\))
7:endfor
8:\(\mathbf{w}^{t}\) = ServerAggregation(\(\mathbf{w}_{k}^{t}\))
9:endfor
10:procedureClientUpdate(\(k\),\(\mathbf{w}^{t-1}\))
11: set \(\mathbf{w}_{k}^{t}=\mathbf{w}^{t-1}\)
12:for every epoch e in \(E\)do
13:for every batch b in \(B\)do
14: Compute \(f_{k}(\mathbf{w})\) by computing \(\alpha_{k}\), \(L_{k}\)\(\&\)\(L_{k}^{asd}\)
15:\(\mathbf{w}_{k}^{t}=\mathbf{w}_{k}^{t}-\eta\nabla f_{k}(\mathbf{w}_{k}^{t})\) (\(\eta\) is the local learning rate)
16:endfor
17:endfor
18: return \(\mathbf{w}_{k}^{t}\)
19:endprocedure
20:procedureServerAggregation(\(\mathbf{w}_{k}^{t}\))
21:\(\mathbf{w}^{t}=\sum_{k}\frac{n_{k}}{n}\mathbf{w}_{k}^{t}\)
22:endprocedure
```
**Algorithm 1** FedAvg+ASD
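The ServerAggregation step of Algorithm 1 is the standard data-size-weighted FedAvg average \(\mathbf{w}^{t}=\sum_{k}\frac{n_{k}}{n}\mathbf{w}_{k}^{t}\). A possible PyTorch-style sketch (names and structure are illustrative assumptions) is:

```python
import copy

def server_aggregate(client_states, client_sizes):
    """FedAvg aggregation: w^t = sum_k (n_k / n) * w_k^t over client state dicts."""
    n = float(sum(client_sizes))
    agg = copy.deepcopy(client_states[0])
    for key in agg:
        # note: any integer buffers are cast to float in this sketch
        agg[key] = sum((n_k / n) * s[key].float()
                       for s, n_k in zip(client_states, client_sizes))
    return agg
```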
### _Theoretical Analysis_
In this section, we perform the theoretical analysis of the client drift. We assume that \(\beta^{kl}=1\) and \(\beta=1\) in Eq. 10, which is also the default choice of hyper-parameters in our experiments. We now introduce the Gradient dissimilarity \(G_{d}\) based on the works of [6, 8] as a way to measure the extent of client-drift as below.
\[G_{d}(\mathbf{w},\lambda)=\frac{\frac{1}{K}\sum_{k}\left\|\nabla f_{k}( \mathbf{w})\right\|^{2}}{\left\|\nabla f(\mathbf{w})\right\|^{2}} \tag{13}\]
\(G_{d}(\mathbf{w},\lambda)\) is a function of both \(\mathbf{w}\) and \(\lambda\). For convenience, we simply write \(G_{d}\) and mention the arguments only when explicitly required. \(f_{k}(\mathbf{w})\) in Eq. 13 above is the same as in Eq. 2.
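Eq. 13 can also be estimated directly from per-client gradients during training; the short numpy sketch below is an illustration added here (not from the paper) that makes the definition operational.

```python
import numpy as np

def gradient_dissimilarity(client_grads):
    """G_d of Eq. (13) from a list of flattened client gradients (each shape (d,)).

    Values close to 1 indicate that client gradients point in the same direction
    (no drift); larger values indicate stronger client-drift.
    """
    G = np.stack(client_grads)                      # (K, d)
    numerator = np.mean(np.sum(G ** 2, axis=1))     # (1/K) sum_k ||grad f_k||^2
    g_bar = G.mean(axis=0)                          # grad f = (1/K) sum_k grad f_k
    return numerator / (np.sum(g_bar ** 2) + 1e-12)
```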
With this, we now establish a series of propositions to show that ASD regularization reduces the client-drift, which in turn leads to faster convergence.
**Proposition 1**: _The minimum value of the gradient dissimilarity \(G_{d}\) is \(1\) (\(G_{d}\geq 1\)). It is attained if all \(f_{k}\) are identical._
The above proposition implies that if all the clients' gradients are progressing in the same direction, i.e., there is no drift, then \(G_{d}=1\). The result follows from Jensen's inequality; the proof is provided in Section 4 of the supplementary material. A lower value of \(G_{d}\) is desirable, ideally \(1\). To analyze \(G_{d}\), we need \(\nabla f_{k}(\mathbf{w})\), which is given in the proposition below.
**Proposition 2**: _When the class conditional distribution across the clients is identical, i.e., \(\mathbb{P}_{k}(x\mid y)=\mathbb{P}(x\mid y)\), then \(\nabla f_{k}(\mathbf{w})=\sum_{c}p_{k}^{c}(\mathbf{g}_{c}+\lambda\gamma_{k}^{c}\tilde{\mathbf{g}}_{c})\), where \(\mathbf{g}_{c}=\nabla\mathbb{E}[l(\mathbf{w};x,y)\mid y=c]\), and \(\tilde{\mathbf{g}}_{c}=\nabla\mathbb{E}[\exp(\mathcal{D}_{\text{KL}}(q_{g}(x)||q_{k}(x)))\mathcal{D}_{\text{KL}}(q_{g}(x)||q_{k}(x))\mid y=c]\), where \(\gamma_{k}^{c}=\frac{1}{p_{k}^{c}}\)._
The result follows from the tower property of expectation and the assumption that class conditional distribution is the same for all the clients. From the above proposition, we can see that the gradients \(\nabla f_{k}(\mathbf{w})\) only differ due to \(p_{k}^{c}\) which captures the data heterogeneity due to label imbalance. The detailed proof is given in section \(4\) of the supplementary.
**Assumption 1**: _class-wise gradients are orthonormal \(\mathbf{g}_{c}^{\intercal}\mathbf{g}_{m}=0\), \(\tilde{\mathbf{g}}_{c}^{\intercal}\tilde{\mathbf{g}}_{m}=0\) and \(\mathbf{g}_{c}^{\intercal}\tilde{\mathbf{g}}_{m}=0\) for \(c\neq m\). The assumption on orthonormal class-wise gradients intuitively implies that gradients of loss for a specific class cannot give any significant information on the gradients of the other class._
**Proposition 3**: _When the class-conditional distribution across the clients is the same, and the Assumption 1 holds then \(\exists\) a range of values for \(\lambda\) such that whenever \(\lambda\in(\lambda_{min},\lambda_{max})\) we have \(\frac{dG_{d}}{d\lambda}<0\) and \(G_{d}(\mathbf{w},\lambda)<G_{d}(\mathbf{w},0)\)._
The proposition implies that there is a value of \(\lambda\in(\lambda_{min},\lambda_{max})\) such that the derivative of \(G_{d}\) w.r.t \(\lambda\) is negative. The proof is given in section \(4\) of supplementary.
This indicates that by appropriately selecting the value of \(\lambda\) we can make the \(G_{d}\) lower which in turn reduces the client drift. This key result allows the ASD regularizer to combine with the existing methods and improve their performance.
To understand the convergence, we make the following standard assumptions based on [6, 4].
**Assumption 2**: \(\frac{1}{K}\sum_{k}\left\|\nabla f_{k}(\mathbf{w})\right\|^{2}\leq B^{2}( \lambda)\|\nabla f(\mathbf{w})\|^{2}\)__
**Assumption 3**: \(\left\|\nabla f_{k}(\mathbf{x})-\nabla f_{k}(\mathbf{y})\right\|\leq\beta\|\mathbf{x}-\mathbf{y}\|\) (\(\beta\)-smoothness)
**Assumption 4**: _Gradients have bounded Variance._
**Proposition 4**: _Suppose the functions \(f_{k}\) satisfy Assumption 2 above and \(\mathbf{g}_{c}^{\intercal}\tilde{\mathbf{g}}_{c}>0\), then we have \(B^{2}(\lambda)<B^{2}(0)\)._
Proof:: The Assumption 2 implies that \(G_{d}(\mathbf{w},\lambda)\leq B^{2}(\lambda)\), as \(G_{d}((\mathbf{w},\lambda)\) defined in Eq. 13. \(B^{2}(\lambda)\) can be defined as below.
\[B^{2}(\lambda)=\sup_{\mathbf{w}\in\mathbb{R}^{d}}G_{d}(\mathbf{w},\lambda) \tag{14}\]
For a fixed \(\lambda\) as per proposition 3 we have the following.
\[\sup_{\mathbf{w}\in\mathbb{R}^{d}}G_{d}(\mathbf{w},\lambda)<\sup_{\mathbf{w} \in\mathbb{R}^{d}}G_{d}(\mathbf{w},0) \tag{15}\]
The above inequality 15 is true as proposition 3 guarantees that the value of \(G_{d}(\mathbf{w},\lambda)<G_{d}(\mathbf{w},0)\) for all \(\mathbf{w}\) when \(\lambda\in(\lambda_{min},\lambda_{max})\). If inequality 15 is not true, one can find a \(\mathbf{w}\) that contradicts the proposition 3 which is impossible. This means for some value of \(\lambda\in(\lambda_{min},\lambda_{max})\) we have \(B^{2}(\lambda)<B^{2}(0)\) from Eq. 14 and Eq. 15.
To gain insights into the impact of the ASD regularizer on convergence, we further assume that the functions \(L^{asd}_{k}\) in Eq. 4 are approximately constant in the argument \(\mathbf{w}^{t}\). This allows us to treat \(f_{k}\) in Eq. 2 effectively as a function of \(\mathbf{w}\). We can now use the convergence result from [4]. We state the result formally in the proposition below.
**Proposition 5**: _(Theorem V in [4].) Assume that \(f(\mathbf{w})\) and \(f_{k}(\mathbf{w})\) in our Eq. 1 satisfy Assumptions 2, 3 and 4. Let \(\mathbf{w}^{*}=\arg\min\,f(\mathbf{w})\), the global step-size be \(\alpha_{g}\) and the local step-size be \(\alpha_{l}\). The FedAvg+ASD algorithm will have contracting gradients. If the initial model is \(\mathbf{w}^{0}\), \(F=f(\mathbf{w}^{0})-f(\mathbf{w}^{*})\), and \(M\) is a constant, then in \(R\) rounds the model \(\mathbf{w}^{R}\) satisfies \(\mathbb{E}[\left\|\nabla f(\mathbf{w}^{R})\right\|^{2}]\leq O(\frac{\beta M\sqrt{F}}{\sqrt{RLS}}+\frac{\beta B^{2}(\lambda)F}{R})\)._
The above proposition gives the convergence rate for the FedAvg+ASD algorithm in the non-convex setting. We now show that FedAvg+ASD converges faster than FedAvg.
We see that the convergence rate in Proposition 5 is \(O(\frac{\beta M\sqrt{F}}{\sqrt{RLS}}+\frac{\beta B^{2}(\lambda)F}{R})\). The convergence thus has a direct dependence on \(B^{2}(\lambda)\), which is the only term linked to the heterogeneity assumption, so a lower value of \(B(\lambda)\) implies faster convergence. From Proposition 4 we have \(B^{2}(\lambda)<B^{2}(0)\). Note that the case \(\lambda=0\) corresponds to training without ASD. We have therefore shown a tighter convergence bound for FedAvg+ASD. The key takeaway from the analysis is that ASD helps reduce the client-drift, which leads to faster convergence.
### _Empirical Analysis_
In the previous section we showed that when the clients use the ASD loss, it results in a lower value of \(G_{d}\), which in turn results in faster convergence for FedAvg+ASD compared to FedAvg. Since FedAvg and FedAvg+ASD optimize different cost functions, it is natural to ask how well their solutions generalize to unseen data. We empirically observed that FedAvg+ASD generalizes much better than FedAvg. To better understand this phenomenon, we analyze properties such as the top eigenvalue and the trace of the Hessian of the cross-entropy loss for the global models obtained with and without ASD. We follow the method described in [24] to compute the top eigenvalue and trace of the Hessian. In general, converging to flat minima is indicative of better generalization; this has been studied in [25], [24]. Lower values of the top eigenvalue and trace are typical indicators of the presence of flat minima. In Table I we can observe that FedAvg+ASD does achieve lower values of the top eigenvalue and trace compared to FedAvg, suggesting convergence to flat minima. We observe similar trends with FedNTD+ASD and FedDyn+ASD; the detailed results are presented in Section 3 of the supplementary material.
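As an indication of how such curvature statistics can be obtained, the sketch below uses Hessian-vector products with power iteration (top eigenvalue) and Hutchinson's estimator (trace). This is an assumption about the procedure; [24] should be consulted for the exact method. It assumes `loss` was computed on a fixed batch from `params` (a list of tensors with `requires_grad=True`), with all tensors on the same device.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backpropagation."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.cat([h.reshape(-1) for h in
                      torch.autograd.grad(flat @ vec, params, retain_graph=True)])

def top_eigenvalue(loss, params, iters=50):
    n = sum(p.numel() for p in params)
    v = torch.randn(n)
    v /= v.norm()
    eig = 0.0
    for _ in range(iters):                          # power iteration
        hv = hvp(loss, params, v)
        eig = torch.dot(hv, v).item()               # Rayleigh quotient with ||v|| = 1
        v = hv / (hv.norm() + 1e-12)
    return eig

def hessian_trace(loss, params, samples=100):
    n = sum(p.numel() for p in params)
    est = 0.0
    for _ in range(samples):                        # Hutchinson's estimator
        z = torch.randint(0, 2, (n,)).float() * 2 - 1   # Rademacher probe
        est += torch.dot(z, hvp(loss, params, z)).item()
    return est / samples
```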
## IV Experiments
We perform the experiments on the CIFAR-10 and CIFAR-100 datasets [26], the Tiny-ImageNet dataset [27], and a tailored version of the CIFAR-100 dataset, with different degrees of heterogeneity in the balanced setting (i.e., the same number of samples per client but a varying class label distribution per client). We set the total number of clients to \(100\) in all our experiments. We sample the clients with a ratio of \(0.1\), i.e., 10 percent of clients are sampled on average per communication round, similar to the protocol followed in [5]. We build our experiments using the publicly available codebase of [5]. For generating non-iid data, the Dirichlet distribution is used. To simulate the effect of label imbalance, for every client we sample the probability distribution over the classes from the distribution \(p^{Dir}_{k}=Dir(\delta,C)\). Every sample of \(p^{Dir}_{k}\) is a vector of length \(C\) whose elements are non-negative and sum to 1. This vector represents the label distribution for the client. The parameter \(\delta\) captures the degree of heterogeneity: lower values of \(\delta\) correspond to high heterogeneity, and as \(\delta\) increases, the label distribution becomes more uniform. The other parameter of the Dirichlet distribution, \(C\), is determined by the training dataset (\(C=100\) for CIFAR-100). For notational convenience, we omit \(C\) from \(Dir(\delta,C)\) by simply re-writing it as \(Dir(\delta)\). By configuring the concentration parameter \(\delta\) to 0.6 and 0.3, we
generate the data from moderate to high heterogeneity, which is in line with the approach followed in [5] and [28]. In a balanced setting, each client receives the same number of samples. For instance, consider the case of CIFAR-100 where we have \(50000\) training samples. In the case of \(100\) clients, each client will get \(500\) samples, and the distribution of labels across the clients follows the Dirichlet distribution. Figure 2 shows the heatmap of the label distribution for a subset of 10 clients for the CIFAR-100 dataset.
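For concreteness, a small numpy sketch of this partitioning procedure is given below; it is an illustration under the stated assumptions (balanced clients, per-client label distributions drawn from \(Dir(\delta)\)) and may differ in minor details from the codebase of [5].

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, delta=0.3, seed=0):
    """Split sample indices across clients with per-client Dir(delta) label mixes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    C = int(labels.max()) + 1
    per_client = len(labels) // num_clients                 # balanced setting
    pool = [list(rng.permutation(np.where(labels == c)[0])) for c in range(C)]
    clients = []
    for _ in range(num_clients):
        p = rng.dirichlet(delta * np.ones(C))               # label distribution p_k
        counts = rng.multinomial(per_client, p)
        idx = []
        for c, m in enumerate(counts):
            take = min(m, len(pool[c]))                     # class may run out of samples
            idx.extend(pool[c][:take])
            pool[c] = pool[c][take:]
        clients.append(np.array(idx))
    return clients
```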
## V Results and Discussion
For evaluation, we report accuracy on the test dataset as our performance metric and the number of communication rounds required to attain the desired accuracy as a metric to quantify the communication cost. Specifically, we evaluate the global model on the test set and report its accuracy after every communication round. For comparison, we consider popular methods for federated learning such as FedAvg [3], FedProx [6], FedDyn [5] and FedNTD [8]. We augment each of these methods with our approach (ASD) and observe a significant boost in performance. For a fair comparison, we consider the same models used in FedAvg [3] and FedDyn [5] for the CIFAR-10 and CIFAR-100 classification tasks. The model architecture used for CIFAR-100 contains \(2\) convolution layers followed by \(3\) fully connected layers. For Tiny-ImageNet, we use \(3\) convolution layers followed by \(3\) fully connected layers. The detailed architectures are given in the supplementary material.
Unlike existing works, we further evaluate our proposed method in extreme non-IID conditions. To mimic such scenarios, we create a corrupted version of CIFAR-100 with different levels of noise based on [29] and name it CIFAR-100C. Impulse noise is added to the CIFAR-100 dataset with five levels of severity. The first 10K samples are noise-free, the next 10K are at increased severity, the next 10K at even higher severity, and so on. Overall, we create a total of \(50000\) training samples where the noise increases from the first 10K samples being noise-free to the last 10K samples being heavily corrupted.
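As an illustration of how such a progressively corrupted training set can be built, the sketch below applies impulse (salt-and-pepper) noise to 10K-sample blocks with increasing severity. The per-severity corruption fractions and the block-to-severity mapping are assumptions for illustration; the exact corruption parameters follow [29].

```python
import numpy as np

def impulse_noise(images, severity):
    """Salt-and-pepper noise on uint8 images of shape (N, H, W, C); severity 0 = clean."""
    if severity == 0:
        return images.copy()
    frac = [0.03, 0.06, 0.09, 0.17, 0.27][severity - 1]   # assumed corruption fractions
    out = images.copy()
    corrupt = np.random.rand(*images.shape[:3]) < frac
    salt = np.random.rand(*images.shape[:3]) < 0.5
    out[corrupt & salt] = 255
    out[corrupt & ~salt] = 0
    return out

def build_cifar100c(train_x):
    """train_x: (50000, 32, 32, 3) uint8 array; block i of 10K gets severity i."""
    blocks = [impulse_noise(train_x[i * 10000:(i + 1) * 10000], i) for i in range(5)]
    return np.concatenate(blocks)
```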
In this work, we use the proposed approach ASD with FedAvg, FedProx, FedDyn and FedNTD and refer to them as FedAvg+ASD, FedProx+ASD, FedDyn+ASD and FedNTD+ASD respectively.
Fig. 2: Label distribution of clients: A subset containing 10 clients out of 100, and their corresponding label distribution based on the Dirichlet distribution is plotted. It is easy to observe that the labels are not uniformly distributed across the clients.
Hyper-parameters: The SGD algorithm with a learning rate of 0.1 and a learning-rate decay factor of 0.998 per round is used to train the client models. The temperature \(\tau\) is set to 2.0. We only tune the hyper-parameter \(\lambda\). The \(\beta\) and \(\beta^{kl}\) are always set to \(1\), and the value of \(\lambda\) is set to \(20\) and \(30\) for CIFAR-100 and Tiny-ImageNet respectively for all our experiments. We compare the convergence of the different schemes for 500 communication rounds. Following the testing protocol of [5], we average across all the client models and compute the test accuracy on the averaged model, which is reported in our results. In all the tables, we report the test accuracy of the global model in \(\%\) at the end of 500 communication rounds. All the experiments in the tables are performed over three different initializations; the mean and standard deviation of accuracy over the three runs are reported. In Figures 3 and 4 we report the test accuracy after every communication round.
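For reference, the stated hyper-parameters correspond to a client-side training configuration along the following lines (a sketch; the placeholder model and the scheduler object are assumptions, with the decay applied once per communication round):

```python
import torch

model = torch.nn.Linear(3 * 32 * 32, 100)     # placeholder for the client model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# decay the learning rate by a factor of 0.998 once per communication round
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.998)

TAU = 2.0              # softmax temperature in Eqs. (6)-(7)
LAMBDA = 20.0          # 20 for CIFAR-100, 30 for Tiny-ImageNet
BETA = BETA_KL = 1.0   # weights inside Eq. (10)
```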
### _Performance of ASD on CIFAR-10, CIFAR-100 and Tiny-Imagenet_
In Table II, we report the performance on the CIFAR-100 and Tiny-ImageNet datasets with various algorithms for the non-iid (\(Dir(\delta=0.3)\) and \(Dir(\delta=0.6)\)) and iid settings. We observe that our proposed ASD applied on FedDyn improves its performance by \(\approx 1.0\%\) for \(Dir(\delta=0.3)\) and \(Dir(\delta=0.6)\). The accuracy vs. communication rounds plots are shown in Figure 3; we can see that adding ASD gives consistent improvement across the rounds1. For the Tiny-ImageNet dataset, we observe that FedDyn+ASD improves FedDyn by \(\approx 2\%\) for \(Dir(\delta=0.3)\) and \(Dir(\delta=0.6)\) and performs almost the same in the iid partition. The improvement in the iid case for the Tiny-ImageNet dataset is marginal because we did not tune the hyper-parameter \(\lambda\) for every experiment; intuitively, a lower value of \(\lambda\) would be better for the iid case. The test accuracy vs. communication rounds plots on Tiny-ImageNet are shown in Figure 4. We obtain significant improvements for FedAvg+ASD against FedAvg, FedProx+ASD against FedProx and FedNTD+ASD against FedNTD. In Table III we present the CIFAR-10 results; we observe that adding ASD consistently gives an improvement of \(\approx 0.7\%-0.8\%\) across all the algorithms.
Footnote 1: In the figures we omit the comparison between FedProx and FedProx+ASD for better readability.
### _Analyzing the Communication cost_
In this section, we analyze the communication cost (i.e., the number of communication rounds) for attaining a specified accuracy. From Table IV, we can infer that in all data heterogeneity settings across the datasets, the federated learning algorithms, when augmented with the proposed regularizer, outperform their original implementations. In particular, FedDyn+ASD outperforms all the algorithms. For attaining \(26\%\) accuracy with \(Dir(\delta=0.3)\) on the Tiny-ImageNet dataset, FedDyn+ASD takes \(169\) rounds while FedDyn takes \(242\) rounds.
Fig. 4: Test Accuracy vs Communication rounds: Comparison of algorithms with \(Dir(\delta=0.3)\), \(Dir(\delta=0.6)\) and iid data partitions on Tiny-ImageNet dataset. All the algorithms augmented with proposed regularization (ASD) outperform compared to their original form. FedDyn+ASD outperforms all the other algorithms, except iid case where it performs close to FedDyn
Fig. 3: Test Accuracy vs Communication rounds: Comparison of algorithms with \(Dir(\delta=0.3)\), \(Dir(\delta=0.6)\) and iid data partitions on CIFAR-100 dataset. All the algorithms augmented with proposed regularization (ASD) outperform compared to their original form. FedDyn+ASD outperforms all the other algorithms.
Similarly, FedNTD+ASD needs 406 rounds while FedNTD takes 500+ rounds, saving at least 92 rounds of communication cost.
### _Efficacy of ASD in extreme non-IID cases_
In Table V, we analyze the performance of federated learning algorithms for the extreme non-IID case (CIFAR-100C). We observe that our proposed FedDyn with ASD performs \(1.40\%\) better than FedDyn. FedNTD+ASD improves FedNTD by \(2.43\%\). Similarly, FedAvg+ASD improves FedAvg by \(2.79\%\). Hence, our method ASD yields consistent gains even in extreme non-IID settings.
### _Sensitivity to hyper parameters \(\lambda\)_
We study the impact of changing the hyper-parameter \(\lambda\) on the CIFAR-100 dataset with the non-iid partition of \(Dir(\delta=0.3)\). When using the FedAvg+ASD algorithm, the only hyper-parameter we consider is \(\lambda\); the other parameters \(\beta\) and \(\beta^{kl}\) are kept at \(1\). In Figure 5 we see that the accuracy of the model increases with \(\lambda\) and then drops after a critical point. Overall, the accuracy varies from \(42.75\%\) to \(44.5\%\), so the choice of \(\lambda\) does not drastically impact the accuracy.
### _Impact of \(\mathcal{D}_{\text{KL}}\) and the \(I(p_{k}^{y^{i}})\)_
We analyze the contribution of the KL term for the FedAvg+ASD algorithm on the CIFAR-100 dataset with the \(Dir(\delta=0.3)\) non-iid data partition. In Table VI, we analyze the contribution of each term via the hyper-parameters \(\beta\) and \(\beta^{kl}\) for CIFAR-100 and CIFAR-100C. It can be seen that both \(\beta\) and \(\beta^{kl}\) contribute significantly, and using both leads to further improved performance. All the accuracies are reported by averaging over three different initializations.
### _Comparison with adaptive vs uniform weights_
We analyze the impact of the proposed adaptive weighting scheme. We compare against setting all the \(\hat{\alpha_{k}}^{i}\) values in Eq. 10 to \(1\), which can be obtained by setting the values of \(\beta\) and \(\beta^{kl}\) to \(0\). We can see from Table VII that in the non-iid scenarios, the proposed adaptive weighting scheme is better than assigning uniform weights, thus establishing the impact of the proposed adaptive weights. In the iid case, the impact is marginal.
Fig. 5: Study of hyper-parameter’s sensitivity for CIFAR100 and \(Dir(\delta=0.3)\) with FedAvg+ASD. Accuracy is sensitive at lower values of \(\lambda\).
## VI Computation Cost
In this section we quantify the additional computation incurred by the proposed method versus the accuracy benefits obtained. As described in [30], the major computational burden of the distillation scheme comes from the teacher forward pass, the student forward pass and the student backward pass. Let \(F_{t}\), \(F_{s}\) and \(B_{s}\) denote the number of operations for the teacher forward pass, student forward pass and student backward pass respectively. The overall cost for a batch of \(N_{b}\) samples is given below.
\[C_{reg}=(F_{t}+F_{s}+B_{s})*N_{b} \tag{16}\]
\[C_{noreg}=(F_{s}+B_{s})*N_{b} \tag{17}\]
\(C_{reg}\) is the per-batch cost with the proposed regularization and \(C_{noreg}\) is the cost without regularization. It can be clearly seen that the teacher model increases the computation cost per batch by \(N_{b}*F_{t}\). The cost is linear in the batch size, and since we use the same model architecture for student and teacher we have \(F_{t}=F_{s}\), which implies that the forward-pass computation doubles. If storing the global model is too expensive, one can instead compute all of its predictions, i.e., the softmax probabilities, and store them in place of the model. This way one saves memory and avoids repeated computations of the global model.
## VII Privacy of Proposed Method
In our method, the ASD regularizer, the adaptive weights are computed by the client without depending on the server, and we do not assume access to any auxiliary data at the server, as is assumed in methods such as FedCAD [7] and FedDF [22]. In our method, only model parameters are communicated with the server, similar to FedAvg [3]. Thus our privacy is similar to that of the FedAvg method, while at the same time obtaining significant improvements in performance.
## VIII Conclusion
We presented an efficient and effective method for addressing data heterogeneity due to label imbalance in Federated Learning using Adaptive Self Distillation (_ASD_), which requires no auxiliary data and incurs no extra communication cost. We also theoretically showed that ASD leads to lower client-drift, resulting in better convergence. Moreover, we performed an empirical analysis to show that ASD has better generalization by analyzing the Hessian's top eigenvalue and trace. The effectiveness of our approach is shown via extensive experiments across datasets such as CIFAR-10, CIFAR-100 and Tiny-ImageNet with different degrees of heterogeneity. Our proposed approach ASD can be integrated easily into any of the FL frameworks. We showed its efficacy by improving the performance when combined with FedAvg, FedDyn, and FedNTD. We also observed that our method achieves flat minima on convergence. As a future investigation, we aim to look into a deeper theoretical analysis of ASD.
| 連鎖学習 (FL) は、クライアントがそれぞれローカルで学習したモデルを総計としてグローバルモデルを共同でトレーニングする機械学習の paradigma です。これは、ローカルの学習データの共有をしないことで実現されます。実際には、これらのクライアントが持つローカルなデータの分布は、しばしば有意な相違 (例:クラス不均衡) が存在します。クライアント間で存在する非iidなデータ分布の下では、FLは「クライアントドリフト」問題に陥ります。これは、クライアントがそれぞれのローカル最適にドリフトしてしまうことで、収束が遅くなり、総計モデルのパフォーマンスが低下します。この制限を克服するため、私たちは、クライアント側でモデルをトレーニングするための、新しい自己蒸散(ASD)に基づいた正規化手法を提案しました。この正規化手法は、グローバルモデルのエンティティとクライアントのラベル分布に基づいて、クライアントのトレーニング |
2309.10297 | Approximate ultrahomogeneity in $L_pL_q$ lattices | We show that for $1\leq p, q<\infty$ with $p/q \notin \mathbb{N}$, the doubly
atomless separable $L_pL_q$ Banach lattice $L_p(L_q)$ is approximately
ultrahomogeneous (AUH) over the class of its finitely generated sublattices.
The above is not true when $p/q \in \mathbb{N}$. However, for any $p\neq q$,
$L_p(L_q)$ is AUH over the finitely generated lattices in the class $BL_pL_q$
of bands of $L_pL_q$ lattices. | Mary Angelica Tursi | 2023-09-19T04:01:52 | http://arxiv.org/abs/2309.10297v1 | # Approximate ultrahomogeneous in \(L_{p}L_{q}\) lattices
###### Abstract.
We show that for \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the doubly atomless separable \(L_{p}L_{q}\) Banach lattice \(L_{p}(L_{q})\) is approximately ultrahomogeneous (AUH) over the class of its finitely generated sublattices. The above is not true when \(p/q\in\mathbb{N}\). However, for any \(p\neq q\), \(L_{p}(L_{q})\) is AUH over the finitely generated lattices in the class \(BL_{p}L_{q}\) of bands of \(L_{p}L_{q}\) lattices.
## 1. Introduction
In this paper, we explore the homogeneity properties (or lack thereof) of the class of \(L_{p}L_{q}\) lattices under various conditions.
The following is taken from [6]: A Banach lattice \(X\) is an **abstract \(L_{p}L_{q}\) lattice** if there is a measure space \((\Omega,\Sigma,\mu)\) such that \(X\) can be equipped with an \(L_{\infty}(\Omega)\)-module and a map \(N:X\to L_{p}(\Omega)_{+}\) such that
* For all \(\phi\in L_{\infty}(\Omega)_{+}\) and \(x\in X_{+}\), \(\phi\cdot x\geq 0\),
* For all \(\phi\in L_{\infty}(\Omega)\) and \(x\in X\), \(N[\phi\cdot x]=|\phi|N[x]\).
* For all \(x,y\in X\), \(N[x+y]\leq N[x]+N[y]\)
* If \(x\) and \(y\) are disjoint, \(N[x+y]^{q}=N[x]^{q}+N[y]^{q}\), and if \(|x|\leq|y|\), then \(N[x]\leq N[y]\).
* For all \(x\in X\), \(\|x\|=\|N[x]\|_{L_{p}}\).
When the abstract \(L_{p}L_{q}\) space is separable, it has a concrete representation: Suppose \((\Omega,\Sigma,\mu)\) and \((\Omega^{\prime},\Sigma^{\prime},\mu^{\prime})\) are measure spaces. Denote by \(L_{p}(\Omega;L_{q}(\Omega^{\prime}))\) the space of Bochner-measurable functions \(f:\Omega\to L_{q}(\Omega^{\prime})\) such that the function \(N[f]\), with \(N[f](\omega)=\|f(\omega)\|_{q}\) for \(\omega\in\Omega\), is in \(L_{p}(\Omega)\). The class of _bands_ in \(L_{p}L_{q}\) lattices, which we denote by \(BL_{p}L_{q}\), has certain analogous properties to those of \(L_{p}\) spaces, particularly with respect to its isometric theory.
\(L_{p}L_{q}\) lattices (and their sublattices) have been extensively studied for their model theoretic properties in [6] and [7]. It turns out that while abstract \(L_{p}L_{q}\) lattices themselves are not axiomatizable, the larger class \(BL_{p}L_{q}\) is axiomatizable with certain properties corresponding to those of \(L_{p}\) spaces. For instance, it is known that the class of atomless \(L_{p}\) lattices is separably categorical, meaning that there exists one unique atomless separable \(L_{p}\) lattice up to lattice isometry. Correspondingly, the class of _doubly atomless_
\(BL_{p}L_{q}\) lattices is also separably categorical; in particular, up to lattice isometry, \(L_{p}([0,1];L_{q}[0,1])\), which throughout will just be referred to as \(L_{p}(L_{q})\), is the unique separable doubly atomless \(BL_{p}L_{q}\) lattice (see [7, Proposition 2.6]).
Additionally, when \(p\neq q\), the lattice isometries of \(L_{p}L_{q}\) lattices can be characterized in a manner echoing those of linear isometries over \(L_{p}\) spaces (with \(p\neq 2\)). Recall from [1, Ch. 11 Theorem 5.1] that a map \(T:L_{p}(0,1)\to L_{p}(0,1)\) is a surjective linear isometry iff \(Tf(t)=h(t)f(\phi(t))\), where \(\phi\) is a measure-preserving transformation and \(h\) is related to \(\phi\) through Radon-Nikodym derivatives. If we want \(T\) to be a _lattice_ isometry as well, then we also have \(h\) positive (and the above characterization will also work for \(p=2\)). In [3] (for the case of \(q=2\)) and [13], a corresponding characterization of linear isometries is found for spaces of the form \(L_{p}(X;Y)\), for certain \(p\) and Banach spaces \(Y\). In particular, for \(L_{p}L_{q}\) lattices with \(p\neq q\): given \(f\in L_{p}(\Omega;L_{q}(\Omega^{\prime}))\), where \(f\) is understood as a map from \(\Omega\) to \(L_{q}\), any surjective linear isometry \(T\) is of the form
\[Tf(x)=S(x)\big{(}e(x)\phi f(x)\big{)},\]
where \(\phi\) is a set isomorphism (see [3] and [13] for definitions), \(e\) is a measurable function related to \(\phi\) via Radon-Nikodym derivatives, and \(S\) is a Bochner-measurable function from \(\Omega\) to the space of linear maps from \(L_{q}\) to itself such that for each \(x\), \(S(x)\) is a linear isometry over \(L_{q}\).
In [11], Raynaud obtained results on linear subspaces of \(L_{p}L_{q}\) spaces, showing that for \(1\leq q\leq p<\infty\), some \(\ell_{r}\) linearly isomorphically embeds into \(L_{p}(L_{q})\) iff it embeds either to \(L_{p}\) or to \(L_{q}\). However, when \(1\leq p\leq q<\infty\), for \(p\leq r\leq q\), the space \(\ell_{r}\) isometrically embeds as a lattice in \(L_{p}(L_{q})\), and for any \(p\)-convex and \(q\)-concave Orlicz function \(\phi\), the lattice \(L_{\phi}\) embeds lattice isomorphically into \(L_{p}(L_{q})\). Thus, unlike with \(L_{p}\) lattices whose infinite dimensional sublattices are determined up to lattice isometry by the number of atoms, the sublattices of \(L_{p}L_{q}\) are not so simply classifiable.
In fact, the lattice isometry classes behave more like the \(L_{p}\) linear isometries, at least along the positive cone, as is evident in certain equimeasurability results for \(L_{p}L_{q}\) lattices. In [11], Raynaud also obtained the following on uniqueness of measures, a variation of a result which will be relevant in this paper: let \(\alpha>0,\alpha\notin\mathbb{N}\), and suppose two probability measures \(\nu_{1}\) and \(\nu_{2}\) on \(\mathbb{R}_{+}\) are such that for all \(s>0\),
\[\int_{0}^{\infty}(t+s)^{\alpha}\ d\nu_{1}(t)=\int_{0}^{\infty}(t+s)^{\alpha}\ d\nu_{2}(t).\]
Then \(\nu_{1}=\nu_{2}\). Linde gives an alternate proof of this result in [8].
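As a side remark not contained in the original sources, the role of the restriction \(\alpha\notin\mathbb{N}\) can be seen from the binomial theorem: if \(\alpha=n\in\mathbb{N}\), then

\[\int_{0}^{\infty}(t+s)^{n}\ d\nu(t)=\sum_{k=0}^{n}\binom{n}{k}s^{n-k}\int_{0}^{\infty}t^{k}\ d\nu(t),\]

so the hypothesis only forces \(\nu_{1}\) and \(\nu_{2}\) to share their first \(n\) moments, which in general does not determine the measure.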
Various versions and expansions of the above result appear in reference to \(L_{p}\) spaces: for instance, an early result from Rudin [12] generalizes the above to equality of integrals over \(\mathbb{R}^{n}\). Assume that \(\alpha>0\) with \(\alpha\notin 2\mathbb{N}\), and suppose that for all \(\mathbf{v}\in\mathbb{R}^{n}\),
\[\int_{\mathbb{R}^{n}}(1+\mathbf{v}\cdot z)^{\alpha}\ d\nu_{1}(z)=\int_{ \mathbb{R}^{n}}(1+\mathbf{v}\cdot z)^{\alpha}\ d\nu_{2}(z)\]
Then \(\nu_{1}=\nu_{2}\). An application of this result is a similar condition by which one can show that one collection of measurable functions \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}\), with \(\mathbf{f}=(f_{1},...,f_{n})\), is equimeasurable with another collection \(\mathbf{g}=(g_{1},...,g_{n})\), by defining \(\nu_{1}\) and \(\nu_{2}\) as the pushforward measures of \(F\) and \(G\). In the case of \(L_{p}\) spaces, if \(f\) and \(g\) are corresponding basic sequences whose pushforward measures satisfy the above for \(\alpha=p\), then they generate isometric Banach spaces. Raynaud's result shows the converse is true for \(\alpha\neq 4,6,8,...\). A similar result in \(L_{p}(L_{q})\) from [7] holds for \(\alpha=p/q\notin\mathbb{N}\) under certain conditions, except that instead of equimeasurable \(\mathbf{f}\) and \(\mathbf{g}\), when the \(f_{i}\)'s and \(g_{i}\)'s are mutually disjoint and positive and the map \(f_{i}\mapsto g_{i}\) generates a lattice isometry, \((N[f_{1}],...,N[f_{n}])\) and \((N[g_{1}],...,N[g_{n}])\) are equimeasurable.
Recall that a space \(X\) is _approximately ultrahomogeneous_ (AUH) over a class \(\mathcal{G}\) of finitely generated spaces if for all appropriate embeddings \(f_{i}:E\hookrightarrow X\) with \(i=1,2\), for all \(E\in\mathcal{G}\) generated by \(e_{1},...,e_{n}\in E\), and for all \(\varepsilon>0\), there exists an automorphism \(\phi:X\to X\) such that for each \(1\leq j\leq n\), \(\|\phi\circ f_{1}(e_{j})-f_{2}(e_{j})\|<\varepsilon\).
In the Banach space setting, the embeddings are linear embeddings and the class of finitely generated spaces are finite dimensional spaces. In the lattice setting, the appropriate maps are isometric lattice embeddings, and one can either choose finite dimensional or finitely generated lattices.
The equimeasurability results described above can be used to show an approximate ultrahomogeneity of \(L_{p}([0,1])\) over its finite dimensional linear subspaces only so long as \(p\notin 2\mathbb{N}\) (see [10]). In contrast, for \(p\in 2\mathbb{N}\) the space is not AUH over finite dimensional linear subspaces, with counterexamples exhibiting linearly isometric subspaces whose corresponding basis elements are not equimeasurable. Alternate methods using continuous Fraisse Theory have since been used to give alternate proofs of linear approximate ultrahomogeneity of \(L_{p}\) for \(p\notin 2\mathbb{N}\) (see [5]) as well as lattice homogeneity of \(L_{p}\) for all \(1\leq p<\infty\) (see [2], [5]).
This paper is structured as follows: in section 2, we first establish basic notation and give a characterization of finite dimensional \(BL_{p}L_{q}\) lattices. This characterization is used in subsequent sections to establish both the equimeasurability and the ultrahomogeneity results.
In section 3 we show that when \(p\neq q\), \(L_{p}(L_{q}):=L_{p}([0,1];L_{q}[0,1])\) is AUH over the larger class of finite dimensional (and finitely generated) \(BL_{p}L_{q}\) spaces. This is done by characterizing representations of \(BL_{p}L_{q}\) sublattices of \(L_{p}(L_{q})\) in such a way that induces automorphisms over \(L_{p}(L_{q})\) making the homogeneity diagram commute. The results here play a role in subsequent sections as well.
In section 4, we prove that if in addition \(p/q\notin\mathbb{N}\), \(L_{p}(L_{q})\) is also AUH over the class of its finitely generated sublattices. First, we determine the isometric structure of finite dimensional sublattices of \(L_{p}(L_{q})\) lattices by giving an alternate proof of [7, Proposition 3.2], showing that two sublattices \(E\) and \(F\) of \(L_{p}(L_{q})\), with the \(e_{i}\)'s and \(f_{i}\)'s each forming the basis of atoms, are lattice isometric iff \((N[e_{1}],...,N[e_{n}])\) and \((N[f_{1}],...,N[f_{n}])\) are equimeasurable. The equimeasurability result allows us to reduce a homogeneity diagram involving a finite dimensional sublattice of \(L_{p}(L_{q})\) to one with a finite dimensional \(BL_{p}L_{q}\) lattice, from which, in combination with the results in section 3, the main result follows.
Section 5 considers the case of \(p/q\in\mathbb{N}\). Here, we provide a counterexample to equimeasurability in the case that \(p/q\in\mathbb{N}\) and use this counterexample to show that in such cases, \(L_{p}(L_{q})\) is not AUH over the class of its finite dimensional sublattices.
## 2. Preliminaries
We begin with some basic notation and definitions. Given a measurable set \(A\subseteq\mathbb{R}^{n}\), we let \(\mathbf{1}_{A}\) refer to the characteristic function over \(A\). For a lattice \(X\), let \(B(X)\) be the unit ball, and \(S(X)\) be the unit sphere.
For elements \(e_{1},...,e_{n}\) in some lattice \(X\), use bracket notation \(<e_{1},...,e_{n}>_{L}\) to refer to the Banach lattice generated by the elements \(e_{1},...,e_{n}\). In addition, we write \(<e_{1},...,e_{n}>\) without the \(L\) subscript to denote that the generating elements \(e_{i}\) are also mutually disjoint positive elements in the unit sphere. Throughout, we will also use boldface notation to designate a finite sequence of elements: for instance, for \(x_{1},...,x_{n}\in\mathbb{R}\) or \(x_{1},...,x_{n}\in X\) for some lattice \(X\), let \(\mathbf{x}=(x_{1},...,x_{n})\). Use the same notation to denote a sequence of functions over corresponding elements: for example, let \((f_{1},...,f_{n})=\mathbf{f}\), or \((f_{1}(x_{1}),...,f_{n}(x_{n}))=\mathbf{f}(\mathbf{x})\), or \((f(x_{1}),...,f(x_{n}))=f(\mathbf{x})\). Finally, for any element \(e\) or tuple \(\mathbf{e}\) of elements in some lattice \(X\), let \(\boldsymbol{\beta}(e)\)
and \(\boldsymbol{\beta}(\mathbf{e})\) be the band generated by \(e\) and \(\mathbf{e}\) in \(X\), respectively.
Recall that Bochner integrable functions are the norm limits of simple functions \(f:\Omega\to L_{q}(\Omega^{\prime})\), with \(f(\omega)=\sum_{1}^{n}r_{i}\mathbf{1}_{A_{i}}(\omega)\mathbf{1}_{B_{i}}\), where \(\mathbf{1}_{A_{i}}\) and \(\mathbf{1}_{B_{i}}\) are the characteristic functions for \(A_{i}\in\Sigma\) and \(B_{i}\in\Sigma^{\prime}\), respectively. One can also consider \(f\in L_{p}(\Omega;L_{q}(\Omega^{\prime}))\) as a \(\Sigma\otimes\Sigma^{\prime}\)-measurable function such that
\[\|f\|=\bigg{(}\int_{\Omega}\|f(\omega)\|_{q}^{p}\ d\omega\bigg{)}^{1/p}=\bigg{(} \int_{\Omega}\bigg{(}\int_{\Omega^{\prime}}|f(\omega,\omega^{\prime})|^{q}\ d \omega^{\prime}\bigg{)}^{p/q}\ d\omega\bigg{)}^{1/p}\]
Unlike the more familiar \(L_{p}\) lattices, the class of abstract \(L_{p}L_{q}\) lattices is not itself axiomatizable; however, the slightly more general class \(BL_{p}L_{q}\) of bands in \(L_{p}(L_{q})\) lattices is axiomatizable. Additionally, if \(X\) is a separable \(BL_{p}L_{q}\) lattice, it is lattice isometric to a lattice of the form
\[\bigg{(}\bigoplus_{p}L_{p}(\Omega_{n};\ell_{q}^{n})\bigg{)}\oplus_{p}L_{p}( \Omega_{\infty};\ell_{q})\]
\[\oplus_{p}\bigg{(}\bigoplus_{p}L_{p}(\Omega_{n}^{\prime};L_{q}(0,1)\oplus_{q} \ell_{q}^{n})\bigg{)}\]
\[\oplus_{p}L_{p}(\Omega_{\infty}^{\prime};L_{q}(0,1)\oplus_{q}\ell_{q}).\]
\(BL_{p}L_{q}\) lattices may also contain what are called _base disjoint_ elements. \(x\) and \(y\) are base disjoint if \(N[x]\perp N[y]\). Based on this, we call \(x\) a _base atom_ if whenever \(0\leq y,z\leq x\) with \(y\) and \(z\) base disjoint, then either \(N[y]=0\) or \(N[z]=0\). Observe this implies that \(N[x]\) is an atom in \(L_{p}\). Alternatively, we call \(x\) a _fiber atom_ if any disjoint \(0\leq y,z\leq x\) are also base disjoint. Finally, we say that \(X\) is _doubly atomless_ if it contains neither base atoms nor fiber atoms.
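For a concrete illustration (recorded here only for orientation), work in \(L_{p}([0,1];L_{q}[0,1])\): the elements \(\mathbf{1}_{[0,1/2]\times[0,1]}\) and \(\mathbf{1}_{[1/2,1]\times[0,1]}\) are base disjoint, since their \(N\)-norms \(\mathbf{1}_{[0,1/2]}\) and \(\mathbf{1}_{[1/2,1]}\) are disjoint in \(L_{p}\); on the other hand, \(\mathbf{1}_{[0,1]\times[0,1/2]}\) and \(\mathbf{1}_{[0,1]\times[1/2,1]}\) are disjoint but not base disjoint, as both have \(N\)-norm \(2^{-1/q}\mathbf{1}\). In particular, \(\mathbf{1}\) is neither a base atom nor a fiber atom of \(L_{p}([0,1];L_{q}[0,1])\).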
Another representation of \(BL_{p}L_{q}\) involves its finite dimensional subspaces. We say that \(X\) is an \((\mathcal{L}_{p}\mathcal{L}_{q})_{\lambda}\) lattice, with \(\lambda\geq 1\), if for all disjoint \(x_{1},...,x_{n}\in X\) and \(\varepsilon>0\), there is a finite dimensional sublattice \(F\) of \(X\), \((1+\varepsilon)\)-isometric to a finite dimensional \(BL_{p}L_{q}\) space, containing elements \(x_{1}^{\prime},...,x_{n}^{\prime}\) such that for each \(1\leq i\leq n\), \(\|x_{i}-x_{i}^{\prime}\|<\varepsilon\). Henson and Raynaud proved that in fact, any lattice \(X\) is a \(BL_{p}L_{q}\) space iff \(X\) is \((\mathcal{L}_{p}\mathcal{L}_{q})_{1}\) (see [6]). This equivalence can be used to show the following:
**Proposition 2.1**.: _(Henson, Raynaud) If \(X\) is a separable \(BL_{p}L_{q}\) lattice, then it is the inductive limit of finite dimensional \(BL_{p}L_{q}\) lattices._
The latter statement is not explicitly part of the statement of Lemma 3.5 in [6], but the proof there that any \(BL_{p}L_{q}\) lattice is \((\mathcal{L}_{p}\mathcal{L}_{q})_{1}\) proceeds by establishing precisely this statement.
Throughout this paper, we refer to this class of finite dimensional \(BL_{p}L_{q}\) lattices as \(B\mathcal{K}_{p,q}\). Observe that if \(E\in B\mathcal{K}_{p,q}\), then it is of the form \(\bigoplus_{p}(\ell_{q}^{m_{k}})_{k=1}^{N}\), where for \(1\leq k\leq N\), the atoms \(e(k,1),...,e(k,m_{k})\) generate the \(k\)-th summand \(\ell_{q}^{m_{k}}\).
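For instance (a sketch of the simplest nontrivial case), if \(N=2\), \(m_{1}=2\) and \(m_{2}=3\), then \(E=\ell_{q}^{2}\oplus_{p}\ell_{q}^{3}\) with atoms \(e(1,1),e(1,2),e(2,1),e(2,2),e(2,3)\), and
\[\bigg{\|}\sum_{k,j}a(k,j)e(k,j)\bigg{\|}=\Big{(}\big{(}|a(1,1)|^{q}+|a(1,2)|^{q}\big{)}^{p/q}+\big{(}|a(2,1)|^{q}+|a(2,2)|^{q}+|a(2,3)|^{q}\big{)}^{p/q}\Big{)}^{1/p}.\]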
**Proposition 2.2**.: _Let \(E\) be a \(B\mathcal{K}_{p,q}\) sublattice of \(L_{p}(L_{q})\) with atoms \(e(k,j)\) as described above. Then the following are true:_
1. _There exist disjoint measurable_ \(A(k)\subseteq[0,1]\) _such that for all_ \(j\)_,_ \(\operatorname{supp}(e(k,j))\subseteq A(k)\times[0,1]\)_,_
2. _For all_ \(k\) _and for all_ \(j,j^{\prime}\)_,_ \(N[e(k,j)]=N[e(k,j^{\prime})]\)_._
_Conversely, if \(E\) is a finite dimensional sublattice of \(L_{p}(L_{q})\) satisfying properties (1) and (2), then \(E\) is in \(B\mathcal{K}_{p,q}\)._
In order to prove this proposition, we first need the following lemma:
**Lemma 2.3**.: _Let \(0<r<\infty\), with \(r\neq 1\). Suppose \(x_{1},...,x_{n}\in(L_{r})_{+}\) are such that_
\[\|\sum_{1}^{n}x_{k}\|_{r}^{r}=\sum\|x_{k}\|_{r}^{r}\]
_Then the \(x_{i}\)'s are mutually disjoint._
Proof.: If \(r<1\), then
\[\int x_{i}(t)^{r}+x_{j}(t)^{r}\ dt=\|x_{i}\|_{r}^{r}+\|x_{j}\|_{r}^{r}=\int(x_{ i}(t)+x_{j}(t))^{r}\ dt \tag{1}\]
Now observe that for all \(t\), \((x_{i}(t)+x_{j}(t))^{r}\leq x_{i}(t)^{r}+x_{j}(t)^{r}\), with equality iff either \(x_{i}(t)=0\) or \(x_{j}(t)=0\), so \(x_{i}^{r}+x_{j}^{r}-(x_{i}+x_{j})^{r}\in(L_{1})_{+}\). Combined with the above equality in line (1), since \(\|x_{i}^{r}+x_{j}^{r}-(x_{i}+x_{j})^{r}\|_{1}=0\), it follows that \(x_{i}(t)^{r}+x_{j}(t)^{r}=(x_{i}(t)+x_{j}(t))^{r}\) a.e., so \(x_{i}\) must be disjoint from \(x_{j}\) when \(i\neq j\).
If \(r>1\), proceed as in the proof for \(r<1\), but with the inequalities reversed, given that in this instance \(x_{i}(t)^{r}+x_{j}(t)^{r}\leq(x_{i}(t)+x_{j}(t))^{r}\) for all \(t\).
**Remark 2.4**.: The above implies that a \(BL_{p}L_{q}\) lattice \(X\) is doubly atomless if it contains no bands lattice isometric to some \(L_{p}\) or \(L_{q}\) space. Indeed, if there were a base atom \(e\), then any two \(0\leq x\perp y\leq e\) would have to have \(N\)-norms that are multiples of each other, so \(<x,y>\) is lattice isometric to \(\ell_{q}^{2}\). Resultantly, the band generated by \(e\) is an \(L_{q}\) space. Similarly, if \(e\) is a fiber atom, then any \(0\leq x\perp y\leq e\) is also base disjoint, which implies that the band generated by \(e\) is an \(L_{p}\) space.
We can now give the proof of Proposition 2.2:
Proof of Proposition 2.2.: Observe that for each appropriate pair \((k,j)\),
\[\bigg{(}\int_{0}^{1}N[e(k,j)]^{p}(s)\ ds\bigg{)}^{q/p}=\|N^{q}[e(k,j)]\|_{p/q}=1\]
For notational ease, let \(E(k,j)=N^{q}[e(k,j)]\). Pick \(j_{1},...,j_{N}\) with each \(j_{k}\leq m_{k}\). Then, by disjointness of the \(e(k,j)\)'s, for all \((a_{k})_{k}\geq 0\) and all \(x=\sum_{k}a_{k}e(k,j_{k})\),
\[\|\sum a_{k}e(k,j_{k})\|^{q} =\bigg{(}\int_{0}^{1}\bigg{(}\sum_{k}a_{k}^{q}E(k,j_{k})(s)\bigg{)} ^{p/q}\ ds\bigg{)}^{q/p}\] \[=\bigg{|}\bigg{|}\sum a_{k}^{q}E(k,j_{k})\bigg{|}\bigg{|}_{p/q}.\]
Now since the \(e(k,j_{k})\)'s generate a lattice isometric to \(\ell_{p}^{N}\),
\[\bigg{|}\bigg{|}\sum a_{k}^{q}E(k,j_{k})\bigg{|}\bigg{|}_{p/q}^{p/q}=\sum_{k}a_{k}^{p}=\sum_{k}(a_{k}^{q})^{p/q}=\sum_{k}\|a_{k}^{q}E(k,j_{k})\|_{p/q}^{p/q}.\]
Since the \(E(k,j_{k})\)'s are all positive and \(p\neq q\), by Lemma 2.3, the \(E(k,j_{k})\)'s are disjoint, that is, the \(e(k,j_{k})\)'s are base disjoint.
For \(1\leq k\leq N\), let \(A(1),...,A(N)\) be mutually disjoint measurable sets such that \(A(k)\) supports \(E(k,j)\) for \(1\leq j\leq m_{k}\). Then each \(e(k,j)\) is supported by \(A(k)\times[0,1]\). Now we prove (2). Fix \(k\). Then using similar computations as above, and since the \(e(k,j)\)'s for fixed \(k\) generate \(\ell_{q}^{m_{k}}\):
\[\|\sum_{j}a_{j}e(k,j)\|^{q}=\bigg{|}\bigg{|}\sum_{j}a_{j}^{q}E(k,j)\bigg{|} \bigg{|}_{p/q}=\sum_{j}a_{j}^{q}=\sum_{j}a_{j}^{q}\|E(k,j)\|_{p/q}\]
By Minkowski's inequality, as \(p\neq q\), equality occurs only when \(E(k,j)(s)=E(k,j^{\prime})(s)\) a.e. for all \(1\leq j,j^{\prime}\leq m_{k}\).
To show the converse, it is enough to give the computation:
\[\|\sum_{k,j}a(k,j)e(k,j)\| =\bigg{(}\int_{0}^{1}\bigg{[}\int\bigg{(}\sum_{k,j}a(k,j)e(k,j)(s,t)\bigg{)}^{q}\ dt\bigg{]}^{p/q}\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\int_{0}^{1}\bigg{[}\sum_{j=1}^{m_{k}}|a(k,j)|^{q}E(k,j)(s)\bigg{]}^{p/q}\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\bigg{[}\sum_{j=1}^{m_{k}}|a(k,j)|^{q}\bigg{]}^{p/q}\int_{0}^{1}E(k,1)^{p/q}(s)\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\bigg{[}\sum_{j=1}^{m_{k}}|a(k,j)|^{q}\bigg{]}^{p/q}\bigg{)}^{1/p}\]
The following results will allow us to reduce homogeneity diagrams to those in which the atoms \(e(k,j)\) of some \(E\in B\mathcal{K}_{p,q}\) are mapped by both embeddings to characteristic functions of measurable \(A(k,j)\subseteq[0,1]^{2}\). In fact, we can further simplify such diagrams to cases where \(E\) is generated by such \(e(k,j)\)'s which additionally are _base-simple_, i.e., \(N[e(k,j)]\) is a simple function.
**Proposition 2.5**.: _Let \(1\leq p\neq q<\infty\) and let \(e\in S(L_{p}(L_{q}))_{+}\) be an element with full support over \([0,1]^{2}\). Then there exists a lattice automorphism \(\phi\) from \(L_{p}(L_{q})\) to itself such that \(\phi(\mathbf{1})=e\). Furthermore, \(\phi\) can be constructed to bijectively map both simple functions to simple functions and base-simple functions to base-simple functions._
Proof.: The proof is an expansion of the technique used in Lemma 3.3 from [5]. Given a function \(g(y)\in(L_{q})_{+}\), define \(\tilde{g}(y)_{q}\) by \(\tilde{g}(y)_{q}=\int_{0}^{y}g(t)^{q}\ dt\), and for notation, use \(e_{x}(y)=e(x,y)\). Since \(e\) has full support, we may assume that for all \(0\leq x\leq 1\), \(N[e](x)>0\). From there, define \(\phi\) by
\[\phi(f)(x,y)=f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q }[e](x)}\bigg{)}e(x,y)\]
Since \(e\geq 0\) and the rest of the definition is a composition into \(f\), \(\phi\) is a lattice homomorphism. To show it is also an isometry, simply compute the norm, using substitution in the appropriate places:
\[\|\phi(f)\|^{p}= \int_{0}^{1}\bigg{|}\int_{0}^{1}f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}^{q}e(x,y)^{q}\ dy\bigg{|}^{p/ q}\ dx\] \[= \int_{0}^{1}\bigg{|}\int_{0}^{1}f(\widetilde{N[e]}(x)_{p},y)^{q} \ dy\bigg{|}^{p/q}N^{p}[e](x)\ dx\] \[= \int_{0}^{1}N[f](\widetilde{N[e]}(x)_{p})^{p}N^{p}[e](x)\ dx\] \[= \int_{0}^{1}N^{p}[f](x)\ dx=\|f\|^{p}.\]
To show surjectivity, let \(B\subseteq[0,1]^{2}\) be a measurable set. Note that any \((x^{\prime},y^{\prime})\in[0,1]^{2}\) can be expressed as \((\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q}[e](x)})\) for some \(x,y\), since \(\widetilde{N[e]}(x)_{p}\) is an increasing continuous function from \(0\) to \(1\), while \(\tilde{e}_{x}(y)_{q}\) is continuously increasing from \(0\) to \(N^{q}[e](x)\). Thus there exists \(B^{\prime}\) such that \(\phi(\mathbf{1}_{B^{\prime}})=\mathbf{1}_{B}\cdot e\), implying that \(\phi\)'s image is dense in \(\boldsymbol{\beta}(e)=L_{p}(L_{q})\), since \(e\) has full support. Therefore, \(\phi\) is also surjective.
Finally, \(\phi(f)\) is a composition into \(f\) multiplied by \(e\), so if \(e\) and \(f\) are simple, then the composition has a finite image and the product is again simple; hence \(\phi\) maps simple functions to simple functions. Conversely, if \(\phi(f)\) is simple, then \(\phi(f)/e\) is also simple, so \(f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}\) has a finite image. It follows that \(f\) itself has a finite image.
Using similar reasoning, if \(N[e]\) is simple, then whenever \(N[f]\) is simple, \(N[\phi(f)]\) must also be simple, and likewise the converse is true, since by the computation above, \(N[\phi(f)](x)=N[f](\widetilde{N[e]}(x)_{p})\cdot N[e](x)\).
## 3. Approximate Ultrahomogeneity of \(L_{p}(L_{q})\) over \(BL_{p}L_{q}\) spaces
In this section, we show that for any \(1\leq p\neq q<\infty\), \(L_{p}(L_{q})\) is AUH over \(B\mathcal{K}_{p,q}\).
Let \(\mathbf{f}:=(f_{1},...,f_{n})\) and \(\mathbf{g}:=(g_{1},...,g_{n})\) be sequences of measurable functions on a measure space with measure \(\lambda\). Then we say that \(\mathbf{f}\) and \(\mathbf{g}\) are _equimeasurable_ if for all Borel \(B\subseteq\mathbb{R}^{n}\),
\[\lambda(t:\mathbf{f}(t)\in B)=\lambda(t:\mathbf{g}(t)\in B)\]
We also say that functions \(\mathbf{f}\) and \(\mathbf{g}\) in \(L_{p}(L_{q})\) are _base-equimeasurable_ if \(N[\mathbf{f}]\) and \(N[\mathbf{g}]\) are equimeasurable.
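For example (an illustration under the conventions above), in \(L_{p}([0,1];L_{q}[0,1])\) the functions \(f=\mathbf{1}_{[0,1/2]\times[0,1]}\) and \(g=\mathbf{1}_{[1/2,1]\times[0,1]}\) are base-equimeasurable, since \(N[f]=\mathbf{1}_{[0,1/2]}\) and \(N[g]=\mathbf{1}_{[1/2,1]}\) have the same distribution; however, \(f\) and \(h=\mathbf{1}_{[0,1]\times[0,1/2]}\) are not, since \(N[h]\) is the constant \(2^{-1/q}\).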
Lusky's main proof in [10] of linear approximate ultrahomogeneity in \(L_{p}(0,1)\) for \(p\neq 4,6,8,...\) hinges on the equimeasurability of generating elements for two copies of some \(E=<e_{1},...,e_{n}>\) in \(L_{p}\) containing \(\mathbf{1}\). But when \(p=4,6,8,...\), there exist finite dimensional \(E\) such that two linearly isometric copies of \(E\) in \(L_{p}\) do not have equimeasurable corresponding basis elements. However, if homogeneity properties are limited to \(E\) with mutually disjoint basis elements, then \(E\) is linearly isometric to \(\ell_{p}^{n}\), and for all \(1\leq p<\infty\), \(L_{p}\) is AUH over all \(\ell_{p}^{n}\) spaces. Note that here, an equimeasurability principle (albeit a trivial one) also applies: any two copies of \(\ell_{p}^{n}=<e_{1},...,e_{n}>\) in \(L_{p}(0,1)\) with \(\sum_{k}e_{k}=n^{1/p}\cdot\mathbf{1}\) have (trivially) equimeasurable corresponding basis elements as well.
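To spell out this trivial case (a short verification, not needed later): since the \(e_{k}\)'s are disjoint, positive, of norm one, and sum to \(n^{1/p}\cdot\mathbf{1}\), each \(e_{k}\) must equal \(n^{1/p}\mathbf{1}_{A_{k}}\) for a measurable partition \(A_{1},...,A_{n}\) of \([0,1]\) with \(\mu(A_{k})=1/n\). Hence the \(\mathbb{R}^{n}\)-valued map \((e_{1},...,e_{n})\) takes the value \(n^{1/p}\) in exactly one coordinate and \(0\) elsewhere, each configuration on a set of measure \(1/n\); this distribution is the same for every such copy.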
In the \(L_{p}(L_{q})\) setting, similar results arise, except rather than comparing corresponding basis elements \(f_{i}(e_{1}),...,f_{i}(e_{n})\) of isometric copies \(f_{i}(E)\) of \(E\), equimeasurability results hold for the \(L_{q}\)-norms \(N[f_{i}(e_{j})]\) under similar conditions, with finite dimensional \(BL_{p}L_{q}\) lattices taking on the role that \(\ell_{p}^{n}\) plays in \(L_{p}\) spaces.
The following shows that equimeasurability plays a strong role in the approximate ultrahomogeneity of \(L_{p}(L_{q})\) by showing that any automorphism fixing \(\mathbf{1}\) preserves base-equimeasurability for characteristic functions:
**Proposition 3.1**.: _Suppose \(p\neq q\), and let \(T:L_{p}(L_{q})\to L_{p}(L_{q})\) be a lattice automorphism with \(T(\mathbf{1})=\mathbf{1}\). Then there exist a measurable function \(\phi:[0,1]^{2}\to[0,1]\) and a measure preserving transformation \(\psi\) of \([0,1]\) such that for a.e. \(x\in[0,1]\),
\(\phi(x,\cdot)\) is also a measure preserving transformation (inducing an isometry over \(L_{q}\)), and for all \(f\),_
\[Tf(x,y)=f(\psi(x),\phi(x,y)).\]
_Furthermore, for all measurable \(B_{1},...,B_{n}\subseteq[0,1]^{2}\) with \(\mathbf{1}_{B_{i}}\)'s mutually disjoint, \((\mathbf{1}_{B_{1}},...,\mathbf{1}_{B_{n}})\) and \((T\mathbf{1}_{B_{1}},...,T\mathbf{1}_{B_{n}})\) are base-equimeasurable._
Proof.: By the main result in [13], there exists a strongly measurable function \(\Phi\) from \([0,1]\) to the space of bounded linear operators on \(L_{q}\), a set isomorphism \(\Psi\) over \(L_{p}\) (see [13] for a definition of set isomorphisms), and some \(e(x)\in L_{p}\) related to the Radon-Nikodym derivative of \(\Psi\) such that
\[Tf(x)(y)=\Phi(x)(e(x)\Psi f(x))(y),\]
and for a.e. \(x\), \(\Phi(x)\) is a linear isometry over \(L_{q}\). Observe first that \(T\) sends any characteristic function \(\mathbf{1}_{A\times[0,1]}\in L_{p}(L_{q})\) constant over \(y\) to a characteristic function \(\mathbf{1}_{\psi(A)\times[0,1]}\) for some \(\psi(A)\subseteq[0,1]\); since \(\mathbf{1}_{A\times[0,1]}\) is constant over \(y\), we can just refer to it as \(\mathbf{1}_{A}\). Also, since \(T\) is a lattice isometry, \(\mu(A)=\mu(\psi(A))\), so \(\psi\) is measure preserving. Finally, observe that \(N[\mathbf{1}_{A}]=\mathbf{1}_{A}\). Thus, for any simple function \(g:=\sum c_{i}\mathbf{1}_{A_{i}}\in L_{p}(L_{q})_{+}\) constant over \(y\) with the \(A_{i}\)'s mutually disjoint, we have \(N[g]=g\), and \(Tg=g^{\prime}\) for some simple function \(g^{\prime}\) constant over \(y\). Then for all \(x\),
\[N[g^{\prime}](x)=N[Tg](x)=N[\Phi(x)(e(x)g^{\prime})](x)=|e(x)|\,N[\Phi(x)(g^{\prime})](x)=|e(x)|\,N[g^{\prime}](x)\]
It follows that \(|e(x)|=1\). We can thus adjust \(\Phi\) by multiplying by \(-1\) where \(e(x)=-1\). Note also that \(T\) acts as a lattice isometry over \(L_{p}\) when restricted to elements constant over \(y\), so by Banach's theorem in [1], on such elements the map \(Tf(x)\) can be interpreted as \(f(\psi(x))\), where \(\psi\) is a measure preserving transformation over \([0,1]\) inducing \(\Psi\). By Banach's theorem again, applied to each \(\Phi(x)\), \(T\) can be written as \(Tf(x,y)=e^{\prime}(x,y)f(\psi(x),\phi(x,y))\), with \(\phi(x,\cdot)\) a measure preserving transformation for a.e. \(x\). But since \(T\mathbf{1}=\mathbf{1}\), \(e^{\prime}(x,y)=1\) as well.
It remains to prove equimeasurability. Let \(\mathbf{1}_{\mathbf{B}}=(\mathbf{1}_{B_{1}},...,\mathbf{1}_{B_{n}})\), and observe that since for a.e. \(x\), \(\phi(x,\cdot)\) is a measure preserving transformation inducing a lattice isometry over \(L_{q}\), it follows that
\[N^{q}[\mathbf{1}_{B_{i}}](x)=\mu(y:(x,y)\in B_{i})=\mu(y:(x,\phi(x,y))\in B_{ i}),\]
while
\[N^{q}[T\mathbf{1}_{B_{i}}](x)=\mu(y:(\psi(x),\phi(x,y))\in B_{i})\] \[=\mu(y:(\psi(x),y)\in B_{i})=N^{q}[\mathbf{1}_{B_{i}}](\psi(x)).\]
Thus for each \(A=\prod_{i}A_{i}\) with \(A_{i}\subseteq[0,1]\) measurable, since \(\psi\) is also a measure preserving transformation,
\[\mu(x:N^{q}[\mathbf{1}_{\mathbf{B}}](x)\in A)=\mu(x:N^{q}[\mathbf{1}_{\mathbf{B}}](\psi(x))\in A)=\mu(x:N^{q}[T\mathbf{1}_{\mathbf{B}}](x)\in A),\]
and we are done.
The following theorem describes a comparable equimeasurability property of certain copies of \(L_{p}L_{q}\) in \(L_{p}(L_{q})\) for any \(1\leq p\neq q<\infty\):
**Theorem 3.2**.: _Let \(1\leq p\neq q<\infty\), and suppose that \(f_{i}:E\to L_{p}(L_{q})\) are lattice embeddings with \(E\in B\mathcal{K}_{p,q}\) generated by a \((k,j)\)-indexed collection of atoms \(\mathbf{e}:=(e(k,j))_{k,j}\) with \(1\leq k\leq n\) and \(1\leq j\leq m_{k}\) as described in Proposition 2.2. Suppose also that \(f_{i}(\sum_{k,j}e(k,j))=\mathbf{1}\cdot\|\sum e(k,j)\|\) for \(i=1,2\). Then \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are base-equimeasurable._
Proof.: Let \(\eta=\|\sum_{k,j}e(k,j)\|\), and note first that each \(\frac{1}{\eta}f_{i}(e(k,j))\) is of the form \(\mathbf{1}_{A_{i}(k,j)}\) for some measurable \(A_{i}(k,j)\subseteq[0,1]^{2}\). Second, \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](s)=\mu(A_{i}(k,j)(s))\) with \(A_{i}(k,j)(s)\subseteq[0,1]\) measurable for a.e. \(s\), so by Proposition 2.2, for each fixed \(k\) and each \(j,j^{\prime}\), \(\mu(A_{i}(k,j)(s))=\mu(A_{i}(k,j^{\prime})(s))=\frac{1}{m_{k}}\mathbf{1}_{A_{ i}(k)}(s)\) with \(A_{i}(1),...,A_{i}(n)\subseteq[0,1]\) almost disjoint. It follows that for each appropriate \(k,j\), \(\frac{1}{\eta}=\frac{1}{m_{k}^{1/q}}\mu(A_{i}(k))^{1/p}\), so \(\mu(A_{i}(k))=\left(\frac{m_{k}^{1/q}}{\eta}\right)^{p}\).
To show equimeasurability, observe that for a.e. \(s\), we have \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](s)=\frac{1}{m_{k}}\) if \(s\in A_{i}(k)\), and \(0\) otherwise. Let \(\mathbf{B}\subseteq\prod_{k}\mathbb{R}^{m_{k}}\) be a measurable set. Note then that any \((k,j)\)-indexed sequence \((N[f_{i}(\mathbf{e})](s))\) is of the form \(\mathbf{c_{s}^{i}}\in\prod_{k}\mathbb{R}^{m_{k}}\) with \(c_{s}^{i}(k,j)=\eta\left(\frac{1}{m_{k}}\right)^{1/q}\) for all \(j\) and some unique \(k\), and \(c_{s}^{i}(k,j)=0\) otherwise. It follows then that for some \(I\subseteq 1,...,n\),
\[\mu(s:\mathbf{c_{s}^{i}}\in\mathbf{B})=\sum_{k\in I}\mu(A_{i}(k))=\sum_{k\in I}\bigg{(}\frac{m_{k}^{1/q}}{\eta}\bigg{)}^{p}.\]
Since the above holds independent of our choice of \(i\), we are done.
**Remark 3.3**.: The above proof shows much more than base-equimeasurability for copies of \(B\mathcal{K}_{p,q}\) lattices in \(L_{p}(L_{q})\). Indeed, if \(\mathbf{1}\in E=<(e(k,j))_{k,j}>\) with \(E\in B\mathcal{K}_{p,q}\), then each atom is in fact base-simple, and \(\sum e(k,j)=\eta\cdot\mathbf{1}\) where \(\eta=(\sum_{k}m_{k}^{p/q})^{1/p}\). Furthermore, there exist measurable sets \(A(1),...,A(n)\) partitioning \([0,1]\) with \(\mu(A(k))=\frac{m_{k}^{p/q}}{\eta^{p}}\) such that \(N[e(k,j)]=\frac{\eta}{m_{k}^{1/q}}\mathbf{1}_{A(k)}\). Based on this, we can come up with a "canonical" representation of \(E\), with \(e(k,j)\mapsto\eta\cdot\mathbf{1}_{W_{k}\times V_{k,j}}\), where
\[W_{k}=\big{[}\sum_{l=1}^{k-1}\mu(A(l)),\sum_{l=1}^{k}\mu(A(l))\big{]}\text{, and }V_{k,j}=\bigg{[}\frac{j-1}{m_{k}},\frac{j}{m_{k}}\bigg{]}.\]
This canonical representation will become relevant in later results.
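As a quick sanity check of this representation (a worked instance, not used later), take \(n=2\), \(m_{1}=1\), \(m_{2}=2\). Then \(\eta=(1+2^{p/q})^{1/p}\), \(\mu(A(1))=\eta^{-p}\), \(\mu(A(2))=2^{p/q}\eta^{-p}\), and the canonical representation sends
\[e(1,1)\mapsto\eta\,\mathbf{1}_{[0,\eta^{-p}]\times[0,1]},\qquad e(2,1)\mapsto\eta\,\mathbf{1}_{[\eta^{-p},1]\times[0,1/2]},\qquad e(2,2)\mapsto\eta\,\mathbf{1}_{[\eta^{-p},1]\times[1/2,1]}.\]
One checks directly that each image has norm one and that the three images sum to \(\eta\cdot\mathbf{1}\).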
Having characterized representations of lattices in \(B\mathcal{K}_{p,q}\), we now move towards proving the AUH result. Before the final proof, we use the following perturbation lemma.
**Lemma 3.4**.: _Let \(f:E\to L_{p}(L_{q})\) be a lattice embedding of a lattice \(E=<e_{1},...,e_{n}>\). Then for all \(\varepsilon>0\), there exists an embedding \(g:E\to L_{p}(L_{q})\) such that \(g(E)\) fully supports \(L_{p}(L_{q})\) and \(\|f-g\|<\varepsilon\)._
Proof.: Let \(M_{k}=supp\big{(}N[f(e_{k})]\big{)}\backslash supp\big{(}N[f(\sum_{l=1}^{k-1}e_{l})]\big{)}\). For each \(e_{k}\), we will construct \(e^{\prime}_{k}\) disjoint from \(f(E)\) with support in \(M_{k}\times[0,1]\). Let \(M^{\prime}\) be the set of points in \([0,1]^{2}\) lying outside the support of \(f(E)\), and observe that \(M^{\prime}\) can be partitioned by the sets \(M^{\prime}_{k}:=M^{\prime}\cap(M_{k}\times[0,1])\). Let
\[\eta_{k}(x,y)=\varepsilon^{1/q}\frac{N[f(e_{k})](x)}{\mu(M^{\prime}_{k}(x))^{ 1/q}}\mathbf{1}_{M^{\prime}_{k}}(x,y).\]
When \(\mu(M^{\prime}_{k}(x))=0\), let \(\eta_{k}(x,y)=0\) as well. Now, let \(g^{\prime}:E\to L_{p}(L_{q})\) be the lattice homomorphism induced by
\[g^{\prime}(e_{k})=(1-\varepsilon)^{1/q}f(e_{k})\cdot\mathbf{1}_{M_{k}}+\eta_{k}+f(e_{k})\cdot\mathbf{1}_{M^{c}_{k}}.\]
First, we show that \(g^{\prime}\) is an embedding. Observe that for each \(k\),
\[N^{q}[g^{\prime}(e_{k})](x)= \int\eta^{q}_{k}(x,y)+(1-\varepsilon)f(e_{k})^{q}(x,y)\ dy\] \[= \int\varepsilon\frac{N^{q}[f(e_{k})](x)}{\mu(M^{\prime}_{k}(x))} \cdot\mathbf{1}_{M^{\prime}_{k}}(x,y)+(1-\varepsilon)f(e_{k})^{q}(x,y)\ dy\] \[= \varepsilon N^{q}[f(e_{k})](x)+(1-\varepsilon)\int f(e_{k})^{q}(x,y)\ dy\] \[= \varepsilon N^{q}[f(e_{k})](x)+(1-\varepsilon)N^{q}[f(e_{k})](x)= N^{q}[f(e_{k})](x).\]
It easily follows that \(g^{\prime}(E)\) is in fact isometric to \(f(E)\), and thus to \(E\). Furthermore, for every \(k\),
\[\|f(e_{k})-g^{\prime}(e_{k})\|= \|\mathbf{1}_{M_{k}}[(1-(1-\varepsilon)^{1/q})f(e_{k})-\eta_{k}]\|\leq (1-(1-\varepsilon)^{1/q})+\varepsilon^{1/q}.\]
The above can get arbitrarily small.
Now, if \(supp\big{(}N[f(\sum_{k}e_{k})]\big{)}=[0,1]\), let \(g=g^{\prime}\), and we are done. Otherwise, let \(\tilde{M}=\cup_{k}M_{k}\), and observe that \(\sum g^{\prime}(e_{k})\) fully supports \(L_{p}(\tilde{M};L_{q})\). Observe also that \(L_{p}(L_{q})=L_{p}(\tilde{M};L_{q})\oplus_{p}L_{p}(\tilde{M}^{c};L_{q})\). However, both \(L_{p}(\tilde{M};L_{q})\) and \(L_{p}(\tilde{M}^{c};L_{q})\) are lattice isometric to \(L_{p}(L_{q})\) itself. So there exists an isometric copy of \(E\) fully supporting \(L_{p}(\tilde{M}^{c};L_{q})\). Let \(e^{\prime}_{1},...,e^{\prime}_{n}\in L_{p}(\tilde{M}^{c};L_{q})\) be the corresponding basic atoms of this copy, and let \(g(e_{i})=(1-\varepsilon^{p})^{1/p}g^{\prime}(e_{i})+\varepsilon\cdot e^{\prime}_{i}\). Then for \(x\in E\),
\[\|g(x)\|^{p}=(1-\varepsilon^{p})\|g^{\prime}(x)\|^{p}+\varepsilon^{p}\|x\|^{p}=\|x\|^{p}.\]
Using similar reasoning as in the definition of \(g^{\prime}\), one also gets \(\|g-g^{\prime}\|<(1-(1-\varepsilon^{p})^{1/p})+\varepsilon\), so \(g\) can also approximate \(f\) arbitrarily well.
Observe that the lemma above allows us to reduce the approximate homogeneity question down to cases where the copies of a \(B\mathcal{K}_{p,q}\) lattice fully support \(L_{p}(L_{q})\). Combined with Proposition 2.5, we can further reduce the possible scenarios to cases where for each \(i\), \(f_{i}(x)=\mathbf{1}\) for some \(x\in E\). It turns out these reductions are sufficient for constructing a lattice automorphism that makes the homogeneity diagram commute as desired:
**Theorem 3.5**.: _Suppose \(1\leq p\neq q<\infty\), and for \(i=1,2\), let \(f_{i}:E\to L_{p}(L_{q})\) be a lattice embedding with \(E:=<(e(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\) and \(1\leq k\leq n\) and \(1\leq j\leq m_{k}\). Suppose also that each \(f_{i}(E)\) fully supports \(L_{p}(L_{q})\). Then there exists a lattice automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(\phi\circ f_{1}=f_{2}\)._
Proof.: Let \(\eta=\|\sum_{k,j}e(k,j)\|\); by Proposition 2.5, we can assume that for both \(i\)'s, we have \(f_{i}(\sum_{k,j}e(k,j))=\eta\cdot\mathbf{1}\). For notation's sake, let \(e_{i}(k,j):=f_{i}(e(k,j))\). By Proposition 2.2, for each \(i\) there exist mutually disjoint sets \(A_{i}(1),...,A_{i}(n)\) partitioning \([0,1]\) such that for each \(1\leq j\leq m_{k}\), \(supp(N[e_{i}(k,j)])=A_{i}(k)\). In addition, the sets \(A_{i}(k,1),...,A_{i}(k,m_{k})\), where \(A_{i}(k,j):=supp(e_{i}(k,j))\), partition \(A_{i}(k)\times[0,1]\). It follows also from the statements in Remark 3.3 that \(\mu(A_{1}(k))=\mu(A_{2}(k))\) for each \(k\) and \(N^{q}[e_{i}(k,j)](x)=\frac{\eta^{q}}{m_{k}}\mathbf{1}_{A_{i}(k)}(x)\).
To prove the theorem, it is enough to generate lattice automorphisms \(\phi^{i}\) mapping each band \(\boldsymbol{\beta}(e_{i}(k,j))\) to a corresponding band \(\boldsymbol{\beta}(\mathbf{1}_{W_{k}\times V_{k,j}})\) where \(W_{k}\) and \(V_{k,j}\) are defined as in Remark 3.3, with \(\mathbf{1}_{A_{i}(k,j)}\mapsto\mathbf{1}_{W_{k}\times V_{k,j}}\).
To this end, we make a modified version of the argument in [7, Proposition 2.6] and adopt the notation of Proposition 2.5: construct lattice isometries \(\psi^{i}_{k,j}\) from \(L_{p}(A_{i}(k);L_{q}(V_{k,j}))\) to \(\boldsymbol{\beta}(e_{i}(k,j))\) with
\[\psi^{i}_{k,j}(f)(x,y)=f\bigg{(}x,\big{(}\widetilde{\mathbf{1}}_{A_{i}(k,j)} \big{)}_{x}(y)_{q}+\frac{j-1}{m_{k}}\bigg{)}\mathbf{1}_{A_{i}(k,j)}(x,y)\]
By similar reasoning as in the proof of Proposition 2.5, \(\psi^{i}_{k,j}\) is a lattice embedding. Surjectivity follows as well. Indeed, since \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](x)=\frac{1}{m_{k}}\), for a.e. \(x\in A_{i}(k)\) the function \(\big{(}\widetilde{\mathbf{1}}_{A_{i}(k,j)}\big{)}_{x}(y)_{q}+\frac{j-1}{m_{k}}\) maps \([0,1]\) continuously onto \(V_{k,j}\), with \(supp(e_{i}(k,j)(x,\cdot))\) mapped a.e. surjectively to \(V_{k,j}\). So \(\psi^{i}_{k,j}\)'s image is dense in \(\boldsymbol{\beta}(e_{i}(k,j))\).
Observe that \(\psi^{i}_{k,j}\) also preserves the random norm \(N\) along the base (that is, \(N[f]=N[\psi^{i}_{k,j}(f)]\)). Resultantly, the function \(\psi^{i}_{k}:=\oplus_{j}\psi^{i}_{k,j}\) mapping \(L_{p}(A_{i}(k),L_{q}(0,1))\) onto \(\oplus_{j}\boldsymbol{\beta}(e_{i}(k,j))\) is also a surjective lattice isometry. Indeed, for \(f=\sum_{1}^{m_{k}}f_{j}\) with \(f_{j}\in L_{p}(A_{i}(k);L_{q}(V_{k,j}))\), one gets
\[\|\psi_{k}^{i}(f)\| =\left|\left|N[\sum_{j}\psi_{k,j}^{i}(f_{j})]\right|\right|_{p}= \left|\left|\big{(}\sum_{j}N^{q}[\psi_{k,j}^{i}(f_{j})]\big{)}^{1/q}\right| \right|_{p}\] \[=\left|\left|\big{(}\sum_{j}N^{q}[f_{j}]\big{)}^{1/q}\right| \right|_{p}=\left|\left|N[\sum_{j}f_{j}]\right|\right|_{p}=\|f\|\]
Now let \(\psi^{i}=\oplus_{k}\psi_{k}^{i}\), and observe that given \(f=\sum_{1}^{n}f_{k}\) with \(f_{k}\in L_{p}(A_{i}(k),L_{q}(0,1))\), since the \(f_{k}\)'s are base disjoint, we have
\[\|\psi^{i}f\|^{p}=\sum_{1}^{n}\|\psi_{k}^{i}f_{k}\|^{p}=\sum_{1}^{n}\|f_{k}\|^ {p}=\|f\|^{p}.\]
Thus \(\psi^{i}\) is a lattice automorphism over \(L_{p}(L_{q})\) mapping each \(1_{A_{i}(k)\times V_{k,j}}\) to \(\mathbf{1}_{A_{i}(k,j)}\).
Use [5, Lemma 3.3] to construct a lattice isometry \(\rho_{i}:L_{p}\to L_{p}\) such that for each \(k\), \(\rho_{i}(\mathbf{1}_{W_{k}})=\mathbf{1}_{A_{i}(k)}\). By [1, Ch. 11 Theorem 5.1] this isometry is induced by a measure preserving transformation \(\bar{\rho}_{i}\) from \([0,1]\) to itself such that \(\rho^{i}(f)(x)=f(\bar{\rho}_{i}(x))\). It is easy to show that \(\rho_{i}\) induces a lattice isometry of \(L_{p}(L_{q})\) with \(f(x,y)\mapsto f(\bar{\rho}_{i}(x),y)\). In particular, we have \(N[\rho_{i}f](x)=N[f](\bar{\rho}_{i}(x))\), and \(\rho_{i}(\mathbf{1}_{W_{k}\times V_{k,j}})=\mathbf{1}_{A_{i}(k)\times V_{k,j}}\). Now let \(\phi^{i}(f)=(\psi^{i}\circ\rho^{i})(f)\); then \(\phi^{i}(\mathbf{1}_{W_{k}\times V_{k,j}})=\mathbf{1}_{A_{i}(k,j)}\), so \(\phi:=\phi^{2}\circ(\phi^{1})^{-1}\) is a lattice automorphism of \(L_{p}(L_{q})\) satisfying \(\phi\circ f_{1}=f_{2}\), and we are done.
Using the above, we can now show:
**Theorem 3.6**.: _For \(1\leq p\neq q<\infty\), the lattice \(L_{p}(L_{q})\) is AUH for the class \(B\mathcal{K}_{p,q}\)._
Proof.: Let \(f_{i}:E\to L_{p}(L_{q})\) be as required, and suppose \(\varepsilon>0\). Use Lemma 3.4 to get copies \(E^{\prime}_{i}\) of \(f_{i}(E)\) fully supporting \(L_{p}(L_{q})\) such that for each atom \(e_{k}\in E\) and corresponding atoms \(e_{k}^{i}\in E^{\prime}_{i}\), we have \(\|f_{i}(e_{k})-e_{k}^{i}\|<\varepsilon/2\). Now use Theorem 3.5 to generate a lattice automorphism \(\phi\) from \(L_{p}(L_{q})\) to itself such that \(\phi(e_{k}^{1})=e_{k}^{2}\). Then
\[\|\phi(f_{1}(e_{k}))-f_{2}(e_{k})\|\leq\|\phi(f_{1}(e_{k})-e_{k}^{1})\|+\|e_{k}^{2}-f_{2}(e_{k})\|<\varepsilon\]
**Remark 3.7**.: Observe that the doubly atomless \(L_{p}(L_{q})\) space is, up to lattice isometry, the unique separable \(BL_{p}L_{q}\) space that is AUH over \(B\mathcal{K}_{p,q}\). Indeed, this follows from the fact that such a space must be doubly atomless to begin with: let \(E\) be a one dimensional space generated by an atom \(e\) and suppose \(X\) is not doubly atomless. Suppose also that \(E\) is embedded by some \(f_{1}\) into a part of \(X\) supported by some \(L_{p}\) or \(L_{q}\) band, and on the other hand is embedded by some \(f_{2}\) into \(F:=\ell_{p}^{2}(\ell_{q}^{2})\) with \(f_{2}(e)\) a unit in \(F\). Then one cannot approximately extend \(f_{1}\) to a lattice embedding \(g:F\to X\) making the diagram almost commute.
One can also expand this approximate ultrahomogeneity to finitely generated \(BL_{p}L_{q}\) sublattices, with the weaker condition of almost commutativity on the generating elements: for any \(BL_{p}L_{q}\) sublattice \(E\) generated by elements \(<e_{1},...,e_{n}>_{L}\), for any \(\varepsilon>0\), and for all lattice embedding pairs \(f_{i}:E\to L_{p}(L_{q})\), there exists a lattice automorphism \(g:L_{p}(L_{q})\to L_{p}(L_{q})\) such that for all \(j=1,...,n\), \(\|g(f_{2}(e_{j}))-f_{1}(e_{j})\|<\varepsilon\).
**Theorem 3.8**.: _For all \(1\leq p\neq q<\infty\), the lattice \(L_{p}(L_{q})\) is AUH for the class of finitely generated \(BL_{p}L_{q}\) lattices._
Proof.: Let \(E=<e_{1},...,e_{n}>_{L}\), and let \(f_{i}:E\to L_{p}(L_{q})\) be lattice embeddings. We can assume that \(\|e_{k}\|\leq 1\) for each \(1\leq k\leq n\). By Proposition 2.1, \(E\) is the inductive limit of lattices in \(B\mathcal{K}_{p,q}\). Given \(\varepsilon>0\), pick a \(B\mathcal{K}_{p,q}\) lattice \(E^{\prime}=<e^{\prime}_{1},...,e^{\prime}_{m}>\subseteq E\) such that for each \(e_{k}\), there is some \(x_{k}\in B(E^{\prime})\) such that \(\|x_{k}-e_{k}\|<\frac{\varepsilon}{3}\). Each \(f_{i}|_{E^{\prime}}\) is an embedding into \(L_{p}(L_{q})\), so pick an automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(\|\phi\circ f_{1}|_{E^{\prime}}-f_{2}|_{E^{\prime}}\|<\frac{\varepsilon}{3}\). Then
\[\|\phi f_{1}(e_{k})-f_{2}(e_{k})\|\leq\|\phi f_{1}(e_{k}-x_{k})\|+\|\phi f_{1}( x_{k})-f_{2}(x_{k})\|+\|f_{2}(x_{k}-e_{k})\|<\varepsilon.\]
We can also expand homogeneity to include not just lattice embeddings but also disjointness preserving linear isometries, that is, if embeddings \(f_{i}:E\to L_{p}(L_{q})\) are not necessarily lattice homomorphisms but preserve disjointness, then there exists a disjointness preserving linear automorphism \(\phi\) over \(L_{p}(L_{q})\) satisfying almost commutativity:
**Corollary 3.9**.: \(L_{p}(L_{q})\) _is AUH over finitely generated \(BL_{p}L_{q}\) sublattices with disjointness preserving embeddings._
Proof.: Use the argument in [5, Proposition 3.2] to show that \(L_{p}(L_{q})\) is disjointness preserving AUH over \(B\mathcal{K}_{p,q}\). From there, proceed as in the argument in Theorem 3.8 to extend homogeneity over \(B\mathcal{K}_{p,q}\) to that over \(BL_{p}L_{q}\).
## 4. Approximate Ultrahomogeneity of \(L_{p}(L_{q})\) when \(p/q\notin\mathbb{N}\)
The above results largely focused on approximate ultrahomogeneity over \(BL_{p}L_{q}\) lattices. What can be said, however, of _sublattices_ of \(L_{p}L_{q}\) spaces? The answer to this question splits into two cases: the first is when \(p/q\notin\mathbb{N}\), and the second is when \(p/q\in\mathbb{N}\). We address the first case in this section. It turns out that if \(p/q\notin\mathbb{N}\), then \(L_{p}(L_{q})\) is AUH for the class of its finitely generated sublattices. The argument involves certain equimeasurability properties of copies of fixed finite dimensional lattices in \(L_{p}(L_{q})\). Throughout, we will refer to the class of sublattices of spaces in \(B\mathcal{K}_{p,q}\) as simply \(\mathcal{K}_{p,q}\), and let \(\overline{\mathcal{K}_{p,q}}\) be the class of finitely generated sublattices of \(L_{p}(L_{q})\).
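Note (an easy observation, recorded for orientation) that \(\overline{\mathcal{K}_{p,q}}\) is genuinely richer than \(B\mathcal{K}_{p,q}\): by the embedding result of Raynaud recalled above, for any \(p\leq r\leq q\) the lattices \(\ell_{r}^{n}\) occur as finite dimensional sublattices of \(L_{p}(L_{q})\), and for \(p<r<q\) such a lattice is not lattice isometric to any \(\bigoplus_{p}\ell_{q}^{m_{k}}\).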
The following result appeared as [7, Proposition 3.2], and is a multi-dimensional version based on Raynaud's proof for the case of \(n=1\) (see [11, Lemma 18]). The approach taken here is a multi-dimensional version of the proof of Lemma 2 in [8].
**Theorem 4.1**.: _Let \(r=p/q\notin\mathbb{N}\), and suppose \(f_{i}:E\to L_{p}(L_{q})\) are lattice isometric embeddings with \(E=<e_{1},...,e_{n}>\). Suppose also that \(f_{1}(x)=f_{2}(x)=\mathbf{1}\) for some \(x\in E_{+}\). Then \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are base-equimeasurable._
Throughout the proof, let \(C:=\mathbb{R}_{+}^{n}\), and let \(\mu\) denote Lebesgue measure on \([0,1]\). We first show the following:
**Lemma 4.2**.: _Suppose \(0<r\notin\mathbb{N}\), and \(\alpha,\beta\) are positive finite Borel measures on \(C\) such that for all \(\mathbf{v}\in C\) and all \(v_{0}>0\),_
\[\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\alpha(\mathbf{z})=\int_{C}|v_ {0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\beta(\mathbf{z})<\infty.\]
_Then \(\alpha=\beta\)._
Proof.: It is equivalent to prove that the signed measure \(\nu:=\alpha-\beta\) is \(0\). First, observe that \(|\nu|\leq\alpha+\beta\), so for any \(\mathbf{v}\geq 0\) and \(v_{0}>0\), \(\int|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d|\nu|(\mathbf{z})<\infty\).
Now, we show by induction on polynomial degree that for all \(k\in\mathbb{N}\), \(\mathbf{v}\geq 0\), and for all multivariate polynomials \(P(\mathbf{z})\) of degree \(k^{\prime}\leq k\),
\[(*)\qquad\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}P(\mathbf{z})\ d\nu(\mathbf{z})=0.\]
This is true for the base case \(k=0\) by assumption. Now assume it is true for \(k\in\mathbb{N}\) and let \(k^{\prime}:=\sum l_{i}\leq k\) with \(\mathbf{l}\in\mathbb{N}^{n}\). For notational ease, let \(\mathbf{z}^{\mathbf{l}}=z_{1}^{l_{1}}...z_{n}^{l_{n}}\). Then for each \(v_{i}\) and \(0<t<1\),
\[\int_{C}\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{z}+z_{i}t)^{r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\ d\nu(\mathbf{z})=0.\]
Now, if \(k+1<r\) and \(t\in(0,1)\), then
\[\left|\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{ z}+z_{i}t)^{r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\right|\leq\mathbf{z}^{ \mathbf{l}}z_{i}(r-k)(v_{0}+\mathbf{v}\cdot\mathbf{z}+v_{i})^{r-k-1}\] \[\leq \frac{r-k}{\mathbf{v}^{\mathbf{l}}v_{i}}(v_{0}+\mathbf{v}\cdot \mathbf{z}+v_{i})^{r}\]
Since in this case \(0<r-k-1<r\) and \(|\nu|<\infty\), the right-hand side must also be \(|\nu|\)-integrable. On the other hand, if \(k+1>r\), then we have
\[\left|\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{z}+v_{i}t)^{ r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\right|<|r-k|\frac{v_{0}^{r}}{ \mathbf{v}^{\mathbf{l}}v_{i}}\]
which is also \(|\nu|\)-integrable. So now we may let \(t\to 0\) and apply the dominated convergence theorem to get, for any \(k\in\mathbb{N}\) and for each \(1\leq i\leq n\):
\[\int_{C}\mathbf{z}^{\mathbf{l}}z_{i}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k-1}\ d \nu(\mathbf{z})=0,\]
since \(r\notin\mathbb{N}\). A similar argument, differentiating with respect to \(v_{0}\), can be made to show that
\[\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k-1}\ d\nu(\mathbf{z})=0\]
Taking linear combinations of the above yields line \((*)\) for \(k+1\), completing the induction.
Now for fixed \(\mathbf{v}>0\), \(v_{0}>0\) we define a signed measure \(\Lambda\) on \(C\), where for measurable \(B\subseteq\mathbb{R}^{n}_{+}\),
\[\Lambda(B)=\int_{\phi^{-1}(B)}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\nu( \mathbf{z}).\]
where \(\phi(\mathbf{z})=\frac{1}{v_{0}+\mathbf{v}\cdot\mathbf{z}}\mathbf{z}\). It is sufficient to show that \(\Lambda=0\). Observe first that \(\phi\) is continuous and injective; indeed, if \(\phi(\mathbf{z})=\phi(\mathbf{w})\), then it can be shown that \(\mathbf{v}\cdot\mathbf{w}=\mathbf{v}\cdot\mathbf{z}\). Thus \(\frac{\mathbf{w}}{v_{0}+\mathbf{v}\cdot\mathbf{z}}=\frac{\mathbf{z}}{v_{0}+ \mathbf{v}\cdot\mathbf{z}}\), implying that \(\mathbf{w}=\mathbf{z}\). Resultantly, \(\phi(B)\) for any Borel \(B\) is also Borel, hence we will have shown that for any such \(B\), \(\nu(B)=0\) as well, so \(\nu=0\).
Observe that by choice of \(\mathbf{v}>0\), and since \((v_{0}+\mathbf{v}\cdot\mathbf{z})>0\) for all \(\mathbf{z}\in\mathbb{R}^{n}_{+}\), we have
\[|\Lambda|(B)=\int_{\phi^{-1}(B)}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d|\nu|( \mathbf{z}).\]
Using simple functions and the definition of \(\Lambda\), one can show both that for each \(i\), we have
\[(**)\qquad m_{i}(k):=\int_{C}w^{k}_{i}\ d|\Lambda|(\mathbf{w})=\int_{C}(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}z^{k}_{i}\ d|\nu|(\mathbf{z})<\infty\]
and also that
\[\int_{C}w^{k}_{i}\ d\Lambda(\mathbf{w})=\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{ z}|^{r-k}z^{k}_{i}\ d\nu(\mathbf{z})=0,\]
More generally, if \(k=\sum_{i}l_{i}\), then
\[\int_{C}\mathbf{w}^{\mathbf{l}}\ d\Lambda(\mathbf{w})=\int_{C}\mathbf{z}^{ \mathbf{l}}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}\ d\nu(\mathbf{z})=0,\]
So it follows that \(\int_{C}P(\mathbf{w})\ d\Lambda(\mathbf{w})=0\) for all polynomials \(P(\mathbf{w})\).
Now if \(k>r\) and \(\nu\neq 0\), then since \(v_{i}>0\), we have
\[m_{i}(k) =\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}z_{i}^{k}\ d|\nu|(\mathbf{z})\leq\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}v_{i}^{-k}\ d|\nu|(\mathbf{z})\leq v_{i}^{-k}|\Lambda|(C)<\infty\]
so
\[m_{i}(k)^{-1/2k}\geq v_{i}^{1/2}|\Lambda|(C)^{-1/2k}\]
Thus for each \(1\leq i\leq n\), \(\sum_{k}m_{i}(k)^{-1/2k}=\infty\). So by [4, Theorem 5.2], \(|\Lambda|\) is the unique positive measure over \(C\) with moment values \(m_{i}(k)\). Since \(\int_{C}P(\mathbf{w})\ d\Lambda(\mathbf{w})=0\) for all polynomials \(P\), by \((**)\) the positive measure \(|\Lambda|+\Lambda\) yields the same moment values, so \(|\Lambda|+\Lambda=|\Lambda|\). It follows that \(\Lambda=0\), so \(\nu=0\).
Now we are ready to prove Theorem 4.1.
Proof.: For simplicity of notation, let \(F_{j}^{i}=N^{q}[f_{i}(e_{j})]\) and \(I=[0,1]\). By definition of \(N\), each \(F_{j}^{i}\) is a measurable function on the unit interval \(I\), on which \(\mu\) is Lebesgue measure. Define positive measures \(\alpha_{i}\) by
\[\alpha_{i}(B)=\mu(\{t\in I:\mathbf{F}^{i}(t)\in B\})=\mu((\mathbf{F}^{i})^{-1} (B)).\]
Now, for any measurable \(B\subseteq C\), we have
\[\int_{C}\mathbf{1}_{B}(\mathbf{z})\ da_{i}(\mathbf{z})=\alpha_{i}(B)=\mu(( \mathbf{F}^{i})^{-1}(B))=\int_{I}(\mathbf{1}_{B}\circ\mathbf{F}^{i})(t)\ dt\]
so for any simple function \(\sigma\) over \(C\),
\[\int_{C}\sigma(\mathbf{z})\ d\alpha_{i}=\int_{0}^{1}\sigma\circ\mathbf{F}^{i}( t)\ dt\]
Using simple functions to approximate \(|1+\mathbf{v}\cdot\mathbf{z}|^{r}\), and given that \(|1+\mathbf{v}\cdot\mathbf{z}|^{r}\) is in \(L_{1}(C,\alpha_{i})\), it follows that
\[\int_{C}|1+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\alpha_{i}(\mathbf{z})=\int_{0}^{ 1}|1+\mathbf{v}\cdot\mathbf{F}^{i}(t)|^{r}\ dt.\]
It is sufficient now to show that for all \(\mathbf{v}\in\mathbb{R}_{+}^{n}\),
\[\int_{0}^{1}|1+\mathbf{v}\cdot\mathbf{F}^{1}(t)|^{r}\ dt=\int_{0}^{1}|1+ \mathbf{v}\cdot\mathbf{F}^{2}(t)|^{r}\ dt.\]
For \(i,j\) and \(s\in[0,1]\), let \(M_{i}^{j}=\{(s,t):(s,t)\in supp(f_{i}(e_{j}))\}\), and let \(M_{i}^{j}(s)=\{t:(s,t)\in M_{i}^{j}\}\). By assumption, \(x=\sum_{j}x_{j}e_{j}\) with \(x_{j}>0\), so \(\mathbf{1}=N^{q}[f_{i}(x)]=\sum_{j}x_{j}^{q}F_{j}^{i}\). Therefore, since each \(f_{i}\) is an embedding, for all \(\mathbf{c}\in\mathbb{R}_{+}^{n}\),
\[\|\sum_{j}c_{j}e_{j}\| =\left|\left|\big{(}\sum_{j}c_{j}^{q}F_{j}^{i}(s)\big{)}^{1/q}\right|\right|_{p}\] \[=\left|\left|\big{(}\mathbf{1}+\sum_{j}(c_{j}^{q}-x_{j}^{q})F_{j}^{i}(s)\big{)}^{1/q}\right|\right|_{p}\]
Let \(v_{j}:=c_{j}^{q}-x_{j}^{q}\): then in particular it follows that for all \(\mathbf{v}\geq 0\), we have
\[\int_{0}^{1}\left(1+\mathbf{v}\cdot\mathbf{F}^{1}(s)\right)^{p/q}ds=\int_{0}^ {1}\left(1+\mathbf{v}\cdot\mathbf{F}^{2}(s)\right)^{p/q}ds.\]
The same equality with \(1\) replaced by any \(v_{0}>0\) follows by homogeneity, so by Lemma 4.2 we can conclude that \(\alpha_{1}=\alpha_{2}\); hence \(\mathbf{F}^{1}\) and \(\mathbf{F}^{2}\) are equimeasurable.
Using Theorem 4.1, we can uniquely characterize lattices in \(\mathcal{K}_{p,q}\) in a way that parallels Proposition 2.2.
**Theorem 4.3**.: _Suppose that \(p/q\notin\mathbb{N}\), and let \(E\subseteq L_{p}(L_{q})\) with \(E=<e_{1},...,e_{m}>\). Then the following hold:_
* \(E\in\mathcal{K}_{p,q}\) _iff there exist mutually disjoint measurable functions_ \(\phi(k,j)\in S(L_{p}(L_{q}))_{+}\)_, with_ \(1\leq j\leq m\) _and_ \(1\leq k\leq L\)_, such that for each_ \(j\)_,_ \(e_{j}\in<(\phi(k,j))_{k}>\cong\ell_{p}^{L}\)_, and_ \(<(\phi(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\)_._
* _Suppose_ \(f_{i}:E\to L_{p}(L_{q})\) _is a lattice embedding with_ \(i=1,2\) _and_ \(E\in\mathcal{K}_{p,q}.\) _Then there exist embeddings_ \(f_{i}^{\prime}:E^{\prime}\to L_{p}(L_{q})\) _extending_ \(f_{i}\) _such that_ \(E^{\prime}\in B\mathcal{K}_{p,q}\)_._
Proof.: For part 1, clearly the reverse direction is true. To prove the main direction, we can suppose that \(E\) fully supports \(L_{p}(L_{q})\). If not, recall that the band generated by \(E\) is itself doubly atomless, and hence is lattice isometric to \(L_{p}(L_{q})\) itself. Thus, if under these conditions there is a \(BL_{p}L_{q}\) sublattice extending \(E\) as in the statement of the theorem, the same holds in general.
By Proposition 2.5, we can also suppose that \(\sum_{j}e_{j}=\eta\cdot\mathbf{1}\). Now, since \(E\in\mathcal{K}_{p,q}\), there is an embedding \(\psi:E\to\widetilde{E}\in B\mathcal{K}_{p,q}\) such that each \(\psi(e_{j})=\sum_{k}x(k,j)\tilde{e}(k,j)\) is a positive combination of the atoms \(\tilde{e}(k,j)\) of \(\widetilde{E}\). Without loss of generality we may also drop any \(\tilde{e}(k,j)\)'s disjoint from \(\psi(E)\) and assume that \(\psi(E)\) fully supports \(\widetilde{E}\). Now \(\widetilde{E}\) is a \(B\mathcal{K}_{p,q}\) lattice admitting a canonical
representation in \(L_{p}(L_{q})\) as described in Theorem 3.2 and Remark 3.3.
So we can assume that \(\psi\) embeds \(E\) into \(L_{p}(L_{q})\) in such a way that \(\psi(E)\) fully supports it and each \(\psi(e_{j})\) is both simple and base-simple. Now, use Proposition 2.5 to compose \(\psi\) with an automorphism over \(L_{p}(L_{q})\) so that \(\psi(\sum e_{j})=\eta\cdot\mathbf{1}\), in a way that preserves both simplicity and base-simplicity. By Theorem 4.1, \(\psi(\mathbf{e})\) and \(\mathbf{e}\) are base-equimeasurable. Since the \(\psi(e_{j})\)'s are base-simple, there exist tuples \(\mathbf{s^{1}},...,\mathbf{s^{L}}\in\mathbb{R}^{m}\) such that for a.e. \(t\in[0,1]\), there is some \(k\leq L\) such that \(N[\psi(\mathbf{e})](t)=\mathbf{s^{k}}\). By equimeasurability, the same is true for \(N[\mathbf{e}](t)\).
Let \(\mathbf{S^{k}}=\{t:N[\mathbf{e}](t)=\mathbf{s^{k}}\}\), and let \(S_{j}^{k}=(\mathbf{S^{k}}\times[0,1])\cap supp(e_{j})\). Let \(\overline{\mathbf{S^{k}}}=\{t:N[\psi(\mathbf{e})](t)=\mathbf{s^{k}}\}\) with \(\overline{S}_{j}^{k}\) defined similarly. Note that each \(\mathbf{1}_{S_{j}^{k}}\) is also base-characteristic, as \(N[\mathbf{1}_{S_{j}^{k}}]=c_{j}^{k}\mathbf{1}_{\mathbf{S^{k}}}\) for some \(c_{j}^{k}>0\), so for fixed \(k\) and for any \(j,j^{\prime}\leq m\), we must have that \(N[\mathbf{1}_{S_{j}^{k}}]\) and \(N[\mathbf{1}_{S_{j^{\prime}}^{k}}]\) are scalar multiples of each other. Thus for each appropriate pair \((k,j)\) with \(s_{j}^{k}>0\), define \(\phi(k,j)\) by \(\frac{\mathbf{1}_{S_{j}^{k}}}{\|\mathbf{1}_{S_{j}^{k}}\|}\). By definition of \(\mathbf{S^{k}}\), for any \(k\neq k^{\prime}\) and any appropriate \(j,j^{\prime}\), \(\phi(k,j)\) and \(\phi(k^{\prime},j^{\prime})\) are base disjoint, and \(N[\phi(k,j)]=N[\phi(k,j^{\prime})]\). Thus by Proposition 2.2, \(<(\phi(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\).
To prove part 2, observe first that we have already essentially proven part 2 in the case that \(f_{1}=Id\) and \(f_{2}=\psi\). To show the general case, we first assume that for each \(i\), \(\sum_{j}f_{i}(e_{j})\) is a multiple of \(\mathbf{1}\). Now, by Theorem 4.1, \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are also base-equimeasurable, and by the procedure for part 1, we also know that each \(f_{i}(e_{j})\) is base-simple. Define \(\mathbf{s^{1}},...,\mathbf{s^{L}}\) as above, and let \(\mathbf{S^{k}}(i)=\{t:N[f_{i}(\mathbf{e})](t)=\mathbf{s^{k}}\}\). Define similarly \(S_{j}^{k}(i)\) and the associated characteristic functions \(\phi_{i}(k,j)\) for appropriate pairs \(k,j\) such that \(1\leq k\leq L\) and \(s_{j}^{k}:=\|\phi_{i}(k,j)\wedge f_{i}(e_{j})\|>0\). Note first that
\[f_{i}(e_{j})=\sum_{k:s_{j}^{k}>0}s_{j}^{k}\phi_{i}(k,j).\]
Second, observe that by equimeasurability, the eligible pairs \((k,j)\) are the same for \(i=1,2\). Let \(E_{i}^{\prime}=<(\phi_{i}(k,j))_{k,j}>\). Clearly \(E_{i}^{\prime}\in B\mathcal{K}_{p,q}\), and since the eligible pairs \((k,j)\) are the same, \(E_{1}^{\prime}\) and \(E_{2}^{\prime}\) are isometric to each other. Let \(E^{\prime}\) be one of the \(E_{i}^{\prime}\)'s and let \(f_{i}^{\prime}:E^{\prime}\to L_{p}(L_{q})\) be the obvious embedding mapping \(E^{\prime}\) onto \(E_{i}^{\prime}\), and we are done.
From here, we can now easily extend Theorem 3.5 to lattices in \(\mathcal{K}_{p,q}\):
**Corollary 4.4**.: _Suppose \(p/q\notin\mathbb{N}\) and suppose \(f_{i}:E\to L_{p}(L_{q})\) are lattice embeddings from \(E\in\mathcal{K}_{p,q}\) with \(f_{i}(E)\) fully supporting \(L_{p}(L_{q})\). Then there exists a lattice automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(f_{2}=\phi\circ f_{1}\)._
Proof.: Use Theorem 4.3 to generate a \(B\mathcal{K}_{p,q}\) lattice \(E^{\prime}\) containing \(E\) and lattice embeddings \(f^{\prime}_{i}:E^{\prime}\to L_{p}(L_{q})\) such that \(f^{\prime}_{i}|_{E}=f_{i}\). Clearly each \(f^{\prime}_{i}(E^{\prime})\) fully supports \(L_{p}(L_{q})\). Now apply Theorem 3.5 to generate an automorphism \(\phi\) over \(L_{p}(L_{q})\) with \(\phi\circ f^{\prime}_{1}=f^{\prime}_{2}\). Clearly \(\phi\circ f_{1}=f_{2}\) as well.
When \(p/q\notin\mathbb{N}\), using Theorem 4.3, we can show that the same holds with the more general class \(\mathcal{K}_{p,q}\). However, we can make an even stronger claim by showing that homogeneity holds for any finite dimensional sublattice of \(L_{p}(L_{q})\). This is done using the following result, which gives a standard way of approximating finite dimensional sublattices of \(L_{p}(L_{q})\) with lattices in \(\mathcal{K}_{p,q}\).
**Lemma 4.5**.: _Suppose \(p/q\notin\mathbb{N}\), and let \(f_{i}:E\to L_{p}(L_{q})\) be embeddings with \(E=<e_{1},...,e_{n}>\). Then for all \(\varepsilon>0\), there exists a \(\mathcal{K}_{p,q}\) lattice \(E^{\prime}=<e^{\prime}_{1},...,e^{\prime}_{n}>\) and embeddings \(g_{i}:E^{\prime}\to L_{p}(L_{q})\) such that \(g_{i}(E^{\prime})\) fully supports \(L_{p}(L_{q})\) and for each \(1\leq k\leq n\), \(\|f_{i}(e_{k})-g_{i}(e^{\prime}_{k})\|<\varepsilon\)._
Proof.: We can assume each \(f_{i}(E)\) fully supports \(L_{p}(L_{q})\): given \(\varepsilon>0\), use Lemma 3.4 to get copies of \(E\) sufficiently close to each \(f_{i}(E)\) with full support. We then also assume that \(f_{i}(\sum_{1}^{n}e_{k})=\mathbf{1}\) using Proposition 2.5.
By Theorem 4.1, \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are base-equimeasurable. In particular, given any measurable \(C\subseteq\mathbb{R}^{n}\), one has \(\mu(t:N[f_{1}(\mathbf{e})](t)\in C)=\mu(t:N[f_{2}(\mathbf{e})](t)\in C)\). Now pick an almost disjoint partition \(C_{1},...,C_{M}\) of \(S(\ell_{1}^{n})\), where each \(C_{k}\) is closed, has non-empty relative interior, and has diameter less than \(\frac{\varepsilon}{2n}\). Let \(D^{i}_{k}=\{t:N[f_{i}(\mathbf{e})](t)\in C_{k}\backslash\cup_{j=1}^{k-1}C_{j}\}\). Then by equimeasurability, \(\mu(D^{1}_{k})=\mu(D^{2}_{k})\). For each \(k\), pick some \(\mathbf{s}^{k}=(s^{k}_{1},...,s^{k}_{n})\in C_{k}\), and for each \(x\in D^{i}_{k}\), let
\[\overline{e}^{i}_{j}(x,y)=\frac{s^{k}_{j}}{N[f_{i}(e_{j})](x)}f_{i}(e_{j})(x,y).\]
Observe that \(\|\sum_{j}\overline{e}^{i}_{j}-\sum_{j}f_{i}(e_{j})\|<\varepsilon\), and \(N[\overline{e}^{i}_{j}](x)=s^{k}_{j}\) for \(x\in D^{i}_{k}\).
Consider now the lattice \(E^{\prime}=<\overline{e}^{1}_{1},...,\overline{e}^{1}_{n}>\). Now, for any linear combination \(\sum a_{j}\overline{e}^{i}_{j}\), we have, as in the argument in Proposition 2.5, that
\[\|\sum_{j}a_{j}\overline{e}^{i}_{j}\|^{p}=\sum_{k=1}^{M}\mu(D^{i}_{k})\bigg{(}\sum_{j}|a_{j}|^{q}(s^{k}_{j})^{q}\bigg{)}^{p/q}\]
implying that \(\|\sum a_{j}\overline{e}^{1}_{j}\|=\|\sum a_{j}\overline{e}^{2}_{j}\|\). It follows both that \(E^{\prime}\) embeds into \(\ell_{p}^{M}(\ell_{q}^{n})\), implying that it is a \(\mathcal{K}_{p,q}\) lattice, and it is isometric to the lattice generated by the \(\overline{e}^{2}_{j}\)'s. Let \(e^{\prime}_{j}=\overline{e}^{1}_{j}\), and define \(g_{i}:E^{\prime}\to L_{p}(L_{q})\) as the maps generated by \(g_{i}(e^{\prime}_{j})=\overline{e}^{i}_{j}\). Clearly these are lattice embeddings and \(\|f_{i}(e_{j})-g_{i}(e^{\prime}_{j})\|<\varepsilon\).
**Theorem 4.6**.: _For all \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the lattice \(L_{p}(L_{q})\) is AUH for the class of finite dimensional sublattices of \(L_{p}L_{q}\) lattices._
Proof.: It is sufficient to prove the result when \(E\) is generated by its atoms. Let \(f_{i}:E\to L_{p}(L_{q})\) be two embeddings with \(E=<e_{1},...,e_{n}>\). Use Lemma 4.5 to find \(g_{i}:E^{\prime}\to L_{p}(L_{q})\), with \(E^{\prime}:=<e_{1}^{\prime},...,e_{n}^{\prime}>\in\mathcal{K}_{p,q}\), \(\|g_{i}(e_{k}^{\prime})-f_{i}(e_{k})\|<\varepsilon/2\), and each \(g_{i}(E^{\prime})\) fully supporting \(L_{p}(L_{q})\). Then by Corollary 4.4, there exists an automorphism \(\phi:L_{p}(L_{q})\to L_{p}(L_{q})\) such that \(\phi\circ g_{1}=g_{2}\). Note then that \(\|\phi(f_{1}(e_{k}))-f_{2}(e_{k})\|\leq\|\phi(f_{1}(e_{k})-g_{1}(e_{k}^{\prime}))\|+\|f_{2}(e_{k})-g_{2}(e_{k}^{\prime})\|<\varepsilon\).
In a manner similar to that of Theorem 3.8, we can also extend the AUH property to finitely generated sublattices of \(L_{p}(L_{q})\):
**Theorem 4.7**.: _For all \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the lattice \(L_{p}(L_{q})\) is AUH for the class \(\overline{\mathcal{K}_{p,q}}\) of its finitely generated sublattices._
Proof.: Suppose \(E\subseteq L_{p}(L_{q})\) is finitely generated. Then since \(E\) is order continuous and separable, it is also the inductive limit of finite dimensional sublattices, so pick a finite dimensional \(E^{\prime}\) with elements sufficiently approximating the generating elements of \(E\), and proceed with the same proof as in Theorem 3.8.
The argument used in Corollary 3.9 can also be used to show:
**Corollary 4.8**.: _For \(p/q\notin\mathbb{N}\), \(L_{p}(L_{q})\) is disjointness preserving AUH over \(\overline{\mathcal{K}_{p,q}}\)._
**Remark 4.9**.: \(L_{p}(L_{q})\) for \(p/q\notin\mathbb{N}\) is AUH over the entire class of its finitely generated sublattices, a property which is equivalent to such a class being a metric _Fraisse class_ with \(L_{p}(L_{q})\) as its _Fraisse limit_. Recall that a class \(\mathcal{K}\) of finitely generated lattices is _Fraisse_ if it satisfies the following properties:
1. _Hereditary Property_ (HP): \(\mathcal{K}\) is closed under finitely generated sublattices.
2. _Joint Embedding Property_ (JEP): any two lattices in \(\mathcal{K}\) lattice embed into a third in \(\mathcal{K}\).
3. _Continuity Property_ (CP): the lattice operation symbols are continuous with respect to the Fraisse pseudo-metric \(d^{\mathcal{K}}\) in [2, Definition 2.11].
4. _Near Amalgamation Property_ (NAP): for any lattices \(E=<e_{1},...e_{n}>_{L}\), \(F_{1}\) and \(F_{2}\) in \(\mathcal{K}\) with lattice embeddings \(f_{i}:E\to F_{i}\), and for all \(\varepsilon>0\), there exists a \(G\in\mathcal{K}\) and embeddings \(g_{i}:F_{i}\to G\) such that \(\|g_{1}\circ f_{1}(e_{k})-g_{2}\circ f_{2}(e_{k})\|<\varepsilon\).
5. _Polish Property_ (PP): for each \(n\), the Fraisse pseudo-metric \(d^{\mathcal{K}}\) is separable and complete on \(\mathcal{K}_{n}\) (the \(\mathcal{K}\)-structures generated by \(n\) elements).
Now clearly the finitely generated sublattices of \(L_{p}(L_{q})\) fulfill the first two properties, and the third follows from the lattice and linear operations
having moduli of continuity independent of lattice geometry. In addition, if one can show that the class \(\mathcal{K}\) has the \(NAP\), has some separable \(X\) which is universal for \(\mathcal{K}\), and its NAP amalgamate lattices can be chosen so that they are closed under inductive limits, then one can prove that \(\mathcal{K}\) also has the Polish Property (a technique demonstrated in [14, Theorem 4.1] and more generally described in Section 2.5 of [9]). The main difficulty in proving that a class of lattices \(\mathcal{K}\) is a Fraisse class is in showing that it has the NAP. However, thanks to Theorem 4.7, we have
**Corollary 4.10**.: \(\overline{\mathcal{K}_{p,q}}\) _has the NAP._
Theorem 4.7 adds a new collection of AUH Banach lattices to those currently known, namely \(L_{p}\) for \(1\leq p<\infty\), the Gurarij M-space \(\mathcal{M}\) discovered in [5], and the Gurarij lattice discovered in [14].
However, if one considers classes of finite dimensional Banach spaces with Fraisse limits using linear instead of lattice embeddings, the only known separable AUH Banach spaces are the Gurarij space and \(L_{p}\) for \(p\neq 4,6,8,...\), and it is currently unknown if there are other Banach spaces that are AUH over their finite dimensional subspaces with linear embeddings. Certain combinations of \(p\) and \(q\) are also ruled out for \(L_{p}(L_{q})\) as a potential AUH candidate as discussed in Problem 2.9 of [5]: in particular, when \(1\leq p,q<2\), \(L_{p}(L_{q})\) cannot be linearly AUH.
## 5. Failure of homogeneity for \(p/q\in\mathbb{N}\)
Recall that when \(E=<e_{1},...,e_{n}>\in B\mathcal{K}_{p,q}\) is embedded into \(L_{p}(L_{q})\) through \(f_{1},f_{2}\), then we can achieve almost commutativity for any \(p\neq q\). However, the automorphism in Theorem 3.6 clearly preserves the equimeasurability of the generating basic atoms of \(f_{i}(E)\) as it fixes \(\mathbf{1}\).
In this section, we show that the results of Section 4 do not hold when \(p/q\in\mathbb{N}\). The first results in this section show that when some \(e\in L_{p}(L_{q})_{+}\) is sufficiently close to \(\mathbf{1}\), the automorphism originally used in the argument of Proposition 2.5 sending \(\mathbf{1}\) to \(e\) also perturbs selected functions piecewise continuous on their support in a controlled way. Second, Theorem 4.1 does not hold, and thus we cannot infer equimeasurability for arbitrary finite dimensional sublattices of \(L_{p}(L_{q})\). Finally, we use these results to strengthen the homogeneity property for any \(L_{p}(L_{q})\) lattice assumed to be AUH, and then show that when \(p/q\in\mathbb{N}\), \(L_{p}(L_{q})\) does not fulfill this stronger homogeneity property, and thus cannot be AUH.
**Lemma 5.1**.: _Let \(1\leq p\neq q<\infty\), and let \(<f_{1},...,f_{n}>\subseteq L_{p}(L_{q})\) be such that \(\sum f_{i}=\mathbf{1}\). Suppose also that for a.e. \(x\), \(f_{k}(x,\cdot)=\mathbf{1}_{[g_{k}(x),g_{k+1}(x)]}\) where each \(g_{k}\) has finitely many discontinuities. Let \(\varepsilon>0\), and let
\(e\in L_{p}(L_{q})_{+}\) fully support \(L_{p}(L_{q})\). Consider_
\[\phi(f)(x,y)=f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\widetilde{e}_{x}(y)_{q}}{N^ {q}[e](x)}\bigg{)}e(x,y)\]
_which is the lattice isometry defined in Proposition 2.5 mapping \(\mathbf{1}\) to \(e\)._
_Then there exists \(\delta\) such that if \(\|\mathbf{1}-e\|<\delta\), then for each \(k\), we have that \(\|\phi(f_{k})-f_{k}\|<\varepsilon\)._
Proof.: We can assume \(\varepsilon<1\). Let \(K\subseteq[0,1]\) be a closed set such that for \(1\leq k\leq n+1\), \(g_{k}|_{K}\) is continuous and \(\mu(K)>1-\varepsilon\). Pick \(\delta^{\prime}<\varepsilon\) such that for any \(x,x^{\prime}\in K\), if \(|x-x^{\prime}|<\delta^{\prime}\), then \(|g_{k}(x)-g_{k}(x^{\prime})|<\varepsilon/4\). Now, let \(\delta<{\delta^{\prime}}^{2p}\) be such that \(1-\frac{\delta^{\prime}}{4}\leq(1-\delta)^{p}<(1+\delta)^{p}<1+\frac{\delta^{ \prime}}{4}\), and suppose \(\|\mathbf{1}-e\|<\delta\). Observe that for each \(x\), we have \(\widetilde{N[\mathbf{1}-e]}(x)_{p}<\delta\). For each \(1\leq k\leq n\), let
\[\widetilde{f}_{k}(x,y)=f_{k}\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\widetilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}.\]
Observe that \(\|\widetilde{f}_{k}-\phi(f_{k})\|<\delta<\varepsilon/4\), so it is enough to show that \(\|\widetilde{f}_{k}-f_{k}\|\) is sufficiently small as well.
To this end, first note that since each \(f_{k}\) is being composed with increasing continuous functions in both arguments, each \(\widetilde{f}_{k}(x,\cdot)\) is also the characteristic function of an interval: indeed, we have piecewise continuous \(\widetilde{g}_{1},...,\widetilde{g}_{n+1}\) with \(\widetilde{g}_{k}(x):=g_{k}(\widetilde{N[e]}(x)_{p})\) and \(\widetilde{g}_{n+1}(x)=1\) such that for each \(k\), \(\widetilde{f}_{k}(x,y)=\mathbf{1}_{[\widetilde{g}_{k}(x),\widetilde{g}_{k+1}(x)]}(y)\). Also observe that for \(M:=\{x\in K:N[e-1](x)<\delta\}\), we have \(\mu(M)>1-\delta^{\prime}-\varepsilon\). In addition, as
\[\|f_{k}-\widetilde{f}_{k}\|^{p}=\|N[f_{k}-\widetilde{f}_{k}]\|_{p}^{p}=\int\mu (D(x))^{p}\ dx,\]
where \(D_{k}(x)=\{y:f_{k}(x,y)\neq\widetilde{f}_{k}(x,y)\}\). The above setup, in combination with the triangle inequality properties of \(N\), leads us to the following inequalities:
* For all \(0\leq x\leq 1\), \(|\widetilde{N[e]}(x)_{p}-x|<\delta\).
* For all \(x\in M\), \(|N[e](x)-1|<\delta\).
* For all \(x\in M\) and \(0\leq y\leq 1\), \(|\widetilde{e}_{x}(y)_{q}-y|<\frac{\delta^{\prime}}{2}\).
* For all \(x\in M\) and \(0\leq y\leq 1\), if \(y^{\prime}:=\frac{\widetilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\), then \(|y^{\prime}-e_{x}(y)_{q}|<\frac{\delta^{\prime}}{2}\) (which implies with the above that \(|y-y^{\prime}|<\delta^{\prime}\)).
We now show that the above implies that \(\mu(D_{k}(x))<2\varepsilon\). Observe first that for all \(x\in M\), if \(f_{k}(x,y)\neq\widetilde{f}_{k}(x,y)\) it must be because \(y\in[g_{k}(x),g_{k+1}(x)]\) but \(y^{\prime}\notin[\widetilde{g}_{k}(x),\widetilde{g}_{k+1}(x)]\), or vice versa. In either case, it can be shown that either \(|y-g_{k}(x)|<\delta+\frac{\varepsilon}{4}\) or \(|y-g_{k+1}(x)|<\delta+\frac{\varepsilon}{4}\). Suppose \(y\in[g_{k}(x),g_{k+1}(x)]\) and \(y^{\prime}<\widetilde{g}_{k}(x)\) (a similar proof will work in the case that \(y^{\prime}>\widetilde{g}_{k+1}(x)\)). Then
since \(y>g_{k}(x)\), \(|y-y^{\prime}|\leq\delta^{\prime}\), and \(|g_{k}(x)-\widetilde{g}_{k}(x)|<\frac{\varepsilon}{4}\),
\[0\leq y-g_{k}(x)=(y-y^{\prime})+(y^{\prime}-\widetilde{g}_{k}(x))+(\widetilde{ g}_{k}(x)-g_{k}(x))<\delta+\frac{\varepsilon}{4}.\]
It follows then that accounting for both ends of the interval \([g_{k}(x),g_{k+1}(x)]\) and for \(x\in M\), we have \(\mu(D_{k}(x))<2\varepsilon\). Consequently,
\[\|f_{k}-\widetilde{f}_{k}\|^{p}=\int_{M}\mu(D(x))^{p}\ dx+\int_{M^{c}}\mu(D(x) )^{p}\ dx<(2\varepsilon)^{p}+\delta^{p}<3\varepsilon^{p},\]
which can be made arbitrarily small.
**Theorem 5.2**.: _Let \(1\leq p\neq q<\infty\) and suppose \(L_{p}(L_{q})\) is AUH over its finite dimensional sublattices. Let \(f_{i}:E\to L_{p}(L_{q})\) be lattice embeddings with \(E=<e_{1},...,e_{n}>\) such that \(f_{i}(x)=\mathbf{1}\) for some \(x\in E\). Then for all \(\varepsilon>0\), there exists an automorphism \(\phi\) fixing \(\mathbf{1}\) such that \(\|\phi f_{1}-f_{2}\|<\varepsilon\)._
Proof.: Assume the above, and pick \(E^{\prime}=<e_{1}^{\prime},...,e_{m}^{\prime}>\subseteq L_{p}(L_{q})\), where \(e_{k}^{\prime}=a_{k}\cdot\mathbf{1}_{A_{k}\times B_{k}}\) with \(A_{k}\) and \(B_{k}\) intervals such that \(\sum_{k}\mathbf{1}_{A_{k}\times B_{k}}=\mathbf{1}\) and for each \(e_{k}\) there is \(x_{k}\in S(E^{\prime})_{+}\) such that \(\|x_{k}-f_{2}(e_{k})\|<\frac{\varepsilon}{4n}\).
Since \(L_{p}(L_{q})\) is AUH, there exists an automorphism \(\psi\) such that \(\|\psi f_{1}-f_{2}\|<\delta\), where \(\delta\) satisfies the conditions for \(\frac{\varepsilon}{4mn}\) and each of the \(e_{k}^{\prime}\)'s in \(E^{\prime}\) in Lemma 5.1. Now pick the automorphism \(\phi^{\prime}\) over \(L_{p}(L_{q})\) mapping \(\mathbf{1}\) to \(\psi f_{1}(x)\) as defined in Lemma 5.1. It follows that for each \(e_{k}^{\prime}\), \(\|\phi^{\prime}(e_{k}^{\prime})-e_{k}^{\prime}\|<\frac{\varepsilon}{4mn}\), so \(\|\phi^{\prime}(x_{k})-x_{k}\|<\frac{\varepsilon}{4n}\). Thus for each \(e_{k}\in E\),
\[\|\phi^{\prime}f_{2}(e_{k})-\psi f_{1}(e_{k})\|\leq \|\phi^{\prime}(f_{2}(e_{k})-x_{k})\|+\|\phi^{\prime}(x_{k})-x_{k}\|\] \[+ \|x_{k}-f_{2}(e_{k})\|+\|f_{2}(e_{k})-\psi f_{1}(e_{k})\|<\frac{ \varepsilon}{n},\]
Now let \(\phi={\phi^{\prime}}^{-1}\circ\psi\) to obtain the desired automorphism; then \(\|\phi f_{1}-f_{2}\|<\varepsilon\).
The above can be used to show that if \(L_{p}(L_{q})\) is AUH and \(f_{i}(E)\) contains \(\mathbf{1}\) for \(i=1,2\), then we can induce almost commutativity with automorphisms fixing \(\mathbf{1}\) as well. This will allow us to reduce possible automorphisms over \(L_{p}(L_{q})\) to those that in particular fix \(\mathbf{1}\). The importance of this result is that these particular homomorphisms fixing \(\mathbf{1}\) must always preserve base-equimeasurability for characteristic functions, as shown in Proposition 3.1. Thus a natural approach in disproving that \(L_{p}(L_{q})\) is AUH would involve finding sublattices containing \(\mathbf{1}\) which are lattice isometric but whose generating elements are not base-equimeasurable. The following results do exactly that:
**Lemma 5.3**.: _Lemma 4.2 fails when \(r:=p/q\in\mathbb{N}\). In particular, there exists a non-zero measure \(\nu:=\alpha-\beta\), with \(\alpha\) and \(\beta\) positive measures such that for all polynomials \(P\) of degree \(j\leq r\),_
\[\int_{0}^{1}P(x)\ d\nu(x)=0.\]
**Remark 5.4**.: It is already known that a counter-example exists for \(L_{r}(0,\infty)\) for all \(r\in\mathbb{N}\), with
\[d\nu(u)=e^{-u^{\frac{1}{4}}}\sin(u^{\frac{1}{4}})\ du\]
(see [12] and [8] for more details).
Here we provide another example over the unit interval:
Proof.: Fix such an \(r\), and define a polynomial \(g(x)\) of degree \(r+1\) with \(g(x)=\sum_{0}^{r+1}a_{i}x^{i}\) such that for all \(0\leq j\leq r\), \(\int_{0}^{1}x^{j}g(x)\ dx=0\). This can be done by finding a non-trivial \(a_{0},...,a_{r+1}\) in the null space of the \((r+1)\times(r+2)\) matrix \(A\) with \(A(i,j)=\frac{1}{i+j+1}\). Then let \(d\nu(x)=g(x)\ dx\). Let \(\alpha=\nu_{+}\) and \(\beta=\nu_{-}\). Clearly \(\alpha\) and \(\beta\) are finite positive Borel measures, but since \(g\neq 0\), \(\alpha\neq\beta\).
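The construction in the proof above is easy to carry out numerically. The sketch below (Python with NumPy; the specific choice of \(r\), the use of an SVD null vector, and the helper name `annihilating_polynomial` are my own illustrative choices, not part of the original argument) produces such a polynomial \(g\) and checks that its moments against \(x^{j}\) vanish for \(j\leq r\).

```python
import numpy as np

def annihilating_polynomial(r):
    """Coefficients a_0, ..., a_{r+1} of g(x) = sum_i a_i x^i with
    int_0^1 x^j g(x) dx = 0 for all j = 0, ..., r (cf. Lemma 5.3)."""
    # Moment matrix A[j, i] = int_0^1 x^j * x^i dx = 1/(i + j + 1),
    # of size (r+1) x (r+2): more columns than rows, so a non-trivial
    # null vector exists; take the right-singular vector belonging to
    # the smallest singular value.
    j = np.arange(r + 1)[:, None]
    i = np.arange(r + 2)[None, :]
    A = 1.0 / (i + j + 1)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

r = 3  # illustrative value of r = p/q
g = np.polynomial.Polynomial(annihilating_polynomial(r))

# Verify the vanishing moments using exact polynomial antiderivatives.
for j in range(r + 1):
    x_j = np.polynomial.Polynomial([0.0] * j + [1.0])
    print(f"int_0^1 x^{j} g(x) dx = {(g * x_j).integ()(1.0):+.2e}")
```

The signed measure \(d\nu(x)=g(x)\,dx\) then splits into the positive measures \(\alpha=\nu_{+}\) and \(\beta=\nu_{-}\) exactly as in the proof.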
**Lemma 5.5**.: _Let \(p/q\in\mathbb{N}\). Then there exists a two dimensional lattice \(E=<e_{1},e_{2}>\) and lattice embeddings \(g_{i}:E\to L_{p}(L_{q})\) with \(\mathbf{1}\in E\) such that \(g_{1}(\mathbf{e})\) and \(g_{2}(\mathbf{e})\) are not base-equimeasurable._
Proof.: Let \(f(x)\) be a polynomial of degree at least \(r+1\) as defined in Lemma 5.3 such that for all \(0\leq k\leq r\), \(\int_{0}^{1}t^{k}f(t)\ dt=0\), and \(\int_{0}^{1}|f(x)|\ dx=1\). Let \(h_{1}(x)=\frac{1}{2}+f(x)_{+}\), and let \(h_{2}(x)=\frac{1}{2}+f(x)_{-}\). Note that each \(h_{i}(x)>0\), and furthermore that \(\int_{0}^{1}h_{i}(t)\ dt=1\). Additionally, each map \(H_{i}(x)=\int_{0}^{x}h_{i}(t)\ dt\) is strictly increasing with \(H_{i}(0)=0\) and \(H_{i}(1)=1\). Now we will construct characteristic functions \(f_{j}^{i}\in L_{p}(L_{q})\) such that the linear map \(f_{j}^{1}\mapsto f_{j}^{2}\) induces an isometry, but \(\mathbf{f}^{1}\) and \(\mathbf{f}^{2}\) are not base-equimeasurable. From there, we let \(e_{j}=\frac{f_{j}^{1}}{\|f_{j}^{1}\|}\), and let \(g_{i}\) be the lattice isometry induced by \(g_{i}(e_{j})=\frac{f_{j}^{i}}{\|f_{j}^{i}\|}\).
To this end, let
\[F_{1}^{i}(x):=H_{i}^{-1}(x),\text{ and }F_{2}^{i}(x):=1-F_{1}^{i}(x).\]
Observe that \(F_{1}^{1}(x)\neq F_{1}^{2}(x)\). Indeed, one can show that the associated push forwards \(dF_{1\#}^{i}\mu\) for each \(F_{1}^{i}\) have the corresponding equivalence:
\[dF_{1\#}^{i}\mu(x)=h_{i}(x)\ dx\]
So \((F_{1}^{1},F_{2}^{1})\) and \((F_{1}^{2},F_{2}^{2})\) are not equimeasurable. However, for \(0\leq j\leq r\), \(u^{j}h_{i}(u)\ du=u^{j}\ dF_{1\#}^{i}(u)=F_{1}^{i}(x)^{j}\ dx\), so it follows from the construction of the \(h_{i}\)'s that
\[\int_{0}^{1}F_{1}^{1}(x)^{j}\ dx=\int_{0}^{1}F_{1}^{2}(x)^{j}\ dx.\]
Thus for any \(v_{1},v_{2}>0\), since \(F_{1}^{i}\) and \(F_{2}^{i}\) are both positive, we have
\[\int_{0}^{1}|v_{1}F_{1}^{1}(x)+v_{2}F_{2}^{1}(x)|^{r}\ dx=\int_{0}^{1}((v _{1}-v_{2})F_{1}^{1}(x)+v_{2})^{r}\ dx\] \[= \sum_{0}^{r}\binom{r}{j}(v_{1}-v_{2})^{j}v_{2}^{r-j}\int_{0}^{1}F_{ 1}^{1}(x)^{j}\ dx=\int_{0}^{1}|v_{1}F_{1}^{2}(x)+v_{2}F_{2}^{2}(x)|^{r}\ dx\]
To conclude the proof, let \(f_{1}^{i}(x,y)=\mathbf{1}_{[0,F_{1}^{i}(x)]}(y)\), and let \(f_{2}^{i}=\mathbf{1}-f_{1}^{i}\). Clearly \(N[f_{j}^{i}]=F_{j}^{i}\).
**Theorem 5.6**.: _If \(p/q\in\mathbb{N}\) and \(p\neq q\), then \(L_{p}(L_{q})\) is not AUH for the class of its finite dimensional sublattices._
Proof.: Fix \(p/q\in\mathbb{N}\), and let \(E\) be the \(2\)-dimensional lattice generated in Lemma 5.5, with \(f_{i}:E\to L_{p}(L_{q})\) embeddings mapping to copies of \(E=<e_{1},e_{2}>\) such that \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are not base-equimeasurable. In addition, by assumption \(\mathbf{1}\in E\). For notational ease, let \(F_{j}^{i}=N[f_{i}(e_{j})]\).
Suppose for the sake of contradiction that \(L_{p}(L_{q})\) is AUH. Pick some measurable \(C\subseteq[0,1]^{2}\) and \(\varepsilon>0\) such that
\[*\quad\mathbf{F}_{\#}^{2}\mu(C)>\mathbf{F}_{\#}^{1}\mu(C+\varepsilon)+\varepsilon,\]
where
\[C+\varepsilon=\{\mathbf{t}\in[0,1]^{2}:\|\mathbf{t}-\mathbf{s}\|_{\infty}< \varepsilon\text{ for some }\mathbf{s}\in C\}.\]
By Theorem 5.2, there is some lattice automorphism \(\phi:L_{p}(L_{q})\to L_{p}(L_{q})\) fixing \(\mathbf{1}\) such that \(\|\phi\circ f_{1}-f_{2}\|<\varepsilon^{2}\). Let \(\phi F_{j}^{i}=N[\phi f_{i}(e_{j})]\). By Proposition 3.1, \(\phi\) preserves base-equimeasurability, so for any measurable \(B\),
\[\phi\mathbf{F}_{\#}^{1}\mu(B)=\mathbf{F}_{\#}^{1}\mu(B).\]
By the properties of \(N\), we also have \(\|\phi F_{j}^{1}-F_{j}^{2}\|_{p}\leq\|\phi f_{1}(e_{j})-f_{2}(e_{j})\|\). It also follows that
\[\mu(t:\|\phi\mathbf{F}^{1}(t)-\mathbf{F}^{2}(t)\|_{\infty}>\varepsilon)<\varepsilon,\]
so \(\phi\mathbf{F}_{\#}^{1}\mu(C+\varepsilon)+\varepsilon>\mathbf{F}_{\#}^{2}\mu(C)\), but this contradicts the assumption (*). So Theorem 5.2 cannot apply, implying that \(L_{p}(L_{q})\) is not AUH as desired.
**Remark 5.7**.: For \(p/q\in\mathbb{N}\), \(L_{p}(L_{q})\) is the unique lattice that is separably AUH over finitely generated \(BL_{p}L_{q}\) spaces, since up to isometry it is the unique doubly atomless \(BL_{p}L_{q}\) space. In light of Theorem 5.6, this implies that the class of finitely generated sublattices of \(L_{p}(L_{q})\) is not a Fraisse class as defined in [2], as \(L_{p}(L_{q})\) is the only possible candidate as a Fraisse limit.
In particular, \(\overline{\mathcal{K}_{p,q}}\) lacks the NAP. Indeed, otherwise, one can use that NAP with \(BL_{p}L_{q}\) amalgamate lattices and [7, Proposition 2.8] to situate a \(d^{\mathcal{K}}\)-Cauchy sequence into a Cauchy sequence of generating elements in an ambient separable \(BL_{p}L_{q}\) lattice. Thus \(\overline{\mathcal{K}_{p,q}}\) would also have the Polish
Property, implying that \(\overline{\mathcal{K}_{p,q}}\) is a Fraisse class. Since the only possible candidate Fraisse limit space is \(L_{p}(L_{q})\) itself, this would contradict Theorem 5.6.
| For \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the lattice \(L_{p}(L_{q})\) is approximately ultrahomogeneous (AUH). This fails when \(p/q\in\mathbb{N}\); however, for \(p\neq q\), \(L_{p}(L_{q})\) is AUH over the class of finitely generated \(BL_{p}L_{q}\) lattices. |
2309.00084 | $p$-Skwarczyński distance | We introduce a new distance on a domain $\Omega \subset \mathbb{C}^n$ using
the `minimizer' functions on $A^p(\Omega)$. We discuss its invariance,
completeness, and other aspects related to it. | Shreedhar Bhat | 2023-08-31T18:49:41 | http://arxiv.org/abs/2309.00084v1 | # \(p\)-Skwarczynski distance
###### Abstract
We introduce a new distance on a domain \(\Omega\subset\mathbb{C}^{n}\) using the 'minimizer' functions on \(\mathcal{A}^{p}(\Omega)\). We discuss its invariance, completeness, and other aspects related to it.
Footnote †: Mathematics Subject Classification. Primary 32F45; Secondary 32A36
## 1 Introduction
The study of invariant distances on complex domains has been a fundamental topic in complex analysis for decades. The Skwarczynski distance, introduced in 1980 [1], has been extensively studied due to its invariance with respect to biholomorphic functions and its close relationship to the Bergman metric [2], [3], [4], [5]. Recently, a similar distance, based on the Szego kernel, was defined by Krantz et al. in 2021 [6]. In this paper, we introduce a new distance on a bounded domain in \(\mathbb{C}^{n}\) using the 'minimizer' function on the \(p\)-Bergman space \(\mathcal{A}^{p}(\Omega)\). We investigate the properties of this new \(p\)-Skwarczynski distance: its invariance, completeness, and potential applications.
For a bounded domain \(\Omega\subset\mathbb{C}^{n}\), \(1\leq p<\infty\), the \(p\)-Bergman space is defined as
\[\mathcal{A}^{p}(\Omega)=\{f\in\mathcal{L}^{p}(\Omega):\text{$f$ is holomorphic in $\Omega$}\}\]
Define
\[m_{p}(z_{0})=\inf\{\|f\|_{p}:f\in\mathcal{A}^{p}(\Omega),f(z_{0})=1\}\]
From [7, Proposition 2.4, 2.5], we know that there exists a unique such function which minimizes the above norm. Let the unique minimizer function be \(m_{p}(\cdot,z_{0})\). Define
\[\text{the $p$-Bergman kernel $K_{p}(z_{0})=m_{p}(z_{0})^{-p}$};\]
\[\text{and the off-diagonal $p$-Bergman kernel $K_{p}(\cdot,z_{0})=K_{p}(z_{0})m_{p}(\cdot,z_{0})$}\]
The above function mimics certain properties of the Bergman kernel \((K_{2})\) in \(\mathcal{A}^{p}(\Omega)\) including a reproducing formula:
\[f(z)=m_{p}(z)^{-p}\int_{\Omega}\bigl{|}m_{p}(w,z)\bigr{|}^{p-2}\,\overline{m_ {p}(w,z)}f(w)dw\text{ for }f\in\mathcal{A}^{p}(\Omega)\]
For more properties of the above function cf. [7], [8], [9], [10].
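As a concrete illustration of the definition of \(m_{p}\), one can approximate the infimum numerically by restricting to polynomials of a fixed degree and discretizing the integral. The sketch below (Python with NumPy/SciPy) does this for the unit disc; the choice of domain, the polynomial degree, the quadrature grid, the optimizer, and the name `approx_m_p` are all illustrative assumptions of mine and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Polar quadrature grid on the unit disc (illustrative resolution).
nr, nt = 60, 120
rads = (np.arange(nr) + 0.5) / nr
angs = 2 * np.pi * np.arange(nt) / nt
R, T = np.meshgrid(rads, angs, indexing="ij")
Z = R * np.exp(1j * T)                    # sample points
W = R * (1.0 / nr) * (2 * np.pi / nt)     # area weights r dr dtheta

def approx_m_p(p, z0, degree=8):
    """Approximate m_p(z0) = inf{ ||f||_p : f(z0) = 1 } over polynomials
    f(z) = 1 + c_1 (z - z0) + ... + c_d (z - z0)^d, which automatically
    satisfy the constraint f(z0) = 1."""
    powers = np.stack([(Z - z0) ** k for k in range(1, degree + 1)])

    def p_energy(x):                       # x packs Re and Im parts of c
        c = x[:degree] + 1j * x[degree:]
        f = 1.0 + np.tensordot(c, powers, axes=1)
        return np.sum(W * np.abs(f) ** p)  # discretized ||f||_p^p

    res = minimize(p_energy, np.zeros(2 * degree), method="BFGS")
    return res.fun ** (1.0 / p)

for p in (1.5, 2.0, 4.0):
    print(p, approx_m_p(p, 0.3 + 0.2j))
```

The optimal polynomial found this way is a discrete stand-in for the unique minimizer \(m_{p}(\cdot,z_{0})\), up to the restriction to polynomials and the quadrature error.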
## 2 \(p\)-Skwarczynski distance
On \(\mathcal{A}^{p}(\Omega)\setminus\{0\}\) define a relation \(\sim\) by \(f\sim g\) if and only if \(f=c\cdot g\) for some non-zero complex number \(c\). Let \(\mathbb{P}(\mathcal{A}^{p}(\Omega))=[\mathcal{A}^{p}(\Omega)\setminus\{0\}]/\sim\) denote the projective space.
Let \(S_{\mathcal{A}_{p}}\) denote the unit sphere of \(\mathcal{A}^{p}\). Equip \(\mathbb{P}(\mathcal{A}^{p})\) with the distance
\[d([f],[g]) =\mathrm{dist}([f]\cap S_{\mathcal{A}^{p}},[g]\cap S_{\mathcal{A}^{p}})\] \[=\inf_{t_{1},t_{2}\in[0,2\pi]}\left\|e^{it_{1}}\frac{f}{\|f\|}-e^{it_{2}}\frac{g}{\|g\|}\right\|_{p}\] \[=\left\|e^{it}\frac{f}{\|f\|}-\frac{g}{\|g\|}\right\|_{p}\qquad\text{ for some }t\in\mathbb{R}.\]
**Lemma 2.1**.: \((\mathbb{P}(\mathcal{A}^{p}(\Omega)),d)\) _is a complete metric space._
Proof.: Let \([f_{n}]\) be a Cauchy sequence and assume without loss of generality that \(\|f_{n}\|=1\) for every \(n\). We can find a subsequence \([f_{n_{k}}]\) such that \(d([f_{n_{k}}],[f_{n_{k+1}}])\leq\frac{1}{2^{k}}\).
Let \(g_{1}\in[f_{n_{1}}],\|g_{1}\|=1\) such that \(dist(g_{1},[f_{n_{2}}])\leq\frac{1}{2}\).
Let \(g_{2}\in[f_{n_{2}}],\|g_{2}\|=1\) such that \(\|g_{1}-g_{2}\|=d([f_{n_{1}}],[f_{n_{2}}])\leq 1/2\).
\(\vdots\)
Let \(g_{k}\in[f_{n_{k}}],\|g_{k}\|=1\) such that \(\|g_{k-1}-g_{k}\|=d([f_{n_{k-1}}],[f_{n_{k}}])\leq\frac{1}{2^{k-1}}\).
Then for \(r<s\),
\[\|g_{r}-g_{s}\|\leq \,\|g_{r}-g_{r+1}\|+\cdots+\|g_{s-1}-g_{s}\|\] \[\leq\frac{1}{2^{r}}+\cdots+\frac{1}{2^{s-1}}=\left(\frac{1}{2} \right)^{r-1}-\left(\frac{1}{2}\right)^{s-1}\xrightarrow{}0\text{ as }r,s\to\infty.\]
Thus \(\{g_{r}\}\) is a Cauchy sequence in \(\mathcal{A}^{p}(\Omega)\) and hence there exists a function \(f\in\mathcal{A}^{p}(\Omega)\), \(\|f\|=1\) such that \(g_{r}\to f\).
By definition
\[d([f_{n_{r}}],[f])=d([g_{r}],[f])\leq \,\|g_{r}-f\|\to 0\]
Using the elementary property that if a subsequence of a Cauchy sequence converges, then the sequence converges, \([f_{n}]\xrightarrow{d}[f]\), that is, every Cauchy sequence is a convergent sequence.
Consider the map
\[\tau:\Omega \to\mathbb{P}(\mathcal{A}^{p}(\Omega))\] \[z \longmapsto[K_{p}(\cdot,z)]=[m_{p}(\cdot,z)]\]
From [7, Proposition 2.15], we know that \(m_{p}(\cdot,z)\neq c\cdot m_{p}(\cdot,w)\) for any \(c\in\mathbb{C}\), when \(z\neq w\in\Omega\).
The above map is an injective map, so we can pull back the distance on \(\tau(\Omega)\) onto \(\Omega\), that is,
\[\rho_{p}(z,w) =\mathrm{dist}([K_{p}(\cdot,z)],[K_{p}(\cdot,w)])=\mathrm{dist}([m_{p}(\cdot,z)],[m_{p}(\cdot,w)])\] \[=\inf_{\theta_{1},\theta_{2}\in\mathbb{R}}\left\|\frac{e^{i\theta_{1}}m_{p}(\cdot,z)}{m_{p}(z)}-\frac{e^{i\theta_{2}}m_{p}(\cdot,w)}{m_{p}(w)}\right\|_{p}\] \[=\left\|\frac{e^{i\theta}m_{p}(\cdot,z)}{m_{p}(z)}-\frac{m_{p}(\cdot,w)}{m_{p}(w)}\right\|_{p}\quad\text{ for some }\theta\in[0,2\pi].\]
**Definition 2.2**.: The distance \(\rho_{p}\) defined above is called the \(p\)**-Skwarczynski distance**. Thus
\[\rho_{p}(z,w)=\min_{t\in[0,2\pi]}\left\|e^{it}\frac{m_{p}(\cdot,z)}{ m_{p}(z)}-\frac{m_{p}(\cdot,w)}{m_{p}(w)}\right\|_{p}. \tag{1}\]
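Numerically, the only new ingredient in (1) compared with an ordinary \(L^{p}\) distance is the minimization over the phase, which is one-dimensional and can be handled by a grid search. The sketch below (Python/NumPy, reusing the quadrature points `Z` and weights `W` from the earlier snippet) is one way to evaluate the right-hand side of (1) for two sampled functions; the helper names and the phase resolution are my own choices.

```python
import numpy as np

def lp_norm(f, W, p):
    """Discretized L^p norm with quadrature weights W."""
    return np.sum(W * np.abs(f) ** p) ** (1.0 / p)

def projective_distance(u, v, W, p, nphase=720):
    """min over t of || e^{it} u/||u||_p - v/||v||_p ||_p, cf. (1)."""
    u = u / lp_norm(u, W, p)
    v = v / lp_norm(v, W, p)
    phases = np.exp(1j * 2 * np.pi * np.arange(nphase) / nphase)
    return min(lp_norm(ph * u - v, W, p) for ph in phases)
```

With `u` and `v` taken to be samples of \(m_{p}(\cdot,z)\) and \(m_{p}(\cdot,w)\) (for instance, the approximate minimizers produced by the previous sketch), `projective_distance(u, v, W, p)` approximates \(\rho_{p}(z,w)\) up to discretization and phase-grid error.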
## 3 Properties of the \(p\)-Skwarczynski distance
### Comparing the Euclidean topology and \(p\)-Skwarczynski topology
[1, Theorem III.8] states that the Euclidean topology and the Skwarczynski topology agree for \(p=2\). We will now investigate these topologies for \(p\in[1,\infty)\).
**Lemma 3.1**.: _Let \(\{w_{k}\}\) be a sequence in \(\Omega\) such that \(w_{k}\xrightarrow{\text{Euclidean}}w\). Then \(w_{k}\) converges to \(w\) in the \(p\)-Skwarczynski topology._
Proof.: This follows directly from the following lemma [11, Theorem 1].
**Brezis-Lieb Lemma:** For \(p\in(0,\infty)\), if \(f_{k},f\in\mathcal{L}^{p}\) satisfy \(f_{k}\xrightarrow{a.e.}f\), and \(\left\|f_{k}\right\|_{p}\to\left\|f\right\|_{p}\), then \(\left\|f_{k}-f\right\|_{p}\to 0\).
From [7], we know that \(\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}\xrightarrow{\text{pointwise}}\frac{ m_{p}(\cdot,w)}{m_{p}(w)}\) and \(\left\|\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}\right\|=\left\|\frac{m_{p}( \cdot,w)}{m_{p}(w)}\right\|=1\). Thus \(\rho_{p}(w_{k},w)\to 0\).
**Corollary 3.2**.: _The \(p\)-Skwarczynski distance \(\rho_{p}\) is continuous, that is, \(w_{k}\xrightarrow{\text{Euclidean}}w\implies\rho_{p}(w_{k},z)\to\rho_{p}(w,z)\) for every \(z\) in \(\Omega\)._
Proof.: By triangle inequality, \(\left|\rho_{p}(w_{k},z)-\rho_{p}(w,z)\right|\leq\rho_{p}(w_{k},w)\to 0\).
**Lemma 3.3**.: _If \(p\geq 2\), and \(\{w_{k}\}\) is a sequence in \(\Omega\) such that \(w_{k}\xrightarrow{\rho_{p}}w\), then \(w_{k}\) converges to \(w\) in the Euclidean topology._
Proof.: Suppose \(w_{k}\xrightarrow{\rho_{p}}w\) in \(\rho_{p}\), equivalently,
\[\left\|\frac{e^{i\theta_{k}}m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}- \frac{m_{p}(\cdot,w)}{m_{p}(w)}\right\|_{p}\to 0\qquad\text{ for some }\{\theta_{k}\}.\]
Then
\[h_{k}=e^{i\theta_{k}}\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})} \xrightarrow{\text{normally}}h=\frac{m_{p}(\cdot,w)}{m_{p}(w)}.\]
For \(p\geq 2,\left|h_{k}\right|^{p-2}h_{k}\) converges normally to \(\left|h\right|^{p-2}h\). Observing that \(C_{c}^{\infty}(\Omega)\) functions are dense in \(\mathcal{L}^{p}(\Omega)\), we get that normal convergence implies weak convergence i.e.
\[\bigl\langle h_{k}\lvert h_{k}\rvert^{p-2},\,g\bigr\rangle\longrightarrow\bigl\langle h\lvert h\rvert^{p-2},\,g\bigr\rangle\quad\text{ for all }g\in\mathcal{L}^{p}(\Omega).\]
Since the domain \(\Omega\) is bounded, \(\{1,\pi_{i}\}\subset\mathcal{L}^{p}(\Omega)\), where \(\pi_{i}\) is the coordinate projection onto the \(i^{th}\) coordinate, \(\pi_{i}((w_{1},\ldots,w_{n}))=w_{i}\). Using the 'reproducing formula' from [7],
\[\lim_{k\to\infty}e^{-i\theta_{k}}m_{p}(w_{k})=\lim_{k\to\infty}\int_{\Omega}\biggl{|}\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}\biggr{|}^{p-2}\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}=\int_{\Omega}\biggl{|}\frac{m_{p}(\cdot,w)}{m_{p}(w)}\biggr{|}^{p-2}\frac{m_{p}(\cdot,w)}{m_{p}(w)}=m_{p}(w).\]
Moreover, for \(i=1,\ldots,n\),
\[\lim_{k\to\infty}\pi_{i}(w_{k})e^{-i\theta_{k}}m_{p}(w_{k}) =\lim_{k\to\infty}\int_{\Omega}\biggl{|}\frac{m_{p}(\cdot,w_{k})} {m_{p}(w_{k})}\biggr{|}^{p-2}\frac{m_{p}(\cdot,w_{k})}{m_{p}(w_{k})}\pi_{i}\] \[=\int_{\Omega}\biggl{|}\frac{m_{p}(\cdot,w)}{m_{p}(w)}\biggr{|}^{ p-2}\frac{m_{p}(\cdot,w)}{m_{p}(w)}\pi_{i}=m_{p}(w)\pi_{i}(w).\]
Thus \(w_{k}\xrightarrow{Euclidean}w\).
**Lemma 3.4**.: _If \(p>2\), then the \(p\)-Skwarczynski distance is locally \(\frac{1}{p}\)-Holder continuous, that is,_
\[\forall z_{0}\in\Omega,\ \exists M,r>0:\rho_{p}(z,w)\leq M|z-w|^{1/p}\qquad z,w \in B(z_{0},r)\subset\Omega.\]
Proof.: Let \(z_{0}\in\Omega\) and \(r>0\) such that \(B(z_{0},r)\subset\subset\Omega\). Let \(z,w\in B(z_{0},r)\).
We use the following inequality from [7, Proposition 4.3 (1)]:
\[|b-a|^{p}\leq 2^{p-1}\left[|b|^{p}+|a|^{p}-\operatorname{Re}(|b|^{p-2}\,\bar{b}a+|a|^{p-2}\,\bar{a}b)\right]\text{ when }p\geq 2. \tag{2}\]
Set \(b=\frac{m_{p}(\zeta,z)}{m_{p}(z)}\), \(a=\frac{m_{p}(\zeta,w)}{m_{p}(w)}\), and integrate over \(\Omega\).
\[\int_{\Omega}\biggl{|}\frac{m_{p}(\zeta,z)}{m_{p}(z)}-\frac{m_{p}(\zeta,w)}{m_ {p}(w)}\biggr{|}^{p}\,d\zeta\]
\[\leq 2^{p-1}\left[1+1-\operatorname{Re}\left[\int_{\Omega}\biggl{|}\frac{m_{p} (\zeta,z)}{m_{p}(z)}\biggr{|}^{p-2}\frac{m_{p}(\zeta,z)}{m_{p}(z)}\frac{m_{p}( \zeta,w)}{m_{p}(w)}d\zeta+\int_{\Omega}\biggl{|}\frac{m_{p}(\zeta,w)}{m_{p}(w) }\biggr{|}^{p-2}\,\frac{m_{p}(\zeta,w)}{m_{p}(w)}\frac{m_{p}(\zeta,z)}{m_{p}( z)}d\zeta\right]\right]\]
By the reproducing formula,
\[\rho_{p}(z,w)^{p}\leq 2^{p-1}\operatorname{Re}\left[\frac{m_{p}(w)-m_{p}(z,w)m_{ p}(z)}{m_{p}(w)}+\frac{m_{p}(z)-m_{p}(w,z)m_{p}(w)}{m_{p}(z)}\right]\]
Let \(F(z,w)=\frac{m_{p}(w)-m_{p}(z,w)m_{p}(z)}{m_{p}(w)}\). Then \(F(w,w)=0\), and from [9], we know that \(m_{p}(\cdot),m_{p}(\cdot,w)\) are \(C^{1}\) functions. Thus \(F(\cdot,w)\) is a \(C^{1}\) function, hence locally Lipschitz. Thus
\[\bigl{|}F(z,w)-F(w,w)\bigr{|}=\bigl{|}F(z,w)\bigr{|}\leq C_{z_{0}}|z-w|\text{ \ for }z,w\in B(z_{0},r)\subset\subset\Omega.\]
where \(C_{z_{0}}=\max\left\{\bigl{|}\frac{d}{dz}F(z,w)\bigr{|}:z,w\in\overline{B(z_{0 },r)}\right\}\).
Accordingly, \(\rho_{p}(z,w)^{p}\leq 2^{p-1}\left[\bigl{|}F(z,w)\bigr{|}+\bigl{|}F(w,z) \bigr{|}\right]\leq C^{\prime}|z-w|\), which implies the conclusion of the lemma.
**Lemma 3.5**.: _If \(p\in(1,2)\), then the \(p\)-Skwarczynski distance is locally \(\frac{1}{2}\)-Holder continuous, namely,_
\[\forall z_{0}\in\Omega,\exists M,r>0:\rho_{p}(z,w)\leq M|z-w|^{1/2}\qquad z,w\in B (z_{0},r)\subset\Omega.\]
Proof.: Let \(z_{0}\in\Omega\) and \(r>0\) such that \(B(z_{0},r)\subset\subset\Omega\). Let \(z,w\in B(z_{0},r)\).
We use the following inequality from [7, Proposition 4.3 (2)]:
\[(p-1)|b-a|^{2}\,(\!|a|+\!|b|)^{p-2}\leq\left[\!|b|^{p}+\!|a|^{p}-\operatorname{ Re}(\!|b|^{p-2}\,\bar{b}a+\!|a|^{p-2}\,\bar{a}b)\right]\text{ when }p\in(1,2). \tag{3}\]
Let \(f_{1},f_{2}\in\mathcal{A}^{p}(\Omega)\). By Holder's inequality,
\[\int_{\Omega}\!\!|f_{1}-f_{2}|^{p} =\int_{\Omega}\!\!|f_{2}-f_{1}|^{p}\,(\!|f_{1}|+\!|f_{2}|)^{\frac {p(p-2)}{2}}(\!|f_{1}|+\!|f_{2}|)^{\frac{p(2-p)}{2}}\] \[\leq\left[\int_{\Omega}\!\!|f_{2}-f_{1}|^{2}\,(\!|f_{1}|+\!|f_{2} |)^{(p-2)}\right]^{p/2}\left[\int_{\Omega}\!(\!|f_{1}|+\!|f_{2}|)^{p}\right]^{ 1-\frac{p}{2}}\] \[\leq\left(\frac{1}{p-1}\right)^{p/2}\left[\int_{\Omega}\!\!|f_{2 }|^{p}+\!|f_{1}|^{p}-\operatorname{Re}(\!|f_{1}|^{p-2}\,\bar{f}_{2}f_{1}+\!|f_ {2}|^{p-2}\,\bar{f}_{2}f_{1})\right]^{p/2}\left[\int_{\Omega}\!(\!|f_{1}|+\!| f_{2}|)^{p}\right]^{1-\frac{p}{2}}.\]
Set \(f_{1}=\frac{m_{p}(\zeta,z)}{m_{p}(z)}\) and \(f_{2}=\frac{m_{p}(\zeta,w)}{m_{p}(w)}\). Then
\[\rho_{p}(z,w)^{p}\leq\int_{\Omega}\!\left|\frac{m_{p}(\zeta,z)}{m_{p}(z)}- \frac{m_{p}(\zeta,w)}{m_{p}(w)}\right|^{p}d\zeta\]
By the reproducing formula,
\[\rho_{p}(z,w)^{p}\leq\frac{2^{\frac{(1-p/2)}{p}}}{(p-1)^{p/2}}\operatorname{ Re}\left[\frac{m_{p}(w)-m_{p}(z,w)m_{p}(z)}{m_{p}(w)}+\frac{m_{p}(z)-m_{p}(w,z)m_{ p}(w)}{m_{p}(z)}\right]^{p/2}.\]
Let \(F(z,w)=\frac{m_{p}(w)-m_{p}(z,w)m_{p}(z)}{m_{p}(w)}\). Then \(F(w,w)=0\), and from [9], we know that \(m_{p}(\cdot,w),m_{p}(\cdot)\) are \(C^{1}\) functions. Thus \(F(\cdot,w)\) is a \(C^{1}\) function, hence locally Lipschitz. Thus
\[\big{|}F(z,w)-F(w,w)\big{|}=\big{|}F(z,w)\big{|}\leq C_{z_{0}}|z-w|\text{ \ for }z,w\in B(z_{0},r)\subset\subset\Omega.\]
where \(C_{z_{0}}=\max\!\left\{\!\left|\frac{d}{dz}F(z,w)\right|:z,w\in\overline{B(z_ {0},r)}\right\}\).
Accordingly, \(\rho_{p}(z,w)^{p}\leq\frac{2^{\frac{(1-p/2)}{p}}}{(p-1)^{p/2}}\left[\!\left|F( z,w)\right|+\!\big{|}F(w,z)\big{|}\right]^{p/2}\leq C^{\prime}|z-w|^{p/2}\), which implies the conclusion of the lemma.
## 4 Completeness of the \(p\)-Skwarczynski distance
We will first discuss the completeness of the \(p\)-Skwarczynski distance in the unit ball and then use some techniques of [1] to discuss the completeness of the distance in a general domain.
**Theorem 4.1**.: _Let \(\Omega\) be a bounded domain in \(\mathbb{C}^{n}\). Assume that for every sequence \(\{w_{k}\}\) without an accumulation point in \(\Omega\),_
\[\lim_{k\to\infty}\frac{m_{p}(z,w_{k})}{m_{p}(w_{k})}\to 0\quad\text{ for every }z\in\Omega.\]
_Then \(\Omega\) is \(p\)-Skwarczynski complete._
Proof.: Let \(\{w_{k}\}\) be a \(\rho_{p}\) Cauchy sequence. Since \(\{w_{k}\}\) is an infinite subset of \(\overline{\Omega}\), there exists a subsequence \(w_{k_{l}}\) such that \(w_{k_{l}}\xrightarrow{Euclidean}w\in\overline{\Omega}\).
Case 1.\(w\in\Omega\).
Then from Lemma 3.1,
\[w_{k_{l}}\xrightarrow{Euclidean}w\implies w_{k_{l}}\xrightarrow{\rho_{p}}w \implies w_{k}\xrightarrow{\rho_{p}}w.\]
We know that a Cauchy sequence is convergent if and only if it has a convergent subsequence, therefore \(\{w_{k}\}\) is a convergent sequence.
Case 2.\(w\in\partial\Omega\).
By assumption, \([m_{p}(\cdot,w_{k_{l}})]\) is a Cauchy sequence in \((\mathbb{P}(\mathcal{A}^{p}(\Omega)),d)\). By completeness of \(\mathbb{P}(\mathcal{A}^{p}(\Omega))\), there exists \(f\), not identically zero, such that \([m_{p}(\cdot,w_{k_{l}})]\xrightarrow{d}[f]\), that is, there exists a sequence \(\{\theta_{k_{l}}\}\) such that \(e^{i\theta_{k_{l}}}\frac{m_{p}(\cdot,w_{k_{l}})}{m_{p}(w_{k_{l}})}\xrightarrow{\mathcal{L}_{p}\text{-norm}}f\). Consequently, these functions converge normally, and in particular pointwise. Therefore
\[f(z)=\lim_{l\to\infty}e^{i\theta_{k_{l}}}\frac{m_{p}(z,w_{k_{l}})}{m_{p}(w_{k _{l}})}=0\quad\text{ for every }z\in\Omega,\]
which is a contradiction.
Combining the two cases shows that every \(\rho_{p}\) Cauchy sequence is \(\rho_{p}\) convergent.
**Remark 4.2**.: _For the unit ball,_
\[m_{p}(\zeta,w)=\left[\frac{1-\left|w\right|^{2}}{1-\left\langle \zeta,w\right\rangle}\right]^{4/p}\quad\text{ and }\quad m_{p}(w)=[\pi(1-\left|w\right|^{2})]^{\frac{1}{p}},\]
_so the unit ball satisfies the hypothesis of Theorem 4.1. Hence **the unit ball is \(p\)-Skwarczynski complete.**_
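Taking the formulas quoted in the remark at face value, the hypothesis of Theorem 4.1 can be checked directly: for fixed \(z\), the quotient \(m_{p}(z,w)/m_{p}(w)\) is an explicit expression that decays as \(w\) approaches the boundary. The short sketch below (Python/NumPy) evaluates this quotient along a radius; the test point, the exponent \(p\), the boundary path, and the helper name `m_p_quotient` are my own illustrative choices.

```python
import numpy as np

def m_p_quotient(z, w, p):
    """m_p(z, w) / m_p(w) on the unit ball, using the closed forms
    quoted in Remark 4.2 (taken at face value)."""
    # <z, w> = sum_i z_i * conj(w_i) = np.vdot(w, z)
    kernel = ((1 - np.vdot(w, w)) / (1 - np.vdot(w, z))) ** (4.0 / p)
    norm = (np.pi * (1 - np.vdot(w, w))) ** (1.0 / p)
    return kernel / norm

z, p = np.array([0.2 + 0.1j]), 3.0
for s in (0.9, 0.99, 0.999, 0.9999):
    w = np.array([s + 0j])            # w tending to the boundary point 1
    print(s, abs(m_p_quotient(z, w, p)))
```

Under these formulas the quotient tends to \(0\) whenever \(w_{k}\) leaves every compact subset of the ball, which is exactly the hypothesis of Theorem 4.1.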
**Lemma 4.3**.: _When \(p>2\), there are positive constants \(c_{p}\) and \(C_{p}\) such that_
\[c_{p}\cdot d([m_{p}(\cdot,z)],[f])^{p}\leq\left[1-\frac{\left|f(z)\right|}{K_{p}(z)^{1/p}}\right]\leq C_{p}\cdot d([m_{p}(\cdot,z)],[f])^{2}, \tag{4}\]
_where \(z\in\Omega\), \(f\in\mathcal{A}^{p}(\Omega)\), and \(\left\|f\right\|_{p}=1\)._
The proof of Lemma 4.3 is provided in the Appendix.
Recall from [1] that if \(p=2\), then
\[\frac{|f(z)|}{K_{p}(z)^{1/p}}=\left[1-\frac{d([m_{p}(\cdot,z)],[f])^{p}}{p}\right]. \tag{5}\]
Using Lemma 4.3, we have the following results regarding completeness (analogous to Theorem III.6-III.13 of [1]).
**Theorem 4.4**.: _A sequence \(\{w_{k}\}\) is a \(\rho_{p}\) Cauchy sequence if and only if \(\{[m_{p}(\cdot,w_{k})]\}\) is Cauchy in \(\mathbb{P}(\mathcal{A}^{p})\)._
Proof.: Obvious from the definition of \(\rho_{p}\).
**Theorem 4.5**.: _Suppose \(p>2\). A sequence \(\{w_{k}\}\) in \(\Omega\) is a \(p\)-Skwarczynski Cauchy sequence if and only if there is \(f\) in \(\mathcal{A}^{p}(\Omega)\) of norm \(1\) such that_
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|^{p}}{K_{p}(w_{k})}=1. \tag{6}\]
Proof.: (\(\Longleftarrow\)) Assume that
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|}{K_{p}(w_{k})^{1/p}}=1.\]
By inequality 4,
\[c_{p}(d([m_{p}(\cdot,w_{k})],[f]))^{p}\leq\left[1-\frac{\left|f(w_{k})\right| }{K_{p}(w_{k})^{1/p}}\right],\]
so the assumption implies that
\[d([m_{p}(\cdot,w_{k})],[f])\xrightarrow{k\to\infty}0,\]
that is, \(w_{k}\) is a \(\rho_{p}\)-Skwarczynski Cauchy sequence.
(\(\implies\)) If \(\{w_{k}\}\) is a \(p\)-Skwarczynski Cauchy sequence, then completeness of \(\mathbb{P}(\mathcal{A}^{p})\) yields \(f\) in \(\mathcal{A}^{p}(\Omega)\) of norm \(1\) such that \(d([m_{p}(\cdot,w_{k})],[f])\to 0\). By inequality 4,
\[\left[1-\frac{\left|f(z)\right|}{K_{p}(z)^{1/p}}\right]\leq C_{p}\cdot d([m_{p}(\cdot,z)],[f])^{2},\]
so
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|}{K_{p}(w_{k})^{1/p}}=1.\qed\]
**Theorem 4.6**.: _Suppose \(p>2\). Assume there exists a sequence \(\{w_{k}\}\) with no accumulation point in \(\Omega\) and a function \(f\) in \(\mathcal{A}^{p}(\Omega)\) of norm \(1\) such that_
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|^{p}}{K_{p}(w_{k})}=1. \tag{7}\]
_Then \(\Omega\) is not \(p\)-Skwarczynski complete._
Proof.: Suppose on the contrary that \(\Omega\) is \(p\)-Skwarczynski complete. By the theorem's hypothesis and Theorem 4.5, the sequence \(\{w_{k}\}\) is a \(p\)-Skwarczynski Cauchy sequence in \(\Omega\). By completeness, there is a point \(w_{0}\) such that \(w_{k}\xrightarrow{\rho_{p}}w_{0}\), whence \(w_{k}\xrightarrow{\text{Euclidean}}w_{0}\), contradicting the hypothesis that \(\{w_{k}\}\) has no accumulation point.
**Theorem 4.7**.: _Suppose \(p>2\). Assume that for every sequence \(\{w_{k}\}\) with no accumulation point in \(\Omega\) and for every \(f\in\mathcal{A}^{p}(\Omega)\),_
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|^{p}}{K_{p}(w_{k})}=0. \tag{8}\]
_Then \(\Omega\) is \(p\)-Skwarczynski complete._
Proof.: Suppose to the contrary that \(\Omega\) is not \(p\)-Skwarczynski complete, and let \(\{w_{k}\}\) be a \(p\)-Skwarczynski Cauchy sequence without a limit in \(\Omega\). Since the two topologies agree on \(\Omega\), the sequence \(\{w_{k}\}\) has no accumulation point in \(\Omega\). By Theorem 4.5, there exists \(f\) in \(\mathcal{A}^{p}(\Omega)\) of norm \(1\) such that
\[\lim_{k\to\infty}\frac{\left|f(w_{k})\right|^{p}}{K_{p}(w_{k})}=1,\]
contradicting 8.
**Theorem 4.8**.: _Suppose \(p>2\). Assume that for each point \(w\in b\Omega\) there exists a holomorphic peak function \(h\) such that_
1. \(\left|h(\zeta)\right|<1\) _for every_ \(\zeta\) _in_ \(\Omega\)_, and_
2. \(\lim_{\zeta\to w}\bigl{|}h(\zeta)\bigr{|}=1\)_._
_Then \(\Omega\) is \(p\)-Skwarczynski complete._
Proof.: Suppose to the contrary that \(\Omega\) is not \(p\)-Skwarczynski complete. By Theorem 4.7, there exists a sequence \(\{w_{k}\}\) without an accumulation point in \(\Omega\) and a function \(f\in\mathcal{A}^{p}(\Omega)\) such that
\[\frac{\left|f(w_{k})\right|^{p}}{K_{p}(w_{k})}\not\to 0\quad\text{as }k\to\infty. \tag{9}\]
Since \(\{w_{k}\}\) has no accumulation point in \(\Omega\), there exists a subsequence \(\{w_{k_{l}}\}\) such that \(w_{k_{l}}\to w\in b\Omega\). Thus, without loss of generality, we can assume that \(w_{k}\to w\in b\Omega\) while satisfying (9).
Given \(\varepsilon>0\), notice that \(\{h^{k}f:k\in\mathbb{N}\}\) is a sequence of functions in \(\mathcal{A}^{p}(\Omega)\) converging pointwise to zero and dominated by \(|f|\) with \(f\in\mathcal{A}^{p}(\Omega)\). By the dominated convergence theorem, for large enough \(k_{0}\in\mathbb{N}\), \(\bigl{\|}h^{k_{0}}f\bigr{\|}_{p}^{p}\leq\varepsilon\).
Also, since \(w_{k}\to w\in b\Omega\), there exists \(k_{1}\) such that \(\bigl{|}h^{k_{0}}(w_{k})\bigr{|}^{p}\geq 1-\varepsilon\) for \(k\geq k_{1}\).
\[(1-\varepsilon)\bigl{|}f(w_{k})\bigr{|}^{p}\leq\Bigl{|}h^{k_{0}}(w_{k})f(w_{k} )\Bigr{|}^{p}\leq K_{p}(w_{k})\Bigl{\|}h^{k_{0}}f\Bigr{\|}_{p}^{p}\leq \varepsilon K_{p}(w_{k})\text{ for }k\geq k_{1}\]
Thus
\[\frac{\bigl{|}f(w_{k})\bigr{|}^{p}}{K_{p}(w_{k})}\leq\frac{\varepsilon}{1- \varepsilon}\text{ for }k\geq k_{1}\implies\lim_{k\to\infty}\frac{\bigl{|}f(w_{k}) \bigr{|}^{p}}{K_{p}(w_{k})}\to 0\]
contradicting our assumption (9).
**Corollary 4.9**.: _Let \(G\) be a domain in \(\mathbb{C}^{n}\) and \(\Omega\) be a connected analytic polyhedron defined by its frame \(\{h_{i}\}_{i=1}^{k}\subset\mathcal{O}(G)\)_
\[\Omega=\{z\in G:\left|h_{i}(z)\right|<1;i=1,\ldots,k\}.\]
_Then \(\Omega\) is p-Skwarczynski complete, when \(p>2\)._
Proof.: For every \(w_{0}\in b\Omega\), there is at least one \(i=i_{0}\) such that \(\left|h_{i_{0}}(w_{0})\right|=1\). Thus \(\Omega\) satisfies the hypothesis of Theorem 4.8 and hence \(\Omega\) is \(p\)-Skwarczynski complete for \(p>2\).
**Remark 4.10**.: _The following domains are known to satisfy the hypothesis of Theorem 4.8 :_
1. _Smooth bounded strictly pseudoconvex domains_ _[_12_, Prop 2.1]__;_
2. \(h\)_-extendible (semiregular) domains_ _[_13_, Theorem A]__;_
3. _Bounded pseudoconvex domains in_ \(\mathbb{C}^{2}\) _with a real analytic boundary_ _[_14_, Theorem 3.1]__._
_Thus, for any domain \(\Omega\) of the above type, \(\Omega\) is \(p\)-Skwarczynski complete for \(p>2\)._
**Theorem 4.11**.: _Suppose \(p>2\). Assume that for every point \(w\in b\Omega\),_
1. \(\lim_{\zeta\to w}K_{p}(\zeta)\to\infty\)_, and_
2. \(\mathcal{O}(\Omega)\cap C(\overline{\Omega})\) _is dense in_ \(\mathcal{A}^{p}(\Omega)\)_._
_Then \(\Omega\) is p-Skwarczynski complete._
Proof.: We verify the hypothesis of Theorem 4.7. Let \(f\in\mathcal{A}^{p}(\Omega)\) be any holomorphic function and let \(\{z_{k}\}\) be any sequence in \(\Omega\) without an accumulation point in \(\Omega\). Then we can find a sequence of holomorphic functions \(g_{k}\in\mathcal{O}(\Omega)\cap C(\overline{\Omega})\) such that \(g_{k}\xrightarrow{\left\|\cdot\right\|_{p}}f\). Hence we have
\[\lim_{k\to\infty}\frac{\left|f(z_{k})\right|^{p}}{K_{p}(z_{k})}=\lim_{k\to \infty}\frac{\left|g_{k}(z_{k})\right|^{p}}{K_{p}(z_{k})}=0\]
proving \(\Omega\) is \(p\)-Skwarczynski complete.
## 5 Invariance of the \(p\)-Skwarczynski distance
### \(\mathcal{A}^{p}\) preserving biholomorphisms
Let \(F:\Omega_{1}\to\Omega_{2}\) be a biholomorphism. Then \(F\) induces an isometric isomorphism on \(\mathcal{A}^{2}\) in a natural way, namely
\[F^{\#}:\mathcal{A}^{2}(\Omega_{2}) \to\mathcal{A}^{2}(\Omega_{1})\] \[f \mapsto f\circ F\cdot J_{F}\]
where \(J_{F}=det(F^{\prime})\).
By the change of variables formula,
\[\int_{\Omega_{1}}\left|f\circ F\right|^{2}\left|J_{F}\right|^{2}=\int_{\Omega _{2}}\left|f\right|^{2}\]
thereby defining an isometric isomorphism. However in case of \(\mathcal{A}^{p}\), not all biholomorphisms have a 'natural' extension onto the \(\mathcal{A}^{p}\) space.
**Definition 5.1**.: Let \(F:\Omega_{1}\rightarrow\Omega_{2}\) be a biholomorphism between two bounded domains \(\Omega_{1},\Omega_{2}\). We say that \(F\) is an \(\mathcal{A}^{p}\)**preserving biholomorphism** if the map \(F^{\#}\) extends naturally onto \(\mathcal{A}^{p}\) space defining an isometric isomorphism i.e.
\[F^{\#}:\mathcal{A}^{p}(\Omega_{2}) \rightarrow\mathcal{A}^{p}(\Omega_{1})\] \[f \mapsto f\circ F\cdot J_{F}^{2/p}\] \[\left\|f\right\|_{\mathcal{A}^{p}(\Omega_{2})}= \left\|f\circ F\cdot J_{F}^{2/p}\right\|_{\mathcal{A}^{p}(\Omega_ {1})}\]
**Remark 5.2**.:
* _The above definition is equivalent to saying the_ \(p/2^{th}\) _root of the function_ \(J_{F}\) _is a holomorphic function on_ \(\Omega_{1}\)_._
* _Every biholomorphism is_ \(\mathcal{A}^{2}\) _preserving and_ \(\mathcal{A}^{p}\) _preserving for_ \(p\) _such that_ \(2/p\in\mathbb{N}\)_._
* _Every biholomorphism on a simply connected domain is an_ \(\mathcal{A}^{p}\) _preserving biholomorphism._
* _If_ \(F:\Omega_{1}\rightarrow\Omega_{2}\) _is an_ \(\mathcal{A}^{p}\) _preserving biholomorphism, then_ \(F^{-1}:\Omega_{2}\rightarrow\Omega_{1}\) _is also an_ \(\mathcal{A}^{p}\) _preserving biholomorphism._
**Theorem 5.3**.: _Let \(p\geq 1\) and \(F:\Omega_{1}\rightarrow\Omega_{2}\) be an \(\mathcal{A}^{p}\) preserving biholomorphism between two bounded domains \(\Omega_{1},\Omega_{2}\). Then_
\[\rho_{p,\Omega_{1}}(z,w)=\rho_{p,\Omega_{2}}(F(z),F(w))\text{ for }z,w\in\Omega_{1}.\]
_In other words, the \(p\)-Skwarczynski distance is invariant under \(\mathcal{A}^{p}\) preserving biholomorphisms._
Proof.: Let \(F:\Omega_{1}\rightarrow\Omega_{2}\) be an \(\mathcal{A}^{p}\) preserving biholomorphism. Then from [7], we have
\[m_{p,\Omega_{1}}(w) =m_{p,\Omega_{2}}(F(w))\cdot\left|J_{F}(w)\right|^{-2/p}.\] \[m_{p,\Omega_{1}}(\zeta,w) =m_{p,\Omega_{2}}(F(\zeta),F(w))\cdot J_{F}(w)^{-2/p}\cdot J_{F} (\zeta)^{2/p}.\]
By definition,
\[\rho_{p,\Omega_{1}}^{p}(z,w)=\min_{t_{1},t_{2}\in[0,2\pi]}\left\|e^{it_{1}} \frac{m_{p,\Omega_{1}}(\cdot,w)}{m_{p,\Omega_{1}}(w)}-e^{it_{2}}\frac{m_{p, \Omega_{1}}(\cdot,z)}{m_{p,\Omega_{1}}(z)}\right\|_{p,\Omega_{1}}^{p}\] \[=\min_{t_{1},t_{2}\in[0,2\pi]}\int_{\Omega_{1}}\left|e^{it_{1}} \frac{m_{p,\Omega_{2}}(F(\zeta),F(w))\cdot J_{F}(w)^{-2/p}\cdot J_{F}(\zeta)^ {2/p}}{m_{p,\Omega_{2}}(F(w))\cdot\left|J_{F}(w)\right|^{-2/p}}-e^{it_{2}} \frac{m_{p,\Omega_{2}}(F(\zeta),F(z))\cdot J_{F}(z)^{-2/p}\cdot J_{F}(\zeta)^ {2/p}}{m_{p,\Omega_{2}}(F(z))\cdot\left|J_{F}(z)\right|^{-2/p}}\right|^{p}d\zeta\] \[=\min_{t_{1}^{\prime},t_{2}^{\prime}\in[0,2\pi]}\int_{\Omega_{2}} \left|e^{it_{1}^{\prime}}\frac{m_{p,\Omega_{2}}(\zeta^{\prime},F(w))}{m_{p, \Omega_{2}}(F(w))}-e^{it_{2}^{\prime}}\frac{m_{p,\Omega_{2}}(\zeta^{\prime},F (z))}{m_{p,\Omega_{2}}(F(z))}\right|^{p}d\zeta^{\prime}=\rho_{p,\Omega_{2}}^{ p}(F(z),F(w))\]
Thus the \(p\)-Skwarczynski distance is invariant under \(\mathcal{A}^{p}\) preserving biholomorphisms.
## 6 Continuity of the \(p\)-Skwarczynski distance with respect to \(p\)
We will use the following result from [15, Theorem 2.2]. We will prove it here for the sake of completeness.
**Lemma 6.1**.: _Let \(\{f_{k}\}\subset\mathcal{L}^{q}(\Omega)\) for some \(1\leq q<\infty\). Assume that_
1. \(f_{k}\to f\) _pointwise almost everywhere on_ \(\Omega\)__
2. _the sequence_ \(\{f_{k}\}\) _is uniformly bounded in_ \(\mathcal{L}^{q}\) _i.e._ \(\left\|f_{k}\right\|_{q}\leq M\) _for all_ \(k\) _and some_ \(M>0\)_._
_If the volume of \(\Omega\), \(|\Omega|\), is finite, then \(f\in\mathcal{L}^{q}(\Omega)\) and \(f_{k}\to f\) in \(\mathcal{L}^{p}(\Omega)\) for \(0<p<q\)._
Proof.: By Fatou's Lemma,
\[\left\|f\right\|_{q}\leq\liminf_{k\to\infty}\lVert f_{k}\rVert_{q}\leq M\implies f \in\mathcal{L}^{q}(\Omega).\]
Let \(E\subset\Omega\) be a measurable set and \(0<p<q\). Applying Holder's inequality and triangle inequality
\[\int_{E}\lvert f_{k}-f\rvert^{p}\leq \left\|f_{k}-f\right\|_{q}^{p}\lvert E\rvert^{\frac{q-p}{q}}\leq (M+\lVert f\rVert_{q})^{p}\lvert E\rvert^{\frac{q-p}{q}}\]
Thus \(\left\lvert f_{k}-f\right\rvert^{p}\to 0\) a.e. and is uniformly integrable over \(\Omega\). By Vitali's convergence theorem \(f_{k}\to f\) in \(\mathcal{L}^{p}(\Omega)\).
A bounded domain \(\Omega\) is said to be hyperconvex if there exists a negative continuous plurisubharmonic function \(r\) such that \(\{r<c\}\subset\subset\Omega\) for all \(c<0\). Further assume that the above function \(r\) satisfies a growth condition \(-r\leq C\delta^{\alpha}\) for some \(\alpha,C>0\), where \(\delta\) denotes the distance function to the boundary. Let \(\alpha(\Omega)\) be the supremum of all such \(\alpha\). The hyperconvexity index \(\alpha(\Omega)\) is studied in detail in [16].
**Theorem 6.2**.: _Let \(\Omega\) be a bounded hyperconvex domain with \(\alpha(\Omega)>0\)._
1. _Then for_ \(p\in(1,2]\)__ \[\lim_{q\to p^{-}}\rho_{q}(z,w)=\rho_{p}(z,w)\]
2. _Let_ \(p\in[1,2)\)_. Additionally if_ \(\Omega\) _is such that_ \(A^{p^{\prime}}(\Omega)\) _is dense in_ \(A^{p}(\Omega)\) _for some_ \(p^{\prime}>p\)_, then_ \[\lim_{s\to p^{+}}\rho_{s}(z,w)=\rho_{p}(z,w)\]
Proof.: Fix \(p\in(1,2]\). Suppose that \(\Omega\) is a bounded hyperconvex domain with \(\alpha(\Omega)>0\). Then from [9, Theorem 1.4]
\[K_{p}(\cdot,z)\in\mathcal{L}^{q}(\Omega)\text{ for }q<\frac{2pn}{2n-\alpha(\Omega)} \tag{10}\]
and from [7, Theorem 6.5]
\[\lim_{s\to p^{-}}m_{s}(\zeta,w)=m_{p}(\zeta,w)\text{ for }\zeta,w\in\Omega\]
Let \(p_{n}\nearrow p\) be any sequence and let
\[f_{n}^{\theta}(\zeta)=e^{i\theta}\frac{m_{p_{n}}(\zeta,z)}{m_{p_{n}}(z)}-\frac{m _{p_{n}}(\zeta,w)}{m_{p_{n}}(w)}\]
\[f^{\theta}(\zeta)=e^{i\theta}\frac{m_{p}(\zeta,z)}{m_{p}(z)}-\frac{m_{p}(\zeta, w)}{m_{p}(w)}\]
Choose \(\theta_{n}\) such that \(\rho_{p_{n}}(z,w)=\left\|f_{n}^{\theta_{n}}\right\|_{p_{n}}\) and \(\tau_{0}\) such that \(\rho_{p}(z,w)=\left\|f^{\tau_{0}}\right\|_{p}\).
(We know that \(\rho\) is a continuous function if for every sequence \(x_{n}\to x\), we have a subsequence of \(\rho(x_{n})\) which converges to \(\rho(x)\).) Therefore, without loss of any generality, we can assume that \(\theta_{n}\rightarrow\theta_{0}\) for some \(\theta_{0}\in[0,2\pi]\).
By [9, Theorem 1.4], for \(p_{n}\) sufficiently close to \(p\), \(f_{n}\in L^{q}(\Omega)\) (for some \(q>p\)) and by [7, Theorem 6.5]\(f_{n}^{\theta_{n}}\xrightarrow{\text{\it pointwise}}f^{\theta_{0}}\) and \(f_{n}^{\tau_{0}}\xrightarrow{\text{\it pointwise}}f^{\tau_{0}}\). By Lemma 6.1, for every \(\varepsilon>0\), we can find \(N_{0}\in\mathbb{N}\) such that
\[\left\|f^{\theta_{0}}-f_{n}^{\theta_{n}}\right\|_{p}\leq \varepsilon/2\text{ for }n\geq N_{0}\] \[\left\|f_{n}^{\tau_{0}}\right\|_{p}\leq \left\|f^{\tau_{0}}\right\|_{p}+\varepsilon/2\text{ for }n\geq N_{0}\]
Let \(p^{\prime}<p\). Choose \(p^{\prime}<p_{k}<p\) and \(k\geq N_{0}\). Using the triangle inequality and Holder's inequality,
\[\left\|f^{\theta_{0}}\right\|_{p^{\prime}}\leq \left\|f^{\theta_{0}}-f_{k}^{\theta_{k}}\right\|_{p^{\prime}}+ \left\|f_{k}^{\theta_{k}}\right\|_{p^{\prime}}\] \[\leq \left\|f^{\theta_{0}}-f_{k}^{\theta_{k}}\right\|_{p}\cdot|\Omega |^{\frac{1}{p^{\prime}}-\frac{1}{p}}+\left\|f_{k}^{\theta_{k}}\right\|_{p_{k}} \cdot|\Omega|^{\frac{1}{p^{\prime}}-\frac{1}{p_{k}}}\] \[\leq \left\|f^{\theta_{0}}-f_{k}^{\theta_{k}}\right\|_{p}\cdot|\Omega |^{\frac{1}{p^{\prime}}-\frac{1}{p}}+\left\|f_{k}^{\tau_{0}}\right\|_{p_{k}} \cdot|\Omega|^{\frac{1}{p^{\prime}}-\frac{1}{p_{k}}}\] \[\leq \left(\varepsilon/2+\varepsilon/2+\left\|f^{\tau_{0}}\right\|_{p }\right)\cdot|\Omega|^{\frac{1}{p^{\prime}}-\frac{1}{p}}\]
By continuity of norm
\[\left\|f^{\theta_{0}}\right\|_{p}\leq \left\|f^{\tau_{0}}\right\|_{p}+\varepsilon\text{ and }\left\|f^{\tau_{0}}\right\|_{p}\leq \left\|f^{\theta_{0}}\right\|_{p}\]
thereby proving,
\[\lim_{q\to p^{-}}\rho_{q}(z,w)= \left\|f^{\theta_{0}}\right\|_{p}= \left\|f^{\tau_{0}}\right\|_{p}=\rho_{p}(z,w)\]
Additionally, assume that \(A^{q}(\Omega)\) lies dense in \(A^{p}(\Omega)\) for some \(q>p\). Then by [7, Theorem 6.5]
\[\lim_{s\to p^{+}}m_{s}(\zeta,w)=m_{p}(\zeta,w) \tag{11}\]
Let \(1\leq p<2\), \(p_{n}\searrow p\) (\(p_{1}<2\)) be any sequence and
\[g_{n}^{\theta}(\zeta)=e^{i\theta}\frac{m_{p_{n}}(\zeta,z)}{m_{p_{n}}(z)}-\frac {m_{p_{n}}(\zeta,w)}{m_{p_{n}}(w)}\]
\[g^{\theta}(\zeta)=e^{i\theta}\frac{m_{p}(\zeta,z)}{m_{p}(z)}-\frac{m_{p}( \zeta,w)}{m_{p}(w)}\]
Choose \(\theta_{n}\) such that \(\rho_{p_{n}}(z,w)=\left\|g_{n}^{\theta_{n}}\right\|_{p_{n}}\) and \(\tau_{0}\) such that \(\rho_{p}(z,w)=\left\|g^{\tau_{0}}\right\|_{p}\). As above, without loss of any generality, we can assume that \(\theta_{n}\to\theta_{0}\) for some \(\theta_{0}\in[0,2\pi]\).
According to [9, Theorem 1.4], \(g_{n}^{\theta}\in L^{p_{1}}(\Omega)\) and by [7, Theorem 6.5]\(g_{n}^{\theta_{n}}\xrightarrow{ pointwise}g^{\theta_{0}}\) and \(g_{n}^{\tau_{0}}\xrightarrow{ pointwise}g^{\tau_{0}}\). Let \(p^{\prime}>p\). By Lemma 6.1, for every \(\varepsilon>0\), we can find \(N_{0}\in\mathbb{N}\) such that
\[\left\|g^{\theta_{0}}-g_{n}^{\theta_{n}}\right\|_{p^{\prime}}\leq \varepsilon/2\text{ for }n\geq N_{0}\] \[\left\|g_{n}^{\tau_{0}}\right\|_{p^{\prime}}\leq \left\|g^{\tau_{0}}\right\|_{p^{\prime}}+\varepsilon/2\text{ for }n\geq N_{0}\]
Choose \(p<p_{k}<p^{\prime}\), \(k\geq N_{0}\). Using triangle inequality and Holder's inequality,
\[\left\|g^{\theta_{0}}\right\|_{p}\leq \left\|g^{\theta_{0}}-g_{k}^{\theta_{k}}\right\|_{p}+\left\|g_{k }^{\theta_{k}}\right\|_{p}\] \[\leq \left\|g^{\theta_{0}}-g_{k}^{\theta_{k}}\right\|_{p^{\prime}} \cdot|\Omega|^{\frac{1}{p}-\frac{1}{p^{\prime}}}+\left\|g_{k}^{\theta_{k}} \right\|_{p_{k}}\cdot|\Omega|^{\frac{1}{p}-\frac{1}{p_{k}}}\] \[\leq \left\|g^{\theta_{0}}-g_{k}^{\theta_{k}}\right\|_{p^{\prime}} \cdot|\Omega|^{\frac{1}{p}-\frac{1}{p^{\prime}}}+\left\|g_{k}^{\tau_{0}} \right\|_{p_{k}}\cdot|\Omega|^{\frac{1}{p^{\prime}}-\frac{1}{p}}\] \[\leq \left(\varepsilon/2+\varepsilon/2+\left\|g^{\tau_{0}}\right\|_{p^ {\prime}})\cdot|\Omega|^{\frac{1}{p^{\prime}}-\frac{1}{p}}\]
By continuity of norm
\[\left\|g^{\theta_{0}}\right\|_{p}\leq \left\|g^{\tau_{0}}\right\|_{p}+\varepsilon\text{ and }\left\|g^{\tau_{0}}\right\|_{p}\leq \left\|g^{\theta_{0}}\right\|_{p}\]
which proves,
\[\lim_{s\to p^{+}}\rho_{s}(z,w)= \left\|g^{\theta_{0}}\right\|_{p}= \left\|g^{\tau_{0}}\right\|_{p}=\rho_{p}(z,w)\]
## 7 Product Domains
### \(p\)-Skwarczynski distance on the product domain
**Lemma 7.1**.: _Suppose that \(\Omega_{1}\subset\mathbb{C}^{n_{1}},\Omega_{2}\subset\mathbb{C}^{n_{2}}\) are two bounded domains. Let \(\Omega=\Omega_{1}\times\Omega_{2}\), \(z=(z_{1},z_{2})\), \(w=(w_{1},w_{2})\). Then_
\[\rho_{p,\Omega}(z,w)\leq\rho_{p,\Omega_{1}}(z_{1},w_{1})+\rho_{p,\Omega_{2}}( z_{2},w_{2})\]
Proof.: By definition and the product rule from [7, Proposition 2.8]
\[\rho_{p,\Omega}(z,w) =\min_{\theta\in[0,2\pi]}\left\|e^{i\theta}\frac{m_{p,\Omega}(\cdot,z)}{m_{p,\Omega}(z)}-\frac{m_{p,\Omega}(\cdot,w)}{m_{p,\Omega}(w)}\right\|_{p,\Omega}\] \[=\min_{\theta\in[0,2\pi]}\left\|e^{i\theta}\frac{m_{p,\Omega_{1}}(\cdot,z_{1})m_{p,\Omega_{2}}(\cdot,z_{2})}{m_{p,\Omega_{1}}(z_{1})m_{p,\Omega_{2}}(z_{2})}-\frac{m_{p,\Omega_{1}}(\cdot,w_{1})m_{p,\Omega_{2}}(\cdot,w_{2})}{m_{p,\Omega_{1}}(w_{1})m_{p,\Omega_{2}}(w_{2})}\right\|_{p,\Omega}\]
For \(i=1,2\),
\[\text{set }f_{i}(\zeta_{i})=e^{i\theta_{i}}\frac{m_{p,\Omega_{i}}(\zeta_{i},z_{i} )}{m_{p,\Omega_{i}}(z_{i})};\ \ g_{i}(\zeta_{i})=\frac{m_{p,\Omega_{i}}(\zeta_{i},w_{i})}{m_{p,\Omega_{i}}(w_ {i})}\in\mathcal{A}^{p}(\Omega_{i})\]
where \(\theta_{i}\) is chosen such that \(\rho_{p,\Omega_{i}}(z_{i},w_{i})=\left\|f_{i}-g_{i}\right\|_{p,\Omega_{i}}\).
Then
\[\rho_{p,\Omega}(z,w)\leq \left\|f_{1}f_{2}-g_{1}g_{2}\right\|_{p,\Omega}\leq \left\|f_{1}f_{2}-g_{1}f_{2}\right\|_{p,\Omega}+\left\|g_{1}f_{2}-g _{1}g_{2}\right\|_{p,\Omega}.\]
Consider
\[\left\|f_{1}f_{2}-g_{1}f_{2}\right\|_{p,\Omega} =\left(\int_{\Omega_{1}\times\Omega_{2}}\left|f_{1}(\zeta_{1})-g_ {1}(\zeta_{1})\right|^{p}\left|f_{2}(\zeta_{2})\right|^{p}d\zeta_{1}d\zeta_{2 }\right)^{1/p}\] \[=\left(\int_{\Omega_{1}}\left|f_{1}(\zeta_{1})-g_{1}(\zeta_{1}) \right|^{p}d\zeta_{1}\right)^{1/p}\left(\int_{\Omega_{2}}\left|f_{2}(\zeta_{2 })\right|^{p}d\zeta_{2}\right)^{1/p}=\rho_{p,\Omega_{1}}(z_{1},w_{1}).\]
Similarly \(\left\|g_{1}f_{2}-g_{1}g_{2}\right\|_{p,\Omega}=\rho_{p,\Omega_{2}}(z_{2},w_ {2})\), so
\[\rho_{p,\Omega}(z,w)\leq\rho_{p,\Omega_{1}}(z_{1},w_{1})+\rho_{p,\Omega_{2}}(z _{2},w_{2})\]
### \(p\)-Bergman metric on the product domain
In [7], the \(p\)-Bergman metric was defined as follows for a vector field \(X\).
\[B_{p}(z_{0};X):=K_{p}(z_{0})^{-\frac{1}{p}}\sup\{\left|Xf(z_{0})\right|:f\in \mathcal{A}^{p},\;f(z_{0})=0,\left\|f\right\|_{p}=1\,\}.\]
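The supremum in this definition can also be estimated numerically in the same spirit as the earlier sketch for \(m_{p}\): among polynomials vanishing at \(z_{0}\) and normalized so that \(Xf'(z_{0})=1\), minimizing \(\|f\|_{p}\) gives the reciprocal of the supremum. The code below (Python/NumPy/SciPy, reusing `Z`, `W`, and `approx_m_p` from the earlier snippet, and specialized to the unit disc with a scalar direction \(X\)) is a rough illustration of mine, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

def approx_B_p(p, z0, X=1.0, degree=8):
    """Estimate B_p(z0; X) on the unit disc.  We minimize ||f||_p over
    f(z) = c_1 (z - z0) + ... + c_d (z - z0)^d with X * f'(z0) = X * c_1 = 1;
    by scaling, sup{|X f'(z0)| : f(z0) = 0, ||f||_p = 1} is the reciprocal
    of that minimum norm."""
    powers = np.stack([(Z - z0) ** k for k in range(1, degree + 1)])

    def p_energy(x):
        free = x[: degree - 1] + 1j * x[degree - 1:]
        c = np.concatenate(([1.0 / X], free))      # enforces X * c_1 = 1
        f = np.tensordot(c, powers, axes=1)
        return np.sum(W * np.abs(f) ** p)

    res = minimize(p_energy, np.zeros(2 * (degree - 1)), method="BFGS")
    sup_xf = 1.0 / res.fun ** (1.0 / p)
    return approx_m_p(p, z0) * sup_xf               # K_p(z0)^{-1/p} = m_p(z0)

print(approx_B_p(3.0, 0.3 + 0.2j))
```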
There, the problem was posed of computing the \(p\)-Bergman metric on the product of two domains. Here, we provide a partial solution to this problem.
**Lemma 7.2**.: _Suppose that \(\Omega_{1}\subset\mathbb{C}^{k_{1}}\) and \(\Omega_{2}\subset\mathbb{C}^{k_{2}}\) are bounded domains. Let \(\Omega=\Omega_{1}\times\Omega_{2}\), \(z=(z_{1},z_{2})\), \(X=(X_{1},X_{2})\). Then_
\[B_{p,\Omega}(z;X)\geq\max_{i=1,2}B_{p,\Omega_{i}}(z_{i};X_{i}).\]
Proof.: Suppose \(h\in\mathcal{A}^{p}(\Omega_{1}),\left\|h\right\|_{p,\Omega_{1}}=1\), \(f\in\mathcal{A}^{p}(\Omega_{2})\), \(f(z_{2})=0\), and \(\left\|f\right\|_{p,\Omega_{2}}=1\). If \(g(\zeta_{1},\zeta_{2}):=h(\zeta_{1})\cdot f(\zeta_{2})\), then \(g\in\mathcal{A}^{p}(\Omega)\), and \(\left\|g\right\|_{p,\Omega}=\left\|h\right\|_{p,\Omega_{1}}\cdot\left\|f \right\|_{p,\Omega_{2}}=1\). Now
\[B_{p,\Omega}(z;X)\geq K_{p,\Omega}^{-\frac{1}{p}}(z)\left|X(g)(z)\right|=K_{p, \Omega_{1}}(z_{1})^{-\frac{1}{p}}\left|h(z_{1})\right|\cdot K_{p,\Omega_{2}}( z_{2})^{-\frac{1}{p}}\left|X_{2}(f)(z_{2})\right|.\]
Taking the supremum over all functions \(f\) and \(h\) as above, we get
\[\sup_{h}K_{p,\Omega_{1}}(z_{1})^{-\frac{1}{p}}\left|h(z_{1})\right|=1\text{ and }\sup_{f}K_{p,\Omega_{2}}(z_{2})^{-\frac{1}{p}}\left|X_{2}(f)(z_{2})\right|=B_{p, \Omega_{2}}(z_{2};X_{2}).\]
By symmetry,
\[B_{p,\Omega}(z;X)\geq\max_{i=1,2}B_{p,\Omega_{i}}(z_{i};X_{i}).\qed\]
## 8 Application
From [7], we have \(\big{|}m_{p}(z,w)\big{|}\leq m_{p}(w)/m_{p}(z)\), and equality holds if and only if \(z=w\). This follows from the reproducing formula and Holder's inequality. Indeed,
\[\big{|}m_{p}(z,w)\big{|}= \bigg{|}m_{p}(z)^{-p}\int_{\Omega}\!\big{|}m_{p}(\zeta,z)\big{|}^{ p-2}\,\overline{m_{p}(\zeta,z)}m_{p}(\zeta,w)d\zeta\bigg{|}\leq m_{p}(w)/m_{p}(z).\]
We can find a stronger bound using the \(p\)-Skwarczynski distance.
**Lemma 8.1**.: _If \(p>2\), then_
\[\big{|}m_{p}(z,w)\big{|}\leq\frac{m_{p}(w)}{m_{p}(z)}\left[1-\frac{\rho_{p}(z, w)^{p}}{p\cdot 4^{p+3}}\right]. \tag{12}\]
Equation (12) shows that \(\big{|}m_{p}(z,w)\big{|}=m_{p}(w)/m_{p}(z)\) if and only if \(\rho_{p}(z,w)=0\) (equivalently \(z=w\)).
Proof.: From [7, Proposition 4.3 (3)], we have
\[|b|^{p}\geq |a|^{p}+p\operatorname{Re}(\!\left|a\right|^{p-2}\bar{a}(b-a))+ \frac{1}{4^{p+3}}|b-a|^{p}\;\;\text{when $p>2$}.\]
Set \(a=m_{p}(\zeta,z)/m_{p}(z)\), and \(b=e^{i\theta}m_{p}(\zeta,w)/m_{p}(w)\), where \(\theta\) will be specified below. Integrating the above inequality shows that
\[1\geq 1+p\operatorname{Re}\left\{\int_{\Omega}m_{p}(z)^{-p+1} \big{|}m_{p}(\zeta,z)\big{|}^{p-2}\,\overline{m_{p}(\zeta,z)}\left[\frac{e^{i \theta}m_{p}(\zeta,w)}{m_{p}(w)}-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right]d\zeta\right\} \\ +\frac{1}{4^{p+3}}\int_{\Omega}\!\left|\frac{e^{i\theta}m_{p}( \zeta,w)}{m_{p}(w)}-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}d\zeta.\]
Applying the reproducing property shows that
\[pm_{p}(z)\operatorname{Re}\left\{\frac{e^{i\theta}m_{p}(z,w)}{m_{p}(w)}- \frac{m_{p}(z,z)}{m_{p}(z)}\right\}+\frac{\rho_{p}(z,w)^{p}}{4^{p+3}}\leq 0.\]
Now choose \(\theta\) such that \(e^{i\theta}m_{p}(z,w)=\big{|}m_{p}(z,w)\big{|}\). Then,
\[\frac{\big{|}m_{p}(z,w)\big{|}\,m_{p}(z)}{m_{p}(w)}-1\leq-\frac{\rho_{p}(z,w) ^{p}}{p\cdot 4^{p+3}}\]
which is equivalent to (12).
## 9 Appendix
We will now look at the proof of Lemma 4.3 by partially following the steps in _Appendix_ of [7].
Proof.: Let \(a,b\in\mathbb{C}\), \(p\geq 1\). Define
\[\eta(t)=\left|a+t(b-a)\right|^{2};\kappa(t)=\eta(t)^{p/2}=\left|a+t(b-a)\right|^{ p}.\]
\[\kappa^{\prime}(t)=\frac{p}{2}\eta(t)^{p/2-1}\eta^{\prime}(t)=p\cdot\left|a+t(b-a) \right|^{p-2}Re(\bar{a}(b-a)+t(\left|a-b\right|^{2})).\]
Using \((Re\{\bar{a}(b-a)\}+t|b-a|^{2})^{2}+(Im(\bar{a}b))^{2}=\left|b-a\right|^{2} \left|a+t(b-a)\right|^{2}\)
\[\kappa^{\prime\prime}(t)=p\big{|}a+t(b-a)\big{|}^{p-4}\left[(Im\{\bar{a}b\})^ {2}+(p-1)\cdot(Re\{\bar{a}(b-a)\}+t|b-a|^{2})^{2}\right]\]
which implies,
\[p\min\{1,(p-1)\}\big{|}a+t(b-a)\big{|}^{p-2}|b-a|^{2}\leq\kappa^{\prime\prime }(t)\leq p\max\{1,(p-1)\}\big{|}a+t(b-a)\big{|}^{p-2}|b-a|^{2}\]
Using integration by parts, we have
\[\kappa(1)=\kappa(0)+\kappa^{\prime}(0)+\int_{0}^{1}(1-t)\kappa^{\prime\prime}(t)\,dt\]
Applying the upper and lower bounds on \(\kappa^{\prime\prime}(t)\), we get
\[\left|b\right|^{p}\geq\left|a\right|^{p}+pRe(\left|a\right|^{p-2}\bar{a}(b-a))+ p\min\{1,(p-1)\}\int_{0}^{1}(1-t)\big{|}a+t(b-a)\big{|}^{p-2}|b-a|^{2}\,dt \tag{13}\]
\[\left|b\right|^{p}\leq\left|a\right|^{p}+pRe(\left|a\right|^{p-2}\bar{a}(b-a)) +p\max\{1,(p-1)\}\int_{0}^{1}(1-t)\big{|}a+t(b-a)\big{|}^{p-2}|b-a|^{2}\,dt \tag{14}\]
Let \(p>2\), using (13)
\[\left|b\right|^{p}\geq\left|a\right|^{p}+pRe(\left|a\right|^{p-2}\bar{a}(b-a) )+p\cdot\left|b-a\right|^{2}\cdot I,\]
where \(I=\int_{0}^{1}(1-t)\big{|}a+t(b-a)\big{|}^{p-2}\,dt\).
We will now compute a lower bound on \(I\):
\[I\geq\int_{0}^{1}(1-t)\left|\left|a\right|-t\left|b-a\right|\right|^{p-2}dt.\]
If \(|a|\geq\left|b-a\right|/2\), then
\[I\geq\left|b-a\right|^{p-2}\int_{0}^{1/4}(1-t)(1/2-t)^{p-2}dt.\]
If \(\left|a\right|\leq\left|b-a\right|/2\), then
\[I\geq\left|b-a\right|^{p-2}\int_{3/4}^{1}(1-t)(t-1/2)^{p-2}dt.\]
Thus there exists \(c>0\) such that \(I\geq c\left|b-a\right|^{p-2}\). Hence we have
\[|b|^{p}\geq\left|a\right|^{p}+pRe(\left|a\right|^{p-2}\bar{a}(b-a))+c|b-a|^{p}\,.\]
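For concreteness (this explicit value is recorded only for the reader's convenience and is not needed below), an admissible choice of the constant is the smaller of the two integrals above,
\[c=\int_{3/4}^{1}(1-t)\left(t-\tfrac{1}{2}\right)^{p-2}dt=\int_{0}^{1/4}s\left(\tfrac{1}{2}-s\right)^{p-2}ds>0\,,\]
which equals \(1/96\) when \(p=3\).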
Let \(f\in\mathcal{A}^{p}\) with \(\left\|f\right\|_{p}=1\). Set \(b=e^{i\theta}f(\zeta)\), \(a=m_{p}(\zeta,z)/m_{p}(z)\), and integrate the above inequality:
\[1\geq\frac{m_{p}(z)^{p}}{m_{p}(z)^{p}}+pRe\left\{\int_{\Omega}m_{p}(z)^{-p+1} \big{|}m_{p}(\zeta,z)\big{|}^{p-2}\,\overline{m_{p}(\zeta,z)}\left[e^{i\theta} f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right]d\zeta\right\}\]
\[+c\int_{\Omega}\left|e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}\,d\zeta\]
Using the reproducing property,
\[pm_{p}(z)Re\left\{e^{i\theta}f(z)-\frac{1}{m_{p}(z)}\right\}+c\cdot d([m_{p}( \cdot,z)],[f])^{p}\leq 0.\]
Choose \(\theta\) such that \(e^{i\theta}f(z)=\left|f(z)\right|\). Dividing by \(p\), we obtain
\[\left|f(z)\right|m_{p}(z)\leq 1-\frac{d([m_{p}(\cdot,z)],[f])^{p}}{c_{p}}\,, \tag{15}\]
where \(c_{p}\equiv p/c\).
Let \(p>2\). Using (14)
\[|b|^{p}\leq\left|a\right|^{p}+p\cdot Re\left[\left|a\right|^{p-2}\bar{a}(b-a) \right]+p(p-1)\int_{0}^{1}\left|b-a\right|^{2}\left[(1-t)\right]\left|a+t(b-a) \right|^{p-2}dt\]
Let \(f\in\mathcal{A}^{p}\) with \(\left\|f\right\|_{p}=1\). Set \(b=e^{i\theta}f(\zeta)\), \(a=m_{p}(\zeta,z)/m_{p}(z)\), and integrate the above inequality:
\[\int_{\Omega}\left|f(\zeta)\right|^{p}d\zeta\leq\int_{\Omega}\left|\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}d\zeta+pRe\left\{\int_{\Omega}m_{p}(z)^{-p+1}\big{|}m_{p}(\zeta,z)\big{|}^{p-2}\,\overline{m_{p}(\zeta,z)}\left[e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right]d\zeta\right\}\]
\[+p(p-1)\int_{\Omega}\int_{0}^{1}(1-t)\Bigg{|}e^{i\theta}f(\zeta)-\frac{m_{p} (\zeta,z)}{m_{p}(z)}\Bigg{|}^{2}\Bigg{|}\frac{m_{p}(\zeta,z)}{m_{p}(z)}+t \left[e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right]\Bigg{|}^{p-2 }\,dtd\zeta\]
Consider
\[I_{1}=\int_{\Omega}\int_{0}^{1}[(1-t)]\Bigg{|}e^{i\theta}f(\zeta)-\frac{m_{p} (\zeta,z)}{m_{p}(z)}\Bigg{|}^{2}\Bigg{|}\frac{m_{p}(\zeta,z)}{m_{p}(z)}+t \left[e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right]\Bigg{|}^{p-2 }\,dtd\zeta\]
Using Fubini's theorem and Hölder's inequality (with \(p^{\prime}=p/2\), \(q^{\prime}=p/(p-2)\)),
\[I_{1}\leq\int_{0}^{1}[(1-t)]\left[\int_{\Omega}\left|e^{i\theta}f(\zeta)- \frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}d\zeta\right]^{2/p}\left[\int_{ \Omega}\left|\frac{m_{p}(\zeta,z)}{m_{p}(z)}+t\left[e^{i\theta}f(\zeta)-\frac {m_{p}(\zeta,z)}{m_{p}(z)}\right]\right|^{p}d\zeta\right]^{(p-2)/p}dt\]
\[\leq\frac{3^{(p-2)/p}}{2}\left[\int_{\Omega}\left|e^{i\theta}f(\zeta)-\frac{m_{p}( \zeta,z)}{m_{p}(z)}\right|^{p}d\zeta\right]^{2/p}\]
Thus using the reproducing formula,
\[p\cdot Re\left[m_{p}(z)e^{i\theta}f(z)-1\right]+C_{p}\left[\int_{\Omega}\left| e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}d\zeta\right]^{2/p}\geq 0\]
for all \(\theta\) and for some \(C_{p}>0\).
Choose \(\theta\) such that \(\int_{\Omega}\left|e^{i\theta}f(\zeta)-\frac{m_{p}(\zeta,z)}{m_{p}(z)}\right|^{p}d\zeta=d([m_{p}(\cdot,z)],[f])^{p}\). Then,
\[C^{\prime}\cdot d([m_{p}(\cdot,z)],[f])^{2}\geq 1-m_{p}(z)Re\{e^{i\theta}f(z)\}\geq 1-m_{p}(z)\big{|}f(z)\big{|}\,,\]
where \(C^{\prime}=C_{p}/p\). This completes the proof of Lemma 4.3.
### Acknowledgements
The author would like to thank Prof. Harold Boas for giving valuable advice and feedback during the preparation of this note. He would also like to thank Tanuj Gupta, Siddharth Sabharwal and John Treuer for useful conversations.
| ```
$\Omega$ の領域における新しい距離を、$A^p(\Omega)$ 上の「最小値」関数を用いて導入します。その不変性、完備性、その他の関連する側面について議論します。
```
This translation is accurate and conveys the meaning of the original sentence in a clear and natural Japanese. |
2309.13946 | Observational constraints on interactions between dark energy and dark
matter with momentum and energy transfers | We place observational constraints on a dark energy (DE) model in which a
quintessence scalar field $\phi$ is coupled to dark matter (DM) through
momentum and energy exchanges.The momentum transfer is weighed by an
interaction between the field derivative and DM four velocity with a coupling
constant $\beta$, whereas the energy exchange is characterized by an
exponential scalar-field coupling to the DM density with a coupling constant
$Q$. A positive coupling $\beta$ leads to the suppression for the growth of DM
density perturbations at low redshifts, whose property offers a possibility for
resolving the $\sigma_8$ tension problem. A negative coupling $Q$ gives rise to
a $\phi$-matter-dominated epoch, whose presence can reduce the sound horizon
around the Cosmic Microwave Background (CMB) decoupling epoch. Using the data
of Planck 2018, 12-th Sloan Digital Sky Survey, Phantheon supernovae samples,
and 1-year dark energy survey, we find that the two couplings are constrained
to be $\beta=0.332^{+1.246}_{-0.237}$ and $Q =-0.0312^{+0.0312}_{-0.0085}$ at
68\,\% confidence level (CL). Thus, there is an interesting observational
signature of the momentum exchange ($\beta \neq 0$) between DE and DM, with a
peak of the probability distribution of the energy transfer coupling at $Q<0$. | Xiaolin Liu, Shinji Tsujikawa, Kiyotomo Ichiki | 2023-09-25T08:26:51 | http://arxiv.org/abs/2309.13946v2 | # Observational constraints on interactions between dark energy and dark matter
###### Abstract
We place observational constraints on a dark energy (DE) model in which a quintessence scalar field \(\phi\) is coupled to dark matter (DM) through momentum and energy exchanges. The momentum transfer is weighed by an interaction between the field derivative and DM four velocity with a coupling constant \(\beta\), whereas the energy exchange is characterized by an exponential scalar-field coupling to the DM density with a coupling constant \(Q\). A positive coupling \(\beta\) leads to the suppression for the growth of DM density perturbations at low redshifts, whose property offers a possibility for resolving the \(\sigma_{8}\) tension problem. A negative coupling \(Q\) gives rise to a \(\phi\)-matter-dominated epoch, whose presence can reduce the sound horizon around the Cosmic Microwave Background (CMB) decoupling epoch. Using the data of Planck 2018, 12-th Sloan Digital Sky Survey, Phantheon supernovae samples, and 1-year dark energy survey, we find that the two couplings are constrained to be \(\beta=0.417^{+1.592}_{-0.307}\) and \(Q=-0.036^{+0.036}_{-0.010}\) at 68 % confidence level (CL). Thus, there is an interesting observational signature of the momentum exchange (\(\beta\neq 0\)) between DE and DM, with a peak of the probability distribution of the energy transfer coupling at \(Q<0\).
Footnote †: preprint: WUAP-23-10
## I Introduction
Revealing the origin of the dark sector in our Universe is an important challenge for the modern cosmology [1; 2; 3; 4; 5; 6; 7]. Dark energy (DE) accelerates the current Universe, while cold dark matter (CDM) is the main source for the formation of large-scale structures. The origin of DE can be a cosmological constant \(\Lambda\)[8; 9; 10; 11], but it is theoretically challenging to naturally explain its small value from the vacuum energy arising from particle physics [12; 13]. Instead, there have been many attempts for constructing DE models with dynamical propagating degrees of freedom such as scalar fields, vector fields, and massive gravitons (see Refs. [14; 15; 16; 17; 18; 19] for reviews). Among them, the scalar-field DE, which is dubbed quintessence [20; 21; 22; 23; 24; 25; 26; 27], is one of the simplest models which can be distinguished from the cosmological constant through its time-varying equation of state (EOS) \(w_{\rm DE}\).
From the observational side, we have not yet found compelling evidence that quintessence is favored over the cosmological constant. In particular, the joint analysis based on the data of supernovae Ia (SN Ia), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB) showed that the quintessence EOS needs to be close to \(-1\) at low redshifts [28; 29; 30; 31; 32]. Hence it is difficult to distinguish between quintessence and \(\Lambda\) from the information of \(w_{\rm DE}\) alone. At the level of perturbations, the \(\Lambda\)CDM model has a so-called \(\sigma_{8}\) tension for the amplitude of matter density contrast between the Planck CMB data [31] and low-redshift probes like shear-lensing [33; 34; 35] and redshift-space distortions [36; 37]. For both \(\Lambda\) and quintessence, the effective gravitational coupling \(G_{\rm eff}\) on scales relevant to the growth of large-scale structures is equivalent to the Newton constant \(G\). Then, the problem of the \(\sigma_{8}\) tension cannot be addressed by quintessence either. Moreover, for both \(\Lambda\) and quintessence, there is the tension of today's Hubble expansion rate \(H_{0}\) between the CMB data and low-redshift measurements [38; 39; 40; 41; 42; 43; 44; 45].
If we allow for a possibility of interactions between DE and DM, the cosmic expansion and growth histories can be modified in comparison to the \(\Lambda\)CDM model. One example of such couplings corresponds to an energy exchange between DE and DM through an interacting Lagrangian \(L_{\rm E}=-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\)[46; 47; 48; 49], where \(Q\) is a coupling constant, \(M_{\rm Pl}\) is the reduced Planck mass, and \(\rho_{c}\) is the CDM density. The similar type of couplings arises from Brans-Dicke theories [50] after transforming the Jordan-frame action to that in the Einstein frame [51; 52; 53]. In the presence of such an energy transfer, it is possible to realize a so-called \(\phi\)-matter-dominated epoch (\(\phi\)MDE) [47] in which the DE (scalar field) density parameter takes a nonvanishing constant value \(\Omega_{\rm DE}=2Q^{2}/3\). The presence of the \(\phi\)MDE can reduce the sound horizon at CMB decoupling [54; 55; 56], which may offer a possibility for alleviating the \(H_{0}\) tension. On the other hand, the effective gravitational coupling of CDM is given by \(G_{\rm eff}=G(1+2Q^{2})\)[57; 58], which is larger than \(G\). This property is not welcome for reducing the \(\sigma_{8}\) tension, as we require that \(G_{\rm eff}<G\) to address this problem.
The scalar field can also mediate the momentum exchange with CDM through a scalar product \(Z=u_{c}^{\mu}\nabla_{\mu}\phi\)[49; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70], where \(u_{c}^{\mu}\) is a CDM four velocity and \(\nabla_{\mu}\phi\) is a covariant derivative of \(\phi\). If we consider an interacting Lagrangian of the form \(L_{\rm M}=\beta Z^{2}\), where \(\beta\) is a coupling constant, the modification to the background equations arises only through a change of the kinetic term \(\dot{\phi}^{2}/2\to(1+2\beta)\dot{\phi}^{2}/2\) in the density and pressure of \(\phi\)[63; 59]. At the level of perturbations, the Euler equation is modified by the momentum transfer, while the continuity equation is not affected. For \(\beta>0\), the conditions for the absence of ghosts and Laplacian instabilities of scalar and tensor perturbations are consistently satisfied [66]. In this case, the effective gravitational coupling of CDM is smaller than \(G\) at low redshifts [66; 63; 49; 69]. Then, there is an intriguing possibility for reducing the \(\sigma_{8}\) tension by the momentum transfer [63; 67; 65; 67].
An interacting model of DE and DM with both momentum and energy transfers was proposed in Ref. [68] as a possible solution to the problems of \(\sigma_{8}\) and \(H_{0}\) tensions. This is described by the interacting Lagrangian \(L_{\rm int}=\beta Z^{2}-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\) with a canonical scalar field \(\phi\) having a potential \(V(\phi)\). Since the model has an explicit Lagrangian, the perturbation equations of motion are unambiguously fixed by varying the corresponding action with respect to the perturbed variables. We would like to stress that this is not the case for many interacting DE and DM models in which the background equations alone are modified by introducing phenomenological couplings [71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. We note however that there are some other models with concrete Lagrangians or energy-momentum tensors based on interacting fluids of DE and DM [87; 88; 89; 90] or on vector-tensor theories [91].
In Ref. [68], it was anticipated that the momentum transfer associated with the coupling \(\beta\) may address the \(\sigma_{8}\) tension due to the suppression of growth of matter perturbations and that the energy transfer characterized by the coupling \(Q\) may ease the \(H_{0}\) tension by the presence of the \(\phi\)MDE. While the gravitational attraction is enhanced by the energy transfer, the decrease of \(G_{\rm eff}\) induced by the coupling \(\beta\) can overwhelm the increase of \(G_{\rm eff}\) induced by the coupling \(Q\)[68; 69]. We also note that the coupling \(\beta\) does not remove the existence of the \(\phi\)MDE at the background level. These facts already imply that nonvanishing values of couplings may be favored, but we require a statistical analysis with actual observational data to see the signatures of those couplings.
In this paper, we perform the Markov chain Monte Carlo (MCMC) analysis of the interacting model of DE and DM with momentum and energy transfers mentioned above. For this purpose, we exploit the recent data of Planck CMB [92], 12-th Sloan Digital Sky Survey (SDSS) [93], Phantheon supernovae samples [94], and 1-year dark energy survey (DES) [95]. We show that the nonvanishing value of \(\beta\) is statistically favoured over the case \(\beta=0\), so there is an interesting signature of the momentum transfer between DE and DM. For the energy transfer, the probability distribution of the coupling has a peak at \(Q<0\). The \(Q=0\) case is also consistent with the data at \(68\,\%\) CL, so the signature of energy transfer is not so significant compared to that of momentum transfer. Today's Hubble constant is constrained to be \(H_{0}=68.22^{+0.58}_{-0.61}\) (\(68\,\%\) CL), which is not much different from the bound derived for the \(\Lambda\)CDM model with the above data sets. Like most of the models proposed in the literature, our coupled DE-DM scenario does not completely resolve the Hubble tension problem present in the current observational data.
This paper is organized as follows. In Sec. II, we revisit the background dynamics in our interacting model of DE and DM. In Sec. III, we show the full linear perturbation equations of motion and discuss the stability and the effective gravitational couplings of nonrelativistic matter. In Sec. IV, we explain the methodology of how to implement the background and perturbation equations in the CAMB code. We also discuss the impact of our model on several observables. In Sec. V, we present our MCMC results and interpret constraints on the model parameters. Sec. VI is devoted to conclusions. Throughout the paper, we work in the natural unit system, i.e., \(c=\hbar=k_{B}=1\).
## II Background equations of motion
We consider a DE scalar field \(\phi\) interacting with CDM through energy and momentum transfers. We assume that \(\phi\) is a canonical field with the kinetic term \(X=-(1/2)\nabla^{\mu}\phi\nabla_{\mu}\phi\) and the exponential potential \(V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\), where \(V_{0}\) and \(\lambda\) are constants. The choice of the exponential potential is not essential for the purpose of probing the DE-DM couplings, but we can choose other quintessence potentials like the inverse power-law type \(V(\phi)=V_{0}\phi^{-p}\)[54; 55; 56]. The energy transfer is described by the interacting Lagrangian \(L_{\rm E}=-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\), where \(Q\) is a coupling constant and \(\rho_{c}\) is the CDM density. In the limit that \(Q\to 0\), we have \(L_{\rm E}\to 0\). The momentum transfer is weighed by the interacting Lagrangian \(L_{\rm M}=\beta Z^{2}\), where \(\beta\) is a coupling constant and \(Z\) is defined by
\[Z=u_{c}^{\mu}\nabla_{\mu}\phi\,, \tag{1}\]
where \(u_{c}^{\mu}\) is the CDM four velocity. For the gravity sector, we consider Einstein gravity described by the Lagrangian of a Ricci scalar \(R\). Then, the total action is given by [68]
\[\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\left[\frac{M_{\rm Pl}^{2}}{2}R+X-V_{0}e^ {-\lambda\phi/M_{\rm Pl}}-\left(e^{Q\phi/M_{\rm Pl}}-1\right)\rho_{c}+\beta Z^ {2}\right]+\mathcal{S}_{m}\,, \tag{2}\]
where \(g\) is a determinant of the metric tensor \(g_{\mu\nu}\), \(\mathcal{S}_{m}\) is the matter action containing the contributions of CDM, baryons, and radiation with the energy densities \(\rho_{I}\), EOSs \(w_{I}\), and squared sound speeds \(c_{I}\), which are labeled by \(I=c,b,r\) respectively. We assume that neither baryons nor radiation are coupled to the scalar field. The action \(\mathcal{S}_{m}\) of perfect fluids can be expressed as a form of the Schutz-Sorkin action [96; 97; 98]
\[\mathcal{S}_{m}=-\sum_{I=c,b,r}\int\mathrm{d}^{4}x\left[\sqrt{-g}\,\rho_{I}(n_ {I})+J_{I}^{\mu}\partial_{\mu}\ell_{I}\right]\,, \tag{3}\]
where \(\rho_{I}\) depends on the number density \(n_{I}\) of each fluid. The current vector field \(J_{I}^{\mu}\) is related to \(n_{I}\) as \(n_{I}=\sqrt{g_{\mu\nu}J_{I}^{\mu}J_{I}^{\nu}/g}\), with \(\ell_{I}\) being the Lagrange multiplier. The fluid four velocity is given by
\[u_{I}^{\mu}=\frac{J_{I}^{\mu}}{n_{I}\sqrt{-g}}\,, \tag{4}\]
which satisfies the normalization \(u_{I}^{\mu}u_{I\mu}=-1\). Varying the action (2) with respect to \(\ell_{I}\), it follows that \(\partial_{\mu}J_{I}^{\mu}=0\). In terms of the four velocity, this current conservation translates to
\[u_{I}^{\mu}\partial_{\mu}\rho_{I}+\left(\rho_{I}+P_{I}\right) \nabla_{\mu}u_{I}^{\mu}=0\,, \tag{5}\]
where \(P_{I}=n_{I}\rho_{I,n}-\rho_{I}\) is the pressure of each fluid.
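For completeness, a short check (added here for convenience) of how Eq. (5) follows from current conservation, using \(J_{I}^{\mu}=n_{I}\sqrt{-g}\,u_{I}^{\mu}\) from Eq. (4), \(\rho_{I}=\rho_{I}(n_{I})\), and \(\rho_{I}+P_{I}=n_{I}\rho_{I,n_{I}}\):
\[0=\partial_{\mu}J_{I}^{\mu}=\sqrt{-g}\,\nabla_{\mu}\!\left(n_{I}u_{I}^{\mu}\right)\quad\Longrightarrow\quad u_{I}^{\mu}\partial_{\mu}\rho_{I}=\rho_{I,n_{I}}u_{I}^{\mu}\partial_{\mu}n_{I}=-n_{I}\rho_{I,n_{I}}\nabla_{\mu}u_{I}^{\mu}=-\left(\rho_{I}+P_{I}\right)\nabla_{\mu}u_{I}^{\mu}\,.\]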
We discuss the cosmological dynamics on the spatially-flat Friedmann-Lemaitre-Robertson-Walker (FLRW) background given by the line element
\[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x ^{j}\,, \tag{6}\]
where \(a(t)\) is the time-dependent scale factor. On this background we have \(u_{I}^{\mu}=(1,0,0,0)\) and \(\nabla_{\mu}u_{I}^{\mu}=3H\), where \(H=\dot{a}/a\) is the expansion rate of the Universe and a dot denotes the derivative with respect to the cosmic time \(t\). From Eq. (5), we have
\[\dot{\rho}_{I}+3H\left(\rho_{I}+P_{I}\right)=0\,, \tag{7}\]
which holds for each \(I=c,b,r\). We consider the cosmological dynamics after the CDM and baryons started to behave as non-relativistic particles. At this epoch, we have \(w_{c}=0\), \(w_{b}=0\), \(c_{c}^{2}=0\), and \(c_{b}^{2}=0\). The radiation has a usual relativistic EOS \(w_{r}=1/3\) with \(c_{r}^{2}=1/3\). The gravitational field equations of motion are given by
\[3M_{\rm pl}^{2}H^{2}=\rho_{\phi}+e^{Q\phi/M_{\rm Pl}}\rho_{c}+ \rho_{b}+\rho_{r}\,, \tag{8}\] \[M_{\rm pl}^{2}\left(2\dot{H}+3H^{2}\right)=-P_{\phi}-\frac{1}{3} \rho_{r}\,, \tag{9}\]
where \(\rho_{\phi}\) and \(P_{\phi}\) are the scalar-field density and pressure defined, respectively, by
\[\rho_{\phi}=\frac{1}{2}q_{s}\dot{\phi}^{2}+V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,, \qquad P_{\phi}=\frac{1}{2}q_{s}\dot{\phi}^{2}-V_{0}e^{-\lambda\phi/M_{\rm Pl }}\,, \tag{10}\]
with
\[q_{s}\equiv 1+2\beta\,. \tag{11}\]
We require that \(q_{s}>0\) to have a positive kinetic term in \(\rho_{\phi}\).
The scalar-field equation can be expressed in the form
\[\dot{\rho}_{\phi}+3H\left(\rho_{\phi}+P_{\phi}\right)=-\frac{Q\dot{\phi}}{M_{ \rm Pl}}\hat{\rho}_{c}\,, \tag{12}\]
where
\[\hat{\rho}_{c}\equiv e^{Q\phi/M_{\rm Pl}}\rho_{c}\,. \tag{13}\]
Note that \(\hat{\rho}_{c}\) is the CDM density containing the effect of an energy transfer, and the energy flows from CDM to \(\phi\) if \(\dot{\phi}>0\) with \(Q<0\). From Eq. (7), CDM obeys the continuity equation \(\dot{\rho}_{c}+3H(\rho_{c}+P_{c})=0\). In terms of \(\hat{\rho}_{c}\), this equation can be expressed as
\[\dot{\hat{\rho}}_{c}+3H\hat{\rho}_{c}=+\frac{Q\dot{\phi}}{M_{\rm Pl}}\hat{\rho }_{c}\,. \tag{14}\]
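This rewriting is a one-line consequence of the definition (2.13) together with \(\dot{\rho}_{c}+3H\rho_{c}=0\):
\[\dot{\hat{\rho}}_{c}=\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{Q\phi/M_{\rm Pl}}\rho_{c}\right)=\frac{Q\dot{\phi}}{M_{\rm Pl}}\hat{\rho}_{c}+e^{Q\phi/M_{\rm Pl}}\dot{\rho}_{c}=\frac{Q\dot{\phi}}{M_{\rm Pl}}\hat{\rho}_{c}-3H\hat{\rho}_{c}\,.\]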
From Eqs. (2.12) and (2.14), it is clear that there is the energy transfer between the scalar field and CDM, but the momentum exchange between DE and DM does not occur at the background level. The effect of the coupling \(\beta\) appears only as the modification to the coefficient of \(\dot{\phi}^{2}\).
To study the background cosmological dynamics, it is convenient to introduce the following dimensionless variables
\[x_{1}=\frac{\dot{\phi}}{\sqrt{6}M_{\rm Pl}H}\,,\qquad x_{2}=\sqrt{\frac{V_{0}}{ 3}}\frac{e^{-\lambda\phi/(2M_{\rm Pl})}}{M_{\rm Pl}H}\,, \tag{2.15}\]
and
\[\Omega_{\phi}=q_{s}x_{1}^{2}+x_{2}^{2}\,,\qquad\Omega_{c}=\frac{e^{Q\phi/M_{\rm pl }}\rho_{c}}{3M_{\rm Pl}^{2}H^{2}}\,,\qquad\Omega_{b}=\frac{\rho_{b}}{3M_{\rm pl }^{2}H^{2}}\,,\qquad\Omega_{r}=\frac{\rho_{r}}{3M_{\rm pl}^{2}H^{2}}\,. \tag{2.16}\]
From Eq. (2.8), the density parameters are subject to the constraint
\[\Omega_{c}=1-\Omega_{\phi}-\Omega_{b}-\Omega_{r}\,. \tag{2.17}\]
The variables \(x_{1}\), \(x_{2}\), \(\Omega_{b}\), and \(\Omega_{r}\) obey the differential equations
\[\frac{{\rm d}x_{1}}{{\rm d}N} = \frac{1}{2}x_{1}\left(6q_{s}x_{1}^{2}-6+3\Omega_{c}+3\Omega_{b}+ 4\Omega_{r}\right)+\frac{\sqrt{6}}{2q_{s}}\left(\lambda x_{2}^{2}-Q\Omega_{c} \right)\,, \tag{2.18}\] \[\frac{{\rm d}x_{2}}{{\rm d}N} = \frac{1}{2}x_{2}\left(6q_{s}x_{1}^{2}-\sqrt{6}\lambda x_{1}+3 \Omega_{c}+3\Omega_{b}+4\Omega_{r}\right)\,,\] (2.19) \[\frac{{\rm d}\Omega_{b}}{{\rm d}N} = \Omega_{b}\left(6q_{s}x_{1}^{2}-3+3\Omega_{c}+3\Omega_{b}+4 \Omega_{r}\right)\,,\] (2.20) \[\frac{{\rm d}\Omega_{r}}{{\rm d}N} = \Omega_{r}\left(6q_{s}x_{1}^{2}-4+3\Omega_{c}+3\Omega_{b}+4 \Omega_{r}\right)\,, \tag{2.21}\]
where \(N=\ln a\). The scalar-field EOS \(w_{\phi}=P_{\phi}/\rho_{\phi}\) and effective EOS \(w_{\rm eff}=-1-2\dot{H}/(3H^{2})\) are
\[w_{\phi}=\frac{q_{s}x_{1}^{2}-x_{2}^{2}}{q_{s}x_{1}^{2}+x_{2}^{2}}\,,\qquad w_ {\rm eff}=-1+2q_{s}x_{1}^{2}+\Omega_{c}+\Omega_{b}+\frac{4}{3}\Omega_{r}\,. \tag{2.22}\]
The fixed points with constant values of \(x_{1}\), \(x_{2}\), \(\Omega_{b}\), and \(\Omega_{r}\) relevant to the radiation, matter, and dark-energy dominated epochs are given, respectively, by
* Radiation point (A) \[x_{1}=0\,,\quad x_{2}=0\,,\quad\Omega_{b}=0\,,\quad\Omega_{r}=1\,,\quad\Omega_ {\phi}=0\,,\quad w_{\rm eff}=\frac{1}{3}\,.\] (2.23)
* \(\phi\)MDE point (B) \[x_{1}=-\frac{\sqrt{6}Q}{3q_{s}}\,,\quad x_{2}=0\,,\quad\Omega_{b}=0\,,\quad \Omega_{r}=0\,,\quad\Omega_{\phi}=w_{\rm eff}=\frac{2Q^{2}}{3q_{s}}\,,\quad w_ {\phi}=1\,.\] (2.24)
* Accelerated point (C) \[x_{1}=\frac{\lambda}{\sqrt{6}q_{s}}\,,\quad x_{2}=\sqrt{1-\frac{\lambda^{2}}{ 6q_{s}}}\,,\quad\Omega_{b}=0\,,\quad\Omega_{r}=0\,,\quad\Omega_{\phi}=1\,, \quad w_{\phi}=w_{\rm eff}=-1+\frac{\lambda^{2}}{3q_{s}}\,.\] (2.25)
The coupling \(Q\) modifies the standard matter era through the nonvanishing values of \(\Omega_{\phi}\) and \(w_{\rm eff}\). To avoid the dominance of the scalar-field density over the CDM and baryon densities during the \(\phi\)MDE, we require that \(\Omega_{\phi}\ll 1\), i.e.,
\[Q^{2}\ll\frac{3}{2}(1+2\beta)\,. \tag{2.26}\]
To have the epoch of late-time cosmic acceleration driven by point (C), we need the condition \(w_{\rm eff}<-1/3\), i.e.,
\[\lambda^{2}<2(1+2\beta)\,. \tag{2.27}\]
Under this condition, we can show that point (C) is stable against the homogeneous perturbation if [68]
\[\lambda(\lambda+Q)<3(1+2\beta)\,. \tag{2.28}\]
Provided that the conditions (2.26)-(2.28) hold, the cosmological sequence of fixed points (A) \(\rightarrow\) (B) \(\rightarrow\) (C) can be realized. We refer the reader to Ref. [68] for the numerically integrated background solution. Taking the limits \(Q\to 0\), \(\beta\to 0\), and \(\lambda\to 0\), we recover the background evolution in the \(\Lambda\)CDM model.
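To make the sequence (A) \(\rightarrow\) (B) \(\rightarrow\) (C) concrete, the following is a minimal numerical sketch that integrates the autonomous system (2.18)-(2.21) with SciPy. The couplings, potential slope, and initial conditions below are illustrative assumptions (not the best-fit values of Sec. V), chosen only so that the solution passes through radiation domination, the \(\phi\)MDE, and the accelerated attractor.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not best-fit values)
Q, beta, lam = -0.04, 0.4, 0.5
qs = 1.0 + 2.0 * beta

def rhs(N, y):
    """Autonomous system (2.18)-(2.21), with N = ln(a)."""
    x1, x2, Ob, Or = y
    Oc = 1.0 - (qs * x1**2 + x2**2) - Ob - Or          # constraint (2.17)
    s = 6.0*qs*x1**2 + 3.0*Oc + 3.0*Ob + 4.0*Or        # recurring combination
    dx1 = 0.5*x1*(s - 6.0) + np.sqrt(6.0)/(2.0*qs)*(lam*x2**2 - Q*Oc)
    dx2 = 0.5*x2*(6.0*qs*x1**2 - np.sqrt(6.0)*lam*x1 + 3.0*Oc + 3.0*Ob + 4.0*Or)
    dOb = Ob*(s - 3.0)
    dOr = Or*(s - 4.0)
    return [dx1, dx2, dOb, dOr]

# Start deep in the radiation era (illustrative initial data)
y0 = [1.0e-8, 1.0e-13, 1.0e-4, 0.999]
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-8, atol=1e-12)

x1, x2, Ob, Or = sol.y
Oc = 1.0 - (qs*x1**2 + x2**2) - Ob - Or
w_eff = -1.0 + 2.0*qs*x1**2 + Oc + Ob + 4.0/3.0*Or      # Eq. (2.22)

print("phiMDE prediction  : Omega_phi = w_eff =", 2.0*Q**2/(3.0*qs))   # point (B)
print("attractor prediction: w_eff ->", -1.0 + lam**2/(3.0*qs))        # point (C)
print("numerical w_eff at the end of the run:", w_eff[-1])
```

The printed fixed-point values of points (B) and (C) can be compared with the plateau and the late-time limit of `w_eff` extracted from the numerical run.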
## III Perturbation equations of motion
In Ref. [68], the scalar perturbation equations of motion were derived without fixing particular gauges. The perturbed line element containing four scalar perturbations \(\alpha\), \(\chi\), \(\zeta\), and \(E\) on the spatially-flat FLRW background is given by
\[\mathrm{d}s^{2}=-(1+2\alpha)\mathrm{d}t^{2}+2\partial_{i}\chi\mathrm{d}t \mathrm{d}x^{i}+a^{2}(t)\left[(1+2\zeta)\delta_{ij}+2\partial_{i}\partial_{j}E \right]\mathrm{d}x^{i}\mathrm{d}x^{j}\,. \tag{10}\]
Tensor perturbations propagate in the same manner as in the \(\Lambda\)CDM model, so we do not consider them in the following. The scalar field \(\phi\) is decomposed into the background part \(\bar{\phi}(t)\) and the perturbed part \(\delta\phi\), as
\[\phi=\bar{\phi}(t)+\delta\phi(t,x^{i})\,, \tag{11}\]
where we omit the bar from background quantities in the following.
The spatial components of the four velocities, \(u_{Ii}=J_{Ii}/(n_{I}\sqrt{-g})\), in perfect fluids are related to the scalar velocity potentials \(v_{I}\), as
\[u_{Ii}=-\partial_{i}v_{I}\,. \tag{12}\]
The fluid density is given by \(\rho_{I}=\rho_{I}(t)+\delta\rho_{I}(t,x^{i})\), where the perturbed part is [69; 66; 49]
\[\delta\rho_{I}=\frac{\rho_{I,n_{I}}}{a^{3}}\left[\delta J_{I}-\mathcal{N}_{I} \left(3\zeta+\partial^{2}E\right)\right]\,, \tag{13}\]
where \(\rho_{I,n_{I}}=\partial\rho_{I}/\partial n_{I}\), and \(\mathcal{N}_{I}=n_{I}a^{3}\) is the background particle number of each fluid (which is conserved).
We can construct the following gauge-invariant combinations
\[\delta\phi_{\mathrm{N}}=\delta\phi+\dot{\phi}\left(\chi-a^{2} \dot{E}\right)\,,\qquad\delta\rho_{I\mathrm{N}}=\delta\rho_{I}+\dot{\rho}_{I} \left(\chi-a^{2}\dot{E}\right)\,,\qquad v_{I\mathrm{N}}=v_{I}+\chi-a^{2}\dot{E }\,,\] \[\Psi=\alpha+\frac{\mathrm{d}}{\mathrm{d}t}\left(\chi-a^{2}\dot{E }\right)\,,\qquad\Phi=\zeta+H\left(\chi-a^{2}\dot{E}\right)\,. \tag{14}\]
We also introduce the dimensionless variables
\[\delta_{I\mathrm{N}}=\frac{\delta\rho_{I\mathrm{N}}}{\rho_{I}}\,,\qquad \delta\varphi_{\mathrm{N}}=\frac{H}{\dot{\phi}}\delta\phi_{\mathrm{N}}\,, \qquad V_{I\mathrm{N}}=Hv_{I\mathrm{N}}\,,\qquad\mathcal{K}=\frac{k}{aH}\,, \tag{15}\]
where \(k\) is a comoving wavenumber. In Fourier space, the linear perturbation equations of motion are given by [68]
\[6q_{s}x_{1}^{2}\frac{\mathrm{d}\delta\varphi_{\mathrm{N}}}{ \mathrm{d}N}-6\frac{\mathrm{d}\Phi}{\mathrm{d}N}+6\left(1-q_{s}x_{1}^{2} \right)\left(\xi\delta\varphi_{\mathrm{N}}+\Psi\right)-2\mathcal{K}^{2}\Phi+3 \left(3\Omega_{c}+3\Omega_{b}+4\Omega_{r}\right)\delta\varphi_{\mathrm{N}}\] \[+3\left(\Omega_{c}\delta_{c\mathrm{N}}+\Omega_{b}\delta_{b\mathrm{ N}}+\Omega_{r}\delta_{r\mathrm{N}}\right)=0\,, \tag{16}\] \[\frac{\mathrm{d}\Phi}{\mathrm{d}N}-\Psi-\xi\delta\varphi_{ \mathrm{N}}+\frac{3}{2}\left(\Omega_{c}+4\beta x_{1}^{2}\right)\left(V_{c \mathrm{N}}-\delta\varphi_{\mathrm{N}}\right)+\frac{3}{2}\Omega_{b}\left(V_{b \mathrm{N}}-\delta\varphi_{\mathrm{N}}\right)+2\Omega_{r}\left(V_{r\mathrm{N}} -\delta\varphi_{\mathrm{N}}\right)=0\,,\] (17) \[\frac{\mathrm{d}\delta_{I\mathrm{N}}}{\mathrm{d}N}+3\left(c_{I}^{2 }-w_{I}\right)\delta_{I\mathrm{N}}+\left(1+w_{I}\right)\left(\mathcal{K}^{2}V_ {I\mathrm{N}}+3\frac{\mathrm{d}\Phi}{\mathrm{d}N}\right)=0\,,\qquad(\text{for }I=c,b,r),\] (18) \[\left(\Omega_{c}+4\beta x_{1}^{2}\right)\frac{\mathrm{d}V_{c \mathrm{N}}}{\mathrm{d}N}-\left[\xi\left(\Omega_{c}+4\beta x_{1}^{2}\right)-4 \beta x_{1}^{2}(3+2\epsilon_{\phi})-\sqrt{6}Qx_{1}\Omega_{c}\right]V_{c \mathrm{N}}-\Omega_{c}\Psi\] \[-4\beta x_{1}^{2}\frac{\mathrm{d}\delta\varphi_{\mathrm{N}}}{ \mathrm{d}N}+\left[4\beta x_{1}(\xi-3-2\epsilon_{\phi})-\sqrt{6}Q\Omega_{c} \right]x_{1}\delta\varphi_{\mathrm{N}}=0\,,\] (19) \[\frac{\mathrm{d}V_{I\mathrm{N}}}{\mathrm{d}N}-\left(\xi+3c_{I}^{2 }\right)V_{I\mathrm{N}}-\Psi-\frac{c_{I}^{2}}{1+w_{I}}\delta_{I\mathrm{N}}=0\,, \qquad(\text{for }I=b,r),\] (20) \[\frac{\mathrm{d}^{2}\varphi_{\mathrm{N}}}{\mathrm{d}N^{2}}+\left(3 -\xi+2\epsilon_{\phi}\right)\delta\frac{\mathrm{d}\varphi_{\mathrm{N}}}{ \mathrm{d}N}+\left[\hat{c}_{s}^{2}\mathcal{K}^{2}-\frac{\mathrm{d}\xi}{ \mathrm{d}N}-3\xi+\frac{\mathrm{d}\epsilon_{\phi}}{\mathrm{d}N}+\epsilon_{\phi} ^{2}+(3-\xi)\epsilon_{\phi}+\frac{3}{q_{s}}\left(\lambda^{2}x_{2}^{2}+Q^{2} \Omega_{c}\right)\right]\delta\varphi_{\mathrm{N}}\] \[+3\hat{c}_{s}^{2}\frac{\mathrm{d}\Phi}{\mathrm{d}N}-\frac{ \mathrm{d}\Psi}{\mathrm{d}N}-2\left(3+\epsilon_{\phi}\right)\Psi-\frac{2\beta}{q _{s}}\frac{\mathrm{d}\delta_{c\mathrm{N}}}{\mathrm{d}N}+\frac{\sqrt{6}Q\Omega_{c }}{2q_{s}x_{1}}\delta_{c\mathrm{N}}=0\,,\] (21) \[\Psi=-\Phi\,, \tag{22}\]
where
\[\xi=-3q_{s}x_{1}^{2}-\frac{3}{2}\Omega_{c}-\frac{3}{2}\Omega_{b}-2\Omega_{r}\,, \qquad\epsilon_{\phi}=-3+\frac{\sqrt{6}}{2q_{s}x_{1}}\left(\lambda x_{2}^{2}-Q \Omega_{c}\right)\,,\qquad\hat{c}_{s}^{2}=\frac{1}{q_{s}}\,. \tag{23}\]
We can choose any convenient gauges at hand in the perturbation Eqs. (3.7)-(3.13). For example, the Newtonian gauge corresponds to \(\chi=0=E\), in which case Eqs. (3.7)-(3.13) can be directly solved for the gravitational potentials \(\Psi\), \(\Phi\) and the scalar-field perturbation \(\delta\varphi_{\rm N}\). For the unitary gauge \(\delta\phi=0=E\), we can introduce the curvature perturbation \({\cal R}=\Phi-\delta\varphi_{\rm N}\) and the CDM density perturbation \(\delta\rho_{\rm cu}=\delta\rho_{c\rm N}-\dot{\rho}_{c}\delta\phi_{\rm N}/\dot{\phi}\) as two propagating degrees of freedom. These dynamical perturbations have neither ghost nor Laplacian instabilities under the following conditions [69; 49; 66]
\[q_{s} \equiv 1+2\beta>0\,, \tag{3.15}\] \[q_{c} \equiv 1+\frac{4\beta x_{1}^{2}}{\Omega_{c}}>0\,,\] (3.16) \[c_{s}^{2} \equiv \dot{c}_{s}^{2}+\frac{8\beta^{2}x_{1}^{2}}{q_{s}(4\beta x_{1}^{2 }+\Omega_{c})}>0\,. \tag{3.17}\]
Since the CDM effective sound speed vanishes for \(c_{c}^{2}\to+0\), it does not provide an additional Laplacian stability condition. The conditions (3.15)-(3.17) are independent of the gauge choices.
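As a small side illustration, the conditions (3.15)-(3.17) are easy to monitor along any background solution; a sketch in Python (the sample values of `x1` and `Omega_c` are assumptions):

```python
def no_ghost_and_stable(beta, x1, Omega_c):
    """Check the stability conditions (3.15)-(3.17) at one moment of the background."""
    qs  = 1.0 + 2.0 * beta                                               # Eq. (3.15)
    qc  = 1.0 + 4.0 * beta * x1**2 / Omega_c                             # Eq. (3.16)
    cs2 = 1.0/qs + 8.0*beta**2*x1**2 / (qs*(4.0*beta*x1**2 + Omega_c))   # Eq. (3.17)
    return (qs > 0.0) and (qc > 0.0) and (cs2 > 0.0)

# e.g. with assumed matter-era values
print(no_ghost_and_stable(beta=0.4, x1=0.02, Omega_c=0.95))   # True: beta >= 0 is safe
```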
The evolution of perturbations after the onset of the \(\phi\)MDE can be analytically estimated for the modes deep inside the sound horizon. Under the quasi-static approximation, the dominant terms in Eqs. (3.7)-(3.13) are those containing \({\cal K}^{2}\), \(\delta_{c\rm N}\), \({\rm d}\delta_{c\rm N}/{\rm d}N\), and \(\delta_{b\rm N}\). From Eqs. (3.7), (3.12), and (3.13), it follows that
\[\Psi=-\Phi\simeq-\frac{3}{2{\cal K}^{2}}\left(\Omega_{c}\delta_{c\rm N}+\Omega _{b}\delta_{b\rm N}\right)\,,\qquad\delta\varphi_{\rm N}\simeq\frac{1}{q_{s} \dot{c}_{s}^{2}{\cal K}^{2}}\left(2\beta\frac{{\rm d}\delta_{c\rm N}}{{\rm d}N }-\frac{\sqrt{6}Q\Omega_{c}}{2x_{1}}\delta_{c\rm N}\right)\,. \tag{3.18}\]
We differentiate Eq. (3.9) with respect to \(N\) and then use Eqs. (3.10) and (3.11) for CDM and baryons, respectively. On using Eq. (3.18) together with the quasi-static approximation, we obtain the second-order differential equations of CDM and baryons, as [68]
\[\frac{{\rm d}^{2}\delta_{c\rm N}}{{\rm d}N^{2}}+\nu\frac{{\rm d} \delta_{c\rm N}}{{\rm d}N}-\frac{3}{2G}\left(G_{cc}\Omega_{c}\delta_{c\rm N}+G _{cb}\Omega_{b}\delta_{b\rm N}\right)\simeq 0\,, \tag{3.19}\] \[\frac{{\rm d}^{2}\delta_{b\rm N}}{{\rm d}N^{2}}+\left(2+\xi\right) \frac{{\rm d}\delta_{b\rm N}}{{\rm d}N}-\frac{3}{2G}\left(G_{bc}\Omega_{c} \delta_{c\rm N}+G_{bb}\Omega_{b}\delta_{b\rm N}\right)\simeq 0\,, \tag{3.20}\]
where
\[G_{cc}=\frac{1+r_{1}}{1+r_{2}}G\,,\qquad G_{cb}=\frac{1}{1+r_{2}}G\,,\qquad G _{bc}=G_{bb}=G\,, \tag{3.21}\]
with
\[r_{1}=\frac{2Q[3Q\Omega_{c}+2\sqrt{6}\beta x_{1}(2+\epsilon_{\phi}+\sqrt{6}Qx _{1})]}{3\Omega_{c}}\,,\qquad r_{2}=\frac{4\beta(1+2\beta)x_{1}^{2}}{\Omega_{ c}}\,, \tag{3.22}\]
and
\[\nu=\frac{4\beta(1+2\beta)(5+\xi+2\epsilon_{\phi})x_{1}^{2}+(2+\xi+\sqrt{6}Qx _{1})\Omega_{c}}{4\beta(1+2\beta)x_{1}^{2}+\Omega_{c}}\,. \tag{3.23}\]
Since \(G_{bc}\) and \(G_{bb}\) are equivalent to \(G\), the baryon perturbation is not affected by the DE-DM couplings. On the other hand, \(G_{cc}\) and \(G_{cb}\) are different from \(G\) for nonvanishing values of \(Q\) and \(\beta\).
During the \(\phi\)MDE, we obtain
\[G_{cc}=\left(1+\frac{2Q^{2}}{1+2\beta}\right)G\,,\qquad G_{cb}=\left[1-\frac{ 8\beta Q^{2}}{3-2Q^{2}+2(3+4Q^{2})\beta}\right]G\,. \tag{3.24}\]
Under the no-ghost condition (3.15), we have \(G_{cc}>G\). So long as the coupling \(Q\) is in the range \(Q^{2}\ll 1\), \(G_{cb}\) is smaller than \(G\).
After the end of the \(\phi\)MDE, we do not have a simple formula for \(G_{cc}\). However, assuming that \(|\beta|\ll 1\) and \(|Q|\ll 1\), we find
\[G_{cc}\simeq\left(1+2Q^{2}-\frac{4\beta x_{1}^{2}}{\Omega_{c}}\right)G\,. \tag{3.25}\]
Since \(\Omega_{c}\) decreases and \(x_{1}^{2}\) increases at low redshifts, the third term in the parenthesis of Eq. (3.25) dominates over \(2Q^{2}\) to realize the value of \(G_{cc}\) smaller than \(G\). Indeed, the numerical simulation in Ref. [68] shows that the growth rate of \(\delta_{c\rm N}\) can be less than the value for \(\beta=0\) even in the presence of the coupling \(Q\). This suppressed growth of \(\delta_{c\rm N}\) at low redshifts should allow the possibility of reducing the \(\sigma_{8}\) tension.
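For orientation, a quick numerical illustration of Eqs. (3.24) and (3.25); the couplings and the late-time values of \(x_{1}^{2}\) and \(\Omega_{c}\) below are assumed for illustration only:

```python
Q, beta = -0.04, 0.4                     # assumed couplings
# During the phiMDE, Eq. (3.24):
print("phiMDE:    G_cc/G =", 1.0 + 2.0*Q**2/(1.0 + 2.0*beta))            # slightly above 1
# At low redshifts, Eq. (3.25) with assumed present-day values:
x1_sq, Omega_c = 0.02, 0.26
print("late time: G_cc/G ~", 1.0 + 2.0*Q**2 - 4.0*beta*x1_sq/Omega_c)    # below 1
```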
## IV Methodology
We implement our model into the public code CAMB[99] and simulate the evolution of density perturbations with the background equations to compute the CMB and matter power spectra. In this section, we rewrite the background and perturbation equations of motion in the language of the CAMB code. For this purpose, we use the conformal time defined by \(\tau=\int a^{-1}\mathrm{d}t\). The background Eqs. (7), (8), (9), and (12) can be expressed as
\[\rho^{\prime}_{I}+3\mathcal{H}\left(\rho_{I}+P_{I}\right)=0\,, \qquad\text{(for \ $I=c,b,r$)}\,, \tag{14}\] \[3M_{\mathrm{Pl}}^{2}\mathcal{H}^{2}=\frac{1}{2}q_{s}\phi^{\prime 2 }+a^{2}\left(V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}+e^{Q\phi/M_{\mathrm{Pl}}} \rho_{c}+\rho_{b}+\rho_{r}\right)\,,\] (15) \[2M_{\mathrm{Pl}}^{2}\left(\mathcal{H}^{\prime}-\mathcal{H}^{2} \right)=-q_{s}\phi^{\prime 2}-a^{2}\left(e^{Q\phi/M_{\mathrm{Pl}}}\rho_{c}+ \rho_{b}+\frac{4}{3}\rho_{r}\right)\,,\] (16) \[q_{s}\left(\phi^{\prime\prime}+2\mathcal{H}\phi^{\prime}\right) +\frac{a^{2}}{M_{\mathrm{Pl}}}\left(Q\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}- \lambda V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\right)=0\,, \tag{17}\]
where a prime represents the derivative with respect to \(\tau\), and we have introduced the conformal Hubble parameter \(\mathcal{H}\) as
\[\mathcal{H}\equiv aH=\dot{a}=\frac{a^{\prime}}{a}\,. \tag{18}\]
For perturbations, we adopt the synchronous gauge conditions
\[\alpha=0\,,\qquad\chi=0\,. \tag{19}\]
Following Ma and Bertschinger [100], we use the notations
\[\zeta=-\eta\,,\qquad E=-\frac{h+6\eta}{2k^{2}}\,,\qquad\theta_{I}=\frac{k^{2}} {a}v_{I}\,. \tag{20}\]
Then, some of the gauge-invariant variables defined in Eqs. (3) and (6) reduce to
\[\Psi=\frac{1}{2k^{2}}\left(h^{\prime\prime}+\mathcal{H}h^{\prime }+6\eta^{\prime\prime}+6\mathcal{H}\eta^{\prime}\right)\,,\qquad\Phi=-\eta+ \frac{\mathcal{H}}{2k^{2}}\left(h^{\prime}+6\eta^{\prime}\right)\,,\] \[\delta_{I\mathrm{N}}=\delta_{I}-\frac{3\mathcal{H}}{2k^{2}}(1+w_ {I})(h^{\prime}+6\eta^{\prime})\,,\qquad\delta\varphi_{I\mathrm{N}}=\mathcal{ H}\left(\frac{\delta\phi}{\phi^{\prime}}+\frac{h^{\prime}+6\eta^{\prime}}{2k^{2}} \right)\,,\qquad V_{I\mathrm{N}}=\frac{\mathcal{H}}{k^{2}}\left(\theta_{I}+ \frac{1}{2}h^{\prime}+3\eta^{\prime}\right)\,, \tag{21}\]
where \(\delta_{I}\equiv\delta\rho_{I}/\rho_{I}\) and \(w_{I}\equiv P_{I}/\rho_{I}\). In the presence of perfect fluids of CDM (\(w_{c}=0=c_{c}^{2}\)), baryons (\(w_{b}=0=c_{b}^{2}\)), and radiation (\(w_{r}=1/3=c_{r}^{2}\)), we can express the perturbation Eqs. (3.7)-(3.13) in the forms
\[k^{2}\eta-\frac{\mathcal{H}}{2}h^{\prime}+\frac{a^{2}}{2M_{ \mathrm{Pl}}^{2}}\left[\frac{q_{s}}{a^{2}}\phi^{\prime}\delta\phi^{\prime}+ \frac{1}{M_{\mathrm{Pl}}}\left(Q\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}-\lambda V_{0 }e^{-\lambda\phi/M_{\mathrm{Pl}}}\right)\delta\phi+e^{Q\phi/M_{\mathrm{Pl}}} \rho_{c}\delta_{c}+\rho_{b}\delta_{b}+\rho_{r}\delta_{r}\right]=0, \tag{22}\] \[k^{2}\eta^{\prime}-\frac{a^{2}}{2M_{\mathrm{Pl}}^{2}}\left[\frac{ k^{2}}{a^{2}}\phi^{\prime}\delta\phi+\left(\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}+ \frac{2\beta\phi^{\prime 2}}{a^{2}}\right)\theta_{c}+\rho_{b}\theta_{b}+\frac{4}{3}\rho_{r} \theta_{r}\right]=0,\] (23) \[\delta^{\prime}_{c}+\theta_{c}+\frac{1}{2}h^{\prime}=0,\] (24) \[\delta^{\prime}_{b}+\theta_{b}+\frac{1}{2}h^{\prime}=0,\] (25) \[\delta^{\prime}_{r}+\frac{4}{3}\theta_{r}+\frac{2}{3}h^{\prime}=0,\] (26) \[\theta^{\prime}_{c}+\mathcal{H}\theta_{c}-\frac{1}{q_{s}q_{c} \phi^{\prime 2}M_{\mathrm{Pl}}^{2}}\bigg{[}q_{s}(q_{c}-1)\phi^{\prime}M_{ \mathrm{Pl}}k^{2}\delta\phi^{\prime}+\left\{Q\phi^{\prime 2}+a^{2}(q_{c}-1)\lambda V_{0}e^{- \lambda\phi/M_{\mathrm{Pl}}}\right\}k^{2}\delta\phi\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \left\{Q(q_{s}-2)\phi^{\prime 3}+3q_{s}(q_{c}-1)\mathcal{H}\phi^{\prime 2}M_{\mathrm{Pl}}-2a^{2}(q_{c}-1)\phi^{ \prime}\lambda V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\right\}\theta_{c} \bigg{]}=0,\] (27) \[\theta^{\prime}_{b}+\mathcal{H}\theta_{b}=0\,,\] (28) \[\theta^{\prime}_{r}-\frac{k^{2}}{4}\delta_{r}=0\,, \tag{29}\]
\[\delta\phi^{\prime\prime}+2{\cal H}\delta\phi^{\prime}+\frac{k^{2}M_{\rm Pl}^{2}+a^{2}(\lambda^{2}V_{0}e^{-\lambda\phi/M_{\rm Pl}}+Q^{2}\rho_{c}e^{Q\phi/M_{\rm Pl}})}{q_{s}M_{\rm Pl}^{2}}\delta\phi+\frac{\phi^{\prime}}{2}h^{\prime}+\frac{2\beta}{q_{s}}\phi^{\prime}\theta_{c}+\frac{a^{2}Q\rho_{c}e^{Q\phi/M_{\rm Pl}}}{q_{s}M_{\rm Pl}}\delta_{c}=0, \tag{4.17}\] \[h^{\prime\prime}+6\eta^{\prime\prime}+2{\cal H}(h^{\prime}+6\eta^{\prime})-2\eta k^{2}=0\,, \tag{4.18}\]
where \(q_{s}\) and \(q_{c}\) are defined by Eqs. (3.15) and (3.16), respectively. The perturbation equations of motion for baryons and radiation are the same as those in \(\Lambda\)CDM model. Thus we modify the equations for CDM and gravitational field equations in the CAMB code. We also take into account the background and perturbation equations of motion for the scalar field, i.e., Eqs. (4.4) and (4.17). Note that the CDM velocity is usually set to zero all the time as a result of the gauge fixing condition in CAMB based on the synchronous gauge. In the models considered here, CDM has non-zero velocity due to the coupling to \(\phi\) in the late Universe. However, we will set \(\theta_{c}=0\) as the initial condition to eliminate the gauge degree of freedom, assuming that CDM streams freely in the early Universe (i.e., we neglect the interaction between DE and CDM) as in the standard scenario.
In the background Eqs. (4.2)-(4.4), the coupling \(\beta\) appears through the positive no-ghost parameter \(q_{s}=1+2\beta\). In the limit \(q_{s}\to\infty\), Eq. (4.4) shows that \(\phi\) approaches a constant after the onset of the \(\phi\)MDE. This limit corresponds to the \(\Lambda\)CDM model with a constant potential energy. Since the parameter space for large values of \(q_{s}\) spreads widely, the MCMC chains tend to wander in such regions. This actually leads to the loss of information about the evolution of the scalar field itself. To avoid this, we introduce a set of new variables \(p_{s},\hat{\lambda},\hat{Q}\) defined by
\[p_{s}\equiv q_{s}^{-1/2}=\frac{1}{\sqrt{1+2\beta}}\,,\qquad\hat{ \lambda}\equiv p_{s}\lambda\,,\qquad\hat{Q}\equiv p_{s}Q\,. \tag{4.19}\]
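A trivial sanity check of this change of variables (illustrative only):

```python
def to_physical(p_s, lam_hat, Q_hat):
    """Invert Eq. (4.19): recover (beta, lambda, Q) from (p_s, lambda_hat, Q_hat)."""
    beta = 0.5 * (1.0 / p_s**2 - 1.0)      # from p_s = (1 + 2*beta)^(-1/2)
    return beta, lam_hat / p_s, Q_hat / p_s

print(to_physical(p_s=0.745, lam_hat=0.3, Q_hat=-0.03))   # beta ~ 0.4 for p_s ~ 0.745
```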
As we discussed in Sec. III, the growth of matter perturbations is suppressed for positive values of \(\beta\). In the MCMC analysis, we will set the prior
\[\beta\geq 0\,. \tag{4.20}\]
In this case, the stability conditions (3.15)-(3.17) are automatically satisfied. Then, the parameter \(p_{s}\) is in the range \(0<p_{s}\leq 1\). For the parameter \(\lambda\), we choose the value
\[\lambda>0\,, \tag{4.21}\]
without loss of generality. In Eq. (4.4), we observe that, for \(Q>0\), the background scalar field can approach the instantaneous minima characterized by the condition \(Q\rho_{e}e^{Q\phi/M_{\rm Pl}}=\lambda V_{0}e^{-\lambda\phi/M_{\rm Pl}}\) even during the matter era. Since we would like to study the case in which the \(\phi\)MDE is present, we will focus on the coupling range
\[Q\leq 0\,. \tag{4.22}\]
The same prior was chosen in the MCMC analysis of Refs. [54; 55; 56]1 for the coupled DE-DM model with \(Q\neq 0\) and \(\beta=0\).
Footnote 1: In these papers, the sign convention of \(Q\) is opposite to ours.
To implement our model in the CAMB code, we use the unit \(M_{\rm Pl}=1\) and replace \(\phi\) and \(\delta\phi\) with the following new variables
\[\phi\equiv p_{s}\hat{\phi}\,,\qquad\delta\phi\equiv p_{s}\delta \hat{\phi}\,. \tag{4.23}\]
Then, the background scalar-field equation can be expressed as
\[\hat{\phi}^{\prime\prime}+2{\cal H}\hat{\phi}^{\prime}+a^{2}\left( \hat{\rho}_{c,\hat{\phi}}+V_{,\hat{\phi}}\right)=0, \tag{4.24}\]
where \(\hat{\rho}_{c}=\rho_{c}e^{\hat{Q}\hat{\phi}}\) and \(V_{,\hat{\phi}}={\rm d}V/{\rm d}\hat{\phi}\). The energy density and pressure of \(\hat{\phi}\) read \(\rho_{\phi}=\hat{\phi}^{\prime 2}/(2a^{2})+V_{0}e^{-\hat{\lambda}\hat{\phi}}\) and \(P_{\phi}=\hat{\phi}^{\prime 2}/(2a^{2})-V_{0}e^{-\hat{\lambda}\hat{\phi}}\), respectively. This means that, at the background level, the effect of the momentum transfer can be absorbed into the redefined canonical scalar field \(\hat{\phi}\). We note that \(\hat{\phi}\) exchanges energy with CDM through the term \(a^{2}\hat{\rho}_{c,\hat{\phi}}\) in Eq. (4.24). Using the variables and parameters defined above, the perturbation equations of motion for \(\theta_{c}\) and \(\delta\phi\) are now expressed as
\[\theta_{c}^{\prime}+{\cal H}\theta_{c}-\frac{1-p_{s}^{2}}{a^{2} \hat{\rho}_{c}q_{c}}\left[k^{2}\hat{\phi}^{\prime}\delta\hat{\phi}^{\prime}-a ^{2}k^{2}\delta\hat{\phi}V_{,\hat{\phi}}+\left(3{\cal H}\hat{\phi}^{\prime}+2a^ {2}V_{,\hat{\phi}}\right)\hat{\phi}^{\prime}\theta_{c}\right]-\frac{\hat{Q}}{q _{c}}\left[k^{2}p_{s}^{2}\delta\hat{\phi}+(1-2p_{s}^{2})\hat{\phi}^{\prime} \theta_{c}\right]=0\,, \tag{4.25}\]
\[\delta\hat{\phi}^{\prime\prime}+2{\cal H}\delta\hat{\phi}^{\prime}+\left[p_{s}^{2}k ^{2}+a^{2}\left(V_{,\phi\hat{\phi}}+\hat{\rho}_{c,\hat{\phi}\hat{\phi}}\right) \right]\delta\hat{\phi}+\left[k{\cal Z}+(1-p_{s}^{2})\theta_{c}\right]\hat{\phi} ^{\prime}+a^{2}\hat{\rho}_{c,\hat{\phi}}\,\delta_{c}=0\,, \tag{4.26}\]
where \({\cal Z}\equiv h^{\prime}/(2k)\). We will also express the other perturbation equations of motion in terms of the new variables introduced above and numerically solve them with the background equations.
In Fig. 1, we plot the density parameters \(\Omega_{\phi}\), \(\Omega_{r}\), \(\Omega_{c}\), \(\Omega_{b}\) (left panel) and \(w_{\rm eff}\), \(w_{\phi}\) (right panel) for the model parameters \(Q=-0.04\), \(\lambda=0.5\), and \(\beta=0.4\). We observe that the solution temporarily approaches the \(\phi\)MDE characterized by \(\Omega_{\phi}=w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\), which is a distinguishing feature compared to the \(\Lambda\)CDM model. The \(\phi\)MDE is followed by the epoch of cosmic acceleration (\(w_{\rm eff}<-1/3\)) driven by the fixed point (C).
In the left panel of Fig. 2, we show the CMB angular power spectra of temperature anisotropies for several different values of \(Q\) and \(\beta\), with \(\lambda=0.3\). Compared to the uncoupled quintessence, there are two main effects on the CMB induced mostly by the coupling \(Q\). The first is the shift of acoustic peaks toward larger multipoles \(\ell\). The multipole
\(\ell_{A}\) corresponding to the sound horizon \(r_{s*}\) at decoupling (redshift \(z_{*}\)) is given by
\[\ell_{A}=\pi\frac{D_{A}(z_{*})}{r_{s*}}\,, \tag{4.27}\]
where
\[D_{A}(z_{*})=\int_{0}^{z_{*}}\frac{1}{H(z)}\mathrm{d}z \tag{4.28}\]
is the comoving angular diameter distance, and
\[r_{s*}=\frac{1}{\sqrt{3}}\int_{0}^{a_{*}}\frac{\mathrm{d}a}{\sqrt{1+R_{s}(a)} \,a^{2}H(a)}\,, \tag{4.29}\]
with \(R_{s}(a)=(3\Omega_{b0}/4\Omega_{\gamma 0})a\) and \(a_{*}=(1+z_{*})^{-1}\)[101; 102]. Here, \(\Omega_{b0}\) and \(\Omega_{\gamma 0}\) are today's density parameters of baryons and photons, respectively. In our model, there is the \(\phi\)MDE in which the CDM density grows faster toward the higher redshift (\(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\)) in comparison to the uncoupled case (\(Q=0\)). Moreover, the scalar-field density \(\rho_{\phi}\) scales in the same manner as \(\rho_{c}\) during the \(\phi\)MDE. These properties lead to the larger Hubble expansion rate before the decoupling epoch, so that the sound horizon (4.29) gets smaller in comparison to the uncoupled case.
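A minimal numerical sketch of the quantities in Eqs. (4.27)-(4.29) is given below. It assumes a simple \(\Lambda\)CDM-like expansion history and round parameter values (not the best-fit values of this paper); in the coupled model, \(H(a)\) would instead be obtained by integrating the background equations (4.1)-(4.4).

```python
import numpy as np
from scipy.integrate import quad

# Assumed, round LambdaCDM-like values, for illustration only
H0 = 67.4 / 299792.458            # Hubble constant in units of Mpc^-1
Om0, Or0 = 0.315, 9.1e-5          # total matter and radiation today
Ob0, Ogam0 = 0.049, 5.4e-5        # baryons and photons today
OL0 = 1.0 - Om0 - Or0             # cosmological-constant-like remainder
zstar = 1090.0                    # decoupling redshift
astar = 1.0 / (1.0 + zstar)

def H(a):
    return H0 * np.sqrt(Om0 / a**3 + Or0 / a**4 + OL0)

# Comoving angular diameter distance, Eq. (4.28)
DA, _ = quad(lambda z: 1.0 / H(1.0 / (1.0 + z)), 0.0, zstar, limit=200)

# Sound horizon at decoupling, Eq. (4.29)
def drs(a):
    Rs = 3.0 * Ob0 / (4.0 * Ogam0) * a
    return 1.0 / (np.sqrt(3.0 * (1.0 + Rs)) * a**2 * H(a))

rs, _ = quad(drs, 1e-8, astar, limit=200)

print(f"D_A(z*) = {DA:.0f} Mpc, r_s* = {rs:.1f} Mpc, ell_A = {np.pi*DA/rs:.1f}")
```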
The coupling \(Q\) can increase the value of \(H(z)\) from the end of the \(\phi\)MDE toward the decoupling epoch \(z=z_{*}\), which results in the decrease of \(D_{A}(z_{*})\). However, for fixed \(H_{0}\), the increase of \(1/r_{s*}\) induced by the same coupling typically overwhelms the reduction of \(D_{A}(z_{*})\) in the estimation of \(\ell_{A}\) in Eq. (4.27). For the model parameters \(Q=0\) with \(\beta=0\) and \(\lambda=0.5\), we obtain the numerical values \(D_{A}(z_{*})=13.84\) Gpc and \(r_{s*}=144.40\) Mpc. If we change the coupling \(Q\) to \(-0.2\), the two distances change to \(D_{A}(z_{*})=12.95\) Gpc and \(r_{s*}=127.20\) Mpc, respectively. Clearly, the reduction of \(r_{s*}\) induced by the coupling \(Q\) is stronger than the decrease of \(D_{A}(z_{*})\), which leads to the increase of \(\ell_{A}\) from 301.17 (for \(Q=0\)) to 319.85 (for \(Q=-0.2\)). Hence the larger coupling \(|Q|\) leads to the shift of CMB acoustic peaks toward smaller scales. This effect tends to be significant especially for \(|Q|\gtrsim 0.1\). We note that the positive coupling \(\beta\) works to suppress the factor \(2Q^{2}/(1+2\beta)\) in the \((1+z)\)-dependent power of \(\rho_{c}\) during the \(\phi\)MDE. In comparison to the case \(\beta=0\), we need to choose larger values of \(|Q|\) to have the shift of acoustic peaks toward smaller scales.
The second effect of the coupling \(Q\) on the CMB temperature spectrum is the suppressed amplitude of acoustic peaks. The existence of the \(\phi\)MDE gives rise to the larger CDM density \(\rho_{c}\) at decoupling, while the baryon density \(\rho_{b}\) is hardly affected. Then, the coupling \(Q\) gives rise to a smaller ratio \(\rho_{b}/\rho_{c}\) around \(z=z_{*}\). For \(Q=0\) with \(\beta=0\) and \(\lambda=0.5\), we obtain the numerical value \(\rho_{b}/\rho_{c}=0.186\), while, for \(Q=-0.2\) with the same values of \(\beta\) and \(\lambda\), this ratio decreases to \(\rho_{b}/\rho_{c}=0.116\). This is the main reason for the reduction of the height of CMB acoustic peaks seen in Fig. 2. We note that, in the MCMC analysis performed in Sec. V, the best-fit value of today's density parameter \(\Omega_{c0}\) is slightly smaller than the one in the \(\Lambda\)CDM model. However, for \(Q\neq 0\), the increase of \(\rho_{c}\) toward the past during the \(\phi\)MDE results in the larger CDM density at decoupling in comparison to the uncoupled case, suppressing the early ISW contribution around the first acoustic peak.
In the right panel of Fig. 2, we show the evolution of \(f\sigma_{8}\) for several different model parameters, where \(f=\dot{\delta}_{m}/(H\delta_{m})\) is the growth rate of matter density contrast (incorporating both CDM and baryons) and \(\sigma_{8}\) is the amplitude of matter over-density at the comoving \(8h^{-1}\) Mpc scale (\(h\) is the normalized Hubble constant \(H_{0}=100\,h\) km/s/Mpc). We find that the large coupling \(\beta\) induces the suppression for the growth rate of matter perturbations at low redshifts. This is also the case even in the presence of the coupling \(Q\) of order \(-0.01\). This result is consistent with the analytic estimation for the growth of perturbations discussed in Sec. III.
## V Results and Discussion
We are now going to place observational constraints on our model by using the MCMC code CosmoMC [103]. In our analysis, we will exploit the following data sets.
(i) The CMB data containing TT, TE, EE+lowE from Planck 2018 [92], and the large-scale structure data from the 12-th data release of SDSS [93].
(ii) The Phantheon supernovae samples containing 1048 type Ia supernovae magnitudes with redshift in the range of \(0.01<z<2.3\)[94], which are commonly used to constrain the property of late-time cosmic acceleration.
(iii) The first-year DES results [95], which are the combined analyses of galaxy clustering and weak gravitational lensing.
We stop the calculations when the Gelman-Rubin statistic \(R-1\sim 0.01\) is reached.
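For reference, a minimal sketch of the Gelman-Rubin diagnostic used as the stopping criterion (a simplified single-parameter version, not the CosmoMC implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R for one parameter.

    `chains` has shape (n_chains, n_samples); R - 1 ~ 0.01 signals convergence.
    """
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * means.var(ddof=1)                  # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

# toy example: four well-mixed chains give R - 1 close to zero
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 10000))
print("R - 1 =", gelman_rubin(chains) - 1.0)
```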
In Fig. 3 and Table 1, we present the results of observational constraints on our model parameters.
First, let us discuss constraints on the parameter \(\beta\). In Table 1, the bounds on \(\beta\) (68 % CL) constrained by different data sets are presented in terms of the log prior. From the joint analysis based on the data sets (i)+(ii)+(iii), this bound translates to
\[\beta=0.417^{+1.592}_{-0.307}\qquad(68\,\%\,\text{CL})\,, \tag{10}\]
where \(0.417\) is the mean value. Since \(\beta\) is constrained to be larger than \(0.11\) at \(1\sigma\), there is an interesting observational signature of the momentum exchange between DE and DM. Even with the analysis of the data set (i) or with the data sets (i)+(ii), the \(1\sigma\) lower limits on \(\beta\) are close to the value \(0.1\). Hence the Planck CMB data combined with the SDSS data already show the signature of the momentum transfer. We note that this result is consistent with the likelihood analysis of Refs. [63; 65; 67; 70] performed for the model \(Q=0\), where the joint analysis based on the CMB and galaxy clustering data favour nonvanishing values of \(\beta\).

Figure 3: Triangle plot for the 1-dimensional marginalized distributions on individual parameters and the \(1\sigma\) and \(2\sigma\) 2-dimensional contours. The blue dashed lines represent constraints by the Planck 2018 [104] and 12-th SDSS data sets, which we call (i). The red and green solid lines correspond to constraints when the data sets (ii) and (ii)+(iii) are combined with (i), respectively.
With the data sets (i)+(ii)+(iii), we also obtain the following \(2\sigma\) bound
\[0.014<\beta<10.756\qquad(95\,\%\,{\rm CL})\,. \tag{5.2}\]
Since the lower limit of \(\beta\) is as small as \(0.01\), this value is not significantly distinguished from \(\beta=0\). This means that the evidence for the momentum transfer can be confirmed at \(68\,\%\) CL, but not firmly at \(95\,\%\) CL, with the current observational data. We note that the mean value of \(\sigma_{8}\) constrained by the data sets (i)+(ii)+(iii) is \(0.7996\), which is smaller than the Planck 2018 bound \(\sigma_{8}=0.8111\pm 0.0060\) [31] derived for the \(\Lambda\)CDM model. Thus, in our model, the \(\sigma_{8}\) tension between the CMB and other measurements is alleviated by the momentum transfer. This property is mostly attributed to the fact that the growth rate of \(\delta_{c}\) at low redshifts is suppressed by the positive coupling \(\beta\).
The other coupling constant \(Q\), which mediates the energy transfer between DE and DM, is constrained to be
\[Q=-0.0355^{+0.035}_{-0.0097}\qquad(68\,\%\,{\rm CL})\,, \tag{5.3}\]
where \(-0.0355\) is the mean value. As we see in Fig. 3, the analysis based on the data sets (i) + (ii) gives rise to a peak in the 1-dimensional probability distribution of \(Q\) around \(-0.04\). This property also holds by adding the data set (iii). Since the vanishing coupling (\(Q=0\)) is within the \(1\sigma\) contour, we do not have strong observational evidence that the nonvanishing value of \(Q\) is favored over the \(Q=0\) case. However, it is interesting to note that the current data give rise to the probability distribution of \(Q\) with a peak at \(Q<0\).
In Refs. [54; 55; 56], the couplings \(|Q|\) slightly smaller than the mean value of (5.3) were obtained by the MCMC analysis with several data sets for the coupled dark energy model with \(\beta=0\). In our model, we have \(\Omega_{\phi}=w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\) during the \(\phi\)MDE, so both \(\Omega_{\phi}\) and \(w_{\rm eff}\) are suppressed by the positive coupling \(\beta\). This allows the larger values of \(|Q|\) in comparison to the case \(\beta=0\). Still, the coupling \(|Q|\) exceeding the order \(0.1\) is forbidden from the data because of the significant changes of heights and positions in CMB acoustic peaks (see Fig. 2).
The parameter \(\lambda\) is related to the slope of the scalar-field potential. To realize the DE equation of state closer to \(-1\) at late times, we require that \(\lambda\) can not be significantly away from \(0\). From the MCMC analysis with the data sets (i)+(ii)+(iii), we obtain the upper limit
\[\lambda<0.641\qquad(68\,\%\,{\rm CL})\,. \tag{5.4}\]
We also remark that, for larger \(\lambda\), the distance to the CMB last scattering surface is reduced. To compensate for this, we require smaller values of \(H_{0}\). This explains the tendency of the blue contours seen in the \(\lambda\)-\(H_{0}\) plane. Thus, the smaller values of \(\lambda\) are favored from the viewpoint of increasing \(H_{0}\).
In Fig. 3, we find that today's CDM density parameter \(\Omega_{c0}\) is constrained to be smaller than the Planck 2018 bound \(\Omega_{c0}h^{2}=0.120\pm 0.001\) derived for the \(\Lambda\)CDM model [31]. In spite of this decrease of \(\Omega_{c0}\), the CDM density evolves as \(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\) during the \(\phi\)MDE and hence \(\Omega_{c}\) at decoupling can be increased by the nonvanishing coupling \(Q\). We note that today's baryon density parameter \(\Omega_{b0}\) is only slightly larger than the Planck 2018 bound \(\Omega_{b0}=0.0224\pm 0.0001\) (see Fig. 3). Then, the nonvanishing coupling \(Q\) hardly modifies the value of \(\Omega_{b}\) at \(z=z_{*}\) in comparison to the case \(Q=0\). Since the ratio \(\Omega_{b}/\Omega_{c}\) at decoupling is decreased by the coupling \(|Q|\) larger than the order \(0.01\), this suppresses the height of CMB acoustic peaks. The MCMC analysis with the CMB data alone already places the bound \(|Q|<0.1\) at \(95\,\%\,{\rm CL}\).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameters & Priors & mean (best fit) (i) & mean (best fit) (i)+(ii) & mean (best fit) (i)+(ii)+(iii) \\ \hline \(H_{0}\) [km/s/Mpc] & \([20,100]\) & \(67.44(67.26)^{+1.01}_{-0.69}\) & \(67.93(67.66)^{+0.58}_{-0.68}\) & \(68.22(68.41)^{+0.58}_{-0.61}\) \\ \(\Omega_{c0}h^{2}\) & \([0.001,0.99]\) & \(0.11802(0.11958)^{+0.0018}_{-0.0010}\) & \(0.11819(0.11904)^{+0.0014}_{-0.0010}\) & \(0.11712(0.11580)^{+0.0013}_{-0.0009}\) \\ \(\Omega_{b0}h^{2}\) & \([0.005,0.1]\) & \(0.02237(0.02237)^{+0.00014}_{-0.0014}\) & \(0.02237(0.02238)^{+0.00015}_{-0.0014}\) & \(0.02247(0.02248)^{+0.00014}_{-0.00013}\) \\ \(\ln\beta\) & \(*\) & \(-1.0131(-0.1997)^{+1.1754}_{-1.1754}\) & \(-0.7919(-1.5209)^{+1.1593}_{-1.1593}\) & \(-0.8754(-2.6179)^{+0.00014}_{-1.3232}\) \\ \(\lambda\) & \([0.1,\infty]\) & \(0.6028(0.4083)^{+0.1658}_{-0.5928}\) & \(0.4235(0.2467)^{+0.16573}_{-0.088}\) & \(0.4988(0.5269)^{+0.4159}_{-0.4676}\) \\ \(Q\) & \([-\infty,0]\) & \(-0.0396(-0.0072)^{+0.0396}_{-0.0108}\) & \(-0.0422(-0.0096)^{+0.408}_{-0.0125}\) & \(-0.0355(-0.0396)^{+0.0355}_{-0.0097}\) \\ \(\sigma_{8}\) & \(*\) & \(0.8031(0.8057)^{+0.0231}_{-0.0148}\) & \(0.8105(0.8058)^{+0.0169}_{-0.0148}\) & \(0.7996(0.8084)^{+0.0174}_{-0.0120}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Priors, mean values, best-fit values and \(1\sigma\) errors of the model parameters \(\ln\beta\), \(\lambda\), \(Q\) and cosmological parameters \(H_{0}\), \(\Omega_{c0}h^{2}\), \(\Omega_{b0}h^{2}\), \(\sigma_{8}\), where \(\Omega_{c0}\) and \(\Omega_{b0}\) are today’s density parameters of CDM and baryons respectively. The third, fourth, and fifth columns correspond to the constraints derived by the data sets (i), (i)+(ii), and (i)+(ii)+(iii), respectively.
As we discussed in Sec. IV, the nonvanishing coupling \(Q\) reduces the sound horizon \(r_{s\ast}\) at \(z=z_{\ast}\). This leads to the shift of CMB acoustic peaks toward smaller scales. To keep the position of the multipole \(\ell_{A}\) corresponding to the sound horizon, we require that the comoving angular diameter distance \(D_{A}(z_{\ast})\) appearing in the numerator of Eq. (4.27) should be also reduced. We can express Eq. (4.28) as \(D_{A}(z_{\ast})=H_{0}^{-1}\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\), where \(E(z)=H(z)/H_{0}\). In the \(\Lambda\)CDM model we have \(E(z)=[\Omega_{m0}(1+z)^{3}+\Omega_{\Lambda}+\Omega_{r0}(1+z)^{4}]^{1/2}\), where \(\Omega_{m0}=\Omega_{c0}+\Omega_{b0}\). In our model, the CDM density parameter during the \(\phi\)MDE has the dependence \(\Omega_{c0}(1+z)^{3+2Q^{2}/(1+2\beta)}\) instead of \(\Omega_{c0}(1+z)^{3}\), together with the scaling behavior of \(\rho_{\phi}\) with \(\rho_{c}\). Then, the coupling \(Q\) leads to the increase of \(E(z)\) from the end of \(\phi\)MDE to the decoupling epoch, so that the integral \(\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\) is decreased. This property is different from the early DE scenario of Ref. [105], where the energy density of early DE quickly decays after the recombination epoch.
In our model, increasing the value of \(H_{0}\) also reduces \(D_{A}(z_{\ast})\), so it can compensate for the reduction of \(r_{s\ast}\). However, the integral \(\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\) is already decreased to some extent by the existence of the \(\phi\)MDE. In this sense, there is a limitation on realizing \(H_{0}\) significantly larger than the value obtained for \(Q=0\). The observational constraint on \(H_{0}\) derived by the data set (i) for the model with \(Q=0\) is consistent with the Planck 2018 bound \(H_{0}=67.27\pm 0.60\) km/s/Mpc. In the presence of the negative coupling \(Q\), the likelihood region in the \(Q\)-\(H_{0}\) plane shown in Fig. 3 shifts toward larger values of \(H_{0}\). With the full data sets (i)+(ii)+(iii), the Hubble constant is constrained to be
\[H_{0}=68.22^{+0.58}_{-0.61}\,\,\mathrm{km/s/Mpc}\qquad(68\,\%\,\mathrm{CL})\,, \tag{5.5}\]
whose mean value is larger than the one derived for the \(\Lambda\)CDM model with the Planck 2018 data alone. However, it is not possible to reach the region \(H_{0}>70\) km/s/Mpc, because the extent to which \(D_{A}(z_{\ast})\) can be reduced by increasing the value of \(H_{0}\) is limited. We also carried out the MCMC analysis for the \(\Lambda\)CDM model and obtained the bound \(H_{0}=68.19^{+0.37}_{-0.38}\) km/s/Mpc with the full data sets (i)+(ii)+(iii). The \(1\sigma\) upper limit of the constraint (5.5) is only slightly larger than that of the \(\Lambda\)CDM bound. Hence the Hubble tension between the value preferred by the Planck 2018 data and that obtained from direct measurements of \(H_{0}\) still persists in our coupled DE scenario.
Despite the difficulty of resolving the Hubble tension problem, the fact that the probability distribution of \(Q\) has a peak around \(-0.04\) is an interesting property of our model. Moreover, there are observational signatures of the momentum transfer with \(\beta>0\) between DE and DM at \(68\,\%\) CL. The coupling \(\beta\) can alleviate the \(\sigma_{8}\) tension without spoiling the existence of the \(\phi\)MDE.
## VI Conclusions
In this paper, we put observational constraints on an interacting model of DE and DM given by the action (2.2). Since our model has a concrete Lagrangian, the background and perturbation equations of motion are unambiguously fixed by the variational principle. This is not the case for many coupled DE-DM models studied in the literature, in which the interacting terms are added to the background equations by hand. In our model, the DE scalar field \(\phi\) and the CDM fluid mediate both energy and momentum transfers, whose coupling strengths are characterized by the constants \(Q\) and \(\beta\), respectively. We considered an exponential potential \(V(\phi)=V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\) of the scalar field to derive late-time cosmic acceleration, but a different choice of quintessence potential should not affect the observational constraints on \(Q\) and \(\beta\) significantly.
The coupling \(Q\) can give rise to the \(\phi\)MDE during which the scalar-field density parameter \(\Omega_{\phi}\) and the effective equation of state \(w_{\mathrm{eff}}\) are nonvanishing constants, such that \(\Omega_{\phi}=w_{\mathrm{eff}}=2Q^{2}/[3(1+2\beta)]\). In this epoch, the CDM density grows as \(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\) toward the past and hence the value of \(\rho_{c}\) at CMB decoupling can be increased by the coupling \(Q\). Since this enhances the Hubble expansion rate in the past, the sound horizon \(r_{s\ast}\) at decoupling (redshift \(z_{\ast}\)) gets smaller. Moreover, the ratio between the baryon and CDM densities, \(\rho_{b}/\rho_{c}\), is suppressed at \(z=z_{\ast}\) due to the increase of \(\rho_{c}\) induced by the presence of the \(\phi\)MDE. These modifications shift the positions and heights of acoustic peaks of CMB temperature anisotropies, so that the coupling \(Q\) can be tightly constrained from the CMB data.
The effect of the momentum transfer on the dynamics of perturbations mostly manifests itself in the evolution of the CDM density contrast \(\delta_{c}\) at low redshifts. For \(\beta>0\), the growth of \(\delta_{c}\) is suppressed due to the decrease of the effective gravitational coupling \(G_{\mathrm{eff}}\) on scales relevant to galaxy clustering. The coupling \(Q\) enhances the value of \(G_{\mathrm{eff}}\) through the energy transfer between DE and DM. However, the reduction of \(G_{\mathrm{eff}}\) induced by positive \(\beta\) typically overwhelms this increase of \(G_{\mathrm{eff}}\) for redshifts \(z\lesssim 1\). Hence the growth rate of CDM perturbations is suppressed in comparison to the \(\Lambda\)CDM model.
We carried out the MCMC analysis for our model by using the observational data of Planck 2018 [92], the 12th data release of SDSS, the Pantheon supernovae samples, and the 1-year DES data. The coupling \(\beta\) is constrained to be in the range \(\beta=0.417^{+1.592}_{-0.307}\) (\(68\,\%\) CL) by using all the data sets. Since the \(\beta=0\) case is outside the \(1\sigma\) observational contour, there is an interesting
observational signature of the momentum transfer between DE and DM. This is an outcome of the suppressed growth of \(\delta_{c}\) at low redshifts, thereby easing the \(\sigma_{8}\) tension. Indeed, we found that the mean value of \(\sigma_{8}\) constrained by the full data is 0.7996, which is smaller than the best-fit value 0.8111 derived for the \(\Lambda\)CDM model with the Planck data alone.
For the coupling characterizing the energy transfer, we obtained the bound \(Q=-0.0355^{+0.0355}_{-0.0097}\) (\(68\,\%\,\)CL) by the analysis with full data sets. While the \(Q=0\) case is within the \(1\sigma\) observational contour, there is a peak for the probability distribution of the coupling at a negative value of \(Q\). This result is consistent with the likelihood analysis performed for the model with \(\beta=0\)[54; 55; 56], but now the constrained values of \(|Q|\) get larger. This increase of \(|Q|\) is mostly attributed to the fact that the effective equation of state during the \(\phi\)MDE is modified to \(w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\) through the coupling \(\beta\). In comparison to the momentum transfer, we have not yet detected significant observational signatures of the energy transfer, but the future high-precision data will clarify this issue.
The presence of the coupling \(Q\) reduces the sound horizon \(r_{s*}\) at decoupling, thereby increasing the multipole \(\ell_{A}\) defined in Eq. (4.27). To keep the position of CMB acoustic peaks, we require that the comoving angular diameter distance \(D_{A}(z_{*})\) from \(z=0\) to \(z=z_{*}\) decreases. During the \(\phi\)MDE, the Hubble expansion rate increases due to the enhancement of \(\rho_{c}\) induced by the energy transfer. Since this leads to the decrease of \(D_{A}(z_{*})\), the further reduction of \(D_{A}(z_{*})\) by the choice of larger values of \(H_{0}\) is quite limited in our model. From the MCMC analysis of full data sets we obtained the bound \(H_{0}=68.22^{+0.58}_{-0.61}\) km/s/Mpc, whose mean value is larger than the one derived for the \(\Lambda\)CDM model with the Planck 2018 data alone. However, the Hubble constant \(H_{0}\) does not exceed the value 70 km/s/Mpc, so the Hubble tension problem is not completely resolved in our scenario.
It is still encouraging that the current data support signatures of the interaction between DE and DM. We expect that upcoming observational data like those from the Euclid satellite will place further tight constraints on the couplings \(\beta\) and \(Q\). Along with the \(H_{0}\) tension problem, we hope that we will be able to approach the origins of DE and DM and their possible interactions in the foreseeable future.
## Acknowledgements
XL is supported by the National Natural Science Foundation of China under Grants Nos. 11920101003, 12021003 and 11633001, and the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23000000. ST is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No. 22K03642 and Waseda University Special Research Project No. 2023C-473. KI is supported by the JSPS grant number 21H04467, JST FOREST Program JPMJFR20352935, and by JSPS Core-to-Core Program (grant number:JPJSCCA20200002, JPJSCCA20200003).
| ```
We place observational constraints on a dark energy (DE) model in which a quintessence scalar field φ is coupled to dark matter (DM) through momentum and energy exchanges. The momentum transfer is weighed by an interaction between the field derivative and the DM four-velocity, with a coupling constant β, while the energy exchange is characterized by an exponential scalar-field coupling to the DM density. A positive coupling β suppresses the growth of DM density perturbations at low redshifts, a property that offers a possible resolution of the \(\sigma_8\) tension. A negative coupling Q induces a \(\phi\)-matter-dominated epoch, whose presence reduces the sound horizon at cosmic microwave background (CMB) decoupling. Planck 2018 data, the 12th Sloan |
2310.20237 | The Toll Walk Transit Function of a Graph: Axiomatic Characterizations
and First-Order Non-definability | A walk $W=w_1w_2\dots w_k$, $k\geq 2$, is called a toll walk if $w_1\neq w_k$
and $w_2$ and $w_{k-1}$ are the only neighbors of $w_1$ and $w_k$,
respectively, on $W$ in a graph $G$. A toll walk interval $T(u,v)$, $u,v\in
V(G)$, contains all the vertices that belong to a toll walk between $u$ and
$v$. The toll walk intervals yield a toll walk transit function $T:V(G)\times
V(G)\rightarrow 2^{V(G)}$. We represent several axioms that characterize the
toll walk transit function among chordal graphs, trees, asteroidal triple-free
graphs, Ptolemaic graphs, and distance hereditary graphs. We also show that the
toll walk transit function can not be described in the language of first-order
logic for an arbitrary graph. | Manoj Changat, Jeny Jacob, Lekshmi Kamal K. Sheela, Iztok Peterin | 2023-10-31T07:42:12 | http://arxiv.org/abs/2310.20237v1 | # The Toll Walk Transit Function of a Graph: Axiomatic Characterizations and First-Order Non-definability
###### Abstract
A walk \(W=w_{1}w_{2}\ldots w_{k}\), \(k\geq 2\), is called a toll walk if \(w_{1}\neq w_{k}\) and \(w_{2}\) and \(w_{k-1}\) are the only neighbors of \(w_{1}\) and \(w_{k}\), respectively, on \(W\) in a graph \(G\). A toll walk interval \(T(u,v)\), \(u,v\in V(G)\), contains all the vertices that belong to a toll walk between \(u\) and \(v\). The toll walk intervals yield a toll walk transit function \(T:V(G)\times V(G)\to 2^{V(G)}\). We present several axioms that characterize the toll walk transit function among chordal graphs, trees, asteroidal triple-free graphs, Ptolemaic graphs, and distance hereditary graphs. We also show that the toll walk transit function cannot be described in the language of first-order logic for an arbitrary graph.
## 1 Introduction
A toll walk denoted as \(W\) is a type of walk on a graph \(G\) that starts at a vertex \(u\) and ends at a distinct vertex \(v\). It possesses two distinct properties: first, it includes exactly one neighbor of \(u\) as its second vertex, and second, it involves exactly one neighbor of \(v\) as its penultimate vertex. A toll walk can be likened to a journey with an entrance fee or a toll that is paid only once, specifically at the outset when entering a system represented by a graph. Similarly, one exits the system precisely once, and this occurs at the neighbor of the final vertex.
The concept of toll walks was introduced by Alcon [1] as a tool to characterize dominating pairs in interval graphs. Subsequently, Alcon et al. [2], despite the publication year discrepancy, recognized that all vertices belonging to toll walks between \(u\) and \(v\) could be viewed as the toll interval \(T(u,v)\). This led to the development of the toll walk transit function \(T:V(G)\times V(G)\to 2^{V(G)}\) for a graph \(G\) and the concept of toll convexity. A pivotal result established in [2] asserts that a graph \(G\) conforms to the principles of toll convexity if and only if it is an interval graph. Furthermore, research extended to explore toll
convexity within standard graph products, examining classical convexity-related invariants, as investigated by Gologranc and Repolusk [11, 12]. More recently, Dourado [10] explored the hull number with respect to the toll convexity.
In [23] an axiomatic study of the toll walk transit function \(T\) of a graph was carried out. The main tool for this axiomatic approach is the notion of a transit function. Mulder [20] introduced transit functions in discrete structures to present a unifying approach for results and ideas on intervals, convexities, and betweenness in graphs, posets, vector spaces, and several other mathematical structures. A transit function is an abstract notion of an interval, and hence the axioms on a transit function are sometimes known as betweenness axioms.
Specifically, [23] examined various well-established betweenness axioms, together with certain axioms studied in the context of the induced path function (a well-studied transit function on graphs), supplemented by new axioms tailored to the toll walk transit function. In addition, in [23] a novel axiomatic characterization of interval graphs and of a subclass of asteroidal triple-free graphs was established. Two problems were posed in [23], which are the following.
Problem 1: Is there an axiomatic characterization of the toll walk transit function of an arbitrary connected graph \(G\)?
Problem 2: Is there a characterization of the toll walk transit function of chordal graphs?
In this paper, we solve Problem 2 affirmatively and provide axiomatic characterizations of chordal graphs and trees (Section 3), along with AT-free graphs (Section 4), Ptolemaic graphs (Section 5) and distance-hereditary graphs (Section 5), using betweenness axioms on an arbitrary transit function \(R\). Interestingly, for Problem 1 we prove that there is no characterization of the toll walk transit function of an arbitrary connected graph using a set of first-order axioms. In other words, in Section 6 we prove that the toll walk transit function is not first-order axiomatizable. We use the standard technique of Ehrenfeucht-Fraïssé games from first-order logic to prove the non-definability of the toll walk transit function. In the following section, we fix the notation and recall some known results.
## 2 Preliminaries
Let \(G\) be a finite simple graph with the vertex set \(V(G)\) and the edge set \(E(G)\). For a positive integer \(k\), we use the notation \([k]\) for the set \(\{1,2,\ldots,k\}\). The set \(\{u\in V(G):uv\in E(G)\}\) is the _open neighborhood_\(N(v)\) of \(v\in V(G)\) and contains all neighbors of \(v\). The _closed neighborhood_\(N[v]\) is then \(N(v)\cup\{v\}\). A vertex \(v\) with \(N[v]=V(G)\) is called _universal_. Vertices \(w_{1},\ldots,w_{k}\) form a _walk_\(W_{k}\) of length \(k-1\) in \(G\) if \(w_{i}w_{i+1}\in E(G)\) for every \(i\in[k-1]\). We simply write \(W_{k}=w_{1}\cdots w_{k}\). A walk \(W_{k}\) is called a _path_ of \(G\) if all vertices of \(W_{k}\) are different. We use the notation \(v_{1},v_{k}\)-path for a path \(P_{k}=v_{1}\cdots v_{k}\) which
starts at \(v_{1}\) and ends at \(v_{k}\). Furthermore, \(u\xrightarrow{P}x\) denotes the sub-path of a path \(P\) with end vertices \(u\) and \(x\). An edge \(v_{i}v_{j}\) with \(|i-j|>1\) is called a _chord_ of \(P_{k}\). A path without chords is an _induced path_. The minimum number of edges on a \(u,v\)-path is the distance \(d(u,v)\) between \(u,v\in V(G)\). If there is no \(u,v\)-path in \(G\), then we set \(d(u,v)=\infty\). A \(u,v\)-path of length \(d(u,v)\) is called a \(u,v\)-_shortest path_.
A walk \(W=w_{1}\cdots w_{k}\) is called a _toll walk_ if \(w_{1}\neq w_{k}\), \(w_{2}\) is the only neighbor of \(w_{1}\) on \(W\) in \(G\) and \(w_{k-1}\) is the only neighbor of \(w_{k}\) on \(W\) in \(G\). The only toll walk that starts and ends at the same vertex \(v\) is \(v\) itself. The following lemma from [2] will be useful on several occasions.
Lemma 1: _A vertex \(v\) is in some toll walk between two different non-adjacent vertices \(x\) and \(y\) if and only if \(N[x]-\{v\}\) does not separate \(v\) from \(y\) and \(N[y]-\{v\}\) does not separate \(v\) from \(x\)._
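On small graphs, the separation criterion of Lemma 1 can be turned directly into a brute-force procedure for computing toll intervals. The sketch below (Python with networkx; the helper names are chosen here only for illustration and are not from [2]) tests the criterion for every candidate vertex.

```python
import networkx as nx

def _still_connected(G, blockers, a, b):
    """True if a and b remain connected after deleting `blockers` (a and b are kept)."""
    H = G.copy()
    H.remove_nodes_from(set(blockers) - {a, b})
    return nx.has_path(H, a, b)

def toll_interval(G, u, v):
    """All vertices lying on some toll walk between u and v (criterion of Lemma 1)."""
    if u == v:
        return {u}
    if G.has_edge(u, v):
        return {u, v}   # between adjacent vertices the only toll walk is the edge itself
    Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
    T = {u, v}
    for w in set(G) - {u, v}:
        # N[u] - {w} must not separate w from v, and N[v] - {w} must not separate w from u
        if _still_connected(G, Nu - {w}, w, v) and _still_connected(G, Nv - {w}, w, u):
            T.add(w)
    return T

# quick check on the path u-a-b-v: every vertex lies on the unique toll walk
P = nx.path_graph(["u", "a", "b", "v"])
print(toll_interval(P, "u", "v"))   # {'u', 'a', 'b', 'v'}
```

This brute-force check is convenient for verifying the axioms discussed below on small examples.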
We use the standard notation \(C_{n}\) for a _cycle_ on \(n\geq 3\) vertices and \(K_{n}\) for a _complete graph_ on \(n\geq 1\) vertices. Further graph families that are important to us for \(n\geq 1\) are _fans_\(F_{2}^{n+1}\) that contain a universal vertex \(y_{2}\) and a path \(p_{1}p_{2}\ldots p_{n}\), graphs \(F_{3}^{n}\) built from two universal vertices \(y_{1},y_{2}\) and a path \(p_{1}p_{2}\ldots p_{n}\) and \(F_{4}^{n}\) that is obtained from \(F_{3}^{n}\) by deleting the edge \(y_{1}y_{2}\). In addition, we define the families \(XF_{2}^{n+1}\), \(XF_{3}^{n}\) and \(XF_{4}^{n}\) as follows. We get graph \(XF_{2}^{n+1}\) from \(F_{2}^{n+1}\) by adding vertices \(u,v,x\) and edges \(up_{1},p_{n}v,y_{2}x\), similarly we get \(XF_{3}^{n}\) from \(F_{3}^{n}\) by adding vertices \(u,v,x\) and edges \(up_{1},uy_{1},vp_{n},vy_{2},xy_{1},xy_{2}\) and finally we get \(XF_{4}^{n}\) from \(F_{4}^{n}\) by adding vertices \(u,v,x\) and edges \(up_{1},uy_{1},vp_{n},vy_{2},xy_{1},xy_{2}\). The graphs \(XF_{2}^{n+1}\), \(XF_{3}^{n}\) and \(XF_{4}^{n}\) appear, respectively, in the last three positions of the bottom row of Figure 2.
In this work, we often consider classes of graphs that can be described by forbidden induced subgraphs. A graph \(G\) is _chordal_ if there is no induced cycle of length at least four in \(G\) and all chordal graphs form a class of _chordal graphs_. We call cycles of length at least five _holes_.
Another class of graphs important for us is the class of _distance-hereditary_ graphs, formed by all graphs \(G\) in which every induced path is also a shortest path. They also admit a characterization by forbidden induced subgraphs, given by the graphs in Figure 1; see Theorem 1.
Theorem 1: [3] _A graph \(G\) is a distance-hereditary graph if and only if \(G\) is \((H,\text{hole},D,F_{2}^{5})\)-free._
In Section 5 we further define the class of Ptolemaic graphs. Next, we define the class of _AT-free_ graphs, which consists of all asteroidal-triple-free graphs. The vertices \(u,v,w\) form an _asteroidal triple_ in \(G\) if there exists a \(u,v\)-path without a neighbor of \(w\), a \(u,w\)-path without a neighbor of \(v\), and a \(v,w\)-path without a neighbor of \(u\). A graph \(G\) is called an _AT-free graph_ if \(G\) does not have an asteroidal triple. The following characterization of \(AT\)-free graphs by forbidden induced subgraphs, from [17] (see also [25]), will be important later. All forbidden induced subgraphs are depicted in Figure 2. We use the same notation as presented in [25].
Theorem 2: [17] _A graph \(G\) is \((C_{k},T_{2},X_{2},X_{3},X_{30},\ldots,X_{41},XF_{2}^{n+1},XF_{3}^{n},XF_{4}^{n})\)-free for \(k\geq 6\) and \(n\geq 1\) if and only if \(G\) is an \(AT\)-free graph._
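On small graphs, asteroidal triples can also be detected by brute force: for every triple of pairwise non-adjacent vertices one checks whether each pair stays connected after removing the closed neighborhood of the third vertex (the standard reading of the definition above). A minimal sketch (networkx; the function name is illustrative, and recent networkx versions also provide a built-in `nx.is_at_free` test):

```python
import networkx as nx
from itertools import combinations

def has_asteroidal_triple(G):
    """Brute-force search for pairwise non-adjacent u, v, w such that every pair
    is joined by a path avoiding the closed neighborhood of the third vertex."""
    def avoids(a, b, c):
        H = G.copy()
        H.remove_nodes_from((set(G[c]) | {c}) - {a, b})
        return nx.has_path(H, a, b)

    for u, v, w in combinations(G, 3):
        if G.has_edge(u, v) or G.has_edge(v, w) or G.has_edge(u, w):
            continue
        if avoids(u, v, w) and avoids(v, w, u) and avoids(u, w, v):
            return True
    return False

print(has_asteroidal_triple(nx.cycle_graph(6)))  # True: vertices 0, 2, 4 form an AT
print(has_asteroidal_triple(nx.path_graph(6)))   # False: paths are AT-free
```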
We continue with the formal definition of a transit function. A _transit function_ on a set \(V\) is a function \(R:V\times V\longrightarrow 2^{V}\) such that for every \(u,v\in V\) the following three conditions hold:
1. (t1) \(u\in R(u,v)\);
2. (t2) \(R(u,v)=R(v,u)\);
3. (t3) \(R(u,u)=\{u\}\).
The _underlying graph_\(G_{R}\) of a transit function \(R\) is a graph with vertex set \(V\), where distinct vertices \(u\) and \(v\) are adjacent if and only if \(R(u,v)=\{u,v\}\).
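When a transit function is given explicitly, for instance as a Python callable returning vertex sets, the underlying graph \(G_{R}\) can be constructed directly from this definition. A minimal sketch (the helper name `underlying_graph` is introduced here only for illustration):

```python
import networkx as nx
from itertools import combinations

def underlying_graph(V, R):
    """G_R: distinct u and v are adjacent if and only if R(u, v) = {u, v}.

    R is any callable taking two vertices and returning a set of vertices,
    e.g. R = lambda a, b: toll_interval(G, a, b) from the Lemma 1 sketch above.
    """
    H = nx.Graph()
    H.add_nodes_from(V)
    for u, v in combinations(V, 2):
        if R(u, v) == {u, v}:
            H.add_edge(u, v)
    return H
```

For the toll walk transit function \(T\) of a connected graph \(G\), this construction returns \(G\) itself, since \(T(u,v)=\{u,v\}\) exactly when \(u\) and \(v\) are adjacent (a shortest \(u,v\)-path is always a toll walk).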
The well studied transit functions in graphs are the interval function \(I_{G}\), induced path function \(J_{G}\) and the all paths function \(A_{G}\). The _interval function_\(I_{G}\) of a connected graph \(G\) is defined with respect to the standard distance \(d\) in \(G\) as \(I:V\times V\longrightarrow 2^{V}\) where
\[I_{G}(u,v)=\{w\in V(G):w\text{ lies on some }u,v\text{-shortest path in }G\}.\]
The _induced path transit function_\(J(u,v)\) of \(G\) is a natural generalization of the interval function and is defined as
\[J(u,v)=\{w\in V(G):w\text{ lies on an induced }u,v\text{-path}\}.\]
Also well known is the _all-paths transit function_\(A(u,v)=\{w\in V(G):w\text{ lies on a }u,v\text{-path}\}\), see [8], which consists of the vertices lying on at least one \(u,v\)-path. For any two vertices \(u\) and \(v\) of a connected graph \(G\), it is clear that \(I(u,v)\subseteq J(u,v)\subseteq A(u,v)\).
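The interval function admits the familiar distance test \(w\in I_{G}(u,v)\) if and only if \(d(u,w)+d(w,v)=d(u,v)\), which gives a direct way to compute it on small graphs (a sketch assuming \(u\) and \(v\) lie in the same component; computing \(J\) or \(A\) by brute force would instead require enumerating paths):

```python
import networkx as nx

def interval(G, u, v):
    """I(u, v): vertices on some shortest u,v-path, via d(u,w) + d(w,v) = d(u,v)."""
    du = nx.single_source_shortest_path_length(G, u)
    dv = nx.single_source_shortest_path_length(G, v)
    return {w for w in G if w in du and w in dv and du[w] + dv[w] == du[v]}

C5 = nx.cycle_graph(5)
print(interval(C5, 0, 2))   # {0, 1, 2}
```

For comparison, the toll interval of the same pair on \(C_{5}\) (computed with the Lemma 1 sketch above) is the whole vertex set, illustrating how much larger \(T(u,v)\) can be than \(I(u,v)\).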
Probably, the first approach to the axiomatic description of a transit function \(I_{G}\) for a tree \(G\) goes back to Sholander [24]. His work was later improved by Chvatal et al [9]. A full characterization of \(I_{G}\) for a connected graph \(G\) was presented by Mulder and Nebesky [21]. They used (t1) and (t2) and three other betweenness axioms. The idea of the name, "betweenness", is that \(x\in R(u,v)\)
Figure 1: Graphs house \(H\), \(C_{5}\), hole (different from \(C_{5}\)), domino \(D\) and 3-fan \(F_{2}^{5}\) (from left to right).
can be reinterpreted as \(x\) is between \(u\) and \(v\). Two of the axioms of Mulder [20] are important for our approach and follow for a transit function \(R\).
**Axiom (b1).** If there exist elements \(u,v,x\in V\) such that \(x\in R(u,v),x\neq v\), then \(v\notin R(x,u)\).
**Axiom (b2).** If there exist elements \(u,v,x\in V\) such that \(x\in R(u,v)\), then \(R(u,x)\subseteq R(u,v)\).
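Since the axioms quantify over finitely many tuples, they can be verified exhaustively for any concrete transit function on a small vertex set. A minimal sketch of such checkers for (b1) and (b2) (function names are illustrative; `R` is any callable returning vertex sets, e.g. the `toll_interval` helper from the Lemma 1 sketch):

```python
from itertools import product

def satisfies_b1(V, R):
    """(b1): x in R(u,v) and x != v  imply  v not in R(x,u)."""
    return all(not (x in R(u, v) and x != v and v in R(x, u))
               for u, v, x in product(V, repeat=3))

def satisfies_b2(V, R):
    """(b2): x in R(u,v)  implies  R(u,x) is a subset of R(u,v)."""
    return all(R(u, x) <= R(u, v)
               for u, v in product(V, repeat=2) for x in R(u, v))
```

The same exhaustive pattern applies verbatim to the (J)- and (TW)-type axioms introduced below.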
An axiomatic characterization of the induced path transit function \(J\) for several classes of graphs, including chordal graphs, was presented in [7]. These characterizations also use axioms (b1) and (b2) together with other axioms. Some of these axioms are the following.
**Axiom (J0).** If there exist different elements \(u,x,y,v\in V\) such that \(x\in R(u,y)\) and \(y\in R(x,v)\), then \(x\in R(u,v)\).
Figure 2: Forbidden induced subgraphs of \(AT\)-free graphs for \(k\geq 6\) and \(n\geq 1\).
**Axiom (J2).** If there exist elements \(u,v,x\in V\) such that \(R(u,x)=\{u,x\}\), \(R(x,v)=\{x,v\},u\neq v\) and \(R(u,v)\neq\{u,v\}\), then \(x\in R(u,v)\).
**Axiom (J3).** If there exist elements \(u,v,x,y\in V\) such that \(x\in R(u,y)\), \(y\in R(x,v)\), \(x\neq y\) and \(R(u,v)\neq\{u,v\}\), then \(x\in R(u,v)\).
The following axioms from [23] were used to characterize the toll walk transit function of the interval graphs and the AT-free graphs. Here, we correct a small error from [23] and add to Axiom (TW1) two additional conditions that \(u\neq x\) and \(v\neq y\) which are clearly needed.
**Axiom (TW1).** If there exist elements \(u,v,x,y,z\) such that \(x,y\in R(u,v)\), \(u\neq x\neq y\neq v\), \(R(x,z)=\{x,z\},R(z,y)=\{z,y\},R(x,v)\neq\{x,v\}\) and \(R(u,y)\neq\{u,y\}\), then \(z\in R(u,v)\).
**Axiom (TW2).** If there exist elements \(u,v,x,z\) such that \(x\in R(u,v)\), \(R(u,x)\neq\{u,x\}\), \(R(x,v)\neq\{x,v\}\) and \(R(x,z)=\{x,z\}\), then \(z\in R(u,v)\).
**Axiom (TW3).** If there exist different elements \(u,v,x\) such that \(x\in R(u,v)\), then there exist \(v_{1}\in R(x,v),v_{1}\neq x\) with \(R(x,v_{1})=\{x,v_{1}\}\) and \(R(u,v_{1})\neq\{u,v_{1}\}\).
Notice that if \(R(x,v)=\{x,v\}\), then \(v_{1}=v\) when Axiom (TW3) holds.
The next axiom is a relaxation of Axiom (b1).
**Axiom (b1').** If there exist elements \(u,v,x\in V\) such that \(x\in R(u,v),v\neq x\) and \(R(v,x)\neq\{v,x\}\), then \(v\notin R(u,x)\).
The following corollary is from [23].
Corollary 1: _The toll walk transit function \(T\) on a graph \(G\) satisfies Axiom (b1') if and only if \(G\) is \(AT\)-free._
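Corollary 1 is easy to illustrate computationally: on a graph containing an asteroidal triple the toll walk transit function admits a violation of (b1'), while on an AT-free graph no violation exists. A small self-contained check (re-implementing the Lemma 1 criterion; names are illustrative):

```python
import networkx as nx
from itertools import product

def toll_interval(G, u, v):
    """Toll interval via the separation criterion of Lemma 1 (as sketched earlier)."""
    if u == v:
        return {u}
    if G.has_edge(u, v):
        return {u, v}
    def reaches(w, a, blockers):
        H = G.copy()
        H.remove_nodes_from(blockers - {w})
        return nx.has_path(H, w, a)
    Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
    return {u, v} | {w for w in set(G) - {u, v}
                     if reaches(w, v, Nu) and reaches(w, u, Nv)}

def satisfies_b1_prime(G):
    """(b1'): x in T(u,v), x != v and T(v,x) != {v,x}  imply  v not in T(u,x)."""
    T = {(a, b): toll_interval(G, a, b) for a, b in product(G, repeat=2)}
    for u, v in product(G, repeat=2):
        for x in T[(u, v)]:
            if x != v and T[(v, x)] != {v, x} and v in T[(u, x)]:
                return False
    return True

print(satisfies_b1_prime(nx.cycle_graph(6)))  # False: C_6 contains an asteroidal triple
print(satisfies_b1_prime(nx.path_graph(5)))   # True: paths are AT-free
```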
## 3 Toll walk transit function of chordal graphs
We start with slight modifications of Axioms (TW3) and (J0) to obtain a characterization of the toll walk transit function of chordal graphs.
**Axiom (TWC).** If there exist different elements \(u,v,x\) such that \(x\in R(u,v)\), then there exist \(v_{1}\in R(x,v),v_{1}\neq x\) with \(R(x,v_{1})=\{x,v_{1}\}\), \(R(u,v_{1})\neq\{u,v_{1}\}\) and \(x\notin R(v_{1},v)\).
**Axiom (JC).** If there exist different elements \(u,x,y,v\in V\) such that \(x\in R(u,y)\), \(y\in R(x,v)\) and \(R(x,y)=\{x,y\}\), then \(x\in R(u,v)\).
From the definitions, it is clear that Axiom (TWC) implies Axiom (TW3), and Axiom (J0) implies Axiom (JC). Furthermore, Axiom (JC) is symmetric with respect to \(x\) and \(y\), and at the same time with respect to \(u\) and \(v\), in the sense that we can exchange them. In addition, it is easy to see that the toll walk transit function does not satisfy Axioms (TWC) and (JC) on an arbitrary graph. For instance, Axiom (JC) is not fulfilled on a four-cycle
and Axiom (TWC) does not hold on a six-cycle \(uxv_{1}avbu\). The next proposition shows that \(T\) satisfies Axiom (TWC) on chordal graphs.
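The four-cycle counterexample can be verified mechanically. Labelling the consecutive vertices of \(C_{4}\) as \(y,x,u,v\), one has \(x\in T(u,y)\), \(y\in T(x,v)\) and \(T(x,y)=\{x,y\}\), yet \(x\notin T(u,v)=\{u,v\}\) because \(u\) and \(v\) are adjacent. A short check (assuming the `toll_interval` helper from the Lemma 1 sketch is in scope):

```python
import networkx as nx

# C_4 with consecutive vertices y, x, u, v
C4 = nx.cycle_graph(["y", "x", "u", "v"])

assert "x" in toll_interval(C4, "u", "y")          # hypothesis of (JC)
assert "y" in toll_interval(C4, "x", "v")          # hypothesis of (JC)
assert toll_interval(C4, "x", "y") == {"x", "y"}   # x and y are adjacent
assert "x" not in toll_interval(C4, "u", "v")      # conclusion of (JC) fails: T(u,v) = {u,v}
```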
Proposition 1: _The toll walk transit function satisfies Axiom (TWC) on chordal graphs._
Proof: Suppose \(x\in T(u,v)\). There exists an induced \(x,v\)-path \(P\) that avoids the neighborhood of \(u\) with the possible exception of \(x\). For the neighbor \(v_{1}\) of \(x\) on \(P\) it follows that \(v_{1}\in T(x,v),v_{1}\neq x\) with \(T(x,v_{1})=\{x,v_{1}\}\) and \(T(u,v_{1})\neq\{u,v_{1}\}\). If \(v_{1}=v\), then clearly \(x\notin T(v,v_{1})=\{v\}\). Similarly, if \(T(v_{1},v)=\{v_{1},v\}\), then \(x\notin T(v_{1},v)\). Consider next that \(T(v_{1},v)\neq\{v_{1},v\}\). We will show that \(x\notin T(v_{1},v)\) for a chordal graph \(G\). Suppose, for the sake of contradiction, that \(x\in T(v_{1},v)\). There exists an induced \(x,v\)-path \(Q\) that avoids the neighborhood of \(v_{1}\). Let \(x_{1}\) be the neighbor of \(x\) on \(Q\). Clearly, \(x_{1}v_{1}\notin E(G)\). Let \(v_{2}\neq v\) be the neighbor of \(v_{1}\) on \(P\) that exists since \(T(v_{1},v)\neq\{v_{1},v\}\). Since \(P\) is induced, \(T(x,v_{2})\neq\{x,v_{2}\}\). If \(x_{1}\) is adjacent to \(v_{2}\), then \(xv_{1}v_{2}x_{1}x\) is an induced four-cycle. Otherwise, the path \(x_{1}xv_{1}v_{2}\) is part of a larger induced cycle of length greater than four (together with some other vertices of \(P\) or \(Q\)). Neither is possible in a chordal graph. Hence, \(x\notin T(v_{1},v)\) and \(T\) satisfies Axiom (TWC) on chordal graphs.
Theorem 3: _The toll walk transit function \(T\) satisfies Axiom (JC) on a graph \(G\) if and only if \(G\) is a chordal graph._
Proof: Suppose that \(G\) contains an induced cycle \(C_{n}\), \(n\geq 4\), with consecutive vertices \(y,x,u,v\) of \(C_{n}\). Clearly \(x\in T(u,y)\), \(y\in T(x,v)\), and \(T(x,y)=\{x,y\}\) but \(x\notin T(u,v)\) since \(uv\) is an edge in \(G\). That is, if \(T\) satisfies Axiom (JC), then \(G\) is \(C_{n}\)-free for \(n\geq 4\).
Conversely, suppose that \(T\) does not satisfy Axiom (JC) on \(G\). There exist distinct vertices \(u,x,y,v\) such that \(x\in T(u,y)\), \(y\in T(x,v)\), \(T(x,y)=\{x,y\}\) and \(x\notin T(u,v)\). Clearly, \(x,y,u,v\) belong to the same connected component and there exists an induced \(u,x\)-path \(P\) and an induced \(v,y\)-path \(Q\). Moreover, by \(x\in T(u,y)\) we may assume that the only neighbor of \(y\) on \(P\) is \(x\). Similarly, by \(y\in T(x,v)\) we may assume that the only neighbor of \(x\) on \(Q\) is \(y\). Now, \(x\notin T(u,v)\) implies that \(N(u)-x\) separates \(x\) from \(v\) or \(N(v)-x\) separates \(u\) from \(x\) by Lemma 1.
By the symmetry of Axiom (JC), we may assume that \(N(u)-x\) separates \(x\) from \(v\). So, every \(x,v\)-path contains at least one neighbor of \(u\). But \(x\) belongs to a \(u,v\)-walk, say \(W\), formed by \(P\), the edge \(xy\) and \(Q\). Since \(x\notin T(u,v)\), there exists a neighbor of \(u\), say \(u_{1}\neq y\), that belongs to \(Q\). We may choose \(u_{1}\) to be the first vertex on \(Q\) that is adjacent to \(u\) after \(y\).
If the cycle \(u\xrightarrow{P}xy\xrightarrow{Q}u_{1}u\) is induced, then \(G\) is not chordal, and we are done. Otherwise, there must be some chords from the vertices of \(P\) to the vertices of \(Q\) different from \(y\). Let \(a\) be the last vertex on \(P\) before \(x\) that is adjacent to some vertex, say \(b\), on the \(u_{1},y\)-subpath of \(Q\). We may choose \(b\) to be closest to \(y\) on \(Q\) among all such vertices. Since \(ab\in E(G)\), we have \(b\neq y\) and \(a\xrightarrow{P}xy\xrightarrow{Q}ba\)
is an induced cycle of length at least four. So, \(G\) is not chordal and we are done again.
Lemma 2: _Let \(R\) be a transit function on a non-empty finite set \(V\) satisfying Axioms (J2), (JC) and (TW2). If \(P_{n}\), \(n\geq 2\), is an induced \(u,v\)-path in \(G_{R}\), then \(V(P_{n})\subseteq R(u,v)\). Moreover, if \(z\) is adjacent to an inner vertex of \(P_{n}\) that is not adjacent to \(u\) or to \(v\) in \(G_{R}\), then \(z\in R(u,v)\)._
Proof: If \(n=2\), then \(P_{2}=uv\) and \(R(u,v)=\{u,v\}\) by the definition of \(G_{R}\). If \(n=3\), then \(P_{3}=ux_{1}v\) and \(x_{1}\in R(u,v)\) by Axiom (J2). For \(n\geq 4\) we continue by induction. For the basis, let \(n=4\) and \(P_{4}=ux_{1}x_{2}v\). By Axiom (J2) we have \(x_{1}\in R(u,x_{2})\) and \(x_{2}\in R(x_{1},v)\). Now, the Axiom (JC) implies that \(x_{1},x_{2}\in R(u,v)\). Let now \(n>4\) and \(P_{n}=ux_{1}x_{2}\ldots x_{n-1}v\). By the induction hypothesis we have \(\{u,x_{1},x_{2},\ldots,x_{n-1}\}\subseteq R(u,x_{n-1})\) and \(\{x_{1},x_{2},\ldots,x_{n-1},v\}\subseteq R(x_{1},v)\). That is, \(x_{i}\in R(u,x_{i+1})\) and \(x_{i+1}\in R(x_{i},v)\) for every \(i\in[n-2]\). By Axiom (JC) we get \(x_{i},x_{i+1}\in R(u,v)\) for every \(i\in[n-2]\).
For the second part, let \(z\) be a neighbor of \(x_{i}\), \(i\in\{2,\ldots,n-2\}\) that is not adjacent to \(u,v\). Clearly, in this case, \(n\geq 5\). By the first part of the proof, we have \(x_{i}\in R(u,v)\) and we have \(z\in R(u,v)\) by Axiom (TW2).
Proposition 2: _Let \(R\) be any transit function defined on a non-empty set \(V\). If \(R\) satisfies \((JC)\) and \((J2)\), then \(G_{R}\) is chordal._
Proof: Let \(R\) be a transit function satisfying \((JC)\) and \((J2)\). Assume on the contrary that \(G_{R}\) contains an induced cycle, say \(C_{k}=u_{1}u_{2}\ldots u_{k}u_{1}\), for some \(k\geq 4\). Consider first the case \(k=4\). Since \(R(u_{1},u_{2})=\{u_{1},u_{2}\}\) and \(R(u_{2},u_{3})=\{u_{2},u_{3}\}\), we have \(u_{2}\in R(u_{1},u_{3})\) by Axiom (J2). Similarly, \(u_{3}\in R(u_{2},u_{4})\) holds. Since \(R\) satisfies Axiom \((JC)\), we have \(u_{2}\in R(u_{1},u_{4})\), which is a contradiction as \(R(u_{1},u_{4})=\{u_{1},u_{4}\}\).
Let now \(k\geq 5\). Similar to the above, we have \(u_{i+1}\in R(u_{i},u_{i+2})\) for every \(i\in[k-2]\) and, in particular, \(u_{k-1}\in R(u_{k-2},u_{k})\). By Lemma 2 we have \(u_{k-2}\in R(u_{k-1},u_{1})\). Now, \(u_{k-1}\in R(u_{k-2},u_{k})\), \(u_{k-2}\in R(u_{k-1},u_{1})\) and \(R(u_{k-1},u_{k-2})=\{u_{k-1},u_{k-2}\}\) imply that \(u_{k-1}\in R(u_{1},u_{k})\) by Axiom (JC), a contradiction to \(R(u_{1},u_{k})=\{u_{1},u_{k}\}\).
Theorem 4: _If \(R\) is a transit function on a non-empty finite set \(V\) that satisfies Axioms (b2), (J2), (JC), (TW1), (TW2) and (TWC), then \(R=T\) on \(G_{R}\)._
Proof: Let \(u\) and \(v\) be two distinct vertices of \(G_{R}\) and first assume that \(x\in R(u,v)\). We have to show that \(x\in T(u,v)\) on \(G_{R}\). Clearly \(x\in T(u,v)\) whenever \(x\in\{u,v\}\). Moreover, if \(R(u,v)=\{u,v\}\), then \(x\) must be \(u\) or \(v\). So, assume that \(x\notin\{u,v\}\) and that \(uv\notin E(G_{R})\). If \(R(u,x)=\{u,x\}\) and \(R(x,v)=\{x,v\}\), then \(uxv\) is a toll walk of \(G_{R}\) and \(x\in T(u,v)\) follows. Suppose next that \(R(x,v)\neq\{x,v\}\). We will construct an \(x,v\)-path \(Q\) in \(G_{R}\) without a neighbor of \(u\) (except possibly \(x\)). For this, let \(x=v_{0}\). By Axiom (TWC) there exists a neighbor of \(v_{0}\), say \(v_{1}\) and \(v_{1}\in R(v_{0},v)\) with \(R(u,v_{1})\neq\{u,v_{1}\}\) and \(v_{0}\notin R(v_{1},v)\). Since
\(v_{1}\in R(v_{0},v)\), we have \(R(v_{1},v)\subseteq R(v_{0},v)\) by Axiom (b2) and since \(v_{0}\notin R(v_{1},v)\) we have \(R(v_{1},v)\subset R(v_{0},v)\). In particular, \(v_{1}\in R(v_{0},v)\subseteq R(u,v)\) by Axiom (b2). If \(v_{1}\neq v\), then we can continue with the same procedure to get \(v_{2}\in R(v_{1},v)\), where \(v_{2}\neq u\), \(R(v_{1},v_{2})=\{v_{1},v_{2}\}\), \(R(u,v_{2})\neq\{u,v_{2}\}\) and \(v_{1}\notin R(v_{2},v)\). Furthermore, \(R(v_{2},v)\subset R(v_{1},v)\subset R(v_{0},v)\) and \(v_{2}\in R(u,v)\). Similarly (when \(v_{2}\neq v\)), we get \(v_{3}\in R(v_{2},v)\) such that \(v_{3}\neq u\), \(R(v_{2},v_{3})=\{v_{2},v_{3}\}\), \(R(u,v_{3})\neq\{u,v_{3}\}\), \(v_{2}\notin R(v_{3},v)\), \(v_{3}\in R(u,v)\) and \(R(v_{3},v)\subset R(v_{2},v)\subset R(v_{1},v)\subset R(v_{0},v)\). Repeating this step, we obtain a sequence of vertices \(v_{0},v_{1},\ldots,v_{q}\), \(q\geq 2\), such that
1. \(R(v_{i},v_{i+1})=\{v_{i},v_{i+1}\},i\in\{0,1,\ldots,q-1\}\),
2. \(R(u,v_{i})\neq\{u,v_{i}\},i\in[q]\),
3. \(R(v_{i+1},v)\subset R(v_{i},v),i\in\{0,1,\ldots,q-1\}\).
This sequence must stop under the last condition because \(V\) is finite. Hence, we may assume that \(v_{q}=v\). Now, if \(R(u,x)=\{u,x\}\), then we have a toll \(u,v\)-walk \(uxv_{1}\ldots v_{q-1}v\) and \(x\in T(u,v)\).
If \(R(u,x)\neq\{u,x\}\), we can symmetrically build a sequence \(u_{0},u_{1},\ldots,u_{r}\), where \(u_{0}=x\), \(u_{r}=u\), and \(u_{0}u_{1}\ldots u_{r}\) is a \(x,u\)-path in \(G_{R}\) that avoids \(N[v]\). Clearly, \(uu_{r-1}u_{r-2}\ldots u_{1}xv_{1}\ldots v_{q-1}v\) is a toll \(u,v\)-walk and \(x\in T(u,v)\).
Now suppose that \(x\in T(u,v)\) and \(x\notin\{u,v\}\). We have to show that \(x\in R(u,v)\). Let \(W\) be a toll \(u,v\)-walk containing \(x\). Clearly, \(W\) contains an induced \(u,v\)-path, say \(Q\). If \(x\) belongs to \(Q\), then \(x\in R(u,v)\) by Lemma 2. So, we may assume that \(x\) does not belong to \(Q\). The graph \(G_{R}\) is chordal by Proposition 2. Let \(x_{1}x_{2}\ldots x_{\ell}\), \(x_{1}\in Q\), \(x_{\ell}=x\) be a subpath of \(W\) whose only vertex on \(Q\) is \(x_{1}\). If \(R(u,x_{1})=\{u,x_{1}\}\) and \(R(x_{1},v)=\{x_{1},v\}\), then we have a contradiction with \(W\) being a toll \(u,v\)-walk containing \(x\). Without loss of generality, we may assume that \(R(x_{1},v)\neq\{x_{1},v\}\). If also \(R(u,x_{1})\neq\{u,x_{1}\}\), then \(x\in R(u,v)\) by repeated application of Axiom (TW2) \(\ell-1\) times on \(x_{2},\ldots,x_{\ell}\). So, let now \(R(u,x_{1})=\{u,x_{1}\}\). Since \(x\) and \(v\) are not separated by \(N[u]-\{x\}\) by Lemma 1, there exists an induced \(x,v\)-path \(S\) without a neighbor of \(u\). Let \(S=s_{0}s_{1}\cdots s_{k}\), \(s_{0}=x\), and \(s_{k}=v\) and let \(s_{j}\) be the first vertex of \(S\) that also belongs to \(Q\). Let \(b\) be the neighbor of \(x_{1}\) on \(Q\) different from \(u\). By \(R(x_{1},v)\neq\{x_{1},v\}\) we have \(b\neq v\). Since \(G_{R}\) is \(C_{n}\)-free for every \(n\geq 4\), the vertex \(s_{j}\) equals \(b\) and (\(s_{j-1}\) is adjacent to \(x_{1}\) or \(x_{2}\) is adjacent to \(b\)). This gives \(s_{j-1}\in R(u,v)\) or \(x_{2}\in R(u,v)\), respectively, by Axiom (TW1). Then by repeated application of Axiom (TW2) we have \(s_{i}\in R(u,v)\) for every \(i\in\{j-2,j-3,\ldots,1,0\}\) or \(x_{i}\in R(u,v)\) for every \(i\in\{3,\ldots,\ell\}\), respectively. Therefore, \(x=s_{0}=x_{\ell}\in R(u,v)\) and the proof is complete.
Proposition 3: _Let \(T\) be the toll walk transit function on a connected graph \(G\). If \(T\) satisfies Axiom (JC) on \(G\), then \(T\) satisfies Axiom (b2)._
Proof: Suppose \(T\) satisfies Axiom (JC). If \(T\) does not satisfy Axiom (b2), then there exist vertices \(u,v,x,y\), which must be pairwise distinct, such that \(x\in T(u,v)\), \(y\in T(u,x)\) and \(y\notin T(u,v)\). Notice that \(ux\notin E(G)\) because \(y\in T(u,x)\). Since \(x\in T(u,v)\), there exists an induced \(x,v\)-path, say \(P\), without a neighbor of \(u\) and an induced
\(x,u\)-path, say \(Q\), without a neighbor of \(v\) (except possibly \(x\)). Similarly, since \(y\in T(u,x)\), there exists an induced \(u,y\)-path, say \(R\), without a neighbor of \(x\) (except possibly \(y\)) and an induced \(y,x\)-path, say \(S\), without a neighbor of \(u\) (except possibly \(y\)). Since \(y\notin T(u,v)\), a neighbor of \(u\) separates \(y\) from \(v\) or a neighbor of \(v\) separates \(y\) from \(u\) by Lemma 1. But \(y\xrightarrow{S}x\xrightarrow{P}v\) is a \(y,v\)-path that does not contain a neighbor of \(u\). Therefore, the only possibility is that a neighbor of \(v\) separates \(y\) from \(u\). Therefore, \(R\) contains a neighbor of \(v\), say \(v^{\prime}\), which is closest to \(y\). The vertices of \(v^{\prime}\xrightarrow{R}y\xrightarrow{S}x\xrightarrow{P}vv^{\prime}\) contain an induced cycle of length at least four, a contradiction to the Axiom (JC) by Theorem 3.
The Axioms (J2), (TW1) and (TW2) are satisfied for a toll walk transit function on any graph \(G\). By Theorems 3 and 4 and Propositions 1, 2 and 3 we have the following characterization of the toll walk transit function of a chordal graph.
Theorem 5: _A transit function \(R\) on a finite set \(V\) satisfies the Axioms (b2), (J2), (JC), (TW1), (TW2), and (TWC) if and only if \(G_{R}\) is a chordal graph and \(R=T\) on \(G_{R}\)._
Trees form a special subclass of chordal graphs. To fully describe the toll walk transit function of trees, we define Axiom (tr), which is a generalization of Axiom (J2), and combine it with Axiom (JC).
**Axiom (tr).** If there exist elements \(u,v,x\in V\) such that \(R(u,x)=\{u,x\}\) and \(R(x,v)=\{x,v\}\), then \(x\in R(u,v)\).
Lemma 3: _The toll walk transit function \(T\) satisfies Axiom (tr) on a graph \(G\) if and only if \(G\) is a triangle-free graph._
Proof: If \(G\) contains a triangle with vertices \(u,x,v\), then \(T(u,x)=\{u,x\}\) and \(T(x,v)=\{x,v\}\), but \(x\notin T(u,v)=\{u,v\}\). Therefore, \(T\) does not satisfy Axiom (tr). Conversely, suppose that \(T\) does not satisfy Axiom (tr) on \(G\). That is, there exist vertices \(u,x,v\) with \(T(u,x)=\{u,x\}\), \(T(x,v)=\{x,v\}\), and \(x\notin T(u,v)\). If \(uv\notin E(G)\), then \(uxv\) is a toll \(u,v\)-walk and \(x\in T(u,v)\), a contradiction. Hence the only possibility is \(uv\in E(G)\), which implies that \(u,x,v\) form a triangle.
By combining Theorem 3 and Lemma 3, we obtain the following theorem on the toll walk function of trees.
Theorem 6: _The toll walk transit function \(T\) satisfies the axioms (JC) and (tr) on a graph \(G\) if and only if \(G\) is a tree._
Now, the characterization of the toll walk transit function of a tree can be obtained by replacing Axiom (J2) with Axiom (tr) in Theorem 5.
Theorem 7: _A transit function \(R\) on a finite set \(V\) satisfies Axioms (b2), (tr), (JC), (TW1), (TW2) and (TWC) if and only if \(G_{R}\) is a tree and \(R=T\) on \(G_{R}\)._
## 4 Toll walk transit function of AT-free graphs
In this section, we obtain a characterization of the toll walk transit function of AT-free graphs. For this, we relax Axiom (b2) to (b2'), modify Axiom (J3) to (J4) and (J4'), generalize Axioms (TW1) and (TW2) to Axiom (TW1'), and finally modify Axiom (TW3) to Axiom (TWA).
**Axiom (b2').** If there exist elements \(u,v,x\in V\) such that \(x\in R(u,v)\) and \(R(u,x)\neq\{u,x\}\), then \(R(x,v)\subseteq R(u,v)\).
**Axiom (J4).** If there exist elements \(u,v,x,y\in V\) such that \(x\in R(u,y),y\in R(x,v),x\neq y\), \(R(u,x)=\{u,x\},R(y,v)=\{y,v\}\) and \(R(u,v)\neq\{u,v\}\), then \(x\in R(u,v)\).
**Axiom (J4').** If there exist elements \(u,v,x,y\in V\) such that \(x\in R(u,y)\), \(y\in R(x,v)\), \(R(u,x)\neq\{u,x\}\), \(R(y,v)\neq\{y,v\}\), \(R(x,y)\neq\{x,y\}\) and \(R(u,v)\neq\{u,v\}\), then \(x\in R(u,v)\).
**Axiom (TWA).** If there exist different elements \(u,v,x\) such that \(x\in R(u,v)\), then there exist \(x_{1}\in R(x,v)\cap R(u,v)\) where \(x_{1}\neq x\), \(R(x,x_{1})=\{x,x_{1}\}\), \(R(u,x_{1})\neq\{u,x_{1}\}\) and \(R(x_{1},v)\subset R(x,v)\).
**Axiom (TW1').** If there exist elements \(u,v,x,w,y,z\) such that \(x,y\in R(u,v)\), \(x\neq u\), \(y\neq v\), \(R(x,v)\neq\{x,v\}\), \(R(u,y)\neq\{u,y\}\), \(R(x,z)=\{x,z\}\), \(R(z,w)=\{z,w\}\), \(R(w,y)=\{w,y\}\) and \(R(u,w)\neq\{u,w\}\), then \(z\in R(u,v)\).
If we have \(y=x\) in Axiom (TW1), then \(x\) is adjacent neither to \(u\) nor to \(v\). Furthermore, Axiom (TW1) is a special case of Axiom (TW1') obtained by setting \(w=z\). When both \(w=z\) and \(x=y\), we obtain Axiom (TW2) from Axiom (TW1').
Proposition 4: _The toll walk transit function satisfies Axiom (J4) on any graph \(G\)._
Proof: Assume that \(x\in T(u,y)\), \(y\in T(x,v)\), \(x\neq y\), \(T(u,x)=\{u,x\}\), \(T(y,v)=\{y,v\}\) and \(T(u,v)\neq\{u,v\}\). Since \(x\in T(u,y)\), there exists an \(x,y\)-path \(P\) that avoids all neighbors of \(u\) besides \(x\). Let \(a\) be the neighbor of \(v\) on \(P\) that is closest to \(x\) on \(P\). (Notice that at least \(y\) is a neighbor of \(v\) on \(P\).) The path \(ux\xrightarrow{P}av\) is a toll \(u,v\)-walk containing \(x\), and so \(x\in T(u,v)\).
Proposition 5: _If \(G\) is an AT-free graph and \(T\) is the toll walk transit function on \(G\), then \(T\) satisfies the axioms (J4') and (b2') on \(G\)._
Proof: First, we show that Axiom (J4') holds. Suppose that \(T\) does not satisfy Axiom (J4') on \(G\). That is \(x\in T(u,y)\), \(y\in T(x,v)\), \(T(u,x)\neq\{u,x\}\), \(T(y,v)\neq\{y,v\}\), \(T(x,y)\neq\{x,y\}\), \(T(u,v)\neq\{u,v\}\) but \(x\notin T(u,v)\). Since \(x\in T(u,y)\) there is a \(u,x\)-path \(P\) without a neighbor of \(y\) and an \(x,y\)-path \(Q\) without a neighbor of \(u\). Again, since \(y\in T(x,v)\), there is a \(y,v\)-path \(R\) without a neighbor of \(x\). Since \(x\notin T(u,v)\) we can assume that \(N[u]\) separates \(x\) from \(v\) (the other possibility from Lemma 1 is symmetric). That is, \(R\) contains a neighbor \(u^{\prime}\) of \(u\) and we
choose \(u^{\prime}\) to be the neighbor of \(u\) on \(R\) that is closest to \(y\). Then \(uu^{\prime}\xrightarrow{R}y\) is a \(u,y\)-path without a neighbor of \(x\), \(P\) is a \(u,x\)-path without a neighbor of \(y\) and \(Q\) is an \(x,y\)-path without a neighbor of \(u\). That is, the vertices \(u,x,y\) form an asteroidal triple, a contradiction.
Suppose now that \(T\) does not satisfy Axiom (b2'). That is, \(x\in T(u,v)\), \(T(u,x)\neq\{u,x\}\), and \(T(x,v)\not\subseteq T(u,v)\). So, there exist \(y\in V(G)\) such that \(y\in T(x,v)\) and \(y\notin T(u,v)\), which means that \(y\neq v\), \(y\neq x\), and \(T(x,v)\neq\{x,v\}\). Since \(x\in T(u,v)\), let \(P\) be an induced \(u,x\)-path without a neighbor of \(v\) (except possibly \(x\)) and let \(Q\) be an induced \(x,v\)-path without a neighbor of \(u\). Since \(y\in T(x,v)\), let \(R\) be an induced \(x,y\)-path without a neighbor of \(v\) (except possibly \(y\)) and \(S\) be an induced \(y,v\)-path without a neighbor of \(x\) (except possibly \(y\)). Furthermore, since \(y\notin T(u,v)\), \(S\) contains a neighbor \(u^{\prime}\) of \(u\). The vertices \(u,x,v\) form an asteroidal triple because \(P\) is a \(u,x\) path without a neighbor of \(v\), \(Q\) is a \(x,v\) path without a neighbor of \(u\) and \(uu^{\prime}\xrightarrow{S}v\) is a \(u,v\) path without a neighbor of \(x\), a contradiction.
Proposition 6: _If \(G\) is an AT-free graph, then the toll walk transit function \(T\) satisfies Axiom (TWA) on \(G\)._
Proof: Suppose that \(x\in T(u,v)\) where \(u,v,x\) are distinct vertices of an AT-free graph \(G\). Let \(P\) be an induced \(u,x\)-path without a neighbor of \(v\) (except possibly \(x\)) and \(Q\) be an induced \(x,v\)-path without a neighbor of \(u\) (except possibly \(x\)). If \(T(x,v)=\{x,v\}\), then we are done for \(x_{1}=v\). Otherwise, consider the neighbor \(x_{1}\) of \(x\) on \(Q\). It follows that \(x_{1}\in T(x,v),x_{1}\neq x\) with \(T(x,x_{1})=\{x,x_{1}\}\) and \(T(u,x_{1})\neq\{u,x_{1}\}\). In addition, \(u\xrightarrow{P}xx_{1}\xrightarrow{Q}v\) is a \(u,v\)-toll walk that contains \(x_{1}\). So, \(x_{1}\in T(u,v)\) and with this \(x_{1}\in T(u,v)\cap T(x,v)\). We still have to prove that \(T(x_{1},v)\subset T(x,v)\). If \(T(x_{1},v)=\{x_{1},v\}\), then \(x_{1}\) is the desired vertex and we may assume in what follows that \(x_{1}\) and \(v\) are not adjacent.
First, we show that \(T(x_{r},v)\subseteq T(x,v)\) holds for some \(x_{r}\) with \(x_{r}\in T(x,v)\cap T(u,v)\), \(T(x_{r},x)=\{x_{r},x\}\), \(x_{r}\neq x\) and \(T(x_{r},u)\neq\{x_{r},u\}\). If \(T(x_{1},v)\subseteq T(x,v)\), then we are done. Otherwise, assume that \(T(x_{1},v)\not\subseteq T(x,v)\) where \(y_{1}\in T(x_{1},v)\) and \(y_{1}\notin T(x,v)\). Clearly, \(y_{1}\) is not on \(Q\). Let \(R_{1}\) be an induced \(x_{1},y_{1}\)-path without a neighbor of \(v\) (except possibly \(y_{1}\)) and \(S_{1}\) be an induced \(y_{1},v\)-path without a neighbor of \(x_{1}\) (except possibly \(y_{1}\)). Since \(y_{1}\notin T(x,v)\)\(S_{1}\) contains a neighbor of \(x\), say \(x_{2}\), which is closest to \(v\) in \(S_{1}\). We claim that \(y_{1}\) is not adjacent to an internal vertex of the \(x_{1},v\)-subpath of \(Q\). If not, then let \(y_{1}^{\prime}\) be a neighbor of \(y_{1}\) on the \(x_{1},v\)-subpath of \(Q\). The walk \(xx_{1}\xrightarrow{R_{1}}y_{1}y_{1}^{\prime}\xrightarrow{Q}v\) is or contains (when \(x\) is adjacent to a vertex of \(R_{1}\) different from \(x_{1}\)) a toll \(x,v\)-walk, a contradiction to \(y_{1}\notin T(x,v)\). Next, we claim that \(T(x_{1},y_{1})=\{x_{1},y_{1}\}\). If not, then \(x_{1},y_{1},v\) form an asteroidal triple since \(T(x_{1},y_{1})\neq\{x_{1},y_{1}\}\) and \(R_{1}\) is an \(x_{1},y_{1}\)-path without a neighbor of \(v\), \(x_{1}\xrightarrow{Q}v\) is an \(x_{1},v\)-path without a neighbor of \(y_{1}\) and \(S_{1}\) is a \(y_{1},v\)-path without a neighbor of \(x_{1}\). Therefore, \(T(x_{1},y_{1})=\{x_{1},y_{1}\}\) and since \(T(u,x_{1})\neq\{u,x_{1}\}\), we have \(y_{1}\neq u\). The next claim is that \(u\) is not adjacent to a vertex, say \(u_{1}\), in the \(x_{2},v\)-subpath of \(S_{1}\). If not, then \(uu_{1}\xrightarrow{S_{1}}v\) is a \(u,v\)-path without a neighbor of \(x_{1}\), \(u\xrightarrow{P}xx_{1}\) is a \(u,x_{1}\)-path without a neighbor of \(v\) and
\(x_{1}\xrightarrow{Q}v\) an \(x_{1},v\)-path without a neighbor of \(u\). This means that \(u\), \(v\), and \(x_{1}\) form an asteroidal triple, a contradiction. Therefore, \(u\) is not adjacent to a vertex on the \(x_{2},v\)-subpath of \(S_{1}\). In particular, \(T(u,x_{2})\neq\{u,x_{2}\}\), \(u\xrightarrow{P}xx_{2}\xrightarrow{S_{1}}v\) is a toll \(u,v\) -walk, and \(x_{2}\in T(u,v)\). If \(T(x_{2},v)=\{x_{2},v\}\), then \(x_{2}\) fulfills Axiom (TWA) and we are done. So, we may assume in what follows that \(v\) and \(x_{2}\) are not adjacent. We next claim that \(x_{2}\) is adjacent to some internal vertex, say \(x_{2}^{\prime}\), of the \(x_{1},v\)-subpath of \(Q\). If not, then \(x_{1},x_{2},v\) form an asteroidal triple (since \(x_{2}\xrightarrow{S_{1}}v\) is an \(x_{2},v\)-path without a neighbor of \(x_{1}\), \(x_{1}\xrightarrow{Q}v\) is an \(x_{1},v\)-path without a neighbor of \(x_{2}\) and \(x_{1}xx_{2}\) is an \(x_{1},x_{2}\)-path without a neighbor of \(v\)). Now, \(xx_{2}x_{2}^{\prime}\xrightarrow{Q}v\) is a \(x,v\)-toll walk containing \(x_{2}\). That is \(x_{2}\in T(x,v)\) and hence \(x_{2}\in T(u,v)\cap T(x,v)\). So, if \(T(x_{2},v)\subseteq T(x,v)\), then \(x_{2}\) is our desired \(x_{1}\). Moreover, \(x_{1}xx_{2}\xrightarrow{S_{1}}v\) is a toll \(x_{1},v\)-walk containing \(x_{2}\). That is \(x_{2}\in T(x_{1},v)\) and together with \(T(x_{1},x_{2})\neq\{x_{1},x_{2}\}\), the Axioms (b2') and (b1') yields that \(T(x_{2},v)\subset T(x_{1},v)\). (Recall that Axiom (b1') holds by Corollary 1 and Axiom (b2') by Proposition 5.)
If not, then there exists \(y_{2}\) (which can be equal to \(y_{1}\)) such that \(y_{2}\in T(x_{2},v)\) and \(y_{2}\notin T(x,v)\). Since \(y_{2}\in T(x_{2},v)\), similar to the above case, let \(R_{2}\) be an induced \(x_{2},y_{2}\)-path without a neighbor of \(v\) (except possibly \(y_{2}\)) and \(S_{2}\) be an induced \(y_{2},v\)-path without a neighbor of \(x_{2}\) (except possibly \(y_{2}\)). On the other hand, \(y_{2}\notin T(x,v)\) implies that \(S_{2}\) contains a neighbor of \(x\), say \(x_{3}\) (note that \(x_{3}\neq x_{2}\) and \(T(x_{2},x_{3})\neq\{x_{2},x_{3}\}\)). As in the above case, \(T(x_{2},y_{2})=\{x_{2},y_{2}\}\), otherwise \(x_{2},y_{2},v\) forms an asteroidal triple. In addition, \(u\) is not adjacent to a vertex in the \(x_{3},v\)- subpath of \(S_{2}\), otherwise \(u,x_{2},v\) forms an asteroidal triple. In particular, \(T(u,x_{3})\neq\{u,x_{3}\}\), \(u\xrightarrow{P}xx_{3}\xrightarrow{S_{2}}v\) is a toll \(u,v\)-walk, and \(x_{3}\in T(u,v)\). If \(T(x_{3},v)=\{x_{3},v\}\), then \(x_{3}\) fulfills Axiom (TWA) and we are done. Therefore, we may assume in what follows that \(v\) and \(x_{3}\) are not adjacent. Now we claim that \(x_{3}\) is adjacent to some internal vertices of both \(x_{2},v\)-subpath of \(S_{1}\) and \(x_{1},v\)-subpath of \(Q\) otherwise \(x_{3},x_{2},v\) or \(x_{3},x_{1},v\), respectively, form an asteroidal triple. For a neighbor \(x_{3}^{\prime}\) of \(x_{3}\) in \(Q\) is \(xx_{3}x_{3}^{\prime}\xrightarrow{Q}v\) a toll \(x,v\)-walk that contains \(x_{3}\). Hence, \(x_{3}\in T(u,v)\cap T(x,v)\). So, if \(T(x_{3},v)\subseteq T(x,v)\), then \(x_{3}\) is our desired \(x_{1}\). If not, then there is \(y_{3}\) (may be \(y_{1}\) or \(y_{2}\)) such that \(y_{3}\in T(x_{3},v)\) and \(y_{3}\notin T(x,v)\). Since \(x_{3}\in T(x_{2},v)\) and \(T(x_{2},x_{3})\neq\{x_{2},x_{3}\}\) we have \(T(x_{3},v)\subset T(x_{2},v)\subset T(x_{1},v)\) by Axioms (b2') and (b1').
Continuing with this procedure, we get a sequence \(x_{1},\ldots,x_{r}\in V\) such that \(x_{r}\in T(u,v)\cap T(x,v)\), \(T(x,x_{r})=\{x,x_{r}\}\), \(T(u,x_{r})\neq\{u,x_{r}\}\) and \(T(x_{r},v)\subseteq T(x,v)\), together with \(T(x_{r},v)\subset\cdots\subset T(x_{2},v)\subset T(x_{1},v)\). This sequence is finite, since \(V\) is finite, and we may assume that the mentioned sequence is maximal. This means that there does not exist a vertex \(w\) in \(T(x_{r},v)\) such that \(w\in T(u,v)\cap T(x,v)\), \(T(x,w)=\{x,w\}\), \(T(u,w)\neq\{u,w\}\) and \(T(w,v)\subseteq T(x,v)\).
Now we have to prove that \(x\notin T(x_{r},v)\). If possible suppose that \(x\in T(x_{r},v)\), then there exists an induced \(x,v\)-path, say \(P_{x}\), without a neighbor of \(x_{r}\) (except possibly \(x\)). Let \(v_{1}\) be the neighbor of \(x\) on \(P_{x}\). Now, \(x_{r}xv_{1}\xrightarrow{P_{x}}v\) is a toll \(x_{r},v\)-walk containing \(v_{1}\) so that \(v_{1}\in T(x_{r},v)\). Also, \(T(v_{1},x_{r})\neq\{v_{1},x_{r}\}\) implies that \(T(v_{1},v)\subset T(x_{r},v)\) by the axioms (b2') and (b1'). Moreover, \(T(u,v_{1})\neq\{u,v_{1}\}\)
otherwise \(u,x_{r},v\) form an asteroidal triple. So, we have \(v_{1}\in T(u,v)\cap T(x,v)\), \(T(x,v_{1})=\{x,v_{1}\}\), \(T(u,v_{1})\neq\{u,v_{1}\}\) and \(T(v_{1},v)\subseteq T(x,v)\), a contradiction to the maximal length of the sequence \(x_{1},\ldots,x_{r}\). So \(x\notin T(x_{r},v)\), hence \(T(x_{r},v)\subset T(x,v)\), and Axiom (TWA) holds for \(x_{r}\).
We continue with a lemma that is similar to Lemma 2, but under different assumptions.
Lemma 4: _Let \(R\) be a transit function on a non-empty finite set \(V\) satisfying Axioms (J2), (J4), (J4') and (TW1'). If \(P_{n}\), \(n\geq 2\), is an induced \(u,v\)-path in \(G_{R}\), then \(V(P_{n})\subseteq R(u,v)\). Moreover, if \(z\) is adjacent to an inner vertex of \(P_{n}\) that is not adjacent to \(u\) or to \(v\) in \(G_{R}\), then \(z\in R(u,v)\)._
Proof: If \(n=2\), then \(P_{2}=uv\) and \(R(u,v)=\{u,v\}\) by the definition of \(G_{R}\). If \(n=3\), then \(P_{3}=uxv\) and \(x\in R(u,v)\) by Axiom (J2). Let now \(n=4\) and \(P_{4}=uxyv\). By Axiom (J2) we have \(x\in R(u,y)\) and \(y\in R(x,v)\). Now, Axiom (J4) implies that \(x,y\in R(u,v)\). If \(n=5\), \(P_{5}=uxx_{2}yv\) and by the previous step, \(x,x_{2}\in R(u,y)\) and \(x_{2},y\in R(x,v)\). By Axiom (J4) \(x,y\in R(u,v)\) and \(x_{2}\in R(u,v)\) hold by Axiom (TW1') when \(z=x_{2}=w\). If \(n=6\), \(P_{6}=uxx_{2}x_{3}yv\), then by case \(n=5\), we have \(\{x,x_{2},x_{3}\}\in R(u,y)\) and \(\{x_{2},x_{3},y\}\in R(x,v)\). By Axiom (J4) \(x,y\in R(u,v)\) and \(x_{2},x_{3}\in R(u,v)\) hold by Axiom (TW1'). For \(n=7\), \(P_{7}=uxx_{2}x_{3}x_{4}yv\) by the case \(n=5\), \(\{x,x_{2},x_{3}\}\in R(u,x_{4})\) and \(\{x_{3},x_{4},y\}\in R(x_{2},v)\). That is \(x_{2}\in R(u,x_{4})\), \(x_{4}\in R(x_{2},v)\), \(R(u,x_{2})\neq\{u,x_{2}\}\), \(R(x_{2},x_{4})\neq\{x_{2},x_{4}\}\), \(R(x_{4},v)\neq\{x_{4},v\}\), and \(R(u,v)\neq\{u,v\}\). By Axiom (J4') we have \(x_{2},x_{4}\in R(u,v)\) and by Axiom (TW1') we have \(x,x_{3},y\in R(u,v)\). For a longer path \(P_{n}=uxx_{2}\ldots x_{n-2}yv\), \(n>7\), we continue by induction. By the induction hypothesis we have \(\{u,x,x_{3},\ldots,x_{n-2},y\}\subseteq R(u,y)\) and \(\{x,x_{3},\ldots,x_{n-2},y,v\}\subseteq R(x,v)\). In particular, \(x_{i}\in R(u,x_{i+2})\) and \(x_{i+2}\in R(x_{i},v)\) for every \(i\in\{2,\ldots,n-4\}\). By Axiom (J4') we get \(x_{i},x_{i+2}\in R(u,v)\) and by Axiom (TW1') we have \(x,x_{i+1},y\in R(u,v)\) for every \(i\in\{2,\ldots,n-2\}\).
For the second part, let \(z\) be a neighbor of \(x_{i}\), \(i\in\{2,\ldots,n-2\}\) that is not adjacent to \(u,v\). Clearly, in this case \(n\geq 5\). By the first part of the proof, we have \(x_{i}\in R(u,v)\) and we have \(z\in R(u,v)\) by Axiom (TW2) which follows from Axiom (TW1').
Theorem 8: _If \(R\) is a transit function on a non-empty finite set \(V\) satisfying the Axioms (b1'), (J2), (J4), (J4') and (TW1'), then \(G_{R}\) is an \(AT\)-free graph._
Proof: Let \(R\) be a transit function satisfying Axioms (b1'), (J2), (J4), (J4') and (TW1'). Axiom (TW1') implies that Axioms (TW1) and (TW2) also hold. We have to prove that \(G_{R}\) is AT-free. By Theorem 2 it is enough to prove that \(G_{R}\) does not contain as an induced subgraph any of the graphs \(C_{k},T_{2},X_{2},X_{3}\), \(X_{30},\ldots,X_{41},XF_{2}^{n+1},XF_{3}^{n},XF_{4}^{n}\), \(k\geq 6\), \(n\geq 1\), depicted in Figure 2. We will show that if \(G_{R}\) contains one of the graphs from Figure 2 as an induced subgraph, then we get a contradiction to Axiom (b1'). For this we need to find vertices \(u,v,x\) such that \(x\in R(u,v)\), \(v\neq x\), \(R(v,x)\neq\{v,x\}\) and \(v\in R(u,x)\). For this,
we use vertices \(u,v,x\) as marked in Figure 2. Notice that in all graphs of Figure 2 we have \(v\neq x\) and \(R(v,x)\neq\{v,x\}\).
First, we show that \(x\in R(u,v)\) holds for all graphs from Figure 2. There exists an induced \(u,v\)-path that contains \(x\) in the graphs \(C_{k}\), \(k\geq 6\), \(X_{37},X_{38},X_{39}\), \(X_{40}\) and \(XF_{4}^{n}\), \(n\geq 1\). By Lemma 4, \(x\in R(u,v)\) for these graphs. For graphs \(X_{2},X_{3},X_{30},X_{31},X_{32},X_{33}\), \(X_{35},X_{41}\) and \(XF_{2}^{n+1}\) for \(n\geq 2\) there exists an induced \(u,v\)-path with an inner vertex adjacent neither to \(u\) nor to \(v\), but to \(x\). Hence, \(x\in R(u,v)\) by Axiom (TW2). Similarly, we see that \(x\in R(u,v)\) in \(T_{2}\), only that here we use Axiom (TW2) twice. For graphs \(X_{34},X_{36}\) and \(XF_{3}^{n}\), \(n\geq 1\), there exists an induced \(u,v\)-path such that two different inner vertices are both adjacent to \(x\). Thus, \(x\in R(u,v)\) by Axiom (TW1). Finally, for \(XF_{2}^{2}\) we have \(y_{2}\in R(u,v)\) by Axiom (TW1) because it is adjacent to two different inner vertices of an induced \(u,v\)-path. Now, \(x\in R(u,v)\) follows by Axiom (TW2).
It remains to show that \(v\in R(u,x)\) for all graphs in Figure 2. There exists an induced \(u,x\)-path that contains \(v\) in \(X_{37},X_{38}\) and \(C_{k}\), \(k\geq 6\), and \(v\in R(u,x)\) according to Lemma 4. For graphs \(X_{3},X_{31},X_{32},X_{33},X_{34},X_{35},X_{36},X_{39},X_{40}\) there exists an induced \(u,x\)-path such that two different inner vertices are adjacent to different adjacent vertices, one of them being \(v\). Thus, \(v\in R(u,x)\) by Axiom (TW1'). For graphs \(X_{30},X_{41}\) there exists an induced \(u,x\)-path with an inner vertex adjacent neither to \(u\) nor to \(x\), but to \(v\). Therefore, \(v\in R(u,x)\) by Axiom (TW2). Similarly, we see that \(v\in R(u,x)\) in \(T_{2}\), only that here we use Axiom (TW2) twice. In \(X_{2}\) we have only one induced \(u,x\)-path \(uabx\). For these four vertices we get \(c,d\in R(u,x)\) by Axiom (TW1'). By Axiom (TW2) we get \(v\in R(u,x)\). We are left with \(XF_{2}^{n+1},XF_{3}^{n}\) and \(XF_{4}^{n}\), \(n\geq 1\). Here \(p_{1},y_{2}\in R(u,x)\) since \(up_{1}y_{2}x\) is an induced path. Now we use Axiom (TW1) \(n-1\) times to get \(p_{2},\ldots,p_{n}\in R(u,x)\). Finally, \(v\in R(u,x)\) by Axiom (TW2) for \(XF_{2}^{n+1}\) and by Axiom (TW1) for \(XF_{3}^{n}\) and \(XF_{4}^{n}\).
Theorem 9: _If \(R\) is a transit function on a non-empty finite set \(V\) satisfying the Axioms (b1'), (b2'), (J2), (J4), (J4'), (TW1') and (TWA), then \(T=R\) on \(G_{R}\)._
Proof: Let \(u\) and \(v\) be two distinct vertices of \(G_{R}\) and first assume that \(x\in R(u,v)\). We have to show that \(x\in T(u,v)\) on \(G_{R}\). Clearly \(x\in T(u,v)\) whenever \(x\in\{u,v\}\). So, assume that \(x\notin\{u,v\}\). If \(R(u,x)=\{u,x\}\) and \(R(x,v)=\{x,v\}\), then \(uv\notin E(G_{R})\) by the definition of \(G_{R}\). Therefore, \(uxv\) is a toll walk of \(G_{R}\) and \(x\in T(u,v)\) follows. Suppose next that \(R(x,v)\neq\{x,v\}\). We will construct an \(x,v\)-path \(Q\) in \(G_{R}\) without a neighbor of \(u\) (except possibly \(x\)). For this, let \(x=x_{0}\). By Axiom (TWA) there exists a neighbor \(x_{1}\) of \(x_{0}\) where \(x_{1}\in R(x_{0},v)\cap R(u,v)\), \(R(u,x_{1})\neq\{u,x_{1}\}\) and \(T(x_{1},v)\subset T(x,v)\). Since \(x_{1}\neq v\) and \(x_{1}\in R(u,v)\), we can continue with the same procedure to get \(x_{2}\in R(u,v)\cap R(x_{1},v)\), where \(R(x_{1},x_{2})=\{x_{1},x_{2}\}\), \(x_{2}\neq x_{1}\), \(R(u,x_{2})\neq\{u,x_{2}\}\), and \(R(x_{2},v)\subset R(x_{1},v)\). If \(x_{2}=v\), then we stop. Otherwise, we continue and get \(x_{3}\in R(u,v)\cap R(x_{2},v)\), where \(R(x_{2},x_{3})=\{x_{2},x_{3}\}\), \(x_{3}\neq x_{2}\), \(R(u,x_{3})\neq\{u,x_{3}\}\) and \(R(x_{3},v)\subset R(x_{2},v)\). By repeating this step we obtain a sequence of vertices \(x_{0},x_{1},\ldots,x_{q}\), \(q\geq 2\), such that
1. \(R(x_{i},x_{i+1})=\{x_{i},x_{i+1}\},i\in\{0,1,\ldots,q-1\},\)
2. \(R(u,x_{i})\neq\{u,x_{i}\},i\in[q],\)
3. \(R(x_{i+1},v)\subset R(x_{i},v),i\in\{0,1,\ldots,q-1\}.\)
Clearly, this sequence should stop by the last condition, because \(V\) is finite. Hence, we may assume that \(x_{q}=v\). Now, if \(R(u,x)=\{u,x\}\), then we have a toll \(u,v\)-walk \(uxx_{1}\ldots x_{q-1}v\) and \(x\in T(u,v)\). Otherwise, \(R(u,x)\neq\{u,x\}\) and we can symmetrically build a sequence \(u_{0},u_{1},\ldots,u_{r}\), where \(u_{0}=x\), \(u_{r}=u\) and \(u_{0}u_{1}\ldots u_{r}\) is an \(x,u\)-path in \(G_{R}\) that avoids \(N[v]\). Clearly, \(uu_{r-1}u_{r-2}\ldots\)\(u_{1}xx_{1}\ldots x_{q-1}v\) is a toll \(u,v\)-walk and \(x\in T(u,v)\).
Suppose now that \(x\in T(u,v)\) and \(x\notin\{u,v\}\). We have to show that \(x\in R(u,v)\). By Lemma 1, \(N[u]-x\) does not separate \(x\) and \(v\) and \(N[v]-x\) does not separate \(u\) and \(x\). Let \(W\) be a toll \(u,v\)-walk containing \(x\). Clearly \(W\) contains an induced \(u,v\)-path, say \(Q\). By Lemma 4 we have \(V(Q)\subseteq R(u,v)\). If \(x\) belongs to \(Q\), then \(x\in R(u,v)\). Therefore, we may assume that \(x\) does not belong to \(Q\). Moreover, we may assume that \(x\) does not belong to any induced \(u,v\)-path. The underlying graph \(G_{R}\) is \(AT\)-free by Theorem 4.1. Thus, \(Q\) contains a neighbor of \(x\), say \(x^{\prime}\). If \(R(u,x^{\prime})=\{u,x^{\prime}\}\) and \(R(x^{\prime},v)=\{x^{\prime},v\}\), then we have a contradiction with \(W\) being a toll \(u,v\)-walk containing \(x\). Without loss of generality, we may assume that \(R(x^{\prime},v)\neq\{x^{\prime},v\}\). If also \(R(u,x^{\prime})\neq\{u,x^{\prime}\}\), then \(x\in R(u,v)\) by the second claim of Lemma 4. So, let now \(R(u,x^{\prime})=\{u,x^{\prime}\}\). Since \(x\) and \(v\) are not separated by \(N[u]-\{x\}\) by Lemma 1, there exists an induced \(x,v\)-path \(S\) without a neighbor of \(u\) (except possibly \(x\)). Let \(S=s_{0}s_{1}\cdots s_{k}\), \(s_{0}=x\) and \(s_{k}=v\) and let \(s_{j}\) be the first vertex of \(S\) that also belongs to \(Q\). Notice that \(s_{j}\) can be equal to \(v\) but it is different from \(x^{\prime}\) and that \(j>0\). If \(j=1\), then \(x\in R(u,v)\) by Axiom (TW1) (which follows from Axiom (TW1')). If \(j=2\), then \(x\in R(u,v)\) by Axiom (TW1'). Hence, \(j>2\). We may choose \(S\) such that it minimally differs from \(Q\). This means that \(s_{0},\ldots,s_{j-2}\) may be adjacent only to \(x^{\prime}\) on \(Q\) before \(s_{j}\).
Suppose now that \(s_{j}\) is adjacent to \(x^{\prime}\). This means that \(s_{j}\neq v\) because \(v\) is not adjacent to \(x^{\prime}\). Let \(s_{i}\) be the last vertex of \(S\) adjacent to \(x^{\prime}\) (\(s_{0}=x\) is adjacent to \(x^{\prime}\)). If \(s_{i}=s_{j-1}\), then \(s_{j-1}\in R(u,v)\) by Axiom (TW1). Clearly, \(R(s_{j-1},u)\neq\{s_{j-1},u\}\) and \(R(s_{j-1},v)\neq\{s_{j-1},v\}\) and we can use Axiom (TW2) (which follows from Axiom (TW1')) to get \(s_{j-2}\in R(u,v)\). If we continue with the same step \(j-2\) times, then we get \(s_{\ell}\in R(u,v)\), respectively, for \(\ell\in\{j-3,j-4,\ldots,0\}\). So, \(s_{0}=x\in R(u,v)\) and we may assume that \(s_{j}\) is not adjacent to \(x^{\prime}\). Now, \(s_{j}\) can be equal to \(v\). Assume first that \(s_{j}\neq v\). Cycle \(x^{\prime}x\xrightarrow{S}s_{j}\xrightarrow{Q}x^{\prime}\) has at least six vertices and must contain some chords, since \(G\) is AT-free by Theorem 4.1. If \(x^{\prime}s_{j-1}\in E(G)\), then we get \(s_{0}=x\in R(u,v)\) by the same steps as before (when \(s_{j}\) was adjacent to \(x^{\prime}\)). If \(x^{\prime}s_{j-1}\notin E(G)\), then \(x^{\prime}s_{j-2}\in E(G)\) and \(d(x^{\prime},s_{j})=2\), otherwise we have an induced cycle of length at least six, which is not possible in AT-free graphs. Now, \(s_{j-2}\in R(u,v)\) by Axiom (TW1'). Next, we continue \(j-2\) times with Axiom (TW2) to get \(s_{\ell}\in R(u,v)\), respectively, for \(\ell\in\{j-3,j-4,\ldots,0\}\). Again \(s_{0}=x\in R(u,v)\) and we may assume that \(s_{j}\) equals \(v\). Again \(s_{j-1}x^{\prime}\in E(G)\) or \(s_{j-2}x^{\prime}\in E(G)\) because otherwise we have an induced cycle of length at least six, which is not possible. If \(s_{j-1}x^{\prime}\in E(G)\), then there exists an induced \(u,v\)-path in \(G\) that contains \(s_{j-1}\) and \(s_{j-1}\in R(u,v)\) by
Lemma 4. We continue as at the beginning of this paragraph, only that we replace \(s_{j}\) with \(s_{j-1}\) (and all the other natural changes) and we get \(x\in R(u,v)\). Finally, if \(s_{j-1}x^{\prime}\notin E(G)\), then \(s_{j-2}x^{\prime}\in E(G)\), again by Lemma 4. We continue \(j-2\) times with Axiom (TW2) to get \(s_{\ell}\in R(u,v)\), respectively, for \(\ell\in\{j-3,j-4,\ldots,0\}\). Again, \(s_{0}=x\in R(u,v)\), which completes the proof.
It is easy to see that for any graph \(G\), the toll walk transit function satisfies the Axioms (J2) and (TW1'). By Corollary 1, Theorems 4.1 and 4.2 and Proposition 4.1 we have the following characterization of the toll walk transit function of AT-free graphs.
Theorem 4.1: _A transit function \(R\) on a finite set \(V\) satisfies the Axioms (b1'), (b2'), (J2), (J4), (J4'), (TW1') and (TWA) if and only if \(G_{R}\) is an AT-free graph and \(R=T\) on \(G_{R}\)._
A four-cycle \(axyva\) together with an edge \(ua\) forms a \(P\)-graph, and a five-cycle \(axybva\) together with an edge \(ua\) forms a 5-pan graph. It is straightforward to check that the toll walk transit function \(T\) does not satisfy Axiom (J3) on the \(P\)-graph and the 5-pan graph. From the definitions of Axioms (J3), (J4) and (J4'), it is clear that Axiom (J3) implies both Axioms (J4) and (J4'). Therefore, we have the following corollary.
Corollary 2: _A transit function \(R\) on a finite set \(V\) satisfies Axioms (b1'), (b2'), (J2), (J3), (ba), (TW1') and (TWA) if and only if \(G_{R}\) is a (\(P\), \(5\)-pan, AT)-free graph and \(R=T\) on \(G_{R}\)._
## 5 Toll walk transit function of Ptolemaic and distance-hereditary graphs
Kay and Chartrand [16] introduced Ptolemaic graphs as graphs in which the distances obey the Ptolemy inequality. That is, for every four vertices \(u,v,w\) and \(x\) the inequality \(d(u,v)d(w,x)+d(u,x)d(v,w)\geq d(u,w)d(v,x)\) holds. It was proved by Howorka [13] that a graph is Ptolemaic if and only if it is both chordal and distance-hereditary (a graph \(G\) is distance-hereditary if every induced path in \(G\) is isometric). Therefore, Ptolemaic graphs are also defined as chordal graphs that are 3-fan-free in the language of forbidden subgraphs. Consider the following axiom for the characterization of the toll walk transit function of Ptolemaic graphs.
#### 5.0.1 Axiom (pt).
If there exist elements \(u,x,y,z,v\in V\) such that \(x,z\in R(u,y)\), \(y,z\in R(x,v)\) and \(R(x,y)=\{x,y\}\), then \(R(x,z)\neq\{x,z\}\) and \(R(y,z)\neq\{y,z\}\).
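Axioms of this form are first-order conditions over a finite set and can be checked mechanically on small examples. The following Python sketch is our own illustration (the name `pt_holds`, the dictionary-based encoding of \(R\), and the hand-computed toll intervals of the 3-fan are not taken from the paper); it quantifies over pairwise distinct elements, as in the proofs below.

```python
from itertools import permutations

def pt_holds(R, V):
    """Check Axiom (pt) for a transit function R on the finite set V.

    R(a, b) returns the transit set of the pair {a, b}; quantification is
    over pairwise distinct u, x, y, z, v, as in the proofs of this section."""
    for u, x, y, z, v in permutations(V, 5):
        premise = (x in R(u, y) and z in R(u, y) and
                   y in R(x, v) and z in R(x, v) and R(x, y) == {x, y})
        if premise and (R(x, z) == {x, z} or R(y, z) == {y, z}):
            return False, (u, x, y, z, v)      # witness of a violation
    return True, None

# usage: toll walk transit function of the 3-fan u-x-y-v with universal vertex z;
# only the non-adjacent pairs are listed (their values are our own hand computation),
# every other pair defaults to {a, b}
V3 = ["u", "x", "y", "v", "z"]
T3_listed = {("u", "y"): {"u", "x", "y", "z"},
             ("x", "v"): {"x", "y", "v", "z"},
             ("u", "v"): set(V3)}
def T3(a, b):
    return T3_listed.get((a, b)) or T3_listed.get((b, a)) or {a, b}

print(pt_holds(T3, V3))   # reports a violating tuple, as in the proof of Theorem 5.1
```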
Theorem 5.1: _The toll walk transit function \(T\) on a graph \(G\) satisfies Axioms (JC) and (pt) if and only if \(G\) is a Ptolemaic graph._
Proof: By Theorem 3, \(T\) satisfies Axiom (JC) if and only if \(G\) is chordal. If \(G\) contains an induced 3-fan on the path \(uxyv\) with the universal vertex \(z\), then \(x,z\in T(u,y)\), \(y,z\in T(x,v)\), \(T(x,y)=\{x,y\}\), \(T(x,z)=\{x,z\}\) and \(T(y,z)=\{y,z\}\). Hence, Axiom (pt) does not hold. That is, if \(T\) satisfies Axioms (JC) and (pt), then \(G\) is a Ptolemaic graph.
Conversely, \(G\) is chordal by Theorem 3 because \(T\) satisfies Axiom (JC). Suppose that \(T\) does not satisfy the Axiom (pt) on \(G\). There exist distinct vertices \(u,x,y,z,v\) such that \(x,z\in T(u,y)\) and \(y,z\in T(x,v)\), \(T(x,y)=\{x,y\}\) and \((T(x,z)=\{x,z\}\) or \(T(y,z)=\{y,z\})\). Without loss of generality, we may assume that \(T(x,z)=\{x,z\}\). Since \(x,z\in T(u,y)\) and \(y,z\in T(x,v)\) there is an induced \(u,x\)-path \(P\) without a neighbor of \(y\) other than \(x\), an induced \(y,v\)-path \(Q\) without a neighbor of \(x\) other than \(y\), an induced \(u,z\)-path \(R\) without a neighbor of \(y\) (except possibly \(z\)) and an induced \(z,v\)-path \(S\) without a neighbor of \(x\) other than \(z\).
Now, assume that \(z\) belongs to \(P\), which also means that \(z\) is not adjacent to \(y\). Since \(T(x,y)=\{x,y\}\), \(y\) does not belong to \(S\). Let \(a\) be the common vertex of \(P\) and \(S\) that is close to \(z\) as possible and \(b\) be the common vertex of \(Q\) and \(S\) that is close to \(y\) as possible. Note that \(a\) may be the same as \(z\), but \(b\) is distinct from \(y\). On a cycle \(C:a\stackrel{{ P}}{{\rightarrow}}zxy\stackrel{{ Q}}{{\rightarrow}}b\stackrel{{ S}}{{\rightarrow}}a\) is \(y\) eventually adjacent only to vertices from \(S\) between \(a\) and \(b\) (and not to \(a\)). In addition to that, the vertices of \(S\) are not adjacent to \(x\) nor to \(z\). Hence, \(y,x,z\) and the other neighbor of \(z\) on \(C\) are contained in an induced cycle of length at least four, a contradiction because \(G\) is chordal.
So, \(z\) is not on \(P\). We denote by \(x^{\prime}\) a neighbor of \(x\) on \(P\), by \(x^{\prime\prime}\) the other neighbor of \(x^{\prime}\) on \(P\) (if it exists) and by \(z^{\prime}\) the neighbor of \(z\) on \(R\). Let \(a\) be the vertex common to \(R\) and \(P\) closest to \(x\). Notice that \(x^{\prime}\) or \(z^{\prime}\) may be equal to \(a\), but \(z\neq a\neq x\) and \(b\neq y\). If \(zx^{\prime}\notin E(G)\), then \(zxx^{\prime}x^{\prime\prime}\) is part of an induced cycle of length at least four or \(x^{\prime}=a\). As \(G\) is Ptolemaic, \(G\) is also chordal and there are no induced cycles of length four or more in \(G\). So, \(x^{\prime}=a\). Now, if \(za\notin E(G)\), then \(xz^{\prime}\) must be an edge to avoid an induced cycle that contains \(axzz^{\prime}\). In all cases, we obtain a triangle with edge \(xz\): \(zxx^{\prime}z\) or \(zxaz\) or \(zxz^{\prime}z\). We denote this triangle by \(zxwz\).
In addition, let \(z^{\prime\prime}\) be the neighbor of \(z\) on \(S\) and \(y^{\prime}\) the neighbor of \(y\) on \(Q\). If \(zy\notin E(G)\), then \(z,x,y,y^{\prime}\) and maybe some other vertices of \(Q\) or \(S\) induce a cycle of length at least four, which is not possible. So, \(zy\in E(G)\). If \(zy^{\prime}\in E(G)\), then the vertices \(w,x,y,y^{\prime}\) and \(z\) induce a 3-fan, which is not possible in Ptolemaic graphs. Thus, \(zy^{\prime}\notin E(G)\). Similarly, if \(z^{\prime\prime}y\in E(G)\), then \(w,x,y,z^{\prime\prime}\) and \(z\) induce a 3-fan. So, \(z^{\prime\prime}y\notin E(G)\). Finally, the vertices \(z^{\prime\prime},z,y,y^{\prime}\) possibly together with some other vertices from \(S\) or \(Q\) form an induced cycle of length at least four, a final contradiction.
From Theorems 5 and 11 we have the following characterization of the toll walk function of Ptolemaic graphs.
Theorem 4.1: _A transit function \(R\) on a finite set \(V\) satisfies Axioms (b2), (J2), (JC), (pt), (TW1), (TW2) and (TWC) if and only if \(G_{R}\) is a Ptolemaic graph and \(R=T\) on \(G_{R}\)._
We continue with the following axioms that are characteristic of the toll walk transit function \(T\) on the distance-hereditary graphs.
**Axiom (dh).** If there exist elements \(u,x,y,v,z\in V\) such that \(x,y,z\in R(u,y)\cap R(x,v)\), \(R(u,v)\neq\{u,v\}\), \(R(x,y)=\{x,y\}\), \(x\neq y\), \(R(u,z)=\{u,z\}\), \(R(v,z)=\{v,z\}\), then \(R(x,z)\neq\{x,z\}\) or \(R(y,z)\neq\{y,z\}\).
**Axiom (dh1).** If there exist elements \(u,x,y,v\in V\) such that \(x\in R(u,y)\), \(y\in R(x,v)\), \(R(x,y)=\{x,y\}\), \(x\neq y\), \(R(u,x)\neq\{u,x\}\), \(R(y,v)\neq\{y,v\}\), then \(x\in R(u,v)\).
Theorem 4.2: _The toll walk transit function \(T\) on a graph \(G\) satisfies Axioms (dh) and (dh1) if and only if \(G\) is a distance-hereditary graph._
Proof: First, we prove that \(T\) satisfies Axiom (dh1) if and only if \(G\) is an (\(H\) hole \(D\))-free graph. It is clear from Figure 1 that \(T\) does not satisfy Axiom (dh1) on \(H\), hole and \(D\). Conversely, suppose that \(T\) does not satisfy Axiom (dh1) on \(G\). That is, \(x\in T(u,y)\), \(y\in T(x,v)\), \(T(x,y)=\{x,y\}\), \(x\neq y\), \(T(u,x)\neq\{u,x\}\), \(T(y,v)\neq\{y,v\}\), and \(x\notin T(u,v)\). Since \(x\in T(u,y)\) and \(T(u,x)\neq\{u,x\}\), there is an induced \(u,x\)-path, say \(P=x_{n}x_{n-1}\ldots x_{1}x_{0}\), where \(x_{0}=x\) and \(u=x_{n}\), without a neighbor of \(y\) with the exception of \(x\). Similarly, \(y\in T(x,v)\) and \(T(y,v)\neq\{y,v\}\) produce an induced \(y,v\)-path, say \(Q:y_{0}y_{1}\ldots y_{n-1}y_{n}\), where \(y_{0}=y\) and \(y_{n}=v\), without a neighbor of \(x\) with the exception of \(y\). Also, since \(x\notin T(u,v)\), without loss of generality, we may assume by Lemma 1 that the path \(Q\) contains a neighbor \(u^{\prime}\) of \(u\). We may choose \(u^{\prime}\) to be the neighbor of \(u\) on \(Q\) that is closest to \(y\). Then the sequence of vertices \(u\xrightarrow{P}xy\xrightarrow{Q}u^{\prime}u\) forms a cycle of length at least five. There may be chords from the vertices of \(P\) to the vertices of the \(y,u^{\prime}\)-subpath of \(Q\). But \(y\) is not adjacent to any vertex of \(P\) other than \(x\), and \(x\) is not adjacent to any vertex of \(Q\) other than \(y\). So, some or all vertices in the sequence \(u\xrightarrow{P}xy\xrightarrow{Q}u^{\prime}\) induce a house if \(x_{1}y_{1}\in E(G)\) and \(x_{2}y_{1}\in E(G)\) or \(x_{1}y_{2}\in E(G)\), induce a domino if \(x_{1}y_{1}\in E(G)\) and \(x_{2}y_{2}\in E(G)\), and otherwise induce a hole.
Now we have that \(G\) is (\(H\) hole \(D\))-free if and only if \(T\) satisfies Axiom (dh1). Therefore, we have to prove that \(G\) is 3 fan-free if and only if \(T\) satisfies Axiom (dh) according to Theorem 4.1. If \(G\) contains a 3-fan with vertices as shown in Figure 1, then the toll walk transit function does not satisfy Axiom (dh). Conversely, suppose that \(T\) does not satisfy Axiom (dh) on (house, hole, domino)-free graph \(G\). That is \(x,y,z\in T(u,y)\cap T(x,v)\), \(T(x,y)=\{x,y\}\), \(x\neq y\), \(T(u,z)=\{u,z\}\), \(T(z,v)=\{z,v\}\) and \(T(x,z)=\{x,z\}\) and \(T(y,z)=\{y,z\}\). Since \(x\in T(u,y)\), there exists an induced \(u,x\)-path, say \(P=x_{n}x_{n-1}\ldots x_{1}x_{0}\), where \(x_{0}=x\) and \(u=x_{n}\), which avoids the neighbors of \(y\) with the exception of \(x\) and since \(y\in T(x,v)\) there exists an induced \(y,v\)-path, say \(Q:y_{0}y_{1}\ldots y_{n-1}y_{n}\), where \(y_{0}=y\) and \(y_{n}=v\), which avoids the neighbors of \(x\) with the exception of \(y\). Since \(T(u,z)=\{u,z\}\) and \(T(z,v)=\{z,v\}\), \(z\) does not belong to the paths
\(P\) and \(Q\). Let \(R:uzv\) be the induced \(u,v\)-path containing \(z\). If \(T(u,x)=\{u,x\}\) and \(T(y,v)=\{y,v\}\), then the vertices \(u,x,y,v,z\) induce a \(3\)-fan. If \(T(u,x)\neq\{u,x\}\) or \(T(y,v)\neq\{y,v\}\), then, since \(G\) is (\(H\) hole \(D\))-free, the vertex \(z\) is adjacent to all vertices in the paths \(P\) and \(Q\). Then the vertices \(x,y,y_{1},y_{2},z\) or \(x,y,x_{1},x_{2},z\), respectively, induce a \(3\)-fan graph.
Lemma 5: _Let \(R\) be a transit function on a non-empty finite set \(V\) satisfying the Axioms (J2), (J4), (dh1) and (TW1'). If \(P_{n}\), \(n\geq 2\), is an induced \(u,v\)-path in \(G_{R}\), then \(V(P_{n})\subseteq R(u,v)\). Moreover, if \(z\) is adjacent to an inner vertex of \(P_{n}\) that is not adjacent to \(u\) or to \(v\) in \(G_{R}\), then \(z\in R(u,v)\)._
Proof: If \(n=2\), then \(P_{2}=uv\) and \(R(u,v)=\{u,v\}\) by the definition of \(G_{R}\). If \(n=3\), then \(P_{3}=uxv\) and \(x\in R(u,v)\) by Axiom (J2). Let now \(n=4\) and \(P_{4}=uxyv\). By Axiom (J2) we have \(x\in R(u,y)\) and \(y\in R(x,v)\). Now, Axiom (J4) implies that \(x,y\in R(u,v)\). If \(n=5\), \(P_{5}=uxx_{2}yv\) and by the previous step, \(x,x_{2}\in R(u,y)\) and \(x_{2},y\in R(x,v)\). Then \(x,y\in R(u,v)\) by Axiom (J4) and \(x_{2}\in R(u,v)\) by Axiom (TW1'). If \(n=6\) and \(P_{6}=uxx_{2}x_{3}yv\), then by case \(n=5\) we have \(\{x,x_{2},x_{3}\}\subseteq R(u,y)\) and \(\{x_{2},x_{3},y\}\subseteq R(x,v)\). By Axiom (J4) \(x,y\in R(u,v)\) and by Axiom (TW1'), \(x_{2},x_{3}\in R(u,v)\). For \(n=7\) and \(P_{7}=uxx_{2}x_{3}x_{4}yv\) we have \(x_{2}\in R(u,x_{3})\) and \(x_{3}\in R(x_{2},v)\) by the previous cases, \(R(u,x_{2})\neq\{u,x_{2}\}\), \(R(x_{2},x_{3})=\{x_{2},x_{3}\}\), \(R(x_{3},v)\neq\{x_{3},v\}\) and \(x_{2},x_{3}\in R(u,v)\) follow by Axiom (dh1). By the same argument we have \(x_{3},x_{4}\in R(u,v)\). By Axiom (J4) we have \(x,y\in R(u,v)\), since \(x\in R(u,y)\) and \(y\in R(x,v)\). For a longer path \(P_{n}=uxx_{2}\ldots x_{n-2}yv\), \(n>7\), we continue by induction. By the induction hypothesis we have \(\{u,x,x_{2},\ldots,x_{n-2},y\}\subseteq R(u,y)\) and \(\{x,x_{3},\ldots,x_{n-2},y,v\}\subseteq R(x,v)\). In particular, \(x_{i}\in R(u,x_{i+1})\) and \(x_{i+1}\in R(x_{i},v)\) for every \(i\in\{2,\ldots,n-2\}\). By Axiom (dh1) we get \(x_{i},x_{i+1}\in R(u,v)\) for every \(i\in\{2,\ldots,n-2\}\) and by Axiom (J4) we have \(x,y\in R(u,v)\).
For the second part, let \(z\) be a neighbor of \(x_{i}\), \(i\in\{2,\ldots,n-2\}\) that is not adjacent to \(u\) and \(v\). Clearly, in this case, \(n\geq 5\). By the first part of the proof, we have \(x_{i}\in R(u,v)\) and we have \(z\in R(u,v)\) by Axiom (TW2).
Proposition 7: _If \(T\) is a toll walk transit function on a distance-hereditary graph \(G\), then \(T\) satisfies Axioms (b2) and (TWC) on \(G\)._
Proof: If \(T\) does not satisfy Axiom (b2), then there exist \(u,v,x,y\) such that \(x\in T(u,v)\), \(y\in T(u,x)\) and \(y\notin T(u,v)\). Since \(x\in T(u,v)\), there exists an induced \(x,v\)-path, say \(P\), without a neighbor of \(u\) (except possibly \(x\)) and an induced \(x,u\)-path, say \(Q\), without a neighbor of \(v\) (except possibly \(x\)). Similarly, since \(y\in T(u,x)\), there exists an induced \(u,y\) path, say \(R\), without a neighbor of \(x\) (except possibly \(y\)) and an induced \(y,x\) path, say \(S\), without a neighbor of \(u\) (except possibly \(y\)). Since \(y\notin T(u,v)\), a neighbor of \(u\) separates \(y\) from \(v\) or a neighbor of \(v\) separates \(y\) from \(u\) by Lemma 1. But \(y\xrightarrow{S}x\xrightarrow{P}v\) is a \(y,v\)-path that does not contain a neighbor of \(u\). So, the only possibility is that a neighbor of \(v\) separates \(y\) from \(u\). Therefore \(R\) contains a neighbor of \(v\), say \(v^{\prime}\), which is closest to \(y\). If \(v_{1}\) lies on both \(R\) and \(S\), then \(S\) contains at least one additional vertex between \(v_{1}\) and \(x\). The vertices, \(u\xrightarrow{R}y\xrightarrow{S}x\xrightarrow{Q}u\) contain a cycle of
length at least five. There may be chords from the vertices of \(Q\) to both the paths, \(R\) and \(S\) and also from the vertices of \(R\) to the vertices of \(S\). Hence, some or all vertices in this sequence will induce a hole, house, domino, or fan graphs so that \(T\) satisfies Axiom (b2) on distance-hereditary graphs.
For Axiom (TWC) let \(x\in T(u,v)\). There exists an induced \(x,v\)-path \(P\) that avoids the neighborhood of \(u\) (except possibly \(x\)). For the neighbor \(v_{1}\) of \(x\) on \(P\) it follows that \(v_{1}\in T(x,v),v_{1}\neq x\) with \(T(x,v_{1})=\{x,v_{1}\}\) and \(T(u,v_{1})\neq\{u,v_{1}\}\). If \(v_{1}=v\), then clearly \(x\notin T(v,v_{1})=\{v\}\). Similarly, if \(T(v_{1},v)=\{v_{1},v\}\), then \(x\notin T(v_{1},v)\). Consider next \(T(v_{1},v)\neq\{v_{1},v\}\). We will show that \(x\notin T(v_{1},v)\) for a distance hereditary graph \(G\). To avoid a contradiction, assume that \(x\in T(v_{1},v)\). There exists an induced \(x,v\)-path \(Q\) that avoids the neighborhood of \(v_{1}\). The edge \(xv_{1}\) together with some vertices of \(P\) and \(Q\) will form a cycle of length at least five. Also, there may be chords from vertices in \(P\) to \(Q\) so that these vertices may induce a hole, house, domino or a 3-fan, a contradiction to Theorem 3.1. So \(x\notin T(v_{1},v)\) and Axiom (TWC) hold.
Using Lemma 5, we can modify Theorem 3.4, stated as the next theorem. For this, notice that Axiom (JC) is replaced by Axiom (J4) (when \(uxyv\) is a path) and by Axiom (dh1) otherwise, and Axioms (TW1) and (TW2) are replaced by the stronger Axiom (TW1').
Theorem 5.1: _If \(R\) is a transit function on a non-empty finite set \(V\) that satisfies Axioms (b2), (J2), (J4), (dh1), (TW1') and (TWC), then \(R=T\) on \(G_{R}\)._
Hence, we obtain a characterization of toll walk transit function on distance-hereditary graphs as follows. The proof follows directly by Theorems 3.1 and 3.1, Propositions 4 and 3.2 and since Axioms (J2) and (TW1') always hold for the toll walk transit function \(T\).
Theorem 5.2: _A transit function \(R\) on a finite set \(V\) satisfies Axioms (b2), (J2), (J4), (dh), (dh1), (TW1') and (TWC) if and only if \(G_{R}\) is a distance-hereditary graph and \(R=T\) on \(G_{R}\)._
## 6 Non-definability of the toll walk transit function
Here we show that it is not possible to give a characterization of the toll walk transit function \(T\) of a connected graph using a set of first-order axioms defined on \(R\) as we have done in the previous sections for AT-free graphs, Ptolemaic graphs, distance hereditary graphs, chordal graphs and interval graphs in [23]. In [22], Nebesky has proved that a first order axiomatic characterization of the induced path function of an arbitrary connected graph is impossible. The idea of proof of the impossibility of such a characterization is the following.
First, we construct two non-isomorphic graphs \(G_{d}\) and \(G^{\prime}_{d}\) and a first-order axiom which may not be satisfied by the toll walk transit function \(T\) of an arbitrary connected graph. The following axiom is defined for an arbitrary transit function \(R\) on a non-empty finite set \(V\) and is called the _scant property_ following Nebesky [22].
#### 6.0.1 Axiom (SP).
If \(R(x,y)\neq\{x,y\}\), then \(R(x,y)=V\) for any \(x,y\in V\).
In our case the toll walk transit function \(T\) will satisfy this first-order axiom on \(G_{d}\) but not on \(G_{d}^{\prime}\). Then we prove, by the famous \(EF\) game technique of first-order non-definability, that there exists a partial isomorphism between \(G_{d}\) and \(G^{\prime}_{d}\). First, we define certain concepts and terminology of first-order logic [19].
The tuple \(\textbf{X}=(X,\mathcal{S})\) is called a _structure_ when \(X\) is a nonempty set called the _universe_ and \(\mathcal{S}\) is a finite set of function symbols, relation symbols, and constant symbols called the _signature_. Here, we assume that the signature contains only relation symbols. The _quantifier rank_ of a formula \(\phi\) is its depth of quantifier nesting and is denoted by \(qr(\phi)\). Let **A** and **B** be two structures with the same signature. A map \(q\) is said to be a _partial isomorphism_ from **A** to **B** if and only if \(dom(q)\subset A\), \(rg(q)\subset B\), \(q\) is injective and for any \(l\)-ary relation \(R\) in the signature and \(a_{0}\),..., \(a_{l-1}\in dom(q)\), \(R^{\mathcal{A}}(a_{0},\ldots,a_{l-1})\) if and only if \(R^{\mathcal{B}}(q(a_{0}),\ldots,q(a_{l-1}))\).
Let \(r\) be a positive integer. The \(r\)_-move Ehrenfeucht-Fraisse Game_ on **A** and **B** is played between 2 players called the _Spoiler_ and the _Duplicator_, according to the following rules.
Each run of the game has \(r\) moves. In each move, Spoiler plays first and picks an element from the universe \(A\) of the structure **A** or from the universe \(B\) of the structure **B**; Duplicator then responds by picking an element from the universe of the other structure. Let \(a_{i}\in A\) and \(b_{i}\in B\) be the two elements picked by the Spoiler and Duplicator in their \(i\)th move, \(1\leq i\leq r\). Duplicator wins the run \((a_{1},b_{1}),\ldots,(a_{r},b_{r})\) if the mapping \(a_{i}\to b_{i}\), where \(1\leq i\leq r\) is a partial isomorphism from the structure **A** to **B**. Otherwise, Spoiler wins the run \((a_{1},b_{1}),\ldots,(a_{r},b_{r})\).
_Duplicator wins the \(r\)-move EF-game on **A** and **B**_ or _Duplicator has a winning strategy for the EF-game on **A** and **B**_ if Duplicator can win every run of the game, no matter how Spoiler plays.
The following theorems are our main tools in proving the inexpressibility results.
Theorem 3.1: _[_19_]_ _The following statements are equivalent for two structures **A** and **B** in a relational vocabulary._
1. \(A\) _and_ \(B\) _satisfy the same sentence_ \(\sigma\) _with_ \(qr(\sigma)\leq n\)_._
2. _The Duplicator has an_ \(n\)_-round winning strategy in the EF game on_ \(A\) _and_ \(B\)._
Theorem 3.2: _[_19_]_ _A property \(\mathrm{P}\) is expressible in first order logic if and only if there exists a number \(k\) such that for every two structures **X** and **Y**, if \(\textbf{X}\in\mathrm{P}\) and Duplicator has a \(k\)-round winning strategy on **X** and **Y** then \(\textbf{Y}\in\mathrm{P}\)._
By a _ternary structure_, we mean an ordered pair \((X,D)\) where \(X\) is a finite nonempty set and \(D\) is a ternary relation on \(X\). So \(D\) is a set of triples \((x,y,z)\) for some \(x,y,z\in X\). We simply write \(D(x,y,z)\) when \((x,y,z)\in D\). Let \(F:\)
\(X\times X\to 2^{X}\) be defined as \(F(x,y)=\{u\in X:D(x,u,y)\}\). So, for any ternary structure \((X,D)\), we can associate the function \(F\) corresponding to \(D\) and vice versa. If a ternary relation \(D\) on \(X\) satisfies the following three conditions for all \(u,v,x\in X\)
1. \(D(u,u,v)\);
2. \(D(u,x,v)\implies D(v,x,u)\);
3. \(D(u,x,u)\implies x=u\),
then the function \(F\) corresponding to \(D\) will be a transit function. Observe that every axiom used in Sections 2-5 has a respective representation in terms of a ternary relation.
By the _underlying graph_ of a ternary structure \((X,D)\) we mean the graph \(G\) with the properties that \(X\) is its vertex set and distinct vertices \(u\) and \(v\) of \(G\) are adjacent if and only if
\[\{x\in X:D(u,x,v)\}\cup\{x\in X:D(v,x,u)\}=\{u,v\}.\]
We call a ternary structure \((X,D)\) the _\(W\)-structure_ of a graph \(G\) if \(X\) is the vertex set of \(G\) and \(D\) is the ternary relation corresponding to the toll walk transit function \(T\) (that is, \((x,y,z)\in D\) if and only if \(y\) lies in some \(x,z\)-toll walk). Obviously, if \((X,D)\) is a \(W\)-structure, then it is the \(W\)-structure of the underlying graph of \((X,D)\). We say that \((X,D)\) is _scant_ if the function \(F\) corresponding to the ternary relation \(D\) satisfies Axiom (SP) and \(F\) is a transit function.
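For finite examples, the objects just defined can be computed directly. The sketch below is an illustrative Python encoding of ours (not from the paper), assuming \(X\) is a finite set and \(D\) a set of triples: it builds the function \(F\) associated with \(D\), tests the three conditions under which \(F\) is a transit function, constructs the underlying graph, and checks the scant property.

```python
from itertools import combinations

def F(D, x, y):
    """The function F corresponding to D: F(x, y) = {u : D(x, u, y)}."""
    return {u for (a, u, b) in D if a == x and b == y}

def is_transit(D, X):
    """The three conditions under which F is a transit function."""
    c1 = all((u, u, v) in D for u in X for v in X)    # D(u, u, v)
    c2 = all((v, x, u) in D for (u, x, v) in D)       # D(u, x, v) => D(v, x, u)
    c3 = all(x == a for (a, x, b) in D if b == a)     # D(u, x, u) => x = u
    return c1 and c2 and c3

def underlying_graph(D, X):
    """Edges of the underlying graph: u ~ v iff F(u, v) | F(v, u) == {u, v}."""
    return {frozenset((u, v)) for u, v in combinations(X, 2)
            if F(D, u, v) | F(D, v, u) == {u, v}}

def is_scant(D, X):
    """(X, D) is scant: F is a transit function satisfying Axiom (SP)."""
    return is_transit(D, X) and all(
        F(D, a, b) in ({a, b}, set(X)) for a in X for b in X if a != b)
```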
We present two graphs \(G_{d}\) and \(G^{\prime}_{d}\) such that the \(W\)-structure of one of them is scant and the other is not. Moreover, the proof will settle, once we prove that Duplicator wins the EF game on \(G_{d}\) and \(G^{\prime}_{d}\).
For \(d\geq 2\) let \(G_{d}\) be a graph with vertices and edges (indices are via modulo \(4d\)) as follows:
\[V(G_{d})=\{u_{1},u_{2},\ldots,u_{4d},v_{1},v_{2},\ldots,v_{4d},x\}\mbox{ and}\]
\[E(G_{d})=\{u_{i}u_{i+1},v_{i}v_{i+1},u_{i}v_{i},v_{1}x,v_{2d+1}x:i\in[4d]\}.\]
For \(d\geq 2\) let \(G^{\prime}_{d}\) be a graph with vertices and edges as follows:
\[V(G^{\prime}_{d})=\{u^{\prime}_{1},u^{\prime}_{2},\ldots,u^{\prime}_{4d},v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{4d},x^{\prime}\}\mbox{ and}\]
\[E(G^{\prime}_{d})=\{u^{\prime}_{1}u^{\prime}_{2d},u^{\prime}_{i}u^{\prime}_{i+ 1},u^{\prime}_{2d+1}u^{\prime}_{4d},u^{\prime}_{2d+i}u^{\prime}_{2d+i+1},v^{ \prime}_{1}v^{\prime}_{2d},v^{\prime}_{i}v^{\prime}_{i+1},v^{\prime}_{2d+1}v^{ \prime}_{4d},\]
\[v^{\prime}_{2d+i}v^{\prime}_{2d+i+1},u^{\prime}_{j}v^{\prime}_{j},v^{\prime}_ {1}x^{\prime},v^{\prime}_{2d+1}x^{\prime}:i\in[2d-1],j\in[4d]\}.\]
Graphs \(G_{d}\) and \(G^{\prime}_{d}\) are shown in Figures 3 and 4, respectively.
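Both constructions can be reproduced directly from the displayed vertex and edge sets. The Python sketch below is our own illustration: it builds \(G_{d}\) and \(G^{\prime}_{d}\) and evaluates toll intervals through the separation characterization that the text refers to as Lemma 1 (for \(x\notin\{u,v\}\), \(x\in T(u,v)\) iff \(N[u]\setminus\{x\}\) does not separate \(x\) from \(v\) and \(N[v]\setminus\{x\}\) does not separate \(x\) from \(u\)); this form of the lemma is our assumption based on how it is used above. For \(d=2\) the check reproduces the witness \(T(v^{\prime}_{2},x^{\prime})=\{v^{\prime}_{2},v^{\prime}_{1},x^{\prime}\}\) used in the proof of the lemma below.

```python
def G_d(d):
    """Vertex and edge sets of G_d (indices modulo 4d, as in the definition)."""
    n = 4 * d
    V = [f"u{i}" for i in range(1, n + 1)] + [f"v{i}" for i in range(1, n + 1)] + ["x"]
    E = {frozenset(("v1", "x")), frozenset((f"v{2 * d + 1}", "x"))}
    for i in range(1, n + 1):
        j = i % n + 1                                  # successor of i modulo 4d
        E |= {frozenset((f"u{i}", f"u{j}")), frozenset((f"v{i}", f"v{j}")),
              frozenset((f"u{i}", f"v{i}"))}
    return V, E

def G_d_prime(d):
    """Vertex and edge sets of G'_d, following the displayed edge list."""
    n = 4 * d
    V = [f"u{i}" for i in range(1, n + 1)] + [f"v{i}" for i in range(1, n + 1)] + ["x"]
    E = {frozenset(("v1", "x")), frozenset((f"v{2 * d + 1}", "x"))}
    for w in ("u", "v"):
        for i in range(1, 2 * d):                      # i in [2d - 1]
            E |= {frozenset((f"{w}{i}", f"{w}{i + 1}")),
                  frozenset((f"{w}{2 * d + i}", f"{w}{2 * d + i + 1}"))}
        E |= {frozenset((f"{w}1", f"{w}{2 * d}")),
              frozenset((f"{w}{2 * d + 1}", f"{w}{4 * d}"))}
    for j in range(1, n + 1):                          # the rungs u'_j v'_j
        E.add(frozenset((f"u{j}", f"v{j}")))
    return V, E

def reaches(V, E, removed, s, t):
    """Is t reachable from s after deleting the vertices in `removed`?"""
    if s in removed or t in removed:
        return False
    seen, stack = {s}, [s]
    while stack:
        a = stack.pop()
        if a == t:
            return True
        for b in V:
            if b not in seen and b not in removed and frozenset((a, b)) in E:
                seen.add(b)
                stack.append(b)
    return False

def toll_interval(V, E, u, v):
    """T(u, v) via the separation characterization (assumed form of Lemma 1)."""
    def closed_nbhd(a):
        return {a} | {b for b in V if frozenset((a, b)) in E}
    T = {u, v}
    for x in V:
        if (x not in (u, v)
                and reaches(V, E, closed_nbhd(u) - {x}, x, v)
                and reaches(V, E, closed_nbhd(v) - {x}, x, u)):
            T.add(x)
    return T

Vp, Ep = G_d_prime(2)
print(sorted(toll_interval(Vp, Ep, "v2", "x")))   # -> ['v1', 'v2', 'x'] (not scant)
Vg, Eg = G_d(2)
print(all(toll_interval(Vg, Eg, a, b) == set(Vg)  # Lemma 6 predicts True for G_2
          for a in Vg for b in Vg if a < b and frozenset((a, b)) not in Eg))
```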
**Lemma 6**: _The \(W\)-structure of \(G_{d}\) is scant and the \(W\)-structure of \(G^{\prime}_{d}\) is not scant for every \(d\geq 2\)._
Proof: It is easy to observe that the \(W\)-structure of \(G^{\prime}_{d}\) is not scant, since \(T(v^{\prime}_{2},x^{\prime})=\{v^{\prime}_{2},v^{\prime}_{1},x^{\prime}\}\). For \(G_{d}\) let \(z,y\in V(G_{d})=U\cup V\cup X\), where \(U=\{u_{1},u_{2},\ldots,u_{4d}\}\), \(V=\{v_{1},v_{2},\ldots,v_{4d}\}\), \(X=\{x\}\) and \(d(z,y)\geq 2\). We have to show that \(T(z,y)=V(G_{d})\).
**Case 1.**\(z,y\in U\). Let \(z=u_{i}\) and \(y=u_{j}\). Both \(z,y\)-paths on \(U\) are toll walks and \(U\subseteq T(z,y)\). If we start with edge \(u_{i}v_{i}\), continue on both \(v_{i},v_{j}\)-paths on \(V\) and end with \(u_{j}v_{j}\), we get two toll \(z,y\)-walks that contain \(V\). For \(x\) notice that at least one of the \(z,u_{1}\)-path or the \(z,u_{2d+1}\)-path on \(U\) contains no neighbor of \(y\). We may assume that the \(z,u_{1}\)-path \(P\) in \(U\) is such. Denote by \(Q\) the \(v_{2d+1},v_{j}\)-path on \(V\). Now, \(z\stackrel{{ P}}{{\longrightarrow}}u_{1}v_{1}xv_{2d+1}\stackrel{{ Q}}{{\longrightarrow}}v_{j}y\) is a toll walk and \(T(z,y)=V(G_{d})\).
**Case 2.**\(z,y\in V\). Let \(z=v_{i}\) and \(y=v_{j}\). By the same reasoning as in Case 1 we have \(U,V\subseteq T(z,y)\). Again we may assume by symmetry that the \(z,v_{1}\)-path \(P\) on \(V\) contains no neighbor of \(y\). If \(z\notin\{v_{2d},v_{2d+2}\}\), then there always exists a \(v_{2d+1},y\)-path \(Q\) on \(V\) without a neighbor of \(z\). Path \(z\stackrel{{ P}}{{\longrightarrow}}v_{1}xv_{2d+1}\stackrel{{ Q}}{{\longrightarrow}}y\) is a toll walk. Otherwise, if \(z\in\{v_{2d},v_{2d+2}\}\), say \(z=v_{2d}\), then \(zv_{2d+1}xv_{1}\stackrel{{ Q}}{{\longrightarrow}}y\) is a
toll walk if \(y\neq v_{2d+2}\). So, let \(z=v_{2d}\) and \(y=v_{2d+2}\). Now, \(z\xrightarrow{P}v_{1}xv_{1}\xrightarrow{Q}y\) is a toll \(z,y\)-walk and we have \(T(z,y)=V(G_{d})\).
**Case 3.**\(z=x\) and \(y\in V\). Let \(y=v_{j}\) where \(j\notin\{1,2d+1\}\). Without loss of generality, let \(2\leq j\leq 2d\). Now consider the following \(x,v_{j}\)-walks:
* \(xv_{1}v_{2}\cdots v_{j}\),
* \(xv_{2d+1}v_{2d}\cdots v_{j}\),
* \(xv_{1}u_{1}u_{2}\cdots u_{j}v_{j}\) or \(xv_{2d+1}u_{2d+1}u_{2d}\cdots u_{j}v_{j}\),
* \(xv_{1}u_{1}u_{4d}u_{4d-1}\cdots u_{j}v_{j}\) or \(xv_{2d+1},u_{2d+1},u_{2d+2},\cdots,u_{4d},u_{1},u_{2}\cdots u_{j},v_{j}\),
* \(x,v_{1},v_{4d},v_{4d-1},\cdots,v_{2d+2},u_{2d+2},u_{2d+1},u_{2d},\cdots,u_{j},v_{j}\) or \(x,v_{2d+1},v_{2d+2},\cdots,v_{4d},u_{4d},u_{1},u_{2}\cdots u_{j},v_{j}\)
Notice that in the last three items only one of the mentioned walks is a toll walk when \(y\in\{v_{2},v_{2d}\}\). However, every vertex in \(V(G_{d})\) belongs to at least one toll \(z,y\)-walk, and \(T(x,y)=V(G_{d})\) follows.
**Case 4.**\(z=x\) and \(y\in U\). Since \(u_{j}v_{j}\) is an edge, this case can be treated similarly as Case 3.
**Case 5.**\(z\in U\) and \(y\in V\). First, let \(d(z,y)=2\) and we prove \(T(u_{1},v_{2})=V(G_{d})\). The following \(u_{1}v_{2}\)-toll walks contains every vertex of \(G_{d}\) at least once:
* \(u_{1}u_{2}v_{2}\);
* \(u_{1}v_{1}v_{2}\);
* \(u_{1}u_{4d}u_{4d-1}\cdots u_{3}v_{3}v_{2}\);
* \(u_{1}u_{4d}v_{4d}v_{4d-1}\cdots v_{2d+1}xv_{2d+1}v_{2d}v_{2d-1}\cdots v_{3}v_ {2}\).
Similarly, and usually even more easily, we obtain toll walks from \(z\) to \(y\) that cover all vertices of \(G_{d}\) for all the other choices of \(z\in U\) and \(y\in V\), also when \(d(z,y)>2\).
Lemma 7: _Let \(n\geq 1\) and \(d>2^{n+1}\). If \((X_{1},D_{1})\) and \((X_{2},D_{2})\) are scant ternary structures such that the underlying graph of \((X_{1},D_{1})\) is \(G_{d}\) and the underlying graph of \((X_{2},D_{2})\) is \(G_{d}^{\prime}\), then \((X_{1},D_{1})\) and \((X_{2},D_{2})\) satisfy the same sentence \(\psi\) with \(qr(\psi)\leq n\)._
Proof: Let \(X_{1}=\{u_{1},u_{2},\ldots,u_{4d},v_{1},v_{2},\ldots,v_{4d},x\}\) and let \(X_{2}=\{u^{\prime}_{1},u^{\prime}_{2},\ldots,\)\(u^{\prime}_{4d},v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{4d},x^{ \prime}\}\). Let \(U=\{u_{1},u_{2},\ldots,u_{4d}\}\), \(V=\{v_{1},v_{2},\ldots,v_{4d}\}\) and \(X=\{x\}\). Also, let \(U^{\prime}=\{u^{\prime}_{1},u^{\prime}_{2},\ldots,u^{\prime}_{4d}\}\), \(V^{\prime}=\{v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{4d}\}\) and \(X^{\prime}=\{x^{\prime}\}\). Clearly, \(X_{1}=U\cup V\cup X\) and \(X_{2}=U^{\prime}\cup V^{\prime}\cup X^{\prime}\). Let \(d^{*}\) and \(d^{\prime}\) denote the distance function of \(G_{d}\) and \(G_{d}^{\prime}\) respectively.
We will show that the Duplicator wins the \(n\)-move EF-game on \(G_{d}\) and \(G_{d}^{\prime}\) using induction on \(n\). In the \(i^{th}\) move of the \(n\)-move game on \(G_{d}\) and \(G_{d}^{\prime}\), we use \(a_{i}\) and \(b_{i}\), respectively, to denote points chosen from \(G_{d}\) and \(G_{d}^{\prime}\). Clearly, \(a_{i}\) will be an element in \(X_{1}\) and \(b_{i}\) an element in \(X_{2}\). Note that, during the game, the elements of \(U\) (respectively, \(V\) and \(X\)) will be mapped to elements of \(U^{\prime}\) (respectively, \(V^{\prime}\) and \(X^{\prime}\)).
Let \(H_{1}\) be the subgraph of \(G_{d}\) induced by \(U\) and \(H_{1}^{\prime}\) the subgraph of \(G_{d}^{\prime}\) induced by \(U^{\prime}\). Since \((X_{1},D_{1})\) and \((X_{2},D_{2})\) are scant ternary structures, Duplicator must preserve the edges in \(G_{d}\) and \(G_{d}^{\prime}\) to win the game.
We claim that for \(1\leq j,l\leq i\leq n\), Duplicator can play in \(G_{d}\) and \(G_{d}^{\prime}\), in a way that ensures the following conditions after each round.
\[\text{(1) If }d^{*}(a_{j},a_{l})\leq 2^{n-i},\text{ then }d^{\prime}(b_{j},b_{l})=d^{*}(a_{j},a_{l}).\] \[\text{(2) If }d^{*}(a_{j},a_{l})>2^{n-i},\text{ then }d^{\prime}(b_{j},b_{l})>2^{n-i}.\]
Obviously, to win the game, the following correspondence must be preserved by Duplicator:
\[u_{1}\mapsto u_{1}^{\prime},v_{1}\mapsto v_{1}^{\prime},x\mapsto x^{\prime},u_ {2d+1}\mapsto u_{2d+1}^{\prime},v_{2d+1}\mapsto v_{2d+1}^{\prime}.\]
For \(i=1\), (1) and (2) hold trivially. Suppose that they hold after \(i\) moves and that the Spoiler makes his \((i+1)^{th}\) move. Let the Spoiler pick \(a_{i+1}\in X_{1}\) (the case of \(b_{i+1}\in X_{2}\) is symmetric). If \(a_{i+1}=a_{j}\) for some \(j\leq i\), then \(b_{i+1}=b_{j}\) and conditions (1) and (2) are preserved. Otherwise, find two previously chosen vertices \(a_{j}\) and \(a_{\ell}\) closest to \(a_{i+1}\) so that there are no other previously chosen vertices on the \(a_{j},a_{\ell}\)-path of \(G_{d}\) that passes through \(a_{i+1}\).
**Case 1.**\(a_{j},a_{\ell},a_{i+1}\in U\).
First, we consider the case where \(d^{*}(a_{j},a_{\ell})=d_{H_{1}}(a_{j},a_{\ell})\). (This was proved in Case 1 considered in Lemma 2 in [15], so we revisit the proof here.) If \(d^{*}(a_{j},a_{\ell})\)\(\leq 2^{n-i}\), then by the induction assumption there will be vertices \(b_{j}\) and \(b_{\ell}\) in \(G_{d}^{\prime}\) with \(d^{\prime}(b_{j},b_{\ell})\leq 2^{n-i}\). The Duplicator can select \(b_{i+1}\) so that \(d^{*}(a_{j},a_{i+1})=d^{\prime}(b_{j},b_{i+1})\) and \(d^{*}(a_{i+1},a_{\ell})=d^{\prime}(b_{i+1},b_{\ell})\). Clearly, the conditions (1) and (2) will be satisfied. On the other hand, if \(d^{*}(a_{j},a_{\ell})>2^{n-i}\), then by the induction assumption \(d^{\prime}(b_{j},b_{\ell})>2^{n-i}\). There are two cases. (i) If \(d^{*}(a_{j},a_{i+1})>2^{n-(i+1)}\) and \(d^{*}(a_{i+1},a_{\ell})>2^{n-(i+1)}\) and fewer than \(n\)-rounds of the game have been played, then there exists a vertex in \(G_{d}^{\prime}\) at a distance larger than \(2^{n-(i+1)}\) from all previously played vertices. (ii) If \(d^{*}(a_{j},a_{i+1})\leq 2^{n-(i+1)}\) or \(d^{*}(a_{i+1},a_{\ell})\leq 2^{n-(i+1)}\) and suppose that \(d^{*}(a_{j},a_{i+1})\leq 2^{n-(i+1)}\), then \(d^{*}(a_{i+1},a_{\ell})>2^{n-(i+1)}\). So, the Duplicator can select \(b_{i+1}\) with \(d^{\prime}(b_{j},b_{i+1})=d^{*}(a_{j},a_{i+1})\) and \(d^{\prime}(b_{i+1},b_{\ell})>2^{n-(i+1)}\).
Now, suppose that \(d^{*}(a_{j},a_{\ell})\neq d_{H_{1}}(a_{j},a_{\ell})\). This case occurs when \(a_{j},a_{\ell}\)-shortest path contains \(u_{1},u_{2d+1},v_{1},v_{2d+1}\) and \(x\). We may assume that
\[min\{d^{*}(a_{j},a_{i+1}),d^{*}(a_{\ell},a_{i+1})\}=d^{*}(a_{j},a_{i+1}).\]
Now, choose \(b_{i+1}\) so that \(d^{*}(a_{j},a_{i+1})=d^{\prime}(b_{j},b_{i+1})\).
**Case 2.**\(a_{j},a_{\ell},a_{i+1}\in V\).
Let \(a_{j}=v_{r}\), \(a_{\ell}=v_{s}\), \(a_{i+1}=v_{t}\) and find the elements \(u_{r}\), \(u_{s}\) and \(u_{t}\) in \(U\) and use case 1 to find the response of Duplicator when Spoiler chooses \(u_{t}\). If \(u_{t}\mapsto u_{z}^{\prime}\), then choose \(b_{i+1}=v_{z}^{\prime}\).
Similarly, for the other cases (when \(a_{j}\) belongs to \(U\) or \(V\), \(a_{\ell}\) belongs to \(V\) or \(U\) and \(a_{i+1}\) belongs to \(V\) or \(U\)) we can make all the vertices lying in \(U\) as in case 2 and it is possible to find a response from the Duplicator. Evidently, in
all the cases, the conditions (1) and (2) hold. Therefore, after \(n\) rounds of the game, the Duplicator can preserve the partial isomorphism. Thus, Duplicator wins the \(n\)-move EF-game on \(G_{d}\) and \(G_{d}^{\prime}\). Hence, by Theorem 4.1, we obtain the result.
From Lemma 6 and Lemma 7, we can conclude the following result.
Theorem 6.1: _There exists no sentence \(\sigma\) of the first-order logic of vocabulary \(\{D\}\) such that a connected ternary structure is a \(W\)-structure if and only if it satisfies \(\sigma\)._
For \(n\geq 1\), \(d\geq 2^{n+1}\), let us consider the cycles \(C_{2d}\) and \(C_{2d+1}\). It is evident that the \(W\)-structure of both \(C_{2d}\) and \(C_{2d+1}\) is scant. Furthermore, Duplicator can maintain the conditions (1) and (2) in \(C_{2d}\) and \(C_{2d+1}\) and this will ensure Duplicator winning an \(n\) move \(EF\) game in \(C_{2d}\) and \(C_{2d+1}\). Since \(C_{2d}\) is bipartite and \(C_{2d+1}\) is not, by Theorem 6.1 we arrive at the following theorem.
Theorem 6.2: _Let \((X,D)\) be a W-structure. Then the bipartite graphs cannot be defined by a first-order formula \(\phi\) over \((X,D)\)._
## 7 Concluding Remarks
First, we present several examples that show the independence of the axioms used in this contribution. In all the examples we have \(R(a,a)=\{a\}\) for every \(a\in V\).
Example 1: There exists a transit function that satisfies Axioms (b2'), (J2), (J4), (J4'), (TW1') and (TWA), but not Axioms (b1') and (b1).
Let \(V=\{u,v,z,x,y\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=R(z,v)=V\), \(R(x,v)=\{x,y,v\}\), \(R(u,z)=\{u,x,z\}\), \(R(u,y)=\{u,x,y\}\), \(R(z,y)=\{z,x,y\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b2'), (J2), (J4), (J4'), (TW1') and (TWA). In addition, \(z\in R(u,v)\), \(R(u,z)\neq\{u,z\}\) and \(u\in R(z,v)\), so \(R\) does not satisfy Axiom (b1'). Therefore, \(R\) does not satisfy Axiom (b1) either.
Example 2: There exists a transit function that satisfies Axioms (b1'), (J2), (J4), (J4'), (TW1') and (TWA), but not Axioms (b2') and (b2).
Let \(V=\{u,v,w,x,y,z\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,y,x,v\},R(u,y)=\{u,x,y\},R(u,w)=\{u,y,w\},R(y,v)=\{y,z,x,v\}\), \(R(u,z)=\{u,x,z\}\), \(R(w,v)=\{w,z,v\}\), and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (J2), (J4), (J4'), (TW1') and (TWA). On the other hand \(y\in R(u,v)\), \(R(u,y)\neq\{u,y\}\), \(z\in R(y,v)\) and \(z\notin R(u,v)\), so \(R\) does not satisfy Axiom (b2') hence \(R\) does not satisfy Axiom (b2).
Example 3: There exists a transit function that satisfies Axioms (b1'), (b2'), (J2), (J4'), (TW1') and (TWA), but not Axioms (J4) and (JC).
Let \(V=\{u,v,x,y,z\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,z,v\}\), \(R(u,y)=\{u,x,y\}\), \(R(x,v)=\{x,y,v\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (b2'), (J2), (J4'), (TW1') and (TWA). In addition \(x\in R(u,y)\), \(y\in R(x,v)\), \(R(u,v)\neq\{u,v\}\) and \(x\notin R(u,v)\), so \(R\) does not satisfy Axioms (J4) and (JC).
Example 4: There exists a transit function that satisfies Axioms (b1'), (b2'), (J2), (J4), (TW1') and (TWA), but not Axiom (J4').
Let \(V=\{u,v,x,y,z_{1},z_{2},z_{3}\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,z_{1},z_{2},z_{3},v\}\), \(R(u,y)=\{u,x,z_{1},z_{2},y\}\), \(R(x,v)=\{x,z_{2},z_{3},y,v\}\), \(R(u,x)=\{u,z_{1},x\}\), \(R(x,y)=\{x,z_{2},y\}\), \(R(y,v)=\{y,z_{3},v\}\), \(R(z_{1},y)=\{z_{1},z_{2},y\}\), \(R(z_{3},x)=\{z_{3},z_{2},x\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (b2'), (J2), (J4), (TW1'), and (TWA). But \(x\in R(u,y)\), \(y\in R(x,v)\), \(R(u,v)\neq\{u,v\}\), \(R(u,x)\neq\{u,x\}\), \(R(x,y)\neq\{x,y\}\), \(R(y,v)\neq\{y,v\}\), and \(x\notin R(u,v)\), so \(R\) does not satisfy Axiom (J4').
Example 5: There exists a transit function that satisfies Axioms (b1'), (b2'), (J2), (J4), (J4'), and (TWA), but not Axiom (TW1').
Let \(V=\{u,v,w,x,y,z\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,y,x,v\}\), \(R(u,y)=\{u,x,y\}\), \(R(u,w)=\{u,x,w\}\), \(R(x,v)=\{x,y,v\}\), \(R(u,z)=\{u,x,z\}\), \(R(z,v)=\{z,y,v\}\), \(R(w,v)=\{w,y,v\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (b2'), (J2), (J4), (J4') and (TWA).
In addition, \(x,y\in R(u,v)\), \(x\neq u\), \(y\neq v\), \(R(x,v)\neq\{x,v\}\), \(R(u,y)\neq\{u,y\}\), \(R(x,z)=\{x,z\}\), \(R(z,w)=\{z,w\}\), \(R(w,y)=\{w,y\}\) and \(R(u,w)\neq\{u,w\}\), but \(z\notin R(u,v)\), so \(R\) does not satisfy Axiom (TW1').
Example 6: There exists a transit function that satisfies Axioms (b1'), (b2'), (J2), (J4), (J4'), and (TW1'), but not Axioms (TWA) and (TWC).
Let \(V=\{u,v,x,y\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=V\), \(R(x,v)=\{x,y,v\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (b2'), (J2), (J4), (J4') and (TW1'). In addition, \(x\in R(u,v)\), but there does not exist \(x_{1}\in R(x,v)\cap R(u,v)\) where \(x_{1}\neq x\), \(R(x,x_{1})=\{x,x_{1}\}\), \(R(u,x_{1})\neq\{u,x_{1}\}\) and \(R(x_{1},v)\subset R(x,v)\). Therefore, \(R\) satisfies neither Axiom (TWA) nor Axiom (TWC).
Example 7: There exists a transit function that satisfies Axioms (b1'), (b2'), (J4), (J4'), (TWA) and (TW1'), but not Axioms (J2) and (tr).
Let \(V=\{u,v,x,y\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,x,v\}\) and \(R(a,b)=\{a,b\}\) for all the other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b1'), (b2'), (J4), (J4'), (TWA) and (TW1'). In addition, \(R(u,y)=\{u,y\}\), \(R(y,v)=\{y,v\}\)
and \(R(u,v)\neq\{u,v\}\), but \(y\notin R(u,v)\). Therefore, \(R\) does not satisfy Axioms (J2) and (tr).
Example 8: There exists a transit function that satisfies Axioms (b2), (J2), (J4), (dh), (TW1), (TW2) and (TWC), but not Axioms (dh1) and (JC).
Let \(V=\{u,v,w,x,y,z\}\) and define a transit function \(R\) on \(V\) as follows: \(R(u,v)=\{u,v\}\), \(R(u,y)=\{u,z,x,y\}\), \(R(u,x)=\{u,z,x\}\), \(R(u,w)=\{u,z,x,y,w\}\), \(R(z,y)=\{z,x,y\}\), \(R(z,w)=\{z,x,y,w\}\), \(R(z,v)=\{z,x,y,w,v\}\), \(R(x,w)=\{x,y,w\}\), \(R(x,v)=\{x,y,w,v\}\), \(R(y,v)=\{y,w,v\}\) and \(R(a,b)=\{a,b\}\) for all other pairs of different elements \(a,b\in V\). It is straightforward but tedious to see that \(R\) satisfies Axioms (b2), (J2), (J4), (dh), (TW1), (TW2), and (TWC). In addition, \(x\in R(u,y)\), \(y\in R(x,v)\), \(R(u,x)\neq\{u,x\}\), \(R(y,v)\neq\{y,v\}\), \(R(x,y)=\{x,y\}\), and \(x\notin R(u,v)\), so \(R\) does not satisfy Axioms (dh1) and (JC).
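The violation claimed in this example can be confirmed mechanically. The minimal Python check below is our own illustration: it encodes the transit function above (unlisted pairs default to \(\{a,b\}\)) and searches for a tuple of distinct elements violating Axiom (dh1) as stated in Section 5.

```python
from itertools import permutations

V = ["u", "v", "w", "x", "y", "z"]
listed = {                       # the listed pairs of Example 8
    ("u", "y"): {"u", "z", "x", "y"},      ("u", "x"): {"u", "z", "x"},
    ("u", "w"): {"u", "z", "x", "y", "w"}, ("z", "y"): {"z", "x", "y"},
    ("z", "w"): {"z", "x", "y", "w"},      ("z", "v"): {"z", "x", "y", "w", "v"},
    ("x", "w"): {"x", "y", "w"},           ("x", "v"): {"x", "y", "w", "v"},
    ("y", "v"): {"y", "w", "v"},
}
def R(a, b):
    return listed.get((a, b)) or listed.get((b, a)) or {a, b}

def dh1_violation(R, V):
    """Return a tuple of distinct elements (u, x, y, v) violating Axiom (dh1), or None."""
    for u, x, y, v in permutations(V, 4):
        if (x in R(u, y) and y in R(x, v) and R(x, y) == {x, y}
                and R(u, x) != {u, x} and R(y, v) != {y, v}
                and x not in R(u, v)):
            return (u, x, y, v)
    return None

print(dh1_violation(R, V))   # a violating tuple, e.g. ('u', 'x', 'y', 'v')
```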
Example 9: There exists a transit function that satisfies Axioms (b2), (J2), (J4), (JC), (dh1) (TW1), (TW2) and (TWC), but not Axioms (dh) and (pt).
Let \(G\) be a 3-fan, \(V=V(G)\) and define a transit function \(R=T\) on \(V.\) It is straightforward but tedious to see that \(R\) satisfies Axioms (b2), (J2), (J4), (JC), (dh1), (TW1), (TW2), and (TWC). In addition, \(T\) does not satisfy the Axioms (dh) and (pt) on a 3-fan.
We conclude by observing some interesting facts about the well-known transit functions in a connected graph \(G,\) namely, the interval function \(I\) and the induced path function \(J\), and the toll walk function \(T\), the topic of this paper. It easily follows that \(I(u,v)\subseteq J(u,v)\subseteq T(u,v),\) for every pair of vertices \(u,v\) in \(G.\) It is proved by Mulder and Nebesky in [21] that the interval function of a connected graph \(G\) possesses an axiomatic characterization in terms of a set of first-order axioms framed on an arbitrary transit function. From [5], it follows that an arbitrary bipartite graph also has this characterization. Further in [4], Chalopine et al. provided a first-order axiomatic characterization of \(I\) of almost all central graph families in metric graph theory, such as the median graphs, Helly graphs, partial cubes, \(\ell_{1}\)-graphs, bridged graphs, graphs with convex balls, Gromov hyperbolic graphs, modular and weakly modular graphs, and classes of graphs that arise from combinatorics and geometry, namely basis graphs of matroids, even \(\Delta\)-matroids, tope graphs of oriented matroids, dual polar spaces. Also in [4], it is proved that the family of chordal graphs, dismantlable graphs, Eulerian graphs, planar graphs, and partial Johnson graphs do not possess a first-order axiomatic characterization using the interval function \(I.\) The list of non-definable graph families is extended in [14] by including the following graphs, namely perfect, probe-chordal, wheels, odd-hole free, even-hole free, regular, \(n\)-colorable and \(n\)-connected (\(n\geq 3\)). It may be noted that the all-paths function \(A\) also possesses an axiomatic characterization similar to that of the interval function \(I\)[8].
In [22], Nebesky proved that the induced path function of an arbitrary connected graph does not possess such a characterization, whereas in [6], it is proved that the family of chordal graphs, Ptolemaic graphs, (\(H\) hole \(P\))-free graphs, (\(H\) hole \(D\))-free graphs, distance-hereditary graphs, etc. possess a first-order axiomatic characterization.
In this paper, we have shown that the toll function \(T\) does not have a first-order axiomatic characterization for an arbitrary connected graph and a bipartite graph, whereas chordal graphs, trees, \(AT\)-free graphs, distance-hereditary graphs, and Ptolemaic graphs possess such a characterization. Graphs that possess a first-order characterization also include the family of interval graphs and (\(H\), \(C_{5}\), \(P\), AT)-free graphs [23].
Therefore, the behavior of these graph transit functions is strange and may not be comparable as far as axiomatic characterization is concerned. In this sense, we observe that the behavior of the induced path function may be comparable to the toll function to some extent. Since most of the classes of graphs that we have provided axiomatic characterizations in terms of the toll function are related to \(AT\)-free graphs, we believe that the following problem will be relevant.
**Problem.** It would be interesting to check whether some of the maximal subclasses of \(AT\)-free graphs like \(AT\)-free \(\cap\) claw-free, strong asteroid-free graphs and the minimal superclasses of \(AT\)-free graphs like the dominating pair graphs and the probe \(AT\)-free graphs possess a first-order axiomatic characterization in terms of the toll function \(T\)?
**Acknowledgments**: L.K.K.S acknowledges the financial support of CSIR, Government of India, for providing CSIR Senior Research Fellowship (CSIR-SRF) (No 09/102(0260)/2019-EMR-I). J.J acknowledges the financial support of the University of Kerala, India, for providing University JRF (No: 445/2020/UOK, 391/2021/UOK, 3093/2022/UOK, 4202/2023/UOK). I.P. was partially supported by the Slovenian Research and Innovation Agency through research program P1-0297.
| $w_1\neq w_k$ and $w_2$ and $w_{k-1}$ are the only neighbors of $w_1$ and $w_k$, respectively, on $W$ in a graph $G$
$T(u,v)$ contains all the vertices that belong to a toll walk between $u$ and $v$
$T$ is a toll walk transit function from $V(G)\times V(G)\rightarrow 2^{V(G)}$
We represent several axioms that characterize the toll walk transit function among chordal graphs, trees, asteroidal triple-free graphs, Ptolemaic graphs, and distance hereditary graphs.
We also show that the toll walk transit function cannot be described in the language of first-order logic for an arbitrary graph. |
2309.05117 | Model discovery for nonautonomous translation-invariant problems | Discovery of mathematical descriptors of physical phenomena from
observational and simulated data, as opposed to from the first principles, is a
rapidly evolving research area. Two factors, time-dependence of the inputs and
hidden translation invariance, are known to complicate this task. To ameliorate
these challenges, we combine Lagrangian dynamic mode decomposition with a
locally time-invariant approximation of the Koopman operator. The former
component of our method yields the best linear estimator of the system's
dynamics, while the latter deals with the system's nonlinearity and
non-autonomous behavior. We provide theoretical estimators (bounds) of
prediction accuracy and perturbation error to guide the selection of both rank
truncation and temporal discretization. We demonstrate the performance of our
approach on several non-autonomous problems, including two-dimensional
Navier-Stokes equations. | Hongli Zhao, Daniel M. Tartakovsky | 2023-09-10T19:37:25 | http://arxiv.org/abs/2309.05117v2 | # Model discovery for nonautonomous translation-invariant problems+
###### Abstract
Discovery of mathematical descriptors of physical phenomena from observational and simulated data, as opposed to from the first principles, is a rapidly evolving research area. Two factors, time-dependence of the inputs and hidden translation invariance, are known to complicate this task. To ameliorate these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The former component of our method yields the best linear estimator of the system's dynamics, while the latter deals with the system's nonlinearity and non-autonomous behavior. We provide theoretical estimators (bounds) of prediction accuracy and perturbation error to guide the selection of both rank truncation and temporal discretization. We demonstrate the performance of our approach on several non-autonomous problems, including two-dimensional Navier-Stokes equations.
Dynamic mode decomposition, reduced-order model, advection-diffusion, Lagrangian framework, time-dependent coefficient
35K57, 37C60
## 1 Introduction
With the advent of machine learning applications in the engineering sciences, the need for pattern recognition and predictions has become increasingly pronounced in order to assist the study of temporally evolving natural phenomena [29]. Direct-solution methods, which often rely on deep neural networks (DNN) to encode an input-output relationship, are hindered by high requirements on both the quantity and quality of data and are thus sensitive to parametric changes of the underlying system [33]. On the other hand, equation discovery [57] supplements partial knowledge with optimal predictions/parameter inference to reproduce the governing laws. Well-known methods belonging to this class include symbolic regression [58], numerical Gaussian processes [50, 51], sparse identification of nonlinear dynamics (SINDy) [8], physics-informed neural networks (PINN) [13] and Kalman filters [14], along with combinations of these strategies to accommodate different physical scenarios or achieve computational improvements [26, 11, 24, 9].
In the context of system identification with complete absence of physics, equation-free methods are adopted to reconstruct the observed processes through a purely data-driven surrogate. Instead of prescribing a set of dictionary terms, equation-free methods seek to approximate the flow map/operator that incorporates differential information. Deep neural networks (DNN) and dynamic mode decompositions (DMD) are two prominent classes of methods for operator learning. Including the well-known DeepONet [37], DNN architectures possess high expressiveness and are capable of serving as nonlinear surrogates of PDE-based models to arbitrary accuracy given sufficient training samples [49, 12]. On the other hand, DMD provides an optimal linear approximation of the model and relies on the Koopman operator to account for nonlinearity [31, 41, and references therein]. In the absence of precise error estimators
for DNN surrogates, their performance on any given problem cannot be guaranteed _a priori_. In contrast, being a linear surrogate, DMD is better understood and equipped with theoretical estimates of prediction accuracy, e.g., [35]. Its various generalizations are designed to handle advection-dominated phenomena [34], shock waves and discontinuous solutions [36], inhomogeneity of differential operators [37] and a problem's parametric dependence [39].
Physical constraints in the PDE model, such as translation invariance and time-dependent coefficients, pose challenges for both DNNs and DMD. For instance, direct-solution DNNs using soft regularization to enforce advection and mass conservation may lead to ill-conditioning during training [30]. Operator DNNs have also been observed to yield poor performance when the finite data samples are not representative of global transport phenomena [61, 59]. Likewise, standard DMD is also not devoid of shortcomings and fails to cope with sharp gradients [4, 34]. Furthermore, its construction is based on the assumption of time homogeneity (e.g., parameters and/or source terms do not vary in time), which is not suitable for nonautonomous problems.
Prime examples of the twin challenges to model discovery are advection-diffusion problems, which encapsulate conservation of momentum [54, 56], thermal energy [6], and probability mass [53]. In the diffusion-dominated and intermediary regimes, these problems have been successfully treated via standard reduced-order basis methods including DMD [35, 45] and POD [21]. The advection-dominated regime, characterized by, e.g., high Peclet and Reynolds numbers, complicates not only numerical solution of advection-diffusion equations but also discovery of these equations (or corresponding equation-free models) from observations. Although its convergence properties have been well studied [28], standard DMD yields quantitatively and qualitatively incorrect solutions, spurring the development of Lagrangian DMD [34].
Reduced-order surrogate models of nonautonomous dynamical systems require either an appropriate global spatio-temporal basis or a time-dependent parameterization (e.g. via Fourier spectral expansion) [40, 43]. Examples of such modifications of the standard DMD include streaming DMD [22], time-varying DMD [60], and more generally, nonautonomous Koopman operators for (quasi)periodic time-dependent inputs [42]. We build upon these developments to construct a DMD framework for translation-invariant (e.g., advection-dominated) problems with time-dependent inputs. Our approach is to reformulate a governing partial differential equation in the Lagrangian frame of reference and to deploy a piece-wise constant approximation of the nonautonomous Koopman operator in the resulting Lagrangian DMD [34].
In section 2, we formulate a class of parabolic translation-invariant PDEs with time-dependent inputs and, upon spatial discretization, express them as a nonautonomous dynamical system. Section 3 contains a brief description of the Koopman operator theory and the DMD framework for construction of reduced-order representations of PDE-based models. In section 4, we present a local Lagrangian DMD, whose implementation shares relevant features of the time-varying DMD [60] and the Lagrangian DMD [34] to effectively represent translation-invariant PDEs with time-dependent inputs. Upper bounds of both the prediction error of our method and the operator norm error are derived in section 5, as functions of reduction of rank and number of collected snapshots. This theoretical analysis demonstrates that the local Lagrangian DMD is more accurate than either time-varying DMD or Lagrangian DMD alone. A series of numerical experiments, reported in section 6, serve to demonstrate our approach and to verify the tightness of these error bounds. Main conclusions drawn from our study are summarized in section 7, accompanied by a discussion of the method's limitations and future research.
## 2 Problem Formulation
We are concerned with the following class of partial differential equations (PDE) with variable coefficients for a quantity of interest \(u(t,\mathbf{x})\), with \(\mathbf{x}\in\Omega\subset\mathbb{R}^{d}\):
\[\frac{\partial u}{\partial t}+\nabla_{\mathbf{x}}\cdot(G(t,\mathbf{x},u)u)= \nabla_{\mathbf{x}}\cdot(D(t,\mathbf{x},u)\nabla_{\mathbf{x}}u),(t,\mathbf{x}) \in(0,t_{f}]\times\Omega \tag{1}\]
\[u(t_{0},\mathbf{x})=u_{0}(\mathbf{x})\]
We consider a semi-discrete method to simulate equation (1) by discretizing in the spatial variables \(\mathbf{x}\). For simplicity, we assume the number of grid points is \(n\) for each of the \(d\) spatial dimensions. We arrive at a nonautonomous dynamical system of general form:
\[\frac{d\mathbf{u}}{dt}=\mathbf{N}(t,\mathbf{u})\]
\[\mathbf{u}(0)=\mathbf{u}_{0} \tag{2}\]
whose right-hand side describes the dynamics of the PDE in (1) with an explicit time-dependence. With respect to construction of ROMs, we will be primarily concerned with the discretized equations (2). Let \(\mathbf{u}\in\mathcal{M}\subset\mathbb{R}^{n^{d}}\) denote the numerical solution, and \(\mathbf{N}:\mathbb{R}^{+}\times\mathbb{R}^{n^{d}}\to\mathbb{R}^{n^{d}}\) is the discretized PDE operator.
Let the temporal domain \([0,t_{f}]\) be discretized uniformly with step size \(\Delta t\), and define \(t_{i}=i\Delta t\), for \(0=t_{0}<t_{1}<\cdots<t_{m}=t_{f}\). Furthermore, define \(\mathbf{\Phi}_{\Delta t}(\cdot;t_{i}):\mathbb{R}^{n}\to\mathbb{R}^{n}\) as the discrete flow map associated with the system (2), and similarly the continuous flow map is denoted as \(\Phi_{t}(\cdot;s)\), such that for any \(t\leq t^{\prime}\):
\[\mathbf{u}(t^{\prime})=\Phi_{t^{\prime}}(\mathbf{u}(t);t):=\mathbf{u}(t)+\int_ {t}^{t^{\prime}}\mathbf{N}(s,\mathbf{u}(s))ds \tag{3}\]
Furthermore,
\[\mathbf{u}_{i+1}=\mathbf{\Phi}_{\Delta t}(\mathbf{u}_{i};t_{i}):=\mathbf{u}(t _{i})+\int_{t_{i}}^{t_{i+1}}\mathbf{N}(s,\mathbf{u}(s))ds \tag{4}\]
where we define \(\mathbf{u}_{i}=\mathbf{u}(t_{i})\).
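For concreteness, the following minimal Python sketch illustrates this semi-discretization and the discrete flow map for a one-dimensional instance of (1); the velocity profile, the periodic boundaries, and the RK4 substepping are assumptions of the sketch rather than the setup used in the experiments reported later.

```python
import numpy as np

# Minimal sketch: semi-discretize a 1d instance of (1) with periodic boundaries.
n, L, D = 200, 20.0, 1e-3
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)

def G(t):
    return 2.0 * np.sin(np.pi * t / 2)          # hypothetical time-dependent velocity

def N_rhs(t, u):
    """Right-hand side N(t, u) of the nonautonomous system (2)."""
    flux = G(t) * u
    adv = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)       # advection term
    diff = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2       # diffusion term
    return -adv + D * diff

def flow_map(u, t, dt, substeps=10):
    """Discrete flow map Phi_dt(u; t) of (4), approximated with RK4 substeps."""
    h = dt / substeps
    for k in range(substeps):
        tk = t + k * h
        k1 = N_rhs(tk, u)
        k2 = N_rhs(tk + h / 2, u + h / 2 * k1)
        k3 = N_rhs(tk + h / 2, u + h / 2 * k2)
        k4 = N_rhs(tk + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

dt, m = 0.01, 400
snapshots = [np.exp(-0.5 * x**2)]
for i in range(m):
    snapshots.append(flow_map(snapshots[-1], i * dt, dt))
U = np.column_stack(snapshots)                   # columns are u(t_0), ..., u(t_m)
```

The columns of `U` are the snapshots consumed by the DMD algorithms reviewed next.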
## 3 Review of DMD Algorithms
For the general dynamical system (2), the associated family of Koopman operators evolves a set of observables along its flow. More precisely, given an observable function \(g:\mathbb{R}^{n}\to\mathbb{R}^{N}\), the Koopman operator \(\mathcal{K}_{t}^{t^{\prime}}\) is defined such that:
\[\mathcal{K}_{t}^{t^{\prime}}g(\mathbf{u}(t)):=g(\mathbf{u}(t^{\prime})) \tag{5}\]
For the discrete-time description (4), we similarly define the associated Koopman operator \(\mathcal{K}_{i}^{\Delta t}\), such that:
\[\mathcal{K}_{i}^{\Delta t}g(\mathbf{u}_{i})=g(\mathbf{u}_{i+1}) \tag{6}\]
Both \(\mathcal{K}_{t}^{t^{\prime}},\mathcal{K}_{i}^{\Delta t}\) are infinite-dimensional operators on the Hilbert space of all observable functions \(g\). In addition, they are linear maps despite potential nonlinearity of the original system.
Dynamic mode decomposition (DMD) is a celebrated algorithm that attempts to approximate the eigenmodes of the Koopman operator to identify dominant frequencies and reconstruct the underlying dynamics from discrete observations. Let a training dataset containing \(m\) collected snapshot pairs be denoted as \(\mathcal{S}=\{(\mathbf{g}_{i},\mathbf{g}_{i+1})\}_{i=1}^{m}\), with \(\mathbf{g}_{i}=g(\mathbf{u}_{i})\). In line with (6), we would like to construct a best-fit linear operator \(\mathbf{K}\) such that:
\[\mathbf{g}_{i+1}\approx\mathbf{K}\mathbf{g}_{i} \tag{12}\]
for all \(i=1,2,\ldots,m\).
### Standard DMD
The standard DMD algorithm attempts to reconstruct directly in solution space, i.e. \(g(\mathbf{u}_{i})=\mathbf{u}_{i}\) and \(\mathbf{K}\) is constructed via minimizing the mean squared error (MSE):
\[L_{\mathcal{S}}(\mathbf{K})=\frac{1}{m}\sum_{i=1}^{m}||\mathbf{y}_{i}-\mathbf{ K}\mathbf{x}_{i}||_{2}^{2} \tag{13}\]
where the pairs \((\mathbf{x}_{i},\mathbf{y}_{i})=(\mathbf{u}_{i},\mathbf{u}_{i+1})\) form the data matrices of size \(n\times m\):
\[\mathbf{X}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{u}_{1}&\mathbf{u}_{2}&\cdots&\mathbf{u}_{m}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix},\quad\mathbf{Y}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{u}_{2}&\mathbf{u}_{3}&\cdots&\mathbf{u}_{m+1}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix} \tag{14}\]
The minimizer of (13) can be explicitly derived as:
\[\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger} \tag{15}\]
where \(\dagger\) denotes the Moore-Penrose pseudoinverse, \(\mathbf{X}^{\dagger}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\) when \(\mathbf{X}\) has full column rank. In order to compute \(\mathbf{X}^{\dagger}\) stably and tractably, a truncated singular value decomposition (SVD) is often applied to the data matrix \(\mathbf{X}\):
\[\mathbf{X}\approx\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{*} \tag{16}\]
where the subscript \(r\) denotes a pre-specified rank, typically determined from a Frobenius-norm error threshold, with \(\mathbf{U}_{r}\in\mathbb{R}^{n\times r}\), \(\mathbf{V}_{r}\in\mathbb{R}^{m\times r}\), and \(\mathbf{\Sigma}_{r}\in\mathbb{R}^{r\times r}\) a diagonal matrix containing the singular values \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}\) of \(\mathbf{X}\) in non-increasing order. Furthermore, the columns of \(\mathbf{U}_{r}\) span an \(r\)-dimensional subspace of \(\mathbb{R}^{n}\), making it computationally efficient to first project the observations, compute predictions in the lower-dimensional space, and transform back to the original state space [1]. The procedure is summarized in Algorithm 1, which provides a continuous and fully data-driven model satisfying (13). The standard DMD algorithm serves as the foundation of a wide range of DMD algorithms that incorporate additional control parameters [47, 38].
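As an illustration, a minimal NumPy sketch of this procedure (not the implementation used for the results reported here) is:

```python
import numpy as np

def standard_dmd(X, Y, r):
    """Best-fit linear operator with Y ~ K X, computed through a rank-r SVD of X
    as in (15)-(16); returns the projected operator, eigenvalues, and DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r, :].conj().T
    K_tilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)   # r x r projected operator
    evals, W = np.linalg.eig(K_tilde)
    modes = Y @ Vr @ np.linalg.inv(Sr) @ W               # exact DMD modes
    return K_tilde, evals, modes, Ur

def dmd_predict(u0, steps, K_tilde, Ur):
    """Advance an initial state in the reduced space and lift back to R^n."""
    z = Ur.conj().T @ u0
    out = [u0]
    for _ in range(steps):
        z = K_tilde @ z
        out.append(Ur @ z)
    return np.column_stack(out)
```

Here `X` and `Y` would be the shifted snapshot matrices of (14), e.g. `X = U[:, :-1]` and `Y = U[:, 1:]` for a snapshot array `U`.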
### Physics-Aware DMD
To account for fundamental physical constraints in problems describing conservative advection (i.e. non-negativity of solutions, mass conservation), reduced-order models in a Lagrangian frame of reference were first discussed in [44] based on proper orthogonal decomposition (POD). In the data-driven Koopman operator formulation, the physics-aware DMD (or Lagrangian DMD) was developed in [34] for advection-dominated phenomena, where standard DMD fails. The main idea is to include the moving Lagrangian grid as observables in addition to
a high-fidelity numerical solution. More explicitly, we consider the PDE (1) along the characteristic lines:

\[\frac{d\boldsymbol{\mathcal{X}}(t)}{dt}=G\big{(}t,\boldsymbol{\mathcal{X}}(t),\tilde{u}\big{)},\qquad\frac{d\tilde{u}\big{(}t,\boldsymbol{\mathcal{X}}(t)\big{)}}{dt}=\nabla_{\mathbf{x}}\cdot\big{(}D\,\nabla_{\mathbf{x}}\tilde{u}\big{)}-\tilde{u}\,\nabla_{\mathbf{x}}\cdot G \tag{3.15}\]

with initial conditions:

\[\boldsymbol{\mathcal{X}}(t_{0})=\mathbf{x}_{0},\qquad\tilde{u}\big{(}t_{0},\boldsymbol{\mathcal{X}}(t_{0})\big{)}=u_{0}(\mathbf{x}_{0}), \tag{3.16}\]

where \(\mathcal{X}_{i}\) denotes the \(i\)th point in the Lagrangian moving grid at which the solution to (1) is evaluated, denoted as \(\tilde{u}(t,\mathcal{X}(t))\). The starting grid is assumed to be the
same spatial discretization as that of (2). In particular, \(\tilde{u}(t,\mathcal{X}(t))\) differs from the solution \(u(t,x)\) of (2), which is in the Eulerian frame of reference. The solution on the Lagrangian grid can be interpolated to the Eulerian grid, and vice versa [34].
After discretizing (3.15), the Lagrangian system yields a dynamical system of the general form (2) with state variables:
\[\mathbf{w}(t)=\begin{bmatrix}\boldsymbol{\mathcal{X}}(t)\\ \mathbf{u}(t)\end{bmatrix}\in\mathbb{R}^{N} \tag{31}\]
where the effective state dimension is \(N=dn+n^{d}\), comprising the discretized solution \(u(\mathbf{x}_{i})\) at each spatial grid point, re-ordered into a vector, along with a one-dimensional grid for each of the \(d\) spatial dimensions. The physics-aware DMD then considers the observables defined by \(g(\mathbf{u}_{i})=\mathbf{w}_{i}\), and the associated data matrices are:
\[\mathbf{X}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{w}_{1}&\mathbf{w}_{2}&\cdots&\mathbf{w}_{m}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix},\mathbf{Y}=\begin{bmatrix} \big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{w}_{2}&\mathbf{w}_{3}&\cdots&\mathbf{w}_{m+1}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix} \tag{32}\]
**Remark 1**: The formulation of the state vector \(\mathbf{w}(t)\) in (31) suffers from the so-called curse of dimensionality, as the PDE solution is defined on a \(d\)-dimensional spatial grid. Furthermore, the interpolation from \(\tilde{u}(t,\boldsymbol{\mathcal{X}}(t))\) to \(u(t,\mathbf{x})\) requires the formation of meshgrids at each time step \(t\). As observed in [34], the Lagrangian DMD for advection-dominated phenomena is restricted to low-dimensional problems. Although possible model order reduction techniques exist, such as using tensor-network based methods [52, 10], the discussion of high-dimensional PDE solutions is out of the scope of this paper.
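A minimal sketch of assembling these augmented observables, assuming a single spatial dimension so that \(\mathbf{w}(t)\in\mathbb{R}^{2n}\), is:

```python
import numpy as np

def lagrangian_observables(grids, solutions):
    """Stack the Lagrangian grid and the solution sampled on it into the augmented
    state w = [X(t); u(t)] of (31).  `grids` and `solutions` are lists of 1d arrays
    collected at t_1, ..., t_{m+1} (one spatial dimension assumed in this sketch)."""
    W = np.column_stack([np.concatenate([g, u]) for g, u in zip(grids, solutions)])
    return W[:, :-1], W[:, 1:]   # data matrices X, Y of (32)
```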
### Time-Varying DMD
The time-varying DMD algorithm divides the temporal domain \([0,t_{f}]\) into \(p\) sub-intervals, \([t_{0},t_{1}],\ldots,[t_{p-1},t_{p}]\), with \(t_{0}=0,t_{p}=t_{f}\). For simplicity, we assume each sub-interval contains \(r\) snapshots and \(m=pr\). The time-varying DMD model introduces a time dependence to the linear operator, such that:
\[\mathbf{g}_{i+1}\approx\mathbf{K}(t_{i})\mathbf{g}_{i} \tag{33}\]
which approximates the nonautonomous Koopman operator (6). A common construction of \(\mathbf{K}(t)\), considered in this work, is piecewise constant in time, obtained by solving \(p\) minimization problems:
\[\min_{\mathbf{K}_{1},\ldots,\mathbf{K}_{p}}L_{\mathcal{S}}(\mathbf{K}(t))= \min_{\mathbf{K}_{1},\ldots,\mathbf{K}_{p}}\sum_{i=1}^{p}L_{\mathcal{S}_{i}}( \mathbf{K}_{i}) \tag{34}\]
with \(\mathcal{S}_{i}\) being the snapshots collected from \([t_{i-1},t_{i}]\), and \(\mathcal{S}=\bigcup_{i=1}^{p}\mathcal{S}_{i}\). The linear operator \(\mathbf{K}^{(i)}\) can be interpreted as a local best fit given by a standard DMD procedure on the interval \([t_{i-1},t_{i}]\), and the resulting piecewise-constant operator is
\[\mathbf{K}(t)=\sum_{i=1}^{p}\mathbf{K}^{(i)}\delta_{[t_{i-1},t_{i}]}(t) \tag{35}\]
where \(\delta_{[t_{i-1},t_{i}]}\) is the indicator function for time interval \([t_{i-1},t_{i}]\). It is also possible to construct other parameterized models of \(\mathbf{K}(t)\), such as basis polynomials or a universal function approximator [48].
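The following sketch shows one way to fit the piecewise-constant operator (35) from snapshot data; the truncation rule and the handling of the final partial interval are illustrative assumptions of the sketch.

```python
import numpy as np

def time_varying_dmd(X, Y, r_snap, rank):
    """Fit one best-fit operator per sub-interval, giving the piecewise-constant
    K(t) of (35).  r_snap is the number of snapshot pairs per sub-interval."""
    ops = []
    m = X.shape[1]
    for start in range(0, m, r_snap):
        Xi, Yi = X[:, start:start + r_snap], Y[:, start:start + r_snap]
        U, s, Vh = np.linalg.svd(Xi, full_matrices=False)
        k = min(rank, int((s > 1e-12).sum()))
        Ki = Yi @ Vh[:k, :].conj().T @ np.diag(1.0 / s[:k]) @ U[:, :k].conj().T
        ops.append(Ki)                     # full-state operator on this interval
    return ops

def predict(ops, w0, r_snap):
    """Advance w0 with the local operators, r_snap steps per interval."""
    w, traj = w0, [w0]
    for Ki in ops:
        for _ in range(r_snap):
            w = Ki @ w
            traj.append(w)
    return np.column_stack(traj)
```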
## 4 Proposed Methodology
Both the standard DMD model and the physics-aware DMD model assume the underlying dynamical system (2) is autonomous or periodic, such that the Koopman operator (5) may be captured on a time-invariant manifold given sufficient observations. Furthermore, the standard DMD algorithm tends to perform poorly on phenomena with advective mass and sharp gradients due to oscillatory DMD modes [34]. Although the physics-aware DMD is sufficient for prediction of spatially-dependent advection phenomena, the inherent assumption of time homogeneity gives rise to model misspecification and degradation of accuracy for time-dependent advection problems (1). To address the inaccuracies introduced by both standard DMD and physics-aware DMD, we consider the following procedure, summarized in Algorithm 1, which effectively introduces a time-dependence to the Lagrangian reduced order model.
Algorithm 1 provides an elementary implementation of the (temporal) piecewise-constant Koopman operator in (35). Upon appropriate modifications of \(\mathbf{K}(t)\) to allow superpositions of DMD frequencies in each time interval, it is possible to recover other forms of DMD strategies, such as the multi-resolution DMD of [32] or the windowed DMD of [3].
In terms of computational complexity, it is possible to consider incremental SVD updates with adaptive rank truncation to directly update \(\mathbf{K}^{(i)}\) to \(\mathbf{K}^{(i+1)}\) in low-rank format [7]. However, due to the inclusion of Lagrangian moving grids in the formulation of (32), it is assumed that the data matrices have dimensions \(N\times m\) with \(N\gg m\) and are of full column rank. The size constraint is especially true in high-dimensional PDE problems. In our numerical experiments, we did not observe a significant computational advantage of applying incremental SVD updates to the computed operators \(\mathbf{K}^{(1)},\ldots,\mathbf{K}^{(i)}\). In particular, a direct pseudoinverse computation in standard DMD involves \(O(m^{2}N)\) runtime complexity, which is asymptotically more expensive than \(p\) separate SVD computations, yielding \(O(pr^{2}N)=O(mrN)\), with \(m=pr\). A small computational saving may be achievable if the highest rank of the data matrices during each time interval of collected snapshots is bounded by some \(r^{\prime}<r\), in which case the runtime complexity is \(O(p\cdot rr^{\prime}N)=O(mr^{\prime}N)\), by applying incremental SVD updates.
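Since the algorithm listing itself is not reproduced here, the sketch below indicates how the two ingredients above can be combined in practice: the per-interval operators are fitted on the augmented Lagrangian snapshots (as in the previous sketches), and the predicted Lagrangian profiles are interpolated back onto a fixed Eulerian grid. The one-dimensional interpolation and the sorting step are assumptions of this sketch.

```python
import numpy as np

def lagrangian_rollout(operators, w0, r_snap, x_eulerian):
    """Advance the augmented Lagrangian state w = [X; u] with per-interval
    operators (e.g. from the time_varying_dmd sketch above), then interpolate each
    predicted profile back onto a fixed Eulerian grid (one spatial dimension)."""
    n = x_eulerian.size
    w, lag_states = w0.copy(), [w0.copy()]
    for Ki in operators:
        for _ in range(r_snap):
            w = Ki @ w
            lag_states.append(w.copy())
    eulerian = []
    for state in lag_states:
        grid, vals = state[:n], state[n:]
        order = np.argsort(grid)             # np.interp expects increasing abscissae
        eulerian.append(np.interp(x_eulerian, grid[order], vals[order]))
    return np.column_stack(eulerian)
```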
## 5 Theoretical Analysis
The judicious choice of subintervals in the time-varying DMD formulation of Section 3.3 is crucial for prediction accuracy. As a general guideline, we first present in Section 5.1 pointwise and average error upper bounds for the time-varying DMD in (33). In Section 5.2, we compute upper bounds on perturbations of the learned operator in the \(L^{2}\) operator norm under truncation of frequencies and deletion of training data. Furthermore, for classes of linear dynamical systems, the bounds can be refined by analyzing the norm of the time-shifted training data \(\mathbf{Y}\) in relation to that of \(\mathbf{X}\). For general nonlinear dynamical systems, we refer the reader to the analysis given in Section 3 of [48].
### Prediction Error
We first consider the pointwise prediction error of the time-varying DMD strategy:
**Proposition 5.1** **(Pointwise error for time-varying DMD)**: Assume the system in equation (2) and the time-varying DMD in (33) satisfy the following properties:
1. \(\mathbf{N}(t,\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is uniformly Lipschitz (in time) with constant \(L>0\).
2. \(\sup_{s\in[t_{0},t_{f}]}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;s)-\mathbf{K}(s)\right\|_{L^{\infty}(\mathbb{R}^{n})}<+\infty\), where \(\mathbf{K}(s)\) is piecewise constant on each interval \(s\in[t_{0},t_{1}],[t_{1},t_{2}],\ldots,[t_{p-1},t_{p}]\), and \(\mathbf{K}_{1},\mathbf{K}_{2},\ldots,\mathbf{K}_{p}\) are the respective solutions of the minimization problems in (34) on the intervals \([t_{0},t_{1}],[t_{1},t_{2}],\ldots,[t_{p-1},t_{p}]\).
3. _All reconstructed solutions_ \(\mathbf{x}_{DMD}\) _belong to the solution manifold, defined as:_ (5.1) \[\mathcal{M}_{\Delta t}=\{\mathbf{x}\in\mathcal{M}:\mathbf{\Phi}_{\Delta t}( \mathbf{x};t_{i})\in\mathcal{M}\}\]
_Define the error of incremental DMD at time step \(t_{n}\) to be:_
\[\mathcal{E}^{n}=\|\mathbf{x}_{n}-\widehat{\mathbf{x}}_{n}\|_{2}^{2} \tag{5.2}\]
_where \(\mathbf{x}_{n}=\mathbf{x}(t_{n})\) is exact, and \(\widehat{\mathbf{x}}_{n}\) is the approximation given by DMD. Rewritting_
the model expression:_
\[\widehat{\mathbf{x}}_{k+1}=\mathbf{K}(t_{k})\widehat{\mathbf{x}}_{k}=\widehat{ \mathbf{x}}_{k}+\mathbf{A}(t_{k})\widehat{\mathbf{x}}_{k} \tag{10}\]
_where:_
\[\mathbf{A}(t):=\mathbf{K}(t)-\mathbf{I}_{N} \tag{11}\]
_then the pointwise error of the time-varying DMD is:_
\[\mathcal{E}^{n}\leq(1+e^{L\Delta t})^{m}\mathcal{E}_{0}+\sum_{j=1}^{p}\sum_{l= 0}^{r}(1+e^{L\Delta t})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(jr-l)}) -\mathbf{A}_{k}\right\|_{L^{\infty}(\mathcal{M}_{\Delta t})}^{2} \tag{12}\]
Proof: For ease of presentation, we omit the time dependence in the flow map and let \(\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t))=\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t);t)\), and \(\left\|\cdot\right\|_{\infty}=\left\|\cdot\right\|_{L^{\infty}(\mathcal{M}_{ \Delta t})}\). By Gronwall's inequality along with Lipschitz continuity, we have for any time \(t\) and solutions \(\mathbf{x},\widehat{\mathbf{x}}\in\mathcal{M}_{\Delta t}\):
\[\left\|\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t))-\mathbf{\Phi}_{\Delta t}( \widehat{\mathbf{x}}(t))\right\|_{2}\leq e^{\tau L}\left\|\mathbf{x}(t)- \widehat{\mathbf{x}}(t)\right\|_{2},\tau\in[0,\Delta t] \tag{13}\]
Then by repeated applications of triangle inequality:
\[\begin{array}{l}\mathcal{E}^{n}=\left\|\mathbf{x}_{n-1}+\mathbf{\Phi}_{ \Delta t}(\mathbf{x}_{n-1})-(\widehat{\mathbf{x}}_{n-1}+\mathbf{A}(t_{n-1}) \widehat{\mathbf{x}}_{n-1})\right\|_{2}^{2}\\ \leq\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{A}(t_{n-1})\widehat{ \mathbf{x}}_{n-1}\right\|_{2}^{2}\\ =\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})+\mathbf{\Phi}_{\Delta t}(\widehat{\mathbf{x}}_{n-1})- \mathbf{A}(t_{n-1})\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}\\ \leq\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})\right\|_{2}^{2}+\left\|\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})-\mathbf{A}(t_{n-1})\widehat{\mathbf{x}}_{n-1}\right\|_{2}^ {2}\\ \leq\mathcal{E}^{n-1}+e^{\Delta tL}\mathcal{E}^{n-1}+\left\|\mathbf{\Phi}_{ \Delta t}(\cdot;t_{n})-\mathbf{A}_{p}\right\|_{\infty}^{2}\\ \leq(1+e^{\Delta tL})\mathcal{E}^{n-2}+(1+e^{\Delta tL})\left\|\mathbf{\Phi}_{ \Delta t}(\cdot;t_{n-1})-\mathbf{A}_{p}\right\|_{\infty}^{2}+\left\|\mathbf{ \Phi}_{\Delta t}(\cdot;t_{n})-\mathbf{A}_{p}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{\mathcal{E}}\mathcal{E}^{n-r}+\sum_{l=0}^{r} (1+e^{\Delta tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(r-l)})- \mathbf{A}_{m}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{2r}\mathcal{E}^{n-2r}+\sum_{l=0}^{r}(1+e^{ \Delta tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(r-l)})-\mathbf{A}_{ m}\right\|_{\infty}^{2}+\cdots\\ \sum_{l=0}^{r}(1+e^{\Delta tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(2 r-l)})-\mathbf{A}_{m-1}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{m}\mathcal{E}_{0}+\sum_{j=0}^{p}\sum_{l=0}^{w }(1+e^{L\Delta t})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(jr-l)})- \mathbf{A}_{k}\right\|_{\infty}^{2}\\ \end{array}\]
**Remark 5.2**: If \(\mathbf{K}(t)\equiv\mathbf{K}\) is constant in time, we recover the upper bound investigated in Theorem 4.3 of [49] and subsequently that in equation (3.11) of [37].
**Corollary 5.3**: _The time-varying DMD of (33) is at least as accurate in the MSE sense as the standard DMD of (13)._
Proof: The property can be intuitively interpreted from the fact that a piecewise-constant (in time) approximation is always at least as good on average as a constant approximation. More precisely, letting \(\mathbf{K},\mathbf{K}(t)\) denote the solutions of standard DMD and time-varying DMD, respectively, we may rewrite the minimization problem in (13):
\[L_{\mathcal{S}}(\mathbf{K})=\min_{\mathbf{K}}\frac{1}{m}\sum_{i=1}^{m}\left\|\mathbf{y}_{i}-\mathbf{K}\mathbf{x}_{i}\right\|_{2}^{2}=\min_{\mathbf{K}}\frac{1}{p}\sum_{i=1}^{p}\frac{1}{r}\sum_{j=1}^{r}\left\|\mathbf{y}_{n-(ir-j)}-\mathbf{K}\mathbf{x}_{n-(ir-j)}\right\|_{2}^{2}\]
and by definition of minimum, we conclude:
\[L_{\mathcal{S}}(\mathbf{K})\geq\frac{1}{p}\sum_{i=1}^{p}\min_{\mathbf{K}_{i}}\frac{1}{r}\sum_{j=1}^{r}\left\|\mathbf{y}_{n-(ir-j)}-\mathbf{K}_{i}\mathbf{x}_{n-(ir-j)}\right\|_{2}^{2}=L_{\mathcal{S}}(\mathbf{K}(t))\]
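The inequality can also be observed numerically; the small experiment below on synthetic Gaussian data (purely illustrative) compares the two residuals.

```python
import numpy as np

# Illustration of Corollary 5.3: the total least-squares residual of the
# piecewise-constant fit never exceeds that of a single global operator.
rng = np.random.default_rng(0)
N, m, r = 10, 200, 40                     # state size, snapshot pairs, pairs per interval
X = rng.standard_normal((N, m))
Y = rng.standard_normal((N, m))

K_global = Y @ np.linalg.pinv(X)
mse_global = np.sum((Y - K_global @ X) ** 2) / m

mse_piecewise = 0.0
for s in range(0, m, r):
    Xi, Yi = X[:, s:s + r], Y[:, s:s + r]
    Ki = Yi @ np.linalg.pinv(Xi)
    mse_piecewise += np.sum((Yi - Ki @ Xi) ** 2) / m

print(mse_global, mse_piecewise)          # mse_piecewise <= mse_global
```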
### Perturbation Analysis
With the DMD algorithms introduced in Section 3, we provide an operator 2-norm error bound on the DMD solution for two common operations in engineering: (1) truncation of the singular value decomposition (SVD) rank of the data matrix \(\mathbf{X}\), and (2) deletion of the most recent snapshots in both \(\mathbf{X}\) and \(\mathbf{Y}\). In particular, we connect the error bound with a case study of a nonautonomous linear dynamical system of the following form:
\[\begin{cases}\frac{d\mathbf{u}(t)}{dt}=\mathbf{C}(t)\mathbf{u}(t)+\mathbf{f}(t)\\ \mathbf{u}(0)=\mathbf{u}_{0}\end{cases} \tag{5.7}\]
whose solution is given by:
\[\mathbf{u}(t)=\Phi_{t}(\mathbf{u}_{0};0)=\exp\bigg{(}\int_{0}^{t}\mathbf{C}(s)ds\bigg{)}\mathbf{u}_{0}+\int_{0}^{t}\exp\bigg{(}\int_{s}^{t}\mathbf{C}(\tau)d\tau\bigg{)}\mathbf{f}(s)ds \tag{5.8}\]
We first present the results without assumptions on the underlying system.
**Proposition 5.4**: _(Operator norm error under rank truncation) Let the SVD of the data matrix \(\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\). \(\mathbf{\Sigma}\) contains the singular values arranged in non-increasing order, i.e. \(\sigma_{1}=\sigma_{\text{max}}\geq\sigma_{2}\geq\cdots\geq\sigma_{\text{min}}=\sigma_{\text{rank}(\mathbf{X})}\). Let a truncated SVD with \(r\leq\text{rank}(\mathbf{X})\) be denoted as \(\mathbf{X}_{r}=\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{T}\) where only the first \(r\) columns are retained in \(\mathbf{U}_{r},\mathbf{V}_{r}\), and the first \(r\) singular values are retained in \(\mathbf{\Sigma}_{r}\). Then the operator norm error has the following upper bound:_
\[\left\|\mathbf{K}-\mathbf{K}_{r}\right\|_{2}\leq\frac{\sigma_{\text{max}}(\mathbf{Y})}{\sigma_{\text{min}}(\mathbf{X})} \tag{12}\]
\[\|\mathbf{K}-\mathbf{K}_{r}\|_{2}^{2}=\left\|\mathbf{Y}\mathbf{X}^{\dagger}- \mathbf{Y}\mathbf{X}_{r}^{\dagger}\right\|_{2}^{2}\leq\left\|\mathbf{Y}\right\| _{2}^{2}\cdot\left\|\mathbf{X}^{\dagger}-\mathbf{X}_{r}^{\dagger}\right\|_{2} ^{2}=\frac{\sigma_{\text{max}}^{2}(\mathbf{Y})}{\sigma_{\text{min}}^{2}( \mathbf{X})} \tag{13}\]
_Remark 5.5_: The bound presented in Proposition 5.4 is an upper bound in the sense that it does not depend on the rank \(r\), due to the pseudoinverse operation. More granular bounds can be derived by analyzing instead the pointwise error for a specific observation \(\mathbf{x}\):
\[\left\|\mathbf{K}\mathbf{x}-\mathbf{K}_{r}\mathbf{x}\right\|_{2}^{2}\leq \sigma_{\text{max}}^{2}(\mathbf{Y})\left\|\sum_{k=r}^{\text{rank}(\mathbf{X})} -\frac{1}{\sigma_{k}(\mathbf{X})}(\mathbf{u}_{k}^{T}\mathbf{x})\mathbf{v}_{k} \right\|_{2}^{2} \tag{14}\]
\[=\sum_{k=r}^{\text{rank}(\mathbf{X})}\frac{\sigma_{\text{max}}^{2}(\mathbf{Y})} {\sigma_{k}^{2}(\mathbf{X})}(\mathbf{u}_{k}^{T}\mathbf{x})^{2}\]
Under different assumptions on \(\mathbf{x}\) in relation to the column space of the data matrix \(\mathbf{X}\), the bound (14) can be tightened [55].
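A quick numerical check of the rank-truncation bound on random Gaussian data (illustrative only) is:

```python
import numpy as np

# Check ||K - K_r||_2 <= s_max(Y) / s_min(X) for all truncation ranks r.
rng = np.random.default_rng(1)
N, m = 100, 30
X = rng.standard_normal((N, m))
Y = rng.standard_normal((N, m))

K = Y @ np.linalg.pinv(X)
U, s, Vh = np.linalg.svd(X, full_matrices=False)
bound = np.linalg.svd(Y, compute_uv=False)[0] / s[-1]

for r in range(1, m):
    Xr_pinv = Vh[:r, :].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # pinv of rank-r truncation
    err = np.linalg.norm(K - Y @ Xr_pinv, 2)
    assert err <= bound + 1e-8
print("bound holds for every r; bound =", bound)
```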
To analyze the time-varying DMD strategy in Section 3.3, one may view the individual solutions \(\mathbf{K}_{i}\) on time interval \([t_{i-1},t_{i}]\) as a standard DMD solution with fewer observations. To provide a benchmark on the effect of adding/deleting observations in the training data and investigate dependencies, we illustrate the operator norm perturbation that occurs by deleting the most recent observation. The general case of deleting the \(r\) most recent observations can be derived analogously using the Sherman-Morrison-Woodbury update formula. For the pseudoinverse of data matrices, the following result holds:
**Lemma 5.6**: _(Updating pseudoinverse) Suppose \(N\geq m\) and \(\mathbf{X}_{m}\in\mathbb{R}^{N\times m}\) has full column rank. Furthermore, let \(\mathbf{u}\in\mathbb{R}^{N}\) be a newly collected snapshot; then the pseudoinverse of \(\mathbf{X}=[\mathbf{X}_{m},\mathbf{u}]\in\mathbb{R}^{N\times(m+1)}\) is given by:_
\[\mathbf{X}^{\dagger}=\begin{bmatrix}\mathbf{X}_{m}^{\dagger}+c\,\mathbf{X}_{m}^{\dagger}\mathbf{u}\mathbf{u}^{T}(\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})^{T}-c\,\mathbf{X}_{m}^{\dagger}\mathbf{u}\mathbf{u}^{T}\\ c\,\mathbf{u}^{T}(\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})^{T}\end{bmatrix}=\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\\ \mathbf{0}_{1\times N}\end{bmatrix}-c\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\mathbf{u}\\ -1\end{bmatrix}\big{(}(\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})\mathbf{u}\big{)}^{T}\]
_where:_
\[c=\frac{1}{\left\|\mathbf{u}\right\|_{2}^{2}-\mathbf{u}^{T}\mathbf{X}_{m}( \mathbf{X}_{m}^{T}\mathbf{X}_{m})^{-1}\mathbf{X}_{m}^{T}\mathbf{u}}\geq\frac{ 1}{\left\|\mathbf{u}\right\|_{2}^{2}} \tag{12}\]
_The lower bound is attained if \(\mathbf{u}\) is orthogonal to the range of \(\mathbf{X}_{m}\)._
Proof: We directly apply the block matrix inverse formula [18] to \((\mathbf{X}^{T}\mathbf{X})^{-1}\):
\[(\mathbf{X}^{T}\mathbf{X})^{-1} =\begin{bmatrix}\mathbf{X}_{m}^{T}\mathbf{X}_{m}&\mathbf{X}_{m}^{ T}\mathbf{u}\\ \mathbf{u}^{T}\mathbf{X}_{m}&\left\|\mathbf{u}\right\|_{2}^{2}\end{bmatrix}^{-1}\] \[=\begin{bmatrix}(\mathbf{X}_{m}^{T}\mathbf{X}_{m})^{-1}+c\mathbf{ X}_{m}^{\dagger}\mathbf{u}\mathbf{u}^{T}(\mathbf{X}_{m}^{\dagger})^{T}&-c \mathbf{X}_{m}^{\dagger}\mathbf{u}\\ -c\mathbf{u}^{T}(\mathbf{X}_{m}^{\dagger})^{T}&c\end{bmatrix}\]
and multiply the result by \(\mathbf{X}^{T}=\begin{bmatrix}\mathbf{X}_{m}^{T}\\ \mathbf{u}^{T}\end{bmatrix}\). \({}_{\Box}\)
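The update formula can be checked numerically against a generic pseudoinverse routine; the sketch below uses an equivalent rearrangement of the blocks, with \(\mathbf{r}=(\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})\mathbf{u}\), made only for the purposes of this check.

```python
import numpy as np

# Sanity check of the pseudoinverse update in Lemma 5.6 on random data.
rng = np.random.default_rng(2)
N, m = 50, 12
Xm = rng.standard_normal((N, m))
u = rng.standard_normal(N)

Xm_pinv = np.linalg.pinv(Xm)
P = Xm @ Xm_pinv                                  # projector onto range(X_m)
c = 1.0 / (u @ u - u @ P @ u)
r_vec = (np.eye(N) - P) @ u

top = Xm_pinv - c * np.outer(Xm_pinv @ u, r_vec)  # first m rows of X^dagger
bottom = c * r_vec                                # last row of X^dagger
X_pinv_formula = np.vstack([top, bottom[None, :]])

X = np.column_stack([Xm, u])
print(np.linalg.norm(np.linalg.pinv(X) - X_pinv_formula))   # ~ machine precision
```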
**Proposition 5.7**: _(Operator 2-norm perturbation under column deletion)_
_Let \(\mathbf{X}=[\mathbf{X}_{m},\mathbf{u}]\in\mathbb{R}^{N\times(m+1)}\), \(\mathbf{Y}=[\mathbf{Y}_{m},\mathbf{v}]\in\mathbb{R}^{N\times(m+1)}\), and \(\mathbf{X}_{m},\mathbf{Y}_{m}\in\mathbb{R}^{N\times m}\), with \(N\geq m\). We further assume that \(\mathbf{X}_{m}\) has full column rank. Then, the operator norm error satisfies the following upper bound:_
\[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}\leq\sqrt{c^{2}\left\|\mathbf{u} \right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u}\right\|_{2}^{2}}{\sigma_{min}^ {2}(\mathbf{X}_{m})}\right)(\sigma_{max}^{2}(\mathbf{Y}_{m})+\left\|\mathbf{v }\right\|_{2}^{2})+\frac{\left\|\mathbf{v}\right\|_{2}^{2}}{\sigma_{min}^{2}( \mathbf{X}_{m})}} \tag{13}\]
_with \(c\) defined in Lemma 5.6. In particular, if \(\mathbf{u}\) is orthogonal to the range of \(\mathbf{X}_{m}\), the bound is tightened to:_
\[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}\leq\sqrt{\frac{\sigma_{max}^{2}( \mathbf{Y}_{m})+\left\|\mathbf{v}\right\|_{2}^{2}}{\left\|\mathbf{u}\right\|_{ 2}^{2}}+\frac{\sigma_{max}^{2}(\mathbf{Y}_{m})+2\left\|\mathbf{v}\right\|_{2}^ {2}}{\sigma_{min}^{2}(\mathbf{X}_{m})}} \tag{14}\]
_Proof._
\[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}=\left\|\mathbf{Y}\mathbf{X}^ {\dagger}-\mathbf{Y}_{m}\mathbf{X}_{m}^{\dagger}\right\|_{2}^{2}=\left\| \mathbf{Y}\mathbf{X}^{\dagger}-\mathbf{Y}\widehat{\mathbf{X}_{m}}^{\dagger}+ \mathbf{Y}\widehat{\mathbf{X}_{m}}^{\dagger}-\widehat{\mathbf{Y}_{m}}\widehat {\mathbf{X}_{m}}^{\dagger}\right\|_{2}^{2}\]
where we define:
\[\widehat{\mathbf{X}_{m}}^{\dagger}:=\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\\ \mathbf{0}_{1\times N}\end{bmatrix}\in\mathbb{R}^{(m+1)\times N},\widehat{\mathbf{Y}_{m}}=\begin{bmatrix}\mathbf{Y}_{m}&\mathbf{0}_{N\times 1}\end{bmatrix}\in\mathbb{R}^{N\times(m+1)} \tag{15}\]
then by triangle inequality:
\[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}\leq\left\|\mathbf{Y}\right\|_{ 2}^{2}\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{ 2}^{2}+\left\|\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{2}^{2}\left\| \mathbf{Y}-\widehat{\mathbf{Y}_{m}}\right\|_{2}^{2}\]
where \(\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{2}\) needs to be further bounded. Using Lemma 5.6, we have:
\[\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}=-c\begin{bmatrix} \mathbf{X}_{m}^{\dagger}\mathbf{u}\\ 1\end{bmatrix}((\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})\mathbf{u})^ {T} \tag{5.16}\]
Furthermore, we have:
\[\left|\left|\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\mathbf{u}\\ 1\end{bmatrix}\right|\right|_{2}^{2}\leq 1+\frac{\left\|\mathbf{u}\right\|_{2}^{2 }}{\sigma_{min}^{2}(\mathbf{X}_{m})} \tag{5.17}\]
and as a projection matrix:
\[\left\|\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger}\right\|_{2}^{2}\leq 1 \tag{5.18}\]
Then we may conclude:
\[\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{2}^{2}\leq c^{2}\left\|\mathbf{u}\right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u}\right\|_{2}^{2}}{\sigma_{min}^{2}(\mathbf{X}_{m})}\right) \tag{5.19}\]
Putting everything together, we conclude that:
\[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}\leq c^{2}\left\|\mathbf{u} \right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u}\right\|_{2}^{2}}{\sigma_{min} ^{2}(\mathbf{X}_{m})}\right)(\sigma_{max}^{2}(\mathbf{Y}_{m})+\left\|\mathbf{ v}\right\|_{2}^{2})+\frac{\left\|\mathbf{v}\right\|_{2}^{2}}{\sigma_{min}^{2}( \mathbf{X}_{m})} \tag{5.20}\]
Under the assumption that \(\mathbf{u}\) is orthogonal to \(\mathrm{range}(\mathbf{X}_{m})\), the last conclusion follows by substituting the attained lower bound for \(c\) from Lemma 5.6.
Figure 1 provides a verification of the bound in Proposition 5.7 using random Gaussian matrices, averaged over 10 random seeds. The results obtained in Propositions 5.4 and 5.7 rely only on general linear algebra operations.
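A single-seed version of this experiment (illustrative only; the figure averages over 10 seeds) can be sketched as follows.

```python
import numpy as np

# Operator-norm change when the most recent snapshot pair is deleted,
# compared with the Proposition 5.7 bound, for random Gaussian data.
rng = np.random.default_rng(3)
N, m = 200, 40
X = rng.standard_normal((N, m + 1))
Y = rng.standard_normal((N, m + 1))
Xm, u = X[:, :m], X[:, m]
Ym, v = Y[:, :m], Y[:, m]

K = Y @ np.linalg.pinv(X)
Km = Ym @ np.linalg.pinv(Xm)
true_err = np.linalg.norm(K - Km, 2)

sx = np.linalg.svd(Xm, compute_uv=False)
sy = np.linalg.svd(Ym, compute_uv=False)
c = 1.0 / (u @ u - u @ Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T @ u))
bound = np.sqrt(c**2 * (u @ u) * (1 + (u @ u) / sx[-1]**2) * (sy[0]**2 + v @ v)
                + (v @ v) / sx[-1]**2)
print(true_err, bound)        # compare the measured perturbation with the bound
```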
With an explicit form of the dynamical system, such as the system in equation (5.7), more insight can be gained by leveraging the dependence of the time-shifted data matrix \(\mathbf{Y}\) on \(\mathbf{X}\) via the flow map \(\mathbf{\Phi}_{\Delta t}\), as we now present in the following proposition:
Figure 1: Operator norm error bound (5.7) under deletion of most recent observation, for random Gaussian data matrices. The comparison of true operator norm error and upper bounds are averaged over 10 seeds.
**Proposition 5.8**: _(Time-shift norm upper bound, for system (5.7)) Assume that \(\mathbf{C}(t)\) is diagonalizable for all \(t\), and \(\mathbf{C}(t)\), \(\mathbf{f}(t)\) are piecewise continuous on all intervals \([t_{0},t_{1}],\ldots,[t_{m-1},t_{m}]\). Then the norm of \(\mathbf{Y}\) is related to the norm of \(\mathbf{X}\) as follows, with \(f,\gamma\) defined in equation (5.29):_
\[\left\|\mathbf{Y}\right\|_{2}\leq\exp\left(\frac{1}{2}\gamma^{2}\Delta t \right)\sqrt{\frac{mf^{2}}{\gamma^{2}}+\sum_{i=1}^{m}\sigma_{i}^{2}(\mathbf{ X})}\]
Proof: For convenience of notation, define the time-dependent matrix:
\[\mathbf{M}_{\Delta t}^{(i)}=\mathbf{M}_{\Delta t}(t_{i}):=\exp\bigg{(}\int_{t _{i}}^{t_{i}+\Delta t}\mathbf{C}(s)ds\bigg{)} \tag{5.21}\]
and the time-dependent vector:
\[\mathbf{g}_{\Delta t}^{(i)}=\mathbf{g}_{\Delta t}(t_{i})=\int_{t_{i}}^{t_{i} +\Delta t}\exp\bigg{(}\int_{s}^{t_{i}+\Delta t}\mathbf{C}(\tau)d\tau\bigg{)} \mathbf{f}(s)ds \tag{5.22}\]
then we have, by the explicit flow map (5.8), that:
\[\mathbf{v}=\mathbf{M}_{\Delta t}^{(m+1)}\mathbf{u}+\mathbf{g}_{\Delta t}^{(m +1)} \tag{5.23}\]
Iteratively applying the recurrence (5.23) to \(\mathbf{Y}\), we have the explicit dependence for each column \(1\leq i\leq m\):
\[\mathbf{y}_{i}=\mathbf{M}_{\Delta t}^{(i)}\mathbf{x}_{i}+\mathbf{g}_{\Delta t} ^{(i)} \tag{5.24}\]
and therefore:
\[\mathbf{Y}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{M}_{\Delta t}^{(1)}\mathbf{x}_{1}+\mathbf{g}_{\Delta t}^{(1)}&\mathbf{M}_{\Delta t}^{(2)}\mathbf{x}_{2}+\mathbf{g}_{\Delta t}^{(2)}&\cdots&\mathbf{M}_{\Delta t}^{(m)}\mathbf{x}_{m}+\mathbf{g}_{\Delta t}^{(m)}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix} \tag{5.25}\]
For each sub-interval, let \(\gamma_{i}\) be such that
\[\left\|\exp\bigg{(}\int_{s}^{t}\mathbf{C}(\tau)d\tau\bigg{)}\right\|_{2}^{2}\leq\exp\big{(}\gamma_{i}^{2}(t-s)\big{)},\qquad t_{i}\leq s\leq t\leq t_{i+1},\]
so that, in particular, \(\big{\|}\mathbf{M}_{\Delta t}^{(i)}\big{\|}_{2}^{2}\leq\exp(\gamma_{i}^{2}\Delta t)\).
Under the assumption of piecewise continuity on each \([t_{i},t_{i+1}]\), the attainability of \(\gamma_{i}\) is given by considering the spectra of \(\mathbf{C}(t)\) as a continuous map of time [19, 5]. Furthermore,
\[\left\|\mathbf{g}_{\Delta t}^{(i)}\right\|_{2}^{2}=\left\|\int_{t_{i}}^{t_{i}+ \Delta t}\exp\bigg{(}\int_{s}^{t_{i}+\Delta t}\mathbf{C}(\tau)d\tau\bigg{)} \mathbf{f}(s)ds\right\|_{2}^{2}\]
\[\leq f_{i}^{2}\int_{t_{i}}^{t_{i}+\Delta t}\exp\bigg{(}\gamma_{i}^{2}(t_{i}+ \Delta t-s)\bigg{)}ds=\frac{f_{i}^{2}}{\gamma_{i}^{2}}(\exp(\gamma_{i}^{2} \Delta t)-1)\]
where we define:
\[f_{i}:=\max_{t_{i}\leq s\leq t_{i+1}}\left\|\mathbf{f}(s)\right\|_{2} \tag{5.28}\]
which is attainable due to the piecewise continuous assumption of \(\mathbf{f}(t)\).
Finally, define:
\[\gamma:=\max_{1\leq i\leq m}\gamma_{i},f:=\max_{1\leq i\leq m}f_{i} \tag{5.29}\]
We conclude the following result as desired:
\[\left\|\mathbf{Y}\right\|_{2}^{2}\leq\exp(\gamma^{2}\Delta t)\sum_{i=1}^{m}\sigma_{i}^{2}(\mathbf{X})+\frac{mf^{2}}{\gamma^{2}}(\exp(\gamma^{2}\Delta t)-1)\leq\exp(\gamma^{2}\Delta t)\bigg{(}\frac{mf^{2}}{\gamma^{2}}+\sum_{i=1}^{m}\sigma_{i}^{2}(\mathbf{X})\bigg{)}\]
_Remark 5.9_.: In the special case where \(\mathbf{C}(t)\equiv\mathbf{C}\) with eigendecomposition \(\mathbf{C}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\), largest eigenvalue \(\lambda_{1}\), and \(\mathbf{f}\equiv\mathbf{0}\), the solution has the form:
\[\mathbf{x}(t)=\mathbf{Q}\exp(t\mathbf{\Lambda})\mathbf{Q}^{-1}\mathbf{x}_{0} \tag{5.30}\]
Under the same conditions, the upper bound in Proposition 5.8 can be tightened to:
\[\left\|\mathbf{Y}\right\|_{2}\leq\kappa_{2}(\mathbf{Q})\exp\big{(}\lambda_{1} \Delta t\big{)}\sigma_{max}(\mathbf{X})\]
where \(\kappa_{2}(\cdot)\) denotes the 2-norm condition number.
We provide a verification of the upper bounds in Proposition 5.8 in Figure 2 using the time-varying linear system of [60], Example 5.2:
\[\frac{d\mathbf{x}(t)}{dt}=\mathbf{C}(t)\mathbf{x}(t) \tag{5.31}\] \[\mathbf{x}(0)=[1,0]^{T}\]
where:
\[\mathbf{C}(t)=\begin{bmatrix}0&1+\epsilon t\\ -1-\epsilon t&0\end{bmatrix} \tag{5.32}\]
with \(\epsilon=0.1\) on the temporal domain \(t\in[0,1]\), with \(\Delta t=10^{-3}\). Furthermore, we also provide the upper bounds for the two advection-dominated examples with the
Figure 2: Time shift data matrix 2 norm upper bounds (5.8) compared to actual 2 norms, with respect to number of collected snapshots. Top: linear system (5.31) with \(N=2,\Delta t=10^{-3}\). Middle: time-varying advection in 1d (6.7) with \(N=400,\Delta t=0.01\). Bottom: time-varying advection-diffusion in 2d (6.9) with \(N=2500\) and \(\Delta t=0.01\).
parameter setups described in Section 6.2 and Section 6.3. In particular, the example system (5.7) is especially useful for the consideration of numerical solutions to the linear PDE (1), where the matrix \(\mathbf{C}(t)\) may be seen as the finite difference or finite element stiffness matrix with time-varying coefficients, and \(\mathbf{f}(t)\) as the inhomogeneous source term.
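A minimal sketch of the first of these verifications is given below. Because \(\mathbf{C}(t)\) in (5.31) is skew-symmetric, the exact propagators are rotations, so the \(f\equiv 0\), \(\gamma\to 0\) limit of the Proposition 5.8 bound is compared against \(\left\|\mathbf{Y}\right\|_{2}\); the forward-Euler time stepping is an assumption of the sketch.

```python
import numpy as np

# Simulate (5.31)-(5.32), collect snapshots, and compare ||Y||_2 with the
# f = 0 limit of the Proposition 5.8 bound.
eps, dt, steps = 0.1, 1e-3, 1000
x = np.array([1.0, 0.0])
snaps = [x.copy()]
for i in range(steps):
    t = i * dt
    C = np.array([[0.0, 1.0 + eps * t], [-(1.0 + eps * t), 0.0]])
    x = x + dt * (C @ x)            # forward Euler time stepping, for illustration
    snaps.append(x.copy())

S = np.column_stack(snaps)
X, Y = S[:, :-1], S[:, 1:]
sigma = np.linalg.svd(X, compute_uv=False)
print(np.linalg.norm(Y, 2), np.sqrt(np.sum(sigma**2)))   # measured norm vs. bound
```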
The interpretation of the perturbation results above is twofold. In cases where the learning is agnostic of the underlying physics (i.e., data are only available as images and the underlying system is unknown), such as the cases considered in [20], perturbations in the DMD operator can only be estimated from perturbations in the collected data snapshots alone. However, with additional information about the underlying system, such as (5.7), one may incorporate physical knowledge and refine the bound by considering the columns of \(\mathbf{X},\mathbf{Y}\) as ordered time-shifts of the initial condition along the flow map. Nevertheless, both results serve as a priori estimates of the operator norm perturbation that help guide the selection of hyperparameters in DMD algorithms.
## 6 Numerical Experiments
In the following numerical examples, we test the accuracy of Algorithm 1 for a variety of time-varying advection phenomena. In particular, for advection-dominated linear conservation laws (Sections 6.2 and 6.3), we make the procedure fully data-driven by assuming that the advection velocity in equation (1) is unknown and is estimated by tracking the trajectory of the mode.
Given a temporal discretization \(0=t_{0}<t_{1}<\ldots,<t_{n}=t_{f}\), we measure the performance of DMD algorithms via relative prediction error defined as the following:
\[\epsilon(t)=\frac{\left\|\mathbf{u}_{\mathrm{DMD}}(t)-\mathbf{u}(t)\right\|_{2}}{\left\|\mathbf{u}(t)\right\|_{2}} \tag{6.1}\]
where \(\mathbf{u}\) and \(\mathbf{u}_{\mathrm{DMD}}\) are, respectively, the exact solution and the DMD prediction at time \(t\), with the error computed in the \(L^{2}(\mathbb{R}^{d})\) sense. To construct the reduced-order model in each experiment, an SVD and projection onto POD modes are applied at a prespecified rank determined from a relative accuracy tolerance. The exact setup of each numerical simulation is reported separately.
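Two small helpers used implicitly throughout this section are sketched below: the relative error (6.1) and one reasonable reading of tolerance-based rank selection (the exact truncation rule used for the reported results may differ).

```python
import numpy as np

def relative_error(u_dmd, u_ref):
    """Relative prediction error (6.1) at a single time instant."""
    return np.linalg.norm(u_dmd - u_ref) / np.linalg.norm(u_ref)

def truncation_rank(X, tol):
    """Smallest rank whose discarded singular values carry a relative energy
    below `tol` (an illustrative reading of the truncation level epsilon)."""
    s = np.linalg.svd(X, compute_uv=False)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1] / np.linalg.norm(s)
    ranks = np.nonzero(tail <= tol)[0]
    return int(ranks[0]) if ranks.size else len(s)
```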
We first consider the Navier-Stokes equations to test the accuracy of the base time-varying DMD algorithm in reconstructing complex and nonlinear dynamics without Lagrangian information, presented in Section 6.1. For each experiment of Section 6.2 and 6.3, we compare four different strategies: the standard DMD and time-varying DMD using only \(\mathbf{u}(t)\) as observables, the physics-aware DMD in Section 3.2 without recomputations, and Algorithm 1.
### Incompressible Navier-Stokes equations
We consider the flow field of a two-dimensional incompressible fluid with density \(\rho=1\) and dynamic viscosity \(\nu=1/600\) (kg/(m\(\cdot\)s)). With a rectangular domain \(\mathcal{D}=[0,2]\times[0,1]\), the fluid enters from the left boundary with fixed velocity and flows around an impermeable cylinder centered at \(\mathbf{x}_{\mathrm{circ}}=[0.3,0.5]^{T}\). The dynamics of fluid pressure \(p(t,\mathbf{x})\), horizontal velocity component \(u(t,\mathbf{x})\) and vertical velocity component \(v(t,\mathbf{x})\) follow the Navier-Stokes (NS) equations:
\[\begin{cases}\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{ \partial u}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu\bigg{(} \frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}} \bigg{)}\\ \frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{ \partial v}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial y}+\nu \bigg{(}\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y ^{2}}\bigg{)}\\ \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0\end{cases} \tag{10}\]
subject to the following initial-boundary conditions:
\[p(t,2,y)=0,\frac{\partial p}{\partial\mathbf{n}}\big{|}_{\partial\mathcal{D} \setminus\{x=2\}}=0,\frac{\partial u(t,2,y)}{\partial\mathbf{n}}=0,\frac{ \partial v(t,2,y)}{\partial\mathbf{n}}=0 \tag{11}\]
\[u(t,0,y)=1,v(t,0,y)=0, \tag{12}\]
\[u(t,x,0)=u(t,x,1)=0,v(t,x,0)=v(t,x,1)=0 \tag{13}\]
We define the quantity of interest as the magnitude of our velocity field:
\[w(t,x,y):=\sqrt{u(t,x,y)^{2}+v(t,x,y)^{2}} \tag{14}\]
and simulate the nonlinear system (10) with a custom MATLAB library. The problem is solved in conservative form with a finite difference method on a staggered grid [16], with discretization levels \(\Delta x=\Delta y=0.02\), and time step size \(\Delta t=0.001\).
Under this setting, we focus on reconstructing the dynamics during the formation of the vortex street on the time domain \(t\in[0,3.0]\), yielding effective state dimension \(N=5000\) and \(m=3000\) snapshots. For each DMD strategy, we set the SVD truncation level to \(\epsilon=1.0\times 10^{-2}\). Figures 3 and 4 show a comparison of predicted solutions between standard DMD and time-varying DMD, along with their relative \(L^{2}\)-errors from the reference numerical solution. As expected, standard DMD imposes an invariant-manifold assumption and yields an inaccurate reduced-order model under rapid time changes. The time-varying DMD more accurately represents the solution by updating the operator on different time intervals. Finally, we visualize the dominant frequency variations over the time domain \([0,2.5]\) and observe that standard DMD begins to accumulate errors after \(t=0.05\), failing to capture the rapid frequency changes.
### 1d time-varying advection
As a test problem for a comprehensive comparison of standard DMD, time-varying DMD (without Lagrangian information), physics-aware DMD (without temporal updates), and time-varying DMD with Lagrangian moving grid information, we consider the following conservation law under pure advection (\(D\equiv 0\)):
\[\begin{cases}\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}[c\sin(\omega t)u]=0\\ u(0,x)=u_{0}(x)=\exp(-0.5x^{2})\end{cases} \tag{6.7}\]
where we choose the advection speed \(c\equiv 2\) and frequency \(\omega\equiv\pi/2\). The snapshots are simulated using an upwind numerical scheme on a temporal grid of \(t\in[0,8]\) with discretization \(\Delta t=0.01\), yielding \(m=800\) training data points. The spatial grid is
taken to be \(x\in[-10,10]\) with discretization \(\Delta x=0.05\). By construction, the initial concentration \(u_{0}\) does not change shape, and is advected in an oscillatory manner over time. As a fully data-driven model, we consider estimating the velocity as a function of time directly from observations. Figure 5 shows a visualization of the advection velocity as a function of time, estimated from tracking the mode of the solution, defined by viewing the conserved solution \(u\) as a density, and computing the average:
\[\overline{x}(t):=\frac{1}{\int_{x_{l}}^{x_{r}}u(t,x)dx}\int_{x_{l}}^{x_{r}}xu(t,x)dx \tag{6.8}\]
where for (6.8), \(x_{l}=-10,x_{r}=10\). Then the estimated velocity can be computed using a centered difference of \(\overline{x}(t)\) at discrete time points, which is then used as an approximation to the velocity in the Lagrangian reference frame of (6.7).
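A sketch of this mode-tracking velocity estimate (illustrative; the quadrature and differencing choices are assumptions of the sketch) is:

```python
import numpy as np

def estimated_velocity(snapshots, x, dt):
    """Track the mean position (6.8) of the conserved profile and differentiate
    in time.  `snapshots` has one column per time step; `x` is the spatial grid."""
    mass = np.trapz(snapshots, x, axis=0)
    xbar = np.trapz(x[:, None] * snapshots, x, axis=0) / mass
    v = np.gradient(xbar, dt)        # centered differences in the interior
    return xbar, v
```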
Figure 4: Reconstructed velocity magnitudes for the 2d Navier-Stokes equations (10) at time steps \(t=0.15,0.25,0.5\). Top row: reference solution from high-fidelity simulation. Middle row: standard DMD predictions. Bottom row: time-varying DMD predictions (\(r=50\)).
Figure 3: Left: comparison of standard DMD and time-varying DMD in terms of prediction relative errors. Middle: real part of top 3 dominant frequencies, computed from time-varying DMD modes, as a function of time. Right: imaginary part of top 3 dominant frequencies as a function of time.
We present the predicted solutions, compared with the reference numerical solution, at time steps \(t=0,\pi/4,\pi/2,\pi\). In this experiment, we set the tolerance for SVD truncation for all DMD strategies to be \(\epsilon=10^{-6}\). Furthermore, for time-varying DMD strategies, the size of the subintervals is chosen to be \(r=5\).
Figures 6, 7, 8, and 9 show the behavior of predicted solutions under different DMD strategies. The relative errors are plotted on a log scale in Figure 10. In particular, we observe increased error fluctuations for the time-homogeneous DMD strategies (i.e. standard DMD and physics-aware DMD) in regions of high advection speed. The advection of the solution mode is also not captured. This is to be expected, as standard DMD and physics-aware DMD are assumed to be constant in time and incur larger errors where the time dependence is stronger. In the case of the time-varying DMD without Lagrangian information, we observe that the modal information is captured and advects through time. However, unphysical oscillations are still present. Out of the tested DMD strategies, Algorithm 1 provides the most faithful reconstruction of the time-varying advection behavior.
### Advection-dominated Equation in 2d
We consider a two-dimensional linear advection-diffusion equation with time-varying velocity components, defined on the spatio-temporal domain: \((t,x,y)\in[0,10]\times[-10,10]\times[-10,10]\).
\[\begin{cases}\frac{\partial u}{\partial t}+v_{x}(t)\frac{\partial u}{\partial x}+v_{y}(t)\frac{\partial u}{\partial y}=D\bigg{(}\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\bigg{)}\\ u(0,x,y)=\exp(-(x^{2}+y^{2}))\\ v_{x}(t)=\frac{1}{2}\cos(t),v_{y}=-\frac{2}{5}\sin(t),D\equiv 0.001\end{cases} \tag{6.9}\]
In this example, we let the number of spatial nodes in each direction be \(N_{x}=N_{y}=50\). The temporal domain is discretized with a step size of \(\Delta t=0.01\). The PDE is numerically solved using a modified centered-time, centered-space method (the DuFort-Frankel method) presented in [23]. The above discretization yields state dimension \(N=2500\) and number of snapshots \(M=1000\).
Similar to the 1-dimensional problem (6.7), the advection velocity can be estimated in a fully data-driven manner by tracking the mode of the solution snapshots
Figure 5: Estimated advection velocity for (6.7) by tracking the mode of numerical solutions on the time domain \([0,10]\).
by defining, analogously to (6.8):
\[\mathfrak{X}(t):=\begin{bmatrix}\overline{x}(t)\\ \overline{y}(t)\end{bmatrix}=\frac{1}{\int_{x_{l}}^{x_{r}}\int_{y_{b}}^{y_{t}}u(t,x,y)dxdy}\int_{x_{l}}^{x_{r}}\int_{y_{b}}^{y_{t}}\begin{bmatrix}x\\ y\end{bmatrix}\cdot u(t,x,y)dydx \tag{6.10}\]
and numerically differentiating in time with centered differences. We visualize the predicted solutions for three of the DMD strategies in Figures 11 and 12, corresponding respectively to the standard DMD, physics-aware DMD, and time-varying DMD with Lagrangian moving grid, constructed with a subinterval size \(r=30\). We predict the solutions up to \(t=8\) and compare with the baseline numerical solution. Finally, the prediction errors (6.1) for all four DMD strategies are presented in Figure 13. Due to the presence of small diffusion, a time-varying DMD strategy without the Lagrangian moving grid is able to achieve accuracy comparable to that with Lagrangian information. The standard DMD shows significant degradation in accuracy over time. The physics-aware DMD and the time-varying DMD without Lagrangian information still possess model misspecification that results in a growth of error over time, albeit at a reduced rate compared to standard DMD. In contrast, the results given by Algorithm 4.1 show controlled error growth, similar to the case observed in Section 6.2.
## 7 Conclusions
In this work, we investigated a method for learning time-dependent advection-dominated phenomena using DMD algorithms. In particular, when the PDE parameters vary in time, we demonstrated that the characteristic lines of the PDE are an important observable to include in order to improve the accuracy of reconstructions, as verified with 1d and 2d advection-diffusion equations with time-varying coefficients. We further provided a prediction error guarantee for the time-dependent approximation to the Koopman operator. In addition, we analyzed the effect of SVD truncation and number of data snapshots on operator norm error, and verified such upper bounds in both model-free and model-dependent cases. The method adopted in this work provides a possibility for real-time control in advection-dominated systems.
One of the possible future directions concerns the identification of closures for characterizing the time-evolution of a quantity of interest that depends on the states of another dynamical system [15]. Instead of relying on an equation-free model, deriving and learning explicit forms of the reduced-order dynamics provides a principled analysis tool for uncertainty propagation and control system design, as well as extrapolation capabilities. Furthermore, we briefly investigated the possibility of a fully data-driven model by assuming the advection coefficients are unknown and estimated by mode-tracking. Although such a method is effective in capturing the macroscopic behavior, it is far from being sufficient for velocities that have nonlinear dependence
in both spatial variables and the solution itself. Future explorations will focus on parameterizations for the advection and diffusion coefficients, which are identified simultaneously as the optimal linear operator is constructed. Such a scenario can be potentially considered in a constrained optimization [46] or Bayesian inversion setting [25]. Reduction of computational complexity is another possible path of future exploration due to the curse of dimensionality for advection-dominated problems associated with moderate- to high-dimensional datasets. An added layer of dimensionality reduction must be adopted in such cases where storing and operating with data snapshots and the Lagrangian moving grid are intractable. A potential solution in the DMD setting is by using low-rank tensor-networks to approximate multidimensional linear operators [27, 17].

Figure 7: 1d time-varying advection: time-varying DMD predictions (\(r=5\), without Lagrangian grid), at \(t=0\), \(t=\pi/4\), \(t=\pi/2\), \(t=\pi\).
## Acknowledgments
We would like to thank Dr. Hannah Lu and Dr. Tyler Maltba for useful discussions and manuscript review. The research was partially supported by the Air Force Office of Scientific Research under grant FA9550-21-1-0381, by the National Science Foundation under award 2100927, by the Office of Advanced Scientific Computing Research (ASCR) within the Department of Energy Office of Science under award number DE-SC0023163, and by the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense under award RC22-3278.
Figure 8: 1d time-varying advection: physics-aware DMD predictions at \(t=0\), \(t=\pi/4\), \(t=\pi/2\), \(t=\pi\). | Deriving descriptive quantities of physical phenomena from observational or simulated data, rather than from first principles, is a rapidly developing area of research. Two factors complicate this task: time dependence of the inputs and hidden translation invariance. To overcome these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The first of these components provides the best linear estimator of the system's dynamics, while the latter handles the system's nonlinearity and nonautonomy. We provide theoretical estimates of prediction accuracy and perturbation error to guide the choice of rank truncation and temporal discretization. We verify the performance of our method on several nonautonomous problems, including the two-dimensional Navier-Stokes equations.
2309.15304 | On $2$-superirreducible polynomials over finite fields | We investigate $k$-superirreducible polynomials, by which we mean irreducible
polynomials that remain irreducible under any polynomial substitution of
positive degree at most $k$. Let $\mathbb F$ be a finite field of
characteristic $p$. We show that no $2$-superirreducible polynomials exist in
$\mathbb F[t]$ when $p=2$ and that no such polynomials of odd degree exist when
$p$ is odd. We address the remaining case in which $p$ is odd and the
polynomials have even degree by giving an explicit formula for the number of
monic 2-superirreducible polynomials having even degree $d$. This formula is
analogous to that given by Gauss for the number of monic irreducible
polynomials of given degree over a finite field. We discuss the associated
asymptotic behaviour when either the degree of the polynomial or the size of
the finite field tends to infinity. | Jonathan W. Bober, Lara Du, Dan Fretwell, Gene S. Kopp, Trevor D. Wooley | 2023-09-26T23:06:54 | http://arxiv.org/abs/2309.15304v3 | # On \(2\)-superirreducible polynomials over finite fields
###### Abstract.
We investigate \(k\)-superirreducible polynomials, by which we mean irreducible polynomials that remain irreducible under any polynomial substitution of positive degree at most \(k\). Let \(\mathbb{F}\) be a finite field of characteristic \(p\). We show that no \(2\)-superirreducible polynomials exist in \(\mathbb{F}[t]\) when \(p=2\) and that no such polynomials of odd degree exist when \(p\) is odd. We address the remaining case in which \(p\) is odd and the polynomials have even degree by giving an explicit formula for the number of monic \(2\)-superirreducible polynomials having even degree \(d\). This formula is analogous to that given by Gauss for the number of monic irreducible polynomials of given degree over a finite field. We discuss the associated asymptotic behaviour when either the degree of the polynomial or the size of the finite field tends to infinity.
Key words and phrases: Irreducibility, finite fields, polynomial compositions
2020 Mathematics Subject Classification: 11T06, 12E05, 11S05
The fourth author is supported by NSF grant DMS-2302514. The fifth author is supported by NSF grants DMS-1854398 and DMS-2001549. The first, third, and fourth authors are also supported by the Heilbronn Institute for Mathematical Research.
Superirreducibility has in fact been studied in the past, although not by name. Strengthening the above pedestrian observation concerning \(f(t+f(t))\), it follows from work of Schinzel [6, Lemma 10] that a polynomial of degree \(d\geq 3\) lying in \(\mathbb{Q}[t]\) cannot be \((d-1)\)-superirreducible. More recently, Bober et al. [1] have considered superirreducibility as a potential limitation to the understanding of smooth integral values of polynomials. These authors show, inter alia, that \(2\)-superirreducible polynomials exist in \(\mathbb{Q}[t]\) having degree \(6\) (see [1, §6]). Moreover, in work contemporaneous with that reported on herein, Du [3, Theorem 1.3] (see also [2]) has exhibited \(2\)-superirreducible polynomials in \(\mathbb{Q}[t]\) of degree \(4\), such as the simple example \(t^{4}+2\).
With a potential local-global principle in mind, it might be expected that insights into the superirreducibility of polynomials over \(\mathbb{Z}\) and over \(\mathbb{Q}\) might be gained by examining corresponding superirreducibility properties over the \(p\)-adic integers \(\mathbb{Z}_{p}\) and \(p\)-adic numbers \(\mathbb{Q}_{p}\). Such considerations lead in turn to an investigation of the superirreducibility of polynomials over finite fields. We finish our paper by disappointing the reader in §4 with the news that if \(k\geq 2\) and \(p\) is any prime number, then \(k\)-superirreducible polynomials exist over neither \(\mathbb{Z}_{p}\) nor \(\mathbb{Q}_{p}\).
## 2. Basic lemmas
In this section we prove the basic lemmas that provide the infrastructure for our subsequent discussions concerning superirreducibility. Recall the definition of \(k\)-superirreducibility provided in our opening paragraph. We begin by expanding on the observation that there are no weakly \(k\)-superirreducible polynomials of degree \(d\) satisfying \(2\leq d\leq k\).
**Lemma 2.1**.: _Let \(R\) be a commutative domain with unity, and let \(f\in R[t]\) be a polynomial of degree \(d\geq 2\). Then \(f(t)\) is not weakly \(k\)-superirreducible for any \(k\geq d\)._
Proof.: For each non-negative integer \(r\), consider the degree \(d+r\) substitution \(g(t)=t+t^{r}f(t)\). We have
\[f(g(t))=f(t+t^{r}f(t))\equiv f(t)\equiv 0\pmod{f(t)}.\]
Thus, we see that \(f(g(t))\) is divisible by \(f(t)\), and it is hence reducible. It follows that \(f\) is not weakly \(k\)-superirreducible for \(k\geq d\).
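By way of illustration, consider the case \(R=\mathbb{Z}\), \(f(t)=t^{2}+1\) and \(r=0\), so that \(g(t)=t+f(t)=t^{2}+t+1\). Then
\[f(g(t))=(t^{2}+t+1)^{2}+1=t^{4}+2t^{3}+3t^{2}+2t+2=(t^{2}+1)(t^{2}+2t+2),\]
exhibiting explicitly the factor \(f(t)\) produced by the construction, so that \(f\) is not weakly \(2\)-superirreducible.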
The next lemma is a mild generalization of [1, Proposition 3.1] to arbitrary fields. The latter proposition is restricted to the rational field \(\mathbb{Q}\), and we would be remiss were we not to record that Schinzel [7, Theorem 22] attributes this conclusion to Capelli.
**Lemma 2.2**.: _Let \(K\) be a field. Suppose that \(f(x)\in K[x]\) is a monic irreducible polynomial, let \(\alpha\) be a root of \(f\) lying in a splitting field extension for \(f\) over \(K\), and put \(L=K(\alpha)\). Then, for any non-constant polynomial \(g(t)\in K[t]\), the polynomial \(f(g(t))\) is reducible in \(K[t]\) if and only if \(g(t)-\alpha\) is reducible in \(L[t]\)._
Proof.: We consider the \(K\)-algebra \(A=K[x,t]/(f(x),g(t)-x)\) from two perspectives. First, on noting that \(f(x)\) is irreducible over \(K[x]\), we find that \(K[x]/(f(x))\cong K[\alpha]=K(\alpha)=L\). Thus, on the one hand,
\[A\cong\frac{K[x,t]/(f(x))}{(g(t)-x)}\cong L[t]/(g(t)-\alpha).\]
Here, of course, we view \((g(t)-x)\) as being an ideal in \(K[x,t]/(f(x))\). On the other hand, similarly,
\[A\cong\frac{K[x,t]/(g(t)-x)}{(f(x))}\cong K[t]/(f(g(t))).\]
Thus, we obtain a \(K\)-algebra isomorphism
\[K[t]/(f(g(t)))\cong L[t]/(g(t)-\alpha). \tag{2.1}\]
Hence \(K[t]/(f(g(t)))\) is a field if and only if \(L[t]/(g(t)-\alpha)\) is a field, and thus \(f(g(t))\) is irreducible in \(K[t]\) if and only if \(g(t)-\alpha\) is irreducible in \(L[t]\). The desired conclusion follows.
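As a small illustration of the lemma in action, take \(K=\mathbb{F}_{3}\), \(f(x)=x^{2}+1\) with root \(\alpha\in L=\mathbb{F}_{9}\), and \(g(t)=t^{2}\). Since \((1-\alpha)^{2}=1-2\alpha+\alpha^{2}=-2\alpha=\alpha\) in \(\mathbb{F}_{9}\), the polynomial \(g(t)-\alpha=t^{2}-\alpha\) is reducible in \(L[t]\), and correspondingly
\[f(g(t))=t^{4}+1=(t^{2}+t+2)(t^{2}+2t+2)\]
is reducible in \(\mathbb{F}_{3}[t]\), as the lemma predicts.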
We take the opportunity to record a further consequence of the relation (2.1), since it may be of use in future investigations concerning superirreducibility.
**Lemma 2.3**.: _Let \(K\) be a field. Suppose that \(f(x)\in K[x]\) is a monic irreducible polynomial, and let \(g(t)\in K[t]\) be any non-constant polynomial. Then, for any polynomial divisor \(h(t)\) of \(f(g(t))\), we have \(\deg(f)|\deg(h)\)._
Proof.: The relation (2.1) shows that \(K[t]/(f(g(t)))\) has the structure of an \(L\)-algebra. Any ring quotient of an \(L\)-algebra is still an \(L\)-algebra. Thus, we see that \(K[t]/(h(t))\) is an \(L\)-algebra, and in particular a vector space over \(L\). Consequently, one has
\[\deg(h)=\dim_{K}K[t]/(h(t))=[L:K]\left(\dim_{L}K[t]/(h(t))\right)=\deg(f) \left(\dim_{L}K[t]/(h(t))\right),\]
and thus \(\deg(f)|\deg(h)\).
We also provide a trivial lemma explaining the relationship between our definitions of superirreducibility and weak superirreducibility for different values of \(k\).
**Lemma 2.4**.: _Let \(R\) be a commutative domain with unity, and let \(f(x)\in R[x]\) and \(k\in\mathbb{N}\). The polynomial \(f(x)\) is \(k\)-superirreducible if and only if it is weakly \(\ell\)-superirreducible for all natural numbers \(\ell\leq k\). The polynomial \(f(x)\) is weakly \(k\)-superirreducible if and only if it is weakly \(\ell\)-superirreducible for all natural numbers \(\ell\) dividing \(k\)._
Proof.: All of the implications follow formally from the definitions except for the statement that, if \(f(x)\) is weakly \(k\)-superirreducible and \(\ell|k\), then \(f(x)\) is weakly \(\ell\)-superirreducible. To prove this, write \(k=\ell m\) and consider a polynomial \(g(t)\) of degree \(\ell\). Since \(g(t^{m})\) has degree \(\ell m=k\), the substitution \(f(g(t^{m}))\) is irreducible, and hence so is \(f(g(t))\), for any non-trivial factorisation of \(f(g(t))\) would yield a non-trivial factorisation of \(f(g(t^{m}))\) on substituting \(t^{m}\) for \(t\).
It follows that "\(2\)-superirreducible" and "weakly \(2\)-superirreducible" are synonyms.
## 3. Counting \(2\)-superirreducible polynomials over finite fields
Recall that when \(1\leq k<d\), we write \(s_{k}(q,d)\) for the number of monic weakly \(k\)-superirreducible polynomials lying in \(\mathbb{F}_{q}[t]\) having degree \(d\). In particular, \(s_{2}(q,d)\) is the number of monic \(2\)-superirreducible polynomials in \(\mathbb{F}_{q}[t]\) having degree \(d\), because "\(2\)-superirreducible" and "weakly \(2\)-superirreducible" are equivalent conditions by Lemma 2.4. Our goal in this section is to establish formulae for \(s_{2}(q,d)\) that deliver the conclusions recorded in Theorem 1.1.
### Elementary cases
We begin by confirming that when \(q\) is a power of \(2\), and also when \(d\) is odd, one has \(s_{2}(q,d)=0\). In fact, rather more is true, as we now demonstrate.
**Proposition 3.1**.: _Let \(p\) be a prime. Then for all natural numbers \(\ell\) and \(d\), one has \(s_{p}(p^{\ell},d)=0\)._
Proof.: Consider a polynomial \(f\in\mathbb{F}_{p^{\ell}}[t]\) having degree \(d\). Write \(f(x)=\sum_{j=0}^{d}a_{j}x^{j}\), and note that \(a_{j}=a_{j}^{p^{\ell}}\) for each index \(j\). Thus, we have
\[f(t^{p})=\sum_{j=0}^{d}a_{j}^{p^{\ell}}t^{pj}=\left(\sum_{j=0}^{d}a_{j}^{p^{\ell -1}}t^{j}\right)^{p},\]
and it follows that \(f(x)\) is not weakly \(p\)-superirreducible. Consequently, one has \(s_{p}(p^{\ell},d)=0\).
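The simplest instance of this phenomenon is already instructive: over \(\mathbb{F}_{2}\) the polynomial \(f(x)=x^{2}+x+1\) is irreducible, yet
\[f(t^{2})=t^{4}+t^{2}+1=(t^{2}+t+1)^{2},\]
so \(f\) fails to be weakly \(2\)-superirreducible, in accordance with the proposition.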
The special case \(p=2\) of Proposition 3.1 shows that \(s_{2}(q,d)=0\) when \(q\) is a power of \(2\). Next, we turn to polynomials of odd degree over \(\mathbb{F}_{q}\).
**Proposition 3.2**.: _When \(d\) is an odd natural number, one has \(s_{2}(q,d)=0\)._
Proof.: In view of the case \(p=2\) of Proposition 3.1, there is no loss of generality in assuming that \(q\) is odd. Let \(f(x)\in\mathbb{F}_{q}[x]\) be a monic irreducible polynomial of degree \(d\). The polynomial \(f\) has a root \(\alpha\) lying in \(\mathbb{F}_{q^{d}}\), and \(\mathbb{F}_{q^{d}}=\mathbb{F}_{q}(\alpha)\). By virtue of Lemma 2.2, if we are able to find a quadratic polynomial \(g(t)\in\mathbb{F}_{q}[t]\) having the property that \(g(t)-\alpha\) has a root in \(\mathbb{F}_{q^{d}}\), then we may infer that \(f(g(t))\) is reducible. This will confirm that \(f(x)\) is not \(2\)-superirreducible, delivering the desired conclusion.
We may divide into two cases:
1. Suppose first that \(\alpha=\beta^{2}\) for some \(\beta\in\mathbb{F}_{q^{d}}\). Then we put \(g(t)=t^{2}\), and observe that the polynomial \(g(t)-\alpha\) has the root \(\beta\in\mathbb{F}_{q^{d}}\).
2. In the remaining cases, we may suppose that \(\alpha\) is not the square of any element of \(\mathbb{F}_{q^{d}}\). Since \(q\) is odd, there exists an element \(b\in\mathbb{F}_{q}\) which is not the square of any element of \(\mathbb{F}_{q}\). On recalling our assumption that \(d\) is odd, we find that \(b\) is not the square of any element in \(\mathbb{F}_{q^{d}}\). Thus, since the quotient of two non-squares in \(\mathbb{F}_{q^{d}}^{\times}\) is a square, we may infer that \(b^{-1}\alpha=\beta^{2}\) for some \(\beta\in\mathbb{F}_{q^{d}}\). We now put \(g(t)=bt^{2}\) and observe that the polynomial \(g(t)-\alpha\) has the root \(\beta\in\mathbb{F}_{q^{d}}\).
In either case, our previous discussion shows that \(f(x)\) is not \(2\)-superirreducible, and this implies the desired conclusion.
The conclusion of Proposition 3.2 combines with that of Proposition 3.1 to confirm the first assertion of Theorem 1.1. These cases of Theorem 1.1 help to explain the example noted in the introduction demonstrating that weak \((k-1)\)-superirreducibility is not necessarily inherited from the corresponding property of weak \(k\)-superirreducibility. Expanding a little on that example, we observe that by making use of commonly available computer algebra packages, one finds the following examples of polynomials weakly \(3\)-superirreducible over \(\mathbb{F}_{2}[x]\) yet not \(2\)-superirreducible over \(\mathbb{F}_{2}[x]\):
\[x^{6}+x^{5}+x^{3}+x^{2}+1,\] \[x^{8}+x^{6}+x^{5}+x^{3}+1,\] \[x^{10}+x^{9}+x^{7}+x^{2}+1,\] \[x^{10}+x^{9}+x^{8}+x^{4}+x^{3}+x^{2}+1,\] \[x^{10}+x^{9}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+1.\]
In each of these examples of a polynomial \(f\in\mathbb{F}_{2}[x]\), the failure of \(2\)-superirreducibility follows from Proposition 3.1. Meanwhile, a direct computation confirms that the polynomial \(f(g(t))\) is irreducible over \(\mathbb{F}_{2}[t]\) for each of the \(8\) possible monic cubic polynomials \(g(t)\) lying in \(\mathbb{F}_{2}[t]\). No
analogous odd degree examples are available, of course, by virtue of Proposition 3.2, though examples of larger even degrees are not too difficult to identify.
### Heuristics
We next address the problem of determining a formula for the number \(s_{k}(q,d)\) of monic weakly \(k\)-superirreducible polynomials of degree \(d\) over \(\mathbb{F}_{q}\). The simplest situation here with \(k=1\) is completely resolved by celebrated work of Gauss, since \(1\)-superirreducibility is equivalent to irreducibility. Thus, as is well-known, it follows from Gauss [4, page 602] that
\[s_{1}(q,d)=\frac{1}{d}\sum_{e|d}\mu\bigg{(}\frac{d}{e}\bigg{)}q^{e},\]
whence, as \(d\to\infty\), one has the asymptotic formula
\[s_{1}(q,d)=\frac{q^{d}}{d}+O\bigg{(}\frac{1}{d}q^{d/2}\bigg{)}.\]
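By way of illustration, when \(q=3\) and \(d=4\) the divisors \(e\in\{1,2,4\}\) of \(d\) contribute with \(\mu(4)=0\), \(\mu(2)=-1\) and \(\mu(1)=1\), whence
\[s_{1}(3,4)=\tfrac{1}{4}\big{(}3^{4}-3^{2}\big{)}=18,\]
the familiar count of monic irreducible quartic polynomials over \(\mathbb{F}_{3}\).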
The corresponding situation with \(k\geq 2\) is more subtle. We now motivate our proof of an asymptotic formula for \(s_{2}(q,d)\) with a heuristic argument that addresses the cases remaining to be considered, namely those where \(d\) is even and \(q\) is odd. The heuristic argument is based on the following lemma, which will also be used in the proof.
**Lemma 3.3**.: _Let \(q\) be an odd prime power, and let \(f(x)\in\mathbb{F}_{q}[x]\) be a monic irreducible polynomial of even degree \(d\). Let \(\alpha\in\mathbb{F}_{q^{d}}\) be a root of \(f(x)\). The polynomial \(f(x)\) is \(2\)-superirreducible if and only if \(\alpha+c\) is not a square in \(\mathbb{F}_{q^{d}}\) for all \(c\in\mathbb{F}_{q}\)._
Proof.: As a consequence of Lemma 2.2, the polynomial \(f(x)\) is \(2\)-superirreducible in \(\mathbb{F}_{q}[x]\) if and only if \(g(t)-\alpha\) is irreducible in \(\mathbb{F}_{q^{d}}[t]\) for all quadratic polynomials \(g\in\mathbb{F}_{q}[t]\). Since this condition is invariant under all additive shifts mapping \(t\) to \(t+v\), for \(v\in\mathbb{F}_{q}\), it suffices to consider only the quadratic polynomials of the shape \(g(t)=at^{2}-b\), with \(a,b\in\mathbb{F}_{q}\). Moreover, the assumption that \(d\) is even ensures that \(a\) is a square in \(\mathbb{F}_{q^{d}}\), and hence we may restrict our attention further to polynomials of the shape \(g(t)=t^{2}-c\) with \(c\in\mathbb{F}_{q}\). So \(f(x)\) is \(2\)-superirreducible if and only if the equation \(t^{2}-c=\alpha\) has no solution in \(\mathbb{F}_{q^{d}}\) for any \(c\in\mathbb{F}_{q}\).
For heuristic purposes, we now model the behaviour of these elements \(\alpha+c\) as if they are randomly distributed throughout \(\mathbb{F}_{q^{d}}\). Since roughly half the elements of \(\mathbb{F}_{q^{d}}\) are squares, one should expect that the condition that \(\alpha+c\) is not a square is satisfied for a fixed choice of \(c\) with probability close to \(\frac{1}{2}\). Treating the conditions for varying \(c\in\mathbb{F}_{q}\) as independent events, we therefore expect that \(f(x)\) is \(2\)-superirreducible with probability close to \(1/2^{q}\). Multiplying this probability by the number of choices for monic irreducible polynomials \(f(x)\) of degree \(d\), our heuristic predicts that when \(d\) is even and \(q\) is odd, one should have
\[s_{2}(q,d)\approx\frac{q^{d}}{d2^{q}}.\]
We shall see in the next subsection that this heuristic accurately predicts the asymptotic behaviour of \(s_{2}(q,d)\) as \(d\to\infty\) through even integers \(d\).
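To give a sense of scale, for \(q=3\) this heuristic predicts \(s_{2}(3,d)\approx 3^{d}/(8d)\); for instance, with \(d=10\) the prediction is \(3^{10}/80\approx 738\).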
### The large \(d\) limit
The asymptotic formula predicted by the heuristic described in the previous subsection will follow in the large \(d\) limit from Weil's resolution of the Riemann hypothesis for curves over finite fields. We make use, specifically, of the Weil bound for certain higher autocorrelations of the quadratic character generalizing Jacobi sums. Our goal in this subsection is the proof
of the estimate for \(s_{2}(q,d)\) supplied by the following theorem, an immediate consequence of which is the asymptotic formula (1.1) supplied by Theorem 1.1.
**Theorem 3.4**.: _When \(q\) is odd and \(d\) is even, one has_
\[\Big{|}s_{2}(q,d)-\frac{q^{d}}{d2^{q}}\Big{|}<\frac{q}{2d}q^{d/2}.\]
The proof of this estimate is based on a more rigorous version of the heuristic argument given in Section 3.2, and it employs character sums that we now define.
**Definition 3.5**.: Let \(q\) be an odd prime power, and write \(\chi_{q}\) for the nontrivial quadratic character \(\chi_{q}:\mathbb{F}_{q}^{\times}\to\{1,-1\}\), extended to \(\mathbb{F}_{q}\) by setting \(\chi_{q}(0)=0\). We define the _order \(n\) autocorrelation of \(\chi_{q}\)_ with offsets \(u_{1},\ldots,u_{n}\in\mathbb{F}_{q}\) to be the sum
\[a_{q}(u_{1},\ldots,u_{n})=\sum_{\beta\in\mathbb{F}_{q}}\chi_{q}(\beta+u_{1}) \cdots\chi_{q}(\beta+u_{n}).\]
Noting that this definition is independent of the ordering of the arguments, when \(U=\{u_{1},\ldots,u_{n}\}\) is a subset of \(\mathbb{F}_{q}\), we adopt the convention of writing \(a_{q}(U)\) for \(a_{q}(u_{1},\ldots,u_{n})\).
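For a tiny worked example, take \(q=3\) and \(U=\{0,1\}\): then
\[a_{3}(0,1)=\chi_{3}(0)\chi_{3}(1)+\chi_{3}(1)\chi_{3}(2)+\chi_{3}(2)\chi_{3}(0)=0+(1)(-1)+0=-1,\]
a quadratic Jacobi sum of the kind discussed below.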
Note that \(a_{q}(U)\in\mathbb{Z}\) for all subsets \(U\) of \(\mathbb{F}_{q}\). When \(|U|=1\) it is apparent that \(a_{q}(U)=0\). Meanwhile, in circumstances where \(|U|=2\), so that \(U=\{u_{1},u_{2}\}\) for some elements \(u_{1},u_{2}\in\mathbb{F}_{q}\) with \(u_{1}\neq u_{2}\), the autocorrelation \(a_{q}(U)=a_{q}(u_{1},u_{2})\) is a quadratic Jacobi sum. Thus, in this situation, we have \(a_{q}(u_{1},u_{2})=\pm 1\) (see [5, Chapter 8]). The higher-order correlations become more complicated, but we will see that they can easily be bounded. First, we relate the autocorrelations of \(\chi_{q}\) to the number \(s_{2}(q,d)\) of monic \(2\)-superirreducible polynomials of degree \(d\) in \(\mathbb{F}_{q}[x]\).
**Proposition 3.6**.: _Let \(q\) be an odd prime power and \(d\) be even. Then_
\[s_{2}(q,d)=\frac{1}{d2^{q}}\sum_{\begin{subarray}{c}e\mid d\\ d/e\ odd\end{subarray}}\mu\Big{(}\frac{d}{e}\Big{)}\bigg{(}q^{e}+\sum_{ \emptyset\neq U\subseteq\mathbb{F}_{q}}(-1)^{|U|}a_{q^{e}}(U)\bigg{)}.\]
Proof.: Consider a monic irreducible polynomial \(f(x)\in\mathbb{F}_{q}[x]\) of degree \(d\), and let \(\alpha\) be a root of \(f(x)\) in \(\mathbb{F}_{q^{d}}\). It follows from Lemma 3.3 that \(f(x)\) is \(2\)-superirreducible if and only if \(\alpha+c\) is not a square in \(\mathbb{F}_{q^{d}}\) for each \(c\in\mathbb{F}_{q}\). Since the latter condition is equivalent to the requirement that \(\chi_{q^{d}}(\alpha+c)=-1\) for all \(c\in\mathbb{F}_{q}\), we see that
\[\prod_{c\in\mathbb{F}_{q}}\frac{1}{2}\left(1-\chi_{q^{d}}(\alpha+c)\right)= \begin{cases}1,&\text{if $f$ is $2$-superirreducible},\\ 0,&\text{otherwise}.\end{cases}\]
This relation provides an algebraic formulation of the indicator function for \(2\)-superirreducibility. Instead of summing this quantity over monic irreducible polynomials, we can instead sum over elements \(\alpha\in\mathbb{F}_{q^{d}}\) not lying in any proper subfield, dividing by \(d\) to account for overcounting. Thus, we find that
\[s_{2}(q,d)=\frac{1}{d}\sum_{\begin{subarray}{c}\alpha\in\mathbb{F}_{q^{d}}\\ \alpha\notin\mathbb{F}_{q^{e}}\ (e<d\text{ and }e\mid d)\end{subarray}}\prod_{c\in \mathbb{F}_{q}}\frac{1}{2}\left(1-\chi_{q^{d}}(\alpha+c)\right).\]
The condition on \(\alpha\) in the first summation of this relation may be encoded using the Mobius function. Thus, we obtain
\[s_{2}(q,d)=\frac{1}{d2^{q}}\sum_{e|d}\mu\Big{(}\frac{d}{e}\Big{)}\sum_{\alpha\in \mathbb{F}_{q^{e}}}\prod_{c\in\mathbb{F}_{q}}\left(1-\chi_{q^{d}}(\alpha+c) \right).\]
When \(d/e\) is even, the quadratic character \(\chi_{q^{d}}\) on \(\mathbb{F}_{q^{d}}\) restricts to the trivial character on \(\mathbb{F}_{q^{e}}\), and when \(d/e\) is odd it instead restricts to \(\chi_{q^{e}}\). We therefore deduce that
\[s_{2}(q,d)=\frac{1}{d2^{q}}\sum_{\begin{subarray}{c}e|d\\ d/e\text{ odd}\end{subarray}}\mu\Big{(}\frac{d}{e}\Big{)}\sum_{\alpha\in \mathbb{F}_{q^{e}}}\prod_{c\in\mathbb{F}_{q}}\left(1-\chi_{q^{e}}(\alpha+c) \right),\]
and the desired formula for \(s_{2}(q,d)\) follows on observing that
\[\sum_{\alpha\in\mathbb{F}_{q^{e}}}\prod_{c\in\mathbb{F}_{q}}\left(1-\chi_{q^{ e}}(\alpha+c)\right)=q^{e}+\sum_{\emptyset\neq U\subseteq\mathbb{F}_{q}}(-1)^{|U| }a_{q^{e}}(U).\]
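The last identity here is simply the expansion of the product: for each \(\alpha\in\mathbb{F}_{q^{e}}\) one has
\[\prod_{c\in\mathbb{F}_{q}}\left(1-\chi_{q^{e}}(\alpha+c)\right)=\sum_{U\subseteq\mathbb{F}_{q}}(-1)^{|U|}\prod_{u\in U}\chi_{q^{e}}(\alpha+u),\]
and on summing over \(\alpha\) the empty set \(U=\emptyset\) contributes \(q^{e}\), while each non-empty \(U\) contributes \((-1)^{|U|}a_{q^{e}}(U)\).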
We next establish a bound on the autocorrelations \(a_{q^{e}}(U)\).
**Lemma 3.7**.: _Let \(q\) be an odd prime power. Suppose that \(U\) is a non-empty subset of \(\mathbb{F}_{q}\) with \(|U|=n\). Then for each positive integer \(e\), one has \(|a_{q^{e}}(U)|\leq(n-1)q^{e/2}\)._
Proof.: Observe that
\[a_{q^{e}}(U)=\sum_{\beta\in\mathbb{F}_{q^{e}}}\chi_{q^{e}}(h(\beta)),\]
where \(h(t)=(t+u_{1})\cdots(t+u_{n})\) is a polynomial in \(\mathbb{F}_{q}[t]\) having roots \(-u_{1},\ldots,-u_{n}\). Since \(u_{1},\ldots,u_{n}\) are distinct and \(\chi_{q^{e}}\) is a multiplicative character of order \(2\), it follows from a version of Weil's bound established by Schmidt that \(|a_{q^{e}}(U)|\leq(n-1)q^{e/2}\) (see, for example, Theorem 2C' on page 43 of [8, Chapter 2]).
Now we complete the proof of Theorem 3.4. In this proof, we expend a little extra effort to achieve a more attractive conclusion.
Proof of Theorem 3.4.: We begin by observing that, in view of Lemma 3.7, one has
\[\bigg{|}\sum_{\emptyset\neq U\subseteq\mathbb{F}_{q}}(-1)^{|U|}a _{q^{e}}(U)\bigg{|} \leq\sum_{n=1}^{q}\binom{q}{n}(n-1)q^{e/2}\] \[=q^{e/2}\bigg{(}q\sum_{n=2}^{q}\binom{q-1}{n-1}-\sum_{n=2}^{q} \binom{q}{n}\bigg{)}\] \[=q^{e/2}\big{(}q(2^{q-1}-1)-(2^{q}-q-1)\big{)}. \tag{3.1}\]
We note next that since \(d\) is assumed to be even, then whenever \(e\) is a divisor of \(d\) with \(d/e\) odd, then \(e\) is even. Moreover, if it is the case that \(e<d\), then \(e\leq d/3\). The first constraint on \(e\) here
conveys us from (3.1) to the upper bound
\[\sum_{\begin{subarray}{c}e|d\\ d/e\text{ odd}\end{subarray}}\left|\sum_{\emptyset\neq U\subseteq\mathbb{F}_{q}}(-1 )^{|U|}a_{q^{e}}(U)\right| \leq\big{(}2^{q-1}(q-2)+1\big{)}\sum_{m=0}^{d/2}q^{m}\] \[<\frac{q}{q-1}\big{(}2^{q-1}(q-2)+1\big{)}q^{d/2}.\]
Meanwhile, making use also of the second constraint on \(e\), we obtain the bound
\[\sum_{\begin{subarray}{c}e|d\\ e<d\text{ and }d/e\text{ odd}\end{subarray}}q^{e}\leq\sum_{0\leq m\leq d/3}q^{m} <\frac{q}{q-1}q^{d/2}.\]
By applying these bounds in combination with Proposition 3.6, we deduce that
\[\left|s_{2}(q,d)-\frac{q^{d}}{d2^{q}}\right|<\frac{1}{d2^{q}}\big{(}(q-1)2^{q- 1}-2^{q-1}+2\big{)}\frac{q}{q-1}q^{d/2}\leq\frac{q}{2d}q^{d/2}.\]
This completes the proof of Theorem 3.4.
### Vanishing in the large \(q\) limit
We turn our attention next to the behaviour of \(s_{2}(q,d)\) when \(d\) is fixed and \(q\) is large. It transpires that \(s_{2}(q,d)=0\) for large enough prime powers \(q\). This conclusion follows from Lemma 3.3 once we confirm that for every primitive element \(\alpha\in\mathbb{F}_{q^{d}}\), there exists an element \(c\in\mathbb{F}_{q}\) for which \(\chi_{q^{d}}(\alpha+c)=1\).
**Lemma 3.8**.: _Suppose that \(q\) is an odd prime power and \(\alpha\in\mathbb{F}_{q^{d}}\) is a primitive element. Then, whenever \(q>(d-1)^{2}\), one has_
\[\left|\sum_{c\in\mathbb{F}_{q}}\chi_{q^{d}}(\alpha+c)\right|<q.\]
Proof.: Consider the \(d\)-dimensional commutative \(\mathbb{F}_{q}\)-algebra \(\mathbb{F}_{q^{d}}=\mathbb{F}_{q}[\alpha]\). Put \(\beta=-\alpha\), and observe that the character \(\chi_{q^{d}}\) is not trivial on \(\mathbb{F}_{q}[\beta]=\mathbb{F}_{q^{d}}\). Then it follows from Wan [9, Corollary 2.2] that
\[\left|\sum_{c\in\mathbb{F}_{q}}\chi_{q^{d}}(c-\beta)\right|\leq(d-1)q^{1/2}.\]
Provided that \(q>(d-1)^{2}\), one has \((d-1)q^{1/2}<q\), and thus the desired conclusion follows.
We are now equipped to establish the final conclusion of Theorem 1.1.
**Theorem 3.9**.: _Let \(d\) be an even integer, and suppose that \(q\) is an odd prime power with \(q>(d-1)^{2}\). Then \(s_{2}(q,d)=0\)._
Proof.: Suppose that \(f(x)\in\mathbb{F}_{q}[x]\) is a \(2\)-superirreducible polynomial of degree \(d\) over \(\mathbb{F}_{q}\), and consider a root \(\alpha\in\mathbb{F}_{q^{d}}\) of \(f\). By Lemma 3.3, we must have \(\chi_{q^{d}}(\alpha+c)=-1\) for every \(c\in\mathbb{F}_{q}\), and hence
\[\sum_{c\in\mathbb{F}_{q}}\chi_{q^{d}}(\alpha+c)=-q.\]
This contradicts the estimate supplied by Lemma 3.8, since we have assumed that \(q>(d-1)^{2}\). Consequently, there can be no \(2\)-superirreducible polynomials of degree \(d\) over \(\mathbb{F}_{q}\).
## 4. Relationship to rational and \(p\)-adic superirreducibility
Fix a rational prime number \(p\). Then, any monic polynomial \(f\in\mathbb{Z}[x]\) that is irreducible modulo \(p\) is also irreducible over \(\mathbb{Q}[x]\). One might guess that this familiar property extends from irreducibility to superirreducibility. Thus, if the monic polynomial \(f(x)\) reduces to a weakly \(k\)-superirreducible polynomial modulo \(p\), one might expect that \(f(x)\) is itself weakly \(k\)-superirreducible over \(\mathbb{Z}\), and perhaps also over \(\mathbb{Q}\). We find that such an expectation is in fact excessively optimistic. Indeed, there are \(2\)-superirreducible polynomials over \(\mathbb{F}_{3}\) with integral lifts that are not \(2\)-superirreducible over \(\mathbb{Z}\).
**Example 4.1**.: Consider the polynomial \(f(x)\in\mathbb{Z}[x]\) given by
\[f(x)=x^{4}-12x^{3}+2x^{2}-39x+71.\]
Then, we have \(f(x)\equiv x^{4}-x^{2}-1\pmod{3}\), and it is verified by an exhaustive check that \(x^{4}-x^{2}-1\) is \(2\)-superirreducible in \(\mathbb{F}_{3}[x]\). However, one has
\[f(3t^{2}+t)=(t^{4}+3t^{3}+2t^{2}-1)(81t^{4}-135t^{3}-27t^{2}+39t-71),\]
so that \(f(x)\) is not \(2\)-superirreducible over \(\mathbb{Z}\).
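It is instructive to reduce this identity modulo \(3\): since \(3t^{2}+t\equiv t\pmod{3}\), the left-hand side reduces to \(f(t)\equiv t^{4}-t^{2}-1\), while on the right-hand side the second factor reduces to the unit \(-71\equiv 1\) and the first factor to \(t^{4}+2t^{2}-1\equiv t^{4}-t^{2}-1\). The factorisation thus collapses modulo \(3\), which is consistent with the \(2\)-superirreducibility of the reduction \(x^{4}-x^{2}-1\).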
Despite examples like the one above, one may still hope that the assumption of additional congruential properties involving higher powers of \(p\) might suffice to exclude such problematic examples, thereby providing a means to lift superirreducible polynomials over \(\mathbb{Z}_{p}\) to superirreducible polynomials over \(\mathbb{Z}\). The following proposition reveals a major obstruction to any such lifting process, since it shows that for each natural number \(k\geq 2\), there are no \(p\)-adic weakly \(k\)-superirreducible polynomials.
**Proposition 4.2**.: _Let \(p\) be a prime number. When \(k\geq 2\), there are no weakly \(k\)-superirreducible polynomials over \(\mathbb{Z}_{p}\) or over \(\mathbb{Q}_{p}\)._
Proof.: Suppose, if possible, that \(f\in\mathbb{Q}_{p}[x]\) is a weakly \(k\)-superirreducible polynomial. There is no loss of generality in assuming that \(f\) is an irreducible polynomial lying in \(\mathbb{Z}_{p}[x]\). Let \(\alpha\) be a root of \(f\) lying in a splitting field extension for \(f\) over \(\mathbb{Q}_{p}\), and let \(e=1+|v_{p}(\alpha)|\), where \(v_{p}(\alpha)\) is defined in such a manner that \(|\alpha|_{p}=p^{-v_{p}(\alpha)}\). Let \(h\in\mathbb{Z}_{p}[t]\) be any polynomial of degree \(k\), put \(g(t)=p^{e}h(t)+t\), and consider the equation \(g(\beta)=\alpha\). Since \(|g(\alpha)-\alpha|_{p}<1\) and \(|g^{\prime}(\alpha)|_{p}=|1+p^{e}h^{\prime}(\alpha)|_{p}=1\), an application of Hensel's lemma demonstrates that the equation \(g(\beta)=\alpha\) has a solution \(\beta\in\mathbb{Q}_{p}(\alpha)\). Thus, the equation \(\alpha=p^{e}h(\beta)+\beta\) has a solution \(\beta\in\mathbb{Q}_{p}(\alpha)\), and by appealing to Lemma 2.2, we conclude that the polynomial \(f(p^{e}h(t)+t)\) is reducible over \(\mathbb{Q}_{p}[t]\). Since \(p^{e}h(t)+t\in\mathbb{Z}_{p}[t]\), we see that \(f\) is neither weakly \(k\)-superirreducible over \(\mathbb{Z}_{p}\) nor over \(\mathbb{Q}_{p}\), and we arrive at a contradiction. The desired conclusion follows.
The discussion of this section appears to show, therefore, that superirreducibility over \(\mathbb{F}_{p}\), and indeed superirreducibility over \(\mathbb{Z}_{p}\) and \(\mathbb{Q}_{p}\), is not closely connected to corresponding superirreducibility over \(\mathbb{Z}\) and \(\mathbb{Q}\).
| ```
We investigate $k$-superirreducible polynomials, by which we mean irreducible polynomials that remain irreducible under any polynomial substitution of positive degree at most $k$. Let $\mathbb F$ be a finite field of characteristic $p$. We show that no $2$-superirreducible polynomials exist in $\mathbb F[t]$ when $p=2$, and that no such polynomials of odd degree exist when $p$ is odd. In the remaining case, where $p$ is odd and the polynomials have even degree, we give an explicit formula for the number of monic $2$-superirreducible polynomials of even degree $d$, analogous to the formula given by Gauss for the number of monic irreducible polynomials of given degree over a finite field. We also discuss the associated asymptotic behaviour as either the degree of the polynomial or the size of the finite field tends to infinity.
2309.03566 | P4R-Type: a Verified API for P4 Control Plane Programs (Technical
Report) | Software-Defined Networking (SDN) significantly simplifies programming,
reconfiguring, and optimizing network devices, such as switches and routers.
The de facto standard for programming SDN devices is the P4 language. However,
the flexibility and power of P4, and SDN more generally, gives rise to
important risks. As a number of incidents at major cloud providers have shown,
errors in SDN programs can compromise the availability of networks, leaving
them in a non-functional state. The focus of this paper are errors in
control-plane programs that interact with P4-enabled network devices via the
standardized P4Runtime API. For clients of the P4Runtime API it is easy to make
mistakes that lead to catastrophic failures, despite the use of Google's
Protocol Buffers as an interface definition language.
This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that
performs static checks for P4 control plane operations, ruling out mismatches
between P4 tables, allowed actions, and action parameters. As a formal
foundation of P4R-Type, we present the $F_{\text{P4R}}$ calculus and its typing
system, which ensure that well-typed programs never get stuck by issuing
invalid P4Runtime operations. We evaluate the safety and flexibility of
P4R-Type with 3 case studies. To the best of our knowledge, this is the first
work that formalises P4Runtime control plane applications, and a typing
discipline ensuring the correctness of P4Runtime operations. | Jens Kanstrup Larsen, Roberto Guanciale, Philipp Haller, Alceste Scalas | 2023-09-07T08:52:49 | http://arxiv.org/abs/2309.03566v1 | # P4R-Type: a Verified API for P4 Control Plane Programs
###### Abstract.
Software-Defined Networking (SDN) significantly simplifies programming, reconfiguring, and optimizing network devices, such as switches and routers. The _de facto_ standard for programming SDN devices is the P4 language. However, the flexibility and power of P4, and SDN more generally, gives rise to important risks. As a number of incidents at major cloud providers have shown, errors in SDN programs can compromise the availability of networks, leaving them in a non-functional state. The focus of this paper are errors in control-plane programs that interact with P4-enabled network devices via the standardized P4Runtime API. For clients of the P4Runtime API it is easy to make mistakes that lead to catastrophic failures, despite the use of Google's Protocol Buffers as an interface definition language.
This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that performs static checks for P4 control plane operations, ruling out mismatches between P4 tables, allowed actions, and action parameters. As a formal foundation of P4R-Type, we present the \(F_{\text{P4R}}\) calculus and its typing system, which ensure that well-typed programs never get stuck by issuing invalid P4Runtime operations. We evaluate the safety and flexibility of P4R-Type with 3 case studies. To the best of our knowledge, this is the first work that formalises P4Runtime control plane applications, and a typing discipline ensuring the correctness of P4Runtime operations.
Keywords: Software and its engineering \(\rightarrow\) Formal language definitions; Domain specific languages; Networks \(\rightarrow\) Programming interfaces.
This separation simplifies network management and enables network administrators to quickly and easily reconfigure and optimize network traffic flows.
The de facto Open Source standard for SDN is P4 [P4.org Working Group 2020a]. In P4, the data plane is programmed by specifying packet processing _tables_ which select the _actions_ to perform when a network packet matches certain patterns. The P4 standard also defines a control plane API (called P4Runtime [P4.org Working Group 2020b]) for writing programs that query or alter the configuration of P4-enabled network devices.
Unfortunately, the power and ease of automation of SDN come with risks: a mistake in an SDN program can leave a network in a non-functional state. Indeed, erroneous configuration changes have compromised the availability of entire regions of large cloud providers [Sharwood 2016]. A recent study by Bhardwaj et al. [2021] shows that 38.8% of SDN bugs are triggered when the controller _"attempts to process system configurations"_ -- i.e. read, add, update, delete table entries; the authors add that _"this fact is astounding because a critical motivation for SDN is to move towards automation and eliminate configuration-based errors."_ In this paper, we focus on statically preventing a specific form of P4Runtime controller bug: attempting to read/insert/modify/delete P4 table entries that do not conform to the actual table layout of the P4 data plane. Such erroneous attempts are not statically checked by the official, weakly-typed P4Runtime API, as we explain below. Preventing this form of bug does not avert all possible P4 configuration processing bugs (e.g. a P4Runtime controller may insert a well-formed but incorrect routing table entry, or omit or delete a necessary entry) -- but it provides a baseline correctness guarantee towards more thorough static verification of P4Runtime applications (that we discuss as future work in Section 10).
### The Problem with Weakly-Typed P4Runtime APIs
For a concrete example of how mistakes could happen, consider Figure 1 (left): it is based on the P4 documentation [P4.org Working Group 2023], and shows a control plane program written in Python using the official P4Runtime API. The program is connected to a P4-enabled switch, and inserts a new entry (i.e. a packet processing rule) into a table called IPv4_table, meaning: _"if a packet has destination address 10.0.1.1, then perform the action IPv6_forward with the given parameters."_ (We provide more details about P4 in Section 2.)
The Python program in Figure 1 contains an error: the table IPv4_table in the switch does _not_ allow for an action called IPv6_forward (although that action may be allowed by other tables in the same switch). The P4Runtime Python API detects this discrepancy at run-time, and throws an exception -- which may cause the program to fail half-way during a series of related P4 rule updates, leaving the network configuration in an inconsistent state. The same program may have other problems: e.g. does the intended action for IPv4_table actually take two parameters? Is one
Figure 1: Example of control plane P4 programs. Left: a Python program using the official P4Runtime API. Right: the equivalent Scala 3 program using verified API P4R-Type.
of such parameters actually called mac_dst? Again, the official P4Runtime Python API would only spot these issues at run-time, by throwing exceptions.
As this example shows, it is all too easy to make mistakes when writing control plane programs in scripting languages (like Python) that don't perform static checks to ensure the validity of P4Runtime operations. However, statically detecting such errors is not trivial: to prevent errors without being overly-restrictive, the static checks must take into account the actual _dependencies_ between the packet processing tables available in a P4-enabled device, the actions allowed by each specific table, and the parameters expected by each specific action.
Our objective is to design and develop a strongly-typed P4Runtime API that addresses the issues above, while satisfying **three key requirements**:
1. the API must have a formal foundation for proving that well-typed programs never get stuck by issuing invalid P4Runtime operations or receiving unexpected responses;
2. the API must be written and usable in an _existing_ programming language -- i.e. the implementation of the formal results (from requirement **(R1)**) must not depend on a bespoke programming language nor type checker;
3. if the API depends on code generation, the amount of generated code must be minimal.
#### Our Proposal: P4R-Type and its Formal Foundation \(F_{\text{P4R}}\)
This paper proposes P4R-Type, a novel verified P4Runtime API for Scala 3 that performs _static_ checks for P4 control plane operations, ruling out mismatches between P4 tables, allowed actions, and action parameters. Programs written with P4R-Type look like the one shown in Figure 1 (right): albeit similar to its Python equivalent, the P4R-Type program does _not_ compile, because (thanks to its type constraints) the off-the-shelf Scala 3 compiler can spot that the action on line 5 is not valid for the table IPv4_Table. The Scala 3 compiler can also similarly spot any discrepancy between a selected action and the supplied parameters.
P4R-Type has a formal foundation: \(F_{\text{P4R}}\), a calculus and typing system allowing us to state and prove that _"well-typed \(F_{\text{P4R}}\) programs never perform invalid P4Runtime operations"_ (like the mistake in Figure 1). \(F_{\text{P4R}}\) is specifically designed for implementation as a Scala 3 API, and for enabling the "Python-like" P4Runtime programs shown in Figure 1.
To the best of our knowledge, this is the first work that formalises control plane applications based on P4Runtime, and a typing discipline to ensure the correctness of P4Runtime operations.
#### Contributions and Outline of the Paper
After a background and overview (Section 2), we introduce our main contributions:
1. The first formal model of P4Runtime networks (Section 3) consisting of clients written in our novel formal language \(F_{\text{P4R}}\) (Section 3.1) and servers with different configurations (Section 3.2) interacting with each other (Section 3.3).
2. A typing discipline for \(F_{\text{P4R}}\) (Section 4) ensuring that if a client is well-typed w.r.t. the configuration of the surrounding P4Runtime network servers (under the server-configuration-to-type encoding we introduce in Definition 5.2), then the client will never perform invalid P4Runtime operations nor get stuck (Theorems 6.1 and 6.4). To ensure that these results translate into a verified P4Runtime client API in an _existing_ programming language (as per requrement (**R2**) above), we equip the \(F_{\text{P4R}}\) typing system with a limited form of type-level computation based on _match types_[1] and _singleton types_, both available in Scala 3. Our development of \(F_{\text{P4R}}\) also contributes a novel combination of _(i)_ match types
_without_ default cases, _(ii)_ structural subtyping, and _(iii)_ singleton types: the details and challenges are explained in Remark 4.6. (Besides, our theory and results are not Scala-specific and can be embedded e.g. in dependently-typed languages like Coq.)
3. The first implementation of a verified P4Runtime API, called P4R-Type (Section 7) and published as companion artifact of this paper. P4R-Type is based on the formalisation and results of \(F_{\texttt{P4R}}\), is written and usable in Scala 3, and only depends on a small amount of autogenerated type definitions (based on our server-configuration-to-type encoding in Definition 5.2): therefore, P4R-Type satisfies the requirements (**R1**), (**R2**), and (**R3**) above. We demonstrate the features of P4R-Type with 3 case studies (Section 8), and discuss the drawbacks of alternative approaches (Section 8.4).
We discuss the related work in Section 9 and conclude in Section 10.
## 2. Background and overview
We now provide an overview of Software Defined Networks (Section 2.1), the P4 data plane (Section 2.2), and P4Runtime (Section 2.3), followed by a bird's eye view of our approach (Section 2.4).
### Software Defined Networks
Software Defined Networking (SDN) is an umbrella that covers several technologies to support dynamic and programmable network reconfigurations. SDN can be used to improve network performance (e.g. intelligent load balancers [1]), efficiency (e.g. network resource virtualisation and partitioning among customers of a cloud provider [11]), and security (AI based anomaly detection systems [10]). As mentioned in Section 1, an SDN consists (roughly speaking) of at least two architectural components:
* _data plane_ devices with direct control of packet processing -- e.g. network interface cards, or switches, or a network of such devices; and
* a centralised or distributed _controller_, which is in charge of interacting, via an _interface_, with the data plane devices to manage network flows.
### Programmable Data Plane and the P4 Language
For many years SDN data plane elements were implemented with fixed-function application-specific integrated circuits (ASICs), with very limited programmability. In fact, programmable switches used to be two orders of magnitude slower than the corresponding fixed-function ASICs. However, newer programmable switches can run as fast as fixed-function ones. The key to this improvement is the usage of dedicated programmable accelerators, called Network Processing Units (NPUs), and FPGAs. Programmable data processing enables the support of customised network protocols, for example VPN-aware data processing and in-line packet inspection. NPUs and FPGAs cannot be programmed using general-purpose languages; hence, the high-speed data plane must be programmed with dedicated programming languages. Recently, P4 [10] has risen as the main Domain Specific Language for the data plane. P4 can be compiled to a variety of targets, including NPUs (e.g. Intel Tofino), FPGAs, and software switches. The key form of configuration for a P4 program is its _tables_, which are manipulated by the control plane.
The P4 fragment below defines the tables IPv4_table and IPv6_table, with an "if" statement that inspects the header of an incoming network packet and selects one of the two tables. When the program executes IPv4_table.apply(), the P4 system performs 3 steps:
* it computes a _key_ value from the network packet being processed. In this case, the key is the IPv4 destination address of the packet;
* it looks up the computed key among the entries of the table (which are installed by the control plane), to find a matching entry;
* it executes the action selected by the matching entry, with the arguments stored in that entry (or the table's default action, if no entry matches).
```
table IPv4_table {
    key = { hdr.ip.IPv4_dst_addr: lpm; }
    actions = { Drop_action;
                IPv4_forward; }
}

table IPv6_table {
    key = { hdr.ip.IPv6_dst_addr: lpm; }
    actions = { Drop_action;
                IPv6_forward; }
}
...
if (hdr.ip.version == 4w4)
    IPv4_table.apply();
else
    IPv6_table.apply();
```
In this example, the definition of IPv4_table says that a table entry can select one of two possible actions (Drop_action and IPv4_forward, defined below) to execute after a packet match:
```
action Drop_action() {
    outCtrl.outputPort = DROP_PORT;  // architecture-defined port that discards the packet
}
action IPv4_forward(EthernetAddress mac_dst, PortId port) {
    packet.ethernet.dstAddr = mac_dst;
    packet.ip.version = 4w4;
    packet.ip.ttl = packet.ip.ttl - 1;
    outCtrl.outputPort = port;
}
```
Drop_action does not require any argument and simply forwards the packet to a "port" that drops (i.e. discards) it. The IPv4_forward action requires two arguments: therefore, when a table entry in IPv4_table wants to invoke the action IPv4_forward, the table entry must also specify a destination Ethernet address and a port. In the following section we briefly discuss examples that violate these constraints.
### P4Runtime and P4Info Metadata Files
Today, applications that control P4-enabled devices use a control plane API called P4Runtime [P4.org Working Group 2020b]: the control application (acting as a P4Runtime client) connects to a P4 device (which acts as a P4Runtime server) and issues API calls to query and modify the device configuration.
Thanks to a portable design based on Google's Protobuf interface definition language, P4Runtime programs may be written in any programming language with Protobuf support -- and the official P4Runtime API implementation is written in Python. The use of general-purpose programming languages for control plane applications is possible because their performance is less critical than that of the data plane; moreover, general purpose languages allow for reusing existing software stacks and support a wide range of application domains.
In the usual workflow, when a P4 data plane program is compiled, it yields two outputs:
1. a packet-processing "executable" deployed on a P4-enabled device (e.g. on a switch); and
2. a _P4Info metadata file_, which summarises all the entities defined in the P4 program -- in particular, its tables and actions. Each entity has a numeric identifier.
To interact with a P4-enabled device (e.g. to add entries to its tables), a P4Runtime program uses the P4Info metadata corresponding to the P4 program deployed on the device.
Figure 2 shows an example of P4Info metadata file for the P4 program in Section 2.2. From this metadata we can see that a P4 device running that program has two tables: IPv4_table and
IPv6_table. Each table has one key that is an address of the corresponding IP protocol version. The entries of IPv4_table and IPv6_table can invoke actions IPv4_forward and IPv6_forward (respectively) and must provide a MAC address and port as action's arguments. All table entries can invoke Drop_action, which has no parameters.
P4Runtime applications can change the configuration of a P4-enabled device by adding, updating, and deleting table entries. P4Runtime applications can also read the table contents, possibly using _wildcards_ to filter the results. As shown by the Python program in Figure 1, it is easy to make mistakes if the language does not perform static checks on table updates. Specifically, the P4Info metadata in Figure 2 says that any new entry added to IPv4_table cannot use the IPv6_forward action -- which is the mistake highlighted in Figure 1.
### An Overview of Our Approach
To address the issues described above, we propose P4R-Type: a verified P4Runtime API for Scala 3, with a tool to translate P4Info metadata into Scala types. Our approach is depicted below.
As usual, a P4 data plane program is compiled and deployed on one or more P4-enabled network devices; the program's P4Info metadata is made available for P4Runtime applications. This is where P4R-Type comes into play: a programmer can write a P4Runtime control application by importing (1) the P4R-Type library, and (2) a set of type definitions automatically generated from P4Info metadata. If the P4R-Type-based application type-checks, then it will be able to connect to a P4 device (acting as P4Runtime server) and perform P4Runtime operations that never violate the device configuration -- provided that the configuration matches the P4Info metadata. The design and implementation of P4R-Type is based on a formal model allowing us to reason about the behaviour of P4Runtime client applications and P4 devices acting as P4Runtime servers.
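As a rough sketch of what such auto-generated definitions could look like (hypothetical names and encoding, added here for illustration only; the actual encoding is the one given by Definition 5.2), the P4Info metadata of Section 2.2 might be rendered as type-level maps from table names to their match fields and from action names to their parameter lists, alongside a table-to-actions map like the ActionsOf sketch shown earlier:

```
object GeneratedFromP4Info:
  // Hypothetical rendering of the P4Info in Figure 2 as Scala 3 type definitions.
  type MatchOf[T <: String] = T match
    case "IPv4_table" => "hdr.ip.IPv4_dst_addr"
    case "IPv6_table" => "hdr.ip.IPv6_dst_addr"

  // Parameter lists of each action, as tuples of parameter names.
  type ParamsOf[A <: String] = A match
    case "IPv4_forward" => ("mac_dst", "port")
    case "IPv6_forward" => ("mac_dst", "port")
    case "Drop_action"  => EmptyTuple
```

A strongly-typed read or insert operation can then demand, for a given table, a key compatible with MatchOf of that table and, for a chosen action, arguments compatible with ParamsOf of that action, so that any mismatch surfaces as a compile-time error rather than a run-time exception.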
Figure 2. Example P4Info metadata file with the tables and actions of the P4 program in Section 2.2. For brevity, we only show action IDs and omit table IDs.
Our formal model is outlined above: a P4Runtime server \(S\) holds tables and actions that are well-formed w.r.t. a configuration \(C\) (which represents P4Info metadata). We define an encoding from a P4Info configuration \(C\) into a set of types for \(F_{\text{P4R}}\): a formal calculus describing P4Runtime client applications. We design the typing discipline of \(F_{\text{P4R}}\) to include match types and singleton types, which are also present in the Scala 3 typing system: this design allows us to implement our results as a Scala 3 API (i.e., P4R-Type) reflecting the typing constraints of \(F_{\text{P4R}}\). Then, we prove our Theorems 6.1 and 6.4: if a \(F_{\text{P4R}}\) program \(t\) type-checks with types encoded from a P4Info configuration \(C\), then \(t\) will interact correctly with any P4Runtime server \(S\) that is well-formed w.r.t. \(C\).
## 3. A Model of P4Runtime Clients, Servers, and Networks
We now illustrate how we model P4Runtime networks consisting of P4-enabled devices (acting as servers), and control applications (the clients) that connect and modify the devices' P4 table entries. In Section 3.1 we introduce the syntax of \(F_{\text{P4R}}\), a formal language for modelling P4Runtime client programs, with the capability of connecting to P4Runtime servers and performing P4Runtime operations. In Section 3.2 we model P4Runtime servers by focusing on their internal configuration, i.e. their P4 tables, packet matching methods, and actions. In Section 3.3 we formalise a P4Runtime network as a parallel composition of P4Runtime clients and servers.
We introduce the semantics of \(F_{\text{P4R}}\) programs, servers, and networks later on (in Section 5) after introducing the typing system (in Section 4).
### The \(F_{\text{P4R}}\) Language for P4Runtime Clients
In Definition 3.1 below we introduce the syntax of the \(F_{\text{P4R}}\) language and its types. \(F_{\text{P4R}}\) is designed as an extension of \(F_{<:}\) (System F with subtyping (Cardelli et al., 1994)) augmented with:
* **P4Runtime-specific operations**: primitives and types for server addresses and channels;
* **singleton types**, i.e. types inhabited by exactly one value; and
* **match types**, introducing the capability of performing type-level computations and refining the result type of a pattern matching. Our match types are based on the work of Blanvillain et al. (2022) (which in turn formalises the corresponding feature of the Scala 3 programming language) -- but our adaptation includes significant differences: we discuss them later on, in Remark 4.6.
Definition 3.1 (Syntax of \(F_{\text{P4R}}\)).: The syntax of \(F_{\text{P4R}}\) terms \(t\) and types \(T\) is shown in Figure 3 -- where \(I\) (used to index records and pattern matching terms, and record and match types) represents a finite, non-empty set containing sequential natural numbers \(1,2,\ldots\) Moreover, Figure 4 introduces some frequently-used syntactic abbreviations.
Most of Definition 3.1 is based on standard \(F_{<:}\) constructs and extensions (in particular, lists and records). The key deviations are the highlighted constructs in Figure 3:
* a **P4Runtime operation**_op_ allows a client to connect to a P4Runtime server, and query or change the entries in its configuration;
* a **ground value**\(v_{G}\) is a value that does not contain lambda nor type abstractions. A ground value is a "simple" value (e.g. string, integer, \(\ldots\)), or a list or record of ground values. For each ground value \(v_{G}\), there is a **singleton type**\(v_{G}\) only inhabited by \(v_{G}\) itself;
* a **byte string**\(\mathbf{b}(\ldots)\) is the byte representation of a sequence of integers, and has type Bytes;
* a **server address** \(a_{T_{m},T_{a},T_{p}}\) represents a handle for connecting to a P4Runtime server (in practice, it represents its IP address and TCP port). A server address \(a_{T_{m},T_{a},T_{p}}\) has a corresponding **server address type** ServerRef\([T_{m},T_{a},T_{p}]\), where the type parameters reflect information available in the server's P4Info file:1 Footnote 1: The instantiation of the type parameters \(T_{m},T_{a},T_{p}\) is detailed later, in Example 4.5, Definition 5.2 and Example 5.3.
* \(T_{m}\) describes the _matches_ of each table in the server configuration;
* \(T_{a}\) describes the _actions_ that could be performed after a network packet is matched;
* \(T_{p}\) describes the _parameters_ of each action.
For brevity, we will often just write \(a\), omitting the subscript;
* a **client-server connection**\(s_{T_{m},T_{a},T_{p}}\) represents a communication channel (created after establishing a connection) that a P4Runtime server and client use to communicate. A connection value has a corresponding **channel type**\(\mathrm{Chan}[T_{m},T_{a},T_{p}]\), whose type arguments have the same meaning outlined above for \(\mathrm{ServerRef}[T_{m},T_{a},T_{p}]\). For brevity, we will often just write \(s\), omitting the subscript;
* a **match type** describes a type-level pattern matching, as illustrated in Example 3.2 below.
Example 3.2 (Match Types).: Consider the following match type:
\[\mathrm{Int\ match\ \{\mathrm{Int}\Rightarrow\mathrm{Bool},\ String\Rightarrow \mathrm{Unit}\}}\]
We call the types \(\mathrm{Bool}\) and \(\mathrm{Unit}\) (i.e., the types of the expressions that can be executed after a match case is selected) _continuation types_ of the match type. This match type "reduces" to the continuation type \(\mathrm{Bool}\), because its guard matches the type \(\mathrm{Int}\). (More precisely, in Section 4 we will see that the match type and the selected continuation are subtyping-equivalent.) Now consider the following match type abstracted over type \(T\):
\[\mathrm{Table\_Actions}\;\;\triangleq\;\;\forall T.\ T\ \mathtt{match}\ \{\ \text{``IPv4\_table''}\Rightarrow\text{``IPv4\_forward''},\ \ \text{``IPv6\_table''}\Rightarrow\text{``IPv6\_forward''}\ \}\]

Applying this match type to a singleton type argument, e.g. \(\mathrm{Table\_Actions}\ \text{``IPv4\_table''}\), yields a type that is subtyping-equivalent to the corresponding continuation type \(\text{``IPv4\_forward''}\).

### P4Runtime Servers

Definition 3.3 (P4Runtime Server).: We model a _P4Runtime server_ as a tuple \(S=\langle C,E,a,K\rangle\), whose components are described below:
* the **configuration**\(C\) is a mapping that contains the following fields:
* _table_matches_ (abbreviated _tm_), a mapping from table names to **match fields**, which in turn consist of:
* _name_, the name of the network packet field to inspect;
* _type_, the type of packet matching algorithm used for this packet field (either Exact, Ternary, LPM, Range, or Optional [P4.org Working Group 2020a]).
* _table_actions_ (abbreviated _ta_), mapping table names to sets of allowed action names;
* _action_params_ (abbreviated _ap_), mapping action names to sets of **action parameters**:
* _name_, the name of the parameter;
* _bitwidth_, the size of the parameter.
* \(E\) is a set of **P4Runtime entities** \(e\) that can be hosted on a P4Runtime server. The main type of entity is a **table entry**,2 which is a record consisting of:
* _table_name_, the name of table which owns the entry;
* _field_matches_, a set of records describing packet matching rules:
* _name_, the name of the network packet field to inspect;
* a set of additional key-value entries, depending on the type of packet matching algorithm used for this field (for example, when using the matching algorithm type Range, the key-value entries are called 'low' and 'high');
* _action_name_, the name of the action that the entry applies upon network packet match;
* _action_args_, a set of **action argument** records, which in turn contain:
* _name_, the name of the associated parameter;
* _value_, the value provided as the argument;
* \(a\) is the **address** where the server listens for connections;
* \(K\) is a set of **channels**: the active connections between the server and its clients.
Footnote 2: P4Runtime models several other types of entries, but they are out of scope for this work.
As mentioned in the opening of Section 3, the P4Runtime server model formalised in Definition 3.3 focuses on the configuration of a P4Runtime server, by abstracting from other implementation details (e.g. its implementation language). An example server configuration can be seen in Figure 5.
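For concreteness, a configuration in the same JSON-like style as Figure 5 might look roughly as follows. The table names, match kinds and action names below mirror those appearing later in Figure 12 and in the generated Scala types of Section 7.2, while the bitwidth values are illustrative assumptions; Figure 5 gives the actual configuration used as our running example.

```
C = { table_matches = { "IPv4_table" -> [ { name = "IPv4_dst_addr", type = LPM } ],
                        "IPv6_table" -> [ { name = "IPv6_dst_addr", type = LPM } ] },
      table_actions = { "IPv4_table" -> { "IPv4_forward", "Drop" },
                        "IPv6_table" -> { "IPv6_forward", "Drop" } },
      action_params = { "IPv4_forward" -> [ { name = "mac_dst", bitwidth = 48 }, { name = "port", bitwidth = 9 } ],
                        "IPv6_forward" -> [ { name = "mac_dst", bitwidth = 48 }, { name = "port", bitwidth = 9 } ],
                        "Drop"         -> [ ] } }
```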
### P4Runtime Networks
We conclude this section by formalising the syntax of a network of P4Runtime servers and clients.
Definition 3.4 (P4Runtime Network).: A _P4Runtime network_ is a parallel composition of clients (i.e. terms \(t\) of the grammar in Definition 3.1) and servers (i.e. tuples \(S\) conforming to Definition 3.3):

\[\text{Network}\qquad N\ \coloneqq\ t\ \ \mid\ \ S\ \ \mid\ \ N_{1}\,|\,N_{2}\]
Figure 5. Example of a P4Runtime server configuration \(C\) (by Definition 3.3). This JSON-like structure models the P4Info metadata describing the configuration of an actual P4 device (as outlined in Sections 2.2 and 2.3).
## 4. The \(F_{\text{P4R}}\) typing system
Definitions 4.1 and 4.2 formalise the typing system of \(F_{\text{P4R}}\). The typing system is based on System \(F_{<:}\) [Cardelli et al. 1994], extended with singleton types and match types [Blanvillain et al. 2022], plus new typing rules for the P4Runtime-specific operations we introduced in Definition 3.1.
Definition 4.1 (Typing Environment).: A _typing environment_\(\Gamma\) is a mapping from term or type variables to types, that we syntactically represent as follows:
\[\begin{array}{rcl}\Gamma&\coloneqq&\emptyset&\text{(Empty typing environment)}\\ &\mid&\Gamma,\,x:T&\text{(Term variable $x$ has type $T$)}\\ &\mid&\Gamma,\,X<:T&\text{(Type variable $X$ has upper bound $T$)}\end{array}\]
Definition 4.2 (The \(F_{\text{P4R}}\) Typing System).: The \(F_{\text{P4R}}\) typing system consists of the following mutually-defined, inductive judgements:
\[\begin{array}{lll}\vdash\Gamma\ \text{env}&\text{($\Gamma$ is a valid typing environment)}&\text{(Figure 6)}\\ \Gamma\vdash T\ \text{type}&\text{($T$ is a valid type in $\Gamma$)}&\text{(Figure 7)}\\ \Gamma\vdash T\circ T^{\prime}&\text{(Types $T$ and $T^{\prime}$ are disjoint in $\Gamma$)}&\text{(Definition 4.3)}\\ \Gamma\vdash T<:T^{\prime}&\text{($T$ is a subtype of $T^{\prime}$ in $\Gamma$, assuming $\Gamma\vdash T$ type and $\Gamma\vdash T^{\prime}$ type)}&\text{(Figure 8)}\\ \Gamma\vdash T=:=T^{\prime}&\text{($T$ and $T^{\prime}$ are subtyping-equivalent in $\Gamma$, i.e. $\Gamma\vdash T<:T^{\prime}$ and $\Gamma\vdash T^{\prime}<:T$)}&\\ \Gamma\vdash t:T&\text{($t$ has type $T$ in $\Gamma$, assuming $\Gamma\vdash T$ type)}&\text{(Figure 9)}\end{array}\]
Most type validity rules in Figure 7 are standard. The exceptions are highlighted:
Figure 6. Typing environment validity rules.
Figure 7. Type validity rules. Non-standard extensions to \(F_{<:}\) are highlighted.
* by rule Type-Val, any ground value \(v_{G}\) (i.e. any value that does _not_ contain lambda or type abstractions, by Definition 3.1) has a corresponding singleton type \(\underline{v}_{G}\);
* rules Type-SR and Type-Chan say that our new server address and client-server channel types are well-formed if all their type arguments are well-formed;
* by rule Type-Match, a match type is well-formed if the scrutinee type (\(T_{s}\)), the types being matched against (\(T_{i}\)), and the continuation types (\(T^{\prime}_{i}\)) are well-formed.
The subtyping rules in Figure 8 are also standard, with the following highlighted exceptions:
* by rule ST-Val, if a ground value \(v_{G}\) belongs to type \(T\) (by the relation "\(v_{G}\in_{G}T\)" defined in Appendix A.1.4), then the singleton type \(\underline{v}_{G}\) is a subtype of \(T\). For example: since \(42\in_{G}\mathrm{Int}\), we have \(\Gamma\vdash\underline{42}<:\mathrm{Int}\) (assuming \(\Gamma\vdash\mathrm{Int}\) type);
* rule ST-Match1 (adapted from Blanvillain et al. (2022)) says that a match type with scrutinee type \(T_{s}\) is subtyping-equivalent to the continuation type \(T^{\prime}_{k}\) when \(T_{s}\) is a subtype of the case type \(T_{k}\) and \(T_{s}\) is _disjoint_ (according to Definition 4.3 below) from all earlier case types, i.e. from every \(T_{i}\) with \(i<k\);
* rule ST-Match2 (also adapted from Blanvillain et al. (2022)) says that match types are covariant in both the type being matched, and the continuation types.
The type disjointness judgement \(\Gamma\vdash T_{1}\circ T_{2}\) (used in rule ST-Match1) is formalised in Definition 4.3 below: the intuition is that two types \(T_{1}\) and \(T_{2}\) are disjoint when they have no common subtypes, hence there exists no value that can have both types \(T_{1}\) and \(T_{2}\).
Definition 4.3 (Disjointness of Types): Two types \(T_{1}\) and \(T_{2}\) are disjoint in \(\Gamma\), written \(\Gamma\vdash T_{1}\circ T_{2}\), iff:
1. \(\Gamma\vdash T_{1}\) type and \(\Gamma\vdash T_{2}\) type; and
2. \(\nexists T_{3}:\ \Gamma\vdash T_{3}\) type and \(\Gamma\vdash T_{3}<:T_{1}\) and \(\Gamma\vdash T_{3}<:T_{2}\).
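For instance, in a typing environment \(\Gamma\) where these types are valid:

\[\Gamma\vdash\underline{\text{true}}\circ\mathrm{Int}\qquad\text{whereas not}\qquad\Gamma\vdash\mathrm{Int}\circ\underline{42}\]

The first disjointness (used in Example 4.4 below) holds because no type is a subtype of both \(\underline{\text{true}}\) and Int; the second fails because \(\underline{42}\) is a common subtype of Int and \(\underline{42}\) (by rule ST-Val and reflexivity of subtyping).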
Example 4.4 (Subtyping and Disjointness in Match Types): Consider the following match type:
Figure 8. Subtyping rules.
\[\forall X.\ X\ \mathtt{match}\ \{\mathrm{Int}\Rightarrow\underline{42},\ \mathrm{Bool}\Rightarrow\underline{\text{``Hello''}}\}\]

The type \(\big(\forall X.\ X\ \mathtt{match}\ \{\mathrm{Int}\Rightarrow\underline{42},\ \mathrm{Bool}\Rightarrow\underline{\text{``Hello''}}\}\big)\ \underline{\text{true}}\) is subtyping-equivalent (i.e. "reduces") to \(\underline{\text{``Hello''}}\), by the subtyping rule ST-Match1 in Figure 8. The rule first checks whether \(\underline{\text{true}}\) is a subtype of Int, which it is not. Since it is also disjoint from Int (the two types do not share a common subtype), the rule then proceeds to the next case. Here, \(\underline{\text{true}}\) is a subtype of Bool, and so the type "reduces" to the case \(\underline{\text{``Hello''}}\).
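The corresponding type-level reduction can also be observed in Scala 3; a minimal sketch (the type name Greet is illustrative, and the scrutinee here is Boolean rather than a singleton type):

```
type Greet[X] = X match
  case Int => 42
  case Boolean => "Hello"

// Greet[Boolean] skips the Int case (Boolean and Int are disjoint)
// and reduces to the literal type "Hello":
val x: Greet[Boolean] = "Hello"
```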
Finally, Figure 9 includes several (highlighted) non-standard typing rules for \(F_{\text{P4R}}\) terms:
* by rule T-Val, a ground value \(v_{G}\) is typed by the singleton type \(\underline{v}_{G}\). E.g. 42 has type \(\underline{42}\), hence (via the subsumption rule T-Sub and ST-Val in Figure 8) we also have that 42 has type Int (see the Scala 3 sketch after this list);
* by rule T-Match (adapted from Blanvillain et al. (2022)), a pattern matching term is typed with a match type of a similar shape. The clause "\(\Gamma\vdash T_{s}<:\cup_{i\in I}T_{i}\)" ensures that pattern matching is exhaustive;
* by rule T-OpC, the connection operation \(\mathsf{Connect}(t)\) requires its argument \(t\) to be a server reference of type ServerRef\([T_{m},T_{a},T_{p}]\), and gives the resulting connection the channel type \(\mathrm{Chan}[T_{m},T_{a},T_{p}]\) with the same type arguments; i.e. the type of the channel returned by the connection maintains type-level information about the server configuration;
Figure 9. Typing rules for \(F_{\text{P4R}}\) terms. Non-standard extensions to \(F_{<:}\) are highlighted.
* by rule T-OpR, the query operation \(\mathsf{Read}(t_{c},t_{e})\) is typed as follows: 1. the query argument \(t_{e}\) has type P4Entity (Figure 4) applied to type parameters that match those of the type of \(t_{c}\) (expected to be a channel). Intuitively, this means that \(t_{e}\) can only be a P4 entity supported by the P4Runtime server connected over \(t_{c}\); and 2. the read operation returns a list of type P4Entity applied to type arguments that match those of the type of \(t_{c}\). Intuitively, this means that the returned list is expected to only contain entities supported by the P4Runtime server connected via \(t_{c}\);
* rules T-OpI, T-OpM, and T-OpD have type constraints similar to T-OpR above: their argument \(t_{e}\) must be a P4 entity supported by the server connected over channel \(t_{c}\). All these operations return a boolean value (indicating whether the operation had an effect).
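As a small Scala 3 parallel to rules T-Val and T-Sub (the value names are illustrative):

```
val n: 42 = 42   // the literal 42 has the singleton type 42 (cf. rule T-Val)
val m: Int = n   // the singleton type 42 is a subtype of Int (cf. ST-Val and subsumption)
```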
Example 4.5 (Typable and Untypable Operations): Consider the following types:3 Footnote 3: You may notice a similarity between the types used in Example 4.5 and the P4Runtime server configuration in Figure 5: indeed, those types capture the constraints of that server configuration. We will reprise the topic in Section 5.2.
\[\begin{array}{l}T_{m}\;=\;\forall T.\ T\ \mathtt{match}\ \{\ \text{``IPv4\_table''}\Rightarrow\ldots,\ \ \text{``IPv6\_table''}\Rightarrow\ldots\ \}\\ T_{a}\;=\;\forall T.\ T\ \mathtt{match}\ \{\ \text{``IPv4\_table''}\Rightarrow\ldots,\ \ \text{``IPv6\_table''}\Rightarrow\ldots\ \}\\ T_{p}\;=\;\forall T.\ T\ \mathtt{match}\ \{\ \text{``IPv4\_forward''}\Rightarrow\ldots,\ \ \text{``IPv6\_forward''}\Rightarrow\ldots,\ \ \text{``Drop''}\Rightarrow\ldots\ \}\end{array}\]

(the full definitions are given in Figure 12, which encodes the server configuration of Figure 5). Given a channel \(c:\mathrm{Chan}[T_{m},T_{a},T_{p}]\), an operation such as \(\mathsf{Read}(c,\,\{\text{name}=\text{``IPv4\_table''},\ \text{matches}=\ldots,\ \text{action}=\text{``IPv4\_forward''},\ \text{params}=\ldots\})\) is typable by rule T-OpR, since the entry's table name, action and parameters are admitted by \(T_{m}\), \(T_{a}\) and \(T_{p}\); by contrast, an entry for table ``IPv4\_table'' whose action is ``IPv6\_forward'' is untypable, because \(T_{a}\) does not list that action for that table.
_Remark 4.6_ (Differences with Blanvillain et al. (2022)).: Our formulation of match types differs from the original presentation by Blanvillain et al. (2022) in 3 significant aspects: these differences are non-trivial and interplay with each other in subtle ways, making our formalisation and proofs quite challenging.
1. Blanvillain et al. (2022) use a _nominal_ type system which models class hierarchies, abstracting from class fields and data. Instead, we need data in order to represent P4Runtime tables in \(F_{\text{P4R}}\) and in our results; moreover, our implementation (Section 7) does not make significant use of class hierarchies. Therefore, unlike Blanvillain et al. (2022), we adopt standard data types (records, lists...) with _structural_ typing and subtyping, and we support singleton types -- and consequently, we adapt the match-typing-related rules accordingly.
2. Unlike Blanvillain et al. (2022), our match types do _not_ include a mandatory default case. With the default case, a match type can be "reduced" (i.e. proven subtype-equivalent) to the type in its default case, if the scrutinee type does not match any other case. We removed the mandatory default case because it is not needed (and is actually undesirable) for our modelling of P4Runtime table types. Moreover, the Scala 3 compiler does _not_ require programmers to specify a default case in their match types -- and since our API P4R-Type leverages this feature, we formalised the typing system of \(F_{\text{P4R}}\) accordingly. A default match type case can be obtained (when needed) by adding a branch that matches the top type \(\top\).
3. Correspondingly, our match terms do _not_ include a mandatory default case (unlike Blanvillain et al. (2022)). Consequently, our typing rule T-Match (Figure 9) has an additional constraint w.r.t. Blanvillain et al. (2022): the scrutinee type must be a subtype of the union of all case types, thus ensuring that the pattern matching is exhaustive (the Scala 3 compiler performs similar checks). Notably, match term exhaustiveness is needed to prove progress (Theorem 6.4); instead, Blanvillain et al. (2022) do not check match term exhaustiveness because their default match case ensures that a match term can always be reduced.
## 5. Semantics of \(F_{\text{P4R}}\) Programs and P4Runtime Networks
In this section we formalise the semantics of \(F_{\text{P4R}}\) programs (Section 5.1), P4Runtime servers (Section 5.2), and networks of clients and servers (Section 5.3).
### Semantics of \(F_{\text{P4R}}\) Programs
We introduce the labelled transition system (LTS) semantics of \(F_{\text{P4R}}\). Definition 5.1 below formalises an _early_ semantics, where each transition label denotes either an internal computation (\(\tau\)), or a possible input/output interaction with the surrounding environment. This style of _early_ LTS semantics is inspired by the \(\pi\)-calculus (Sangiorgi and Walker, 2001), and allows us to formalise and reason about the interactions between \(F_{\text{P4R}}\) programs and P4Runtime servers (formalised later in Definition 5.7) while keeping the respective syntax and semantics decoupled.
Definition 5.1 (Semantics of \(F_{\text{P4R}}\)).: Assume a predicate "\(v\in T\)" which holds iff value \(v\) belongs to type \(T\). We define the _labelled transition system (LTS) semantics_ of \(F_{\text{P4R}}\) as a transition relation \(t\xrightarrow{\alpha}t^{\prime}\), where the label \(\alpha\) is defined as:
\[\begin{array}{rcll}\text{Transition label}\quad\alpha&\coloneqq&\tau&\text{(Internal transition)}\\ &\mid&\text{connect}(a)\leadsto s&\text{(Connect to server address $a$, getting channel $s$)}\\ &\mid&\text{read}(s,v)\leadsto v^{\prime}&\text{(Perform query $v$ on channel $s$, getting result $v^{\prime}$)}\\ &\mid&\text{insert}(s,v)\leadsto v^{\prime}&\text{(Insert $v$ on channel $s$, getting result $v^{\prime}$)}\\ &\mid&\text{modify}(s,v)\leadsto v^{\prime}&\text{(Modify $v$ on channel $s$, getting result $v^{\prime}$)}\\ &\mid&\text{delete}(s,v)\leadsto v^{\prime}&\text{(Delete $v$ on channel $s$, getting result $v^{\prime}$)}\end{array}\]
The transition relation \(t\xrightarrow{\alpha}t^{\prime}\) is defined in Figure 10, where the context transition rule E-\(\mathbb{C}\) uses an _evaluation context_\(\mathbb{C}\) (defined below) which represents a \(F_{\mathrm{P4R}}\) term with one hole [ ]:
\[\begin{array}{rcl}\mathbb{C}&\coloneqq&[\,]\mid\mathbb{C}::t\mid v::\mathbb{C }\mid\text{head }\mathbb{C}\mid\text{tail }\mathbb{C}\mid\text{let }x=\mathbb{C}\text{ in }t\\ &\mid&\mathbb{C}\,t\mid v\,\mathbb{C}\mid\,\mathbb{C}\,T\mid\,\mathbb{C}\,f \mid\,\mathbb{C}\text{ match }\{x_{i}:T_{i}\Rightarrow t_{i}\}_{i\in I}\\ &\mid&\{f_{i}=\gamma_{i}\}_{i\in I}\quad\text{where }\,\exists k\in I:\forall i \in I:\begin{cases}i<k\text{ implies }\,\,\gamma_{i}=v_{i}\\ i=k\text{ implies }\,\,\gamma_{i}=\mathbb{C}\\ i>k\text{ implies }\,\,\gamma_{i}=t_{i}\end{cases}\end{array}\]
Most rules in Definition 5.1 are standard, except for the ones highlighted in Figure 10:
* by rule E-Connect, the term Connect(\(a\)) transitions by producing a channel \(s\), whose type conforms to the type of the server address \(a\). The transition label "connect(\(a\))\(\leadsto s\)" means that the term is trying to interact with the surrounding environment: hence, as we will see in Section 5.2, the client expects a P4Runtime server to emit the dual label "\(\overline{\text{connect}(a)\leadsto s}\)", meaning that the server is listening on address \(a\) and can produce channel \(s\);
Figure 10. LTS semantics of \(F_{\mathrm{P4R}}\) terms. Non-standard extensions to \(F_{<}\): are highlighted.
* by rule E-Read, the term \(\mathsf{Read}(s,v)\) transitions by producing a value \(v^{\prime}\), which is a list of P4Entity instances (Figure 4) whose type conforms to the type of channel \(s\). The transition label means that the term expects to interact with a P4Runtime server on channel \(s\);
* rules E-Insert, E-Modify, and E-Delete work similarly, and produce a boolean value describing whether the operation had an effect or not.
\(F_{\mathsf{P4R}}\) terms are evaluated from left to right, using the evaluation contexts \(\mathbb{C}\) in Definition 5.1. For instance, the last case "\(\{f_{i}=\gamma_{i}\}_{i\in I}\)" represents a record whose fields \(f_{i}\) are indexed by \(i\in I\), where \(I\) is a set of consecutive natural numbers \(1..n\) (as per Definition 3.1); all fields to the left of \(f_{k}\) (for some \(k\in I\)) are already fully-evaluated into values \(v_{i}\); the field \(f_{k}\) is a context with a hole, which is going to be evaluated next; and all fields to the right of \(f_{k}\) are arbitrary terms \(t_{i}\), which may be evaluated after \(f_{k}\).
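For instance, the client term used in Example 5.8 below decomposes as \(\mathbb{C}[\mathsf{Connect}(a)]\), so the context rule E-\(\mathbb{C}\) evaluates the Connect sub-term first:

\[\text{let }c=\mathsf{Connect}(a)\text{ in }\mathsf{Insert}(c,v)\;=\;\mathbb{C}[\mathsf{Connect}(a)]\qquad\text{where}\qquad\mathbb{C}\;=\;\big(\text{let }c=[\,]\text{ in }\mathsf{Insert}(c,v)\big)\]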
### Semantics of P4Runtime Servers
To define our P4Runtime server semantics (in Definition 5.6 later on), we need to ensure that a server \(S\) will only answer to well-typed requests from its clients, and that the server entities are well-typed w.r.t. the server configuration \(C\). To this end, we formalise an encoding of a server configuration \(C\) into \(F_{\mathsf{P4R}}\) types (Definition 5.2 below). Intuitively, this describes how to turn the P4Info metadata of a P4 device into a set of types describing the device tables, actions, etc.
**Definition 5.2** (Encoding of a Server Configuration into \(F_{\mathsf{P4R}}\) Types).: Given a P4Runtime server configuration \(C\), we define the _encoding_ \(\llbracket\cdots\rrbracket\) of its entries into \(F_{\mathsf{P4R}}\) types in Figure 11.
Figure 11. Definition of the encoding operation \([\![\cdots]\!]\) from P4Runtime configurations to \(F_{\mathsf{P4R}}\) types.
_Example 5.3 (Server Configuration Representation as \(F_{\text{P4R}}\) Types)._ Consider the P4Runtime server configuration in Figure 5: by Definition 5.2, its encoding into the \(F_{\text{P4R}}\) types is shown in Figure 12. (The same types are also used in Example 4.5, where they are called \(T_{m},T_{a},T_{p}\).)
From now on, we will assume that each P4Runtime server is _well-formed_ by Definition 5.4 below: it means that each entity belongs to the P4Entity type (Figure 4) instantiated with type parameters that correspond to the type-encoded server configuration (by Definition 5.2).
_Definition 5.4 (P4Runtime Entity Conformity and Server Well-Formedness)._ A P4Runtime entity \(e\)_conforms_ to a server configuration \(C\) iff:
\[\exists X_{n},X_{a}:e\in\text{P4Entity}\;\llbracket C.\mathit{table\_matches} \rrbracket\;\llbracket C.\mathit{table\_actions}\rrbracket\;\llbracket C. \mathit{action\_params}\rrbracket\;X_{n}\;X_{a}\]
The predicate \(\mathit{Conforms}(e,C)\) holds iff entity \(e\) conforms to the configuration \(C\). A P4Runtime server \(\langle C,E,a,K\rangle\)_is well-formed_ iff \(\forall e\in E:\mathit{Conforms}(e,C)\).
The key insight behind Definition 5.4 is that the instantiation of P4Entity can only reduce to an actual type if the argument \(X_{n}\) is a valid table name in \(C\), and if \(X_{a}\) is a valid action for table \(X_{n}\). Definition 5.4 directly leads to the following property, which will allow us to prove the results in Section 6: if a client sends to the server a well-typed value \(v\), the server will consider it conformant.
Proposition 5.5 (Conformance of Well-Typed Values)._For any server \(S=\langle C,E,a,K\rangle\) and any value \(v\), we have:_
\[\mathit{Conforms}(v,C)\iff\exists X_{n},X_{a}:\ \emptyset\vdash v:\text{P4Entity}\;\llbracket C.\mathit{table\_matches}\rrbracket\;\llbracket C.\mathit{table\_actions}\rrbracket\;\llbracket C.\mathit{action\_params}\rrbracket\;X_{n}\;X_{a}\]
_Definition 5.6 (P4Runtime Server Semantics)._ We define the _semantics of a P4Runtime server \(S\)_ as a relation \(S\xrightarrow{\overline{\alpha}}S^{\prime}\) (where \(\alpha\) is from Definition 5.1) inductively defined by the rules in Figure 13.
The P4Runtime server semantics in Definition 5.6 show how the internal configuration of a P4Runtime server evolves, and how the server responds to queries from clients. The semantics are based on the P4Runtime specification [P4.org Working Group 2020b].
The server semantics focus on checking the conformance of requests from the clients, and computes a response using an abstract evaluation predicate "\(\langle\_,\_,\_\rangle\;\downarrow\;\_\)": the details of this predicate are not crucial -- but we assume that it always yields a well-typed response, i.e. a boolean or an entity that conforms to the server configuration \(C\).4
Footnote 4: For reference, the semantics of the predicate "\(\langle C,E,\text{read}(v)\rangle\;\downarrow\;v^{\prime}\)" are available in the appendix, in Figure 17.
Figure 12. The encoding of the server configuration \(C\) in Figure 5 into \(F_{\text{P4R}}\) types. (The same types are also used in Example 4.5, where they are called \(T_{m},T_{a},T_{p}\).)
* By rule Sv-Connect, a server listening on address \(a\) can accept a client connection by generating a unique channel instance \(s\), adding \(s\) to the set of established connections \(K\), and producing a transition label \(\overline{\text{connect}(a)\leadsto s}\) (a concrete transition of this shape is sketched after this list). Importantly, the channel \(s\) belongs to an \(F_{\text{P4R}}\) channel type whose type arguments are obtained by encoding the server configuration \(C\) (by the encoding \(\llbracket\cdots\rrbracket\) in Definition 5.2).
* By rule Sv-Read, the server can handle a client's read request by emitting a label \(\overline{\text{read}(s,v)\leadsto v^{\prime}}\), provided that the connection \(s\) belongs to the set of established connections \(K\), and the query argument \(v\) conforms to the server configuration (by Definition 5.4);
* The remaining rules handle a client's insert, modify and delete requests analogously: the request must arrive on an established connection \(s\in K\) and its entity argument must conform to the server configuration; the server replies with a boolean and may update its set of entities \(E\), e.g. by adding or removing P4 table entries.
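Concretely, rule Sv-Connect gives rise to transitions of the following shape (writing \(tm\), \(ta\), \(ap\) for the abbreviated configuration fields of Definition 3.3, and with \(s\) fresh):

\[\langle C,E,a,K\rangle\;\xrightarrow{\;\overline{\text{connect}(a)\leadsto s}\;}\;\langle C,E,a,K\cup\{s\}\rangle\qquad\text{with}\quad s\in\mathrm{Chan}\big[\llbracket C.tm\rrbracket,\ \llbracket C.ta\rrbracket,\ \llbracket C.ap\rrbracket\big]\]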
### Semantics of P4Runtime Networks
We now formalise the semantics of the P4Runtime networks introduced in Definition 3.4.
Definition 5.7 (P4Runtime Network Semantics): The _LTS semantics of a P4Runtime network_ is defined by the following rules, where \(\alpha\) ranges over the labels introduced in Definition 5.1: (for brevity, we omit the symmetric rules)
\[\frac{N\xrightarrow{\alpha}N^{\prime}}{N\,|\,N^{\prime\prime}\ \xrightarrow{\alpha}\ N^{\prime}\,|\,N^{\prime\prime}}\ \text{Net-}\alpha\qquad\qquad\frac{N_{1}\xrightarrow{\alpha}N_{1}^{\prime}\qquad N_{2}\xrightarrow{\overline{\alpha}}N_{2}^{\prime}}{N_{1}\,|\,N_{2}\ \xrightarrow{\tau}\ N_{1}^{\prime}\,|\,N_{2}^{\prime}}\ \text{Net-}\tau\]
We often write \(N\to N^{\prime}\) instead of \(N\xrightarrow{\tau}N^{\prime}\), and \(\to\)* for the reflexive and transitive closure of \(\to\).
By Definition 3.4, a network \(N\) is a parallel composition of any number of P4Runtime clients and servers. According to the semantics in Definition 5.7, a network \(N\) can perform a transition \(\alpha\) even when composed with another network \(N^{\prime\prime}\) (rule Net-\(\alpha\)); and if two networks fire dual labels \(\alpha\) and
Figure 13. LTS semantics of a P4Runtime server.
\(\overline{\alpha}\), then they can synchronise when composed, producing a \(\tau\)-transition: this allows a P4Runtime client and server to interact, as illustrated in Example 5.8 below.
Example 5.8 (A Simple P4Runtime Network).: We give a brief example of how a network reduction could look using our semantics. Consider the \(F_{\mathrm{P4R}}\) term:
\[\text{let }c=\mathsf{Connect}(a)\text{ in }\mathsf{Insert}(c,v)\]
This term attempts to connect to a P4Runtime server \(a\) and insert a value \(v\) (a P4 table entry). If we compose this \(F_{\mathrm{P4R}}\) term with a P4Runtime server, the resulting network reduces as:
\[\begin{array}{l}\big(\text{let }c=\mathsf{Connect}(a)\text{ in }\mathsf{Insert}(c,v)\big)\ \big|\ \langle C,E,a,K\rangle\\[4pt]\quad\to\quad\big(\text{let }c=s\text{ in }\mathsf{Insert}(c,v)\big)\ \big|\ \langle C,E,a,K\cup\{s\}\rangle\qquad\text{where }s\text{ is a fresh channel, }s\in\mathrm{Chan}\big[\llbracket C.tm\rrbracket,\llbracket C.ta\rrbracket,\llbracket C.ap\rrbracket\big]\\[4pt]\quad\to\quad\mathsf{Insert}(s,v)\ \big|\ \langle C,E,a,K\cup\{s\}\rangle\\[4pt]\quad\to\quad\mathrm{true}\ \big|\ \langle C,E\cup\{v\},a,K\cup\{s\}\rangle\end{array}\]

The first step synchronises the client rule E-Connect with the server rule Sv-Connect (via the network semantics of Definition 5.7); the second step is an internal \(\tau\)-transition of the client, substituting the freshly obtained channel \(s\) for \(c\); the last step synchronises E-Insert with the corresponding server rule which, assuming \(v\) conforms to \(C\) and the insertion succeeds, adds the entry \(v\) to the server's entities and lets the client reduce to true.

## 6. Type Soundness of \(F_{\text{P4R}}\)

This section presents our main results: well-typed \(F_{\text{P4R}}\) clients enjoy preservation (Theorem 6.1) and progress (Theorem 6.4) when composed with well-formed P4Runtime servers. We first introduce the auxiliary notions of network congruence and network well-typedness.
**Definition 6.2** (_Network Congruence_).: \(\equiv\) is the least congruence between networks such that:
\[N_{1}\mid N_{2}\ \equiv\ N_{2}\mid N_{1}\qquad(N_{1}\mid N_{2})\mid N_{3}\ \equiv\ N_{1}\mid(N_{2}\mid N_{3})\]
**Definition 6.3** (_Well-typed Network_).: We say that _a network \(N\) is well-typed_ iff for all P4Runtime clients \(t\) such that \(N\equiv t\mid N_{0}\) (for some \(N_{0}\)), we have:
1. \(\emptyset\vdash t:T\) (for some \(T\));
2. for all server addresses \(a\) occurring in \(t\): * there is exactly one server \(S=\langle C,E,a,K\rangle\) such that \(N_{0}\equiv S\mid N_{1}\); and * \(a\in\text{ServerRef}\big{[}\llbracket C.table\_matches\rrbracket,\llbracket C. table\_actions\rrbracket,\llbracket C.action\_params\rrbracket\big{]}\)
3. for all client-server channels \(s\) occurring in \(t\): * there is exactly one server \(S=\langle C,E,a,K\rangle\) with \(s\in K\), and such that \(N_{0}\equiv S\mid N_{1}\); and * \(s\in\text{Chan}\big{[}\llbracket C.table\_matches\rrbracket,\llbracket C.table\_actions \rrbracket,\llbracket C.action\_params\rrbracket\big{]}\)
We now have all ingredients to formalise progress (Theorem 6.4), and the resulting Corollary 6.5: well-typed networks only stop reducing when all P4Runtime clients terminate successfully.
**Theorem 6.4** (Progress).: _Take any well-typed network \(N\), and take any P4Runtime client \(t\) such that \(N\equiv t\mid N_{0}\) (for some \(N_{0}\)). Then either:_
* \(t\) _is fully-reduced into a value; or_
* \(t\to t^{\prime}\)_, and correspondingly,_ \(N\to N^{\prime}\equiv t^{\prime}\mid N_{0}\) _with_ \(N^{\prime}\) _well-typed; or_
* _there is a server_ \(S\) _such that_ \(N_{0}\equiv S\mid N_{1}\) _and_ \(t\mid S\to t^{\prime}\mid S^{\prime}\)_, and correspondingly,_ \(N\to N^{\prime}\equiv t^{\prime}\mid S^{\prime}\mid N_{1}\) _with_ \(N^{\prime}\) _well-typed._
**Corollary 6.5** (Type soundness).: _Take any well-typed network \(N\). If \(N\to^{*}N^{\prime}\) and \(N^{\prime}\) cannot perform further \(\tau\)-transitions, then all P4Runtime clients in \(N^{\prime}\) are fully-reduced into values._
## 7. Implementation of P4R-Type: A Scala 3 API Based on \(F_{\text{P4R}}\)
We now outline the implementation of P4R-Type, our verified API for programming P4Runtime client applications, based on our formalisation of \(F_{\text{P4R}}\) and its typing system (Sections 3 and 4). P4R-Type is published as a companion artifact of this paper, and its latest version is available at:
[https://github.com/JensKanstrupLarsen/P4R-Type/](https://github.com/JensKanstrupLarsen/P4R-Type/)
Our typing system (Section 4) is designed to take advantage of Scala 3 features (in particular, match types [Blanvillain et al. 2022]): this naturally leads to implementing P4R-Type as a Scala 3 API. Consequently, the interactions between a client using P4R-Type and one or more P4 devices have the properties presented in Section 6: all read/insert/modify/delete operations are type-safe, and they enjoy progress and preservation (if both client and device use the same P4Info file).
The implementation of P4R-Type consists of: _(1)_ a type-parametric API for P4Runtime operations (connect, read, insert, etc.) (Section 7.1), and _(2)_ a software tool that turns a P4Info file into a set of Scala 3 types which constrain the P4R-Type API (Section 7.2).
### Type-Parametric API for P4Runtime Operations
The P4R-Type API consists of the five P4Runtime operations detailed in Section 3: connect, read, insert, modify, and delete. We implement these operations as methods equipped with the strict type parameters shown in Figure 9 (rules T-OpC, T-OpR, T-OpI, T-OpM, T-OpD). The operations closely correspond to the operations in the P4Runtime protobuf API [P4.org Working Group 2020b]. Under the hood, these methods use the loosely-typed P4Runtime protobuf specification and RPC,5 with (de-)serialisation from/to Scala objects based on the ScalaPB library:6
Footnote 5: [https://github.com/p4lang/p4runtime](https://github.com/p4lang/p4runtime)
Footnote 6: [https://scalapb.github.io/](https://scalapb.github.io/)
* connect uses the StreamChannel RPC to establish a connection;
* read uses the Read RPC to read table entries from the server;
* insert, modify, and delete use the Write RPC to update the server.
The signatures of the API methods also align with the formal API:

\[\begin{split}\text{let read}\ &=\ \lambda T_{m}.\ \lambda T_{a}.\ \lambda T_{p}.\ \lambda X_{n}<:\text{TableName}.\ \lambda X_{a}<:T_{a}\ X_{n}.\\ &\quad\ \lambda c:\mathrm{Chan}[T_{m},T_{a},T_{p}].\\ &\quad\ \lambda x:\{\text{name}:X_{n},\ \text{matches}:T_{m}\ X_{n},\ \text{action}:X_{a},\ \text{params}:T_{p}\ X_{a}\}.\ \mathsf{Read}(c,x)\\ \text{in}\ &\ \ldots\end{split}\]
```
def read[TM[_], TA[_], TP[_]]
    (c: FP4Channel[TM, TA, TP], tableEntry: FP4TableEntry[TM, TA, TP, _, _])
    : Seq[FP4TableEntry[TM, TA, TP, _, _]]
  = ...
```
In the code snippet above, the two types FP4Channel and FP4TableEntry are also part of P4R-Type. Each of these types takes the same type parameters as their equivalents in Figures 3 and 4; such type parameters are usually constrained by the context and inferred by the Scala 3 compiler, hence the user does not need to write them explicitly. The FP4TableEntry type is simply a case class that contains the table entry values (table name, parameters, etc.), while FP4Channel is an abstract class containing the methods for serialization (toProto) and deserialization (fromProto).
### Translation of P4 Device Configuration Metadata (P4Info) into Scala 3 Types
P4R-Type includes a tool that implements the encoding in Definition 5.2: the tool takes a P4Info file (representing a P4 device's tables, actions,...) and generates three Scala 3 types, which can be used to instantiate the type parameters \(T_{m},T_{a},T_{p}\) (see Sections 3 and 4) to guarantee type safety and progress. Such generated types are called TableMatchFields, TableActions, and ActionParams:
* type TableMatchFields can instantiate \(T_{m}\), and maps table names to their match fields;
* type TableActions can instantiate \(T_{a}\), and maps table names to their action names;
* type ActionParams can instantiate \(T_{p}\), and maps action names to their parameter types.
A programmer can use P4R-Type to connect to a P4 device and obtain a typed channel constrained by the 3 types above (akin to \(\mathrm{Chan}[T_{m},T_{a},T_{p}]\) in Section 3); when using our type-parametric API (Section 7.1) on this typed channel, only operations compatible with the P4 device can be performed; otherwise, a type error occurs (just like our type system in Section 4 prevents invalid operations).
We now illustrate in more detail the P4R-Type-generated types that can instantiate the type parameters \(T_{m},T_{a},T_{p}\), using Figure 12 as an example.
**The type parameter \(T_{m}\) (match fields of a P4 table) can be instantiated with the higher-kinded type TableMatchFields, which takes a parameter TN (expected to be a known table name).**
```
type TableMatchFields[TN] = TN match
  case "IPv4_table" => ("IPv4_dst_addr", P4.LPM)
  case "IPv6_table" => ("IPv6_dst_addr", P4.LPM)
  case "*" => "*"
```
The type above matches a table name TN with one of the known table names (represented as singleton string types) and yields tuple types pairing TN's field names with their type of packet match (P4.Exact, P4.Ternary, P4.LPM, ..., which are types provided by P4R-Type). As per the P4Runtime standard, table fields can be optionally undefined, unless they perform a P4.Exact packet match.
**The type parameter \(T_{a}\) (P4 table actions) can be instantiated with type TableActions, that matches a table name TN to yield the valid actions for TN (which may include the wildcard *).**
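For the configuration used as our running example, the generated type could look roughly as follows (a sketch: the exact code emitted by the tool may differ, but the table and action names mirror those of Figures 5 and 12):

```
type TableActions[TN] = TN match
  case "IPv4_table" => "IPv4_forward" | "Drop" | "*"
  case "IPv6_table" => "IPv6_forward" | "Drop" | "*"
  case "*" => "*"
```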
The type parameter \(T_{p}\) (action parameters) can be instantiated with type ActionParams, that matches an action name AN to yield the parameter types for AN. Each parameter type is a tuple with the name of the parameter (as a singleton string type) and the value type.
```
type ActionParams[AN] = AN match
  case "IPv4_forward" => (("mac_dst", ByteString), ("port", ByteString))
  case "IPv6_forward" => (("mac_dst", ByteString), ("port", ByteString))
  case "Drop" => Unit
  case "*" => Unit
```
All three types above also accept a _wildcard_ singleton type "*" as a parameter, representing the request of querying all/any table match fields, actions, or parameters.
## 8. Case Studies and Discussion of Alternative Designs
In this section we demonstrate the usefulness of having compile-time checked P4Runtime queries, by illustrating three case studies implemented using P4R-Type. We discuss one case study in detail (update of multiple switches, in Section 8.1) and outline two more (port forwarding and load balancing, in Sections 8.2 and 8.3): these applications are derived from, and extend, the tunnelling example in the P4Runtime tutorials,7 and are all included in the software artifact that accompanies this paper.
Footnote 7: [https://github.com/p4lang/tutorials/tree/master/exercises/p4runtime](https://github.com/p4lang/tutorials/tree/master/exercises/p4runtime)
### Updating a Network with Multiple P4 Switches
Figure 14 shows the case study network. It contains four networks (N1-N4) which are connected through the bridge established by the switches (S1-S4). Switch S1 and S2 use the same subnet mask (10.1.0.0), as do switch S3 and S4 (10.2.0.0). Each switch is configured with a general firewall table for all network traffic, as well as a more specific IPv4 forwarding table for its own subnet. For this reason, the switches use different configuration files, shown in Figure 15. All of the switches should share the same entries for the firewall table. Switch S1 is the master switch for forwarding rules related to subnet 10.1.0.0, while switch S3 is the master switch for forwarding rules related to subnet 10.2.0.0, meaning that S2 and S4 should replicate their table entries, respectively.
The replication of table entries must be done periodically by an external controller. For this case study, we implement a controller in P4R-Type that performs this replication, which should:
1. Insert a set of firewall table entries into all four switches.
2. Read all entries from the ipv4_lpm table on S1, then insert them into S2.
3. Read all entries from the ipv4_table table on S3, then insert them into S4.
When a programmer uses our P4R-Type API, the Scala compiler spots several possible errors that may occur when updating multiple switches with different P4 configurations:
Figure 14. Network topology used in the case studies (Section 8): N1–N4 are networks, and S1–S4 are switches.
* Using non-existent table or action names (e.g., due to typos)
* Inserting the wrong type of entries in a table (e.g., wrong number of match fields)
* Using an existing action in an existing table that does not support it (e.g., an entry in firewall referencing ipv4_forward)
* Passing the wrong type of arguments to an action (e.g., an entry in ipv4_lpm referencing action ipv4_forward, but passing only one argument)
_Generated Types in Scala 3_. Using the types generated by the tool, the replication program written in P4R-Type is shown in Figure 16. Note that the API interface is relatively minimal and similar to the Python API. For instance, compare the insert call on lines 9-11 to the Python code in Figure 1. The difference here is that an error like the one shown in Figure 1 would be caught at compile time by the Scala 3 type system. For example, using "Process.forward_packet" instead of "Process.drop" on line 11 would yield a type error: _"a value of type s.TA[("Process.firewall")] is required"_.
On lines 1-4, the connection to each switch is established. Note that the connect methods are specific to each configuration, unlike the other P4Runtime operations which are part of a generic
Figure 16. The replication program written in P4R-Type.
Figure 15. The packet-processing sections of the P4 files of switches S1 and S3 (left) and S2 and S4 (right).
package: connect returns an FP4Channel instance with predefined type parameters, which in turn constrain the read/insert/modify/delete operations that can be performed on that channel. Consider e.g. lines 9-11 in Figure 16: in the insert call, the tableEntry parameter is constrained to only accept table entries that satisfy the switch configuration of channel s. Since s ranges over a list of channels having two different types of switches (config1 and config2), such entries must be valid in _both_ switch configurations. Since both configurations share a "Process.firewall" table, the program compiles. Instead, if an otherwise valid entry for e.g. the "Process.ipv4_lpm" table is provided, the code would not compile, as that table is defined in config1 but not in config2.
### Port Forwarding Management
We implemented a control plane program for _port forwarding_, which is a Network Address Translations (NAT) service typically offered e.g. by Internet routers. We use the same topology as in Figure 14, but we assume that N1, N2, and N3 are local networks, while N4 is an external network. The goal is to allow external clients to connect to servers hosted in the internal networks. To this end, S4 applies a set of NAT rules saying e.g. that:
* each packet received on the external S4 interface, and having destination IP address 1.2.3.4 and port 42, should be translated to have destination IP address 10.1.0.4 and port 1042 (and vice versa for the internal S4 interface).
We developed a program (using P4R-Type) that offers a command line interface to connect to S4 and query, add, and delete its NAT rules. The program reads and modifies two P4 tables called nat_ingress and nat_egress containing the translations for incoming and outgoing packets. Translated packets are then forwarded according to the entries of a table called ipv4_forward (similar to the one used in Section 8.1).
### Load Balancing
We implemented a control plane program for load balancing packet transfers. We use the same topology as in Figure 14, and the goal is for S1 to equally distribute all packets bound for N4 between its outgoing ports to S2, S3 and S4. To implement this, we use a P4 entity called _counter_,8 which can be incremented by the data plane and read by the control plane. We configure the data plane of S1 with one counter per output port, and rules that increment a counter every time a packet is forwarded through the corresponding port. Our control plane program then periodically reads the counter values (using the P4R-Type method readCounter, similar to read for P4 tables) and updates the packet forwarding rules (using the P4R-Type method modify).
Footnote 8: Counters are not modelled in \(F_{\text{P4R}}\); they can be easily added e.g. as a new case to the union type of P4Entity (Figure 4).
### On the Role of Match Types and Singleton Types
We now discuss whether our results could be achieved with a different design that, while still satisfying requirements (**R1**), (**R2**), and (**R3**) in Section 1, would not rely on match types nor singleton types, and would be therefore less tied to the Scala 3 programming language. Let us consider the case study in Section 8.1, and how we could address it in a subset of Scala 3 _without_ match nor singleton types.
To ensure that the table entries described in a P4Info file are constructed correctly, we would need to generate a dedicated data type for each table, with argument types capturing the constraints on actions and parameters. We would also need to constrain channel types to valid table entry types, to ensure that read/insert/modify/delete only use table entries of the correct type. E.g. in the case of the first P4Info metadata in Figure 15 we might generate a set of type definitions like:
```
package config1

case class ActionWildcard()
case class ActionDrop()
case class ActionForwardPacket(addr: ByteString, port: ByteString)

type FirewallAction = ActionDrop | ActionWildcard
case class FirewallTableEntry(fields: Option[FP4_LPM], action: FirewallAction)

type IPV4Action = ActionDrop | ActionForwardPacket | ActionWildcard
case class IPV4TableEntry(fields: (FP4_Exact, Option[FP4_LPM]), action: IPV4Action)

def connect(...): FP4Channel[FirewallTableEntry | IPV4TableEntry] = ...
```
A program written with the resulting API would look like:
```
val s1 = config1.connect(0, "127.0.0.1", 50051)
insert(s1, config1.FirewallTableEntry(Some(FP4_LPM(...)), config1.ActionDrop))
```
The type definitions outlined above are roughly as compact as the match types we generate.9 However, the main drawback of such type definitions is that they are substantially more laborious to formalise: we would need to extend the typing system of \(F_{\text{P4R}}\) (Definition 4.2) with a nominal environment to collect type definitions, and the formal encoding from P4Info metadata to types would be significantly more complex than our Definition 5.2. As a consequence, stating and proving results like our Theorems 6.1 and 6.4 would be considerably harder, hampering requirement (**R1**).
Footnote 9: These definitions may be more verbose in languages without the type union "\(|\)", going against requirement (**R3**). E.g. in F# or OCaml, FirewallAction and IPV4Action would be rendered as labelled sum types, and each action used in more than one table would result in duplicated label definitions (in this example, this would apply to ActionDrop and ActionWildcard).
On the practical side, another drawback of the type definitions outlined above is that they would make the API more cumbersome and limited: e.g. it would be hard or impossible to write code like lines 7-11 in Figure 16, where the insert operation works on channels with different P4 configurations config1 and config2. The reason is that channels s1 and s2 would only support table entries of type config1.FirewallTableEntry, whereas channels s3 and s4 would only support config2.FirewallTableEntry: such types would be unrelated and could not be unified, hence a programmer would need to duplicate the code of the insert operations. One might try to mitigate this duplication by leveraging structural typing (available e.g. in TypeScript, or in OCaml structs); but then, the signature of the API method insert would become non-trivial and the feasibility of this approach would require further research. Instead, the match types produced by our encoding in Definition 5.2 allow the Scala compiler to verify that the table entries for "Process.firewall" have the same type under both config1 and config2, hence the code in Figure 16 type-checks.
## 9. Related Work
The programmability of SDNs comes at the cost of complexity and attack surfaces of modern networks (Kreutz et al., 2013). Several proposals address complementary problems to our work by giving formal semantics to the data plane language (Alshnakat et al., 2022; Doenges et al., 2021; Peterson et al., 2023) and by developing static (Eichholz et al., 2022; Liu et al., 2018; Stoenescu et al., 2018) and dynamic (Notzli et al., 2018; Shukla et al., 2020) analysis tools for the data plane.
Several tools have been developed to verify various network properties. Header Space Analysis (Kazemian et al., 2012) is a framework that can analyse reachability and identify loops, among other properties, of dynamic networks. Both the data plane and the control plane must be represented in the abstract framework. NetKAT (Anderson et al., 2014), and the more recent DyNetKat (Caltais et al., 2022), provides a network language which encompasses both data plane
and control plane, with formal semantics, and develops syntactic techniques for proving network reachability, non-interference, and correctness of program transformations. Batfish (Fogel et al., 2015) uses a declarative approach to define network behavior via logical relations that represent both data and control plane. The framework allows one to check whether any reachable network configuration (and packet) can violate forwarding properties.
The main difference with our proposal is that these models are non-executable specifications and are much more abstract than the languages used to program SDNs. Therefore, they do not directly provide a method to program the control plane. Verifying actual control software using these models requires to map software behavior to these specifications, which is extremely hard when the control plane is developed using a general-purpose language like Python or Scala. Moreover, many of these models assume a configurable, but not programmable, data plane, which supports a limited and predefined set of protocols (e.g., SDNs using OpenFlow (McKeown et al., 2008)). Instead, our proposal provides a programming framework for the control plane that can interact with arbitrary P4 data planes, and that can statically prevent invalid table manipulations.
## 10. Conclusion and Future Work
We presented P4R-Type, a novel verified API for P4Runtime programs written in Scala 3. As a foundation for P4R-Type, we presented the first formal model of P4Runtime networks, where servers interact with client applications written in the calculus \(F_{\text{P4R}}\); we also developed a typing system for \(F_{\text{P4R}}\) (including match types and singleton types, inspired by Scala 3) and proved that well-typed \(F_{\text{P4R}}\) clients interact correctly with the surrounding servers (Theorems 6.1 and 6.4). These correctness results are inherited by actual P4 control programs that use our P4R-Type API.
This paper is a stepping stone toward a broader objective: a fully-verified P4 development pipeline encompassing the verification of _both_ the control plane and the data plane, ensuring that configuration updates applied by control programs never compromise desired network properties. This objective determines our future work, outlined below.
While our type system is sound in the sense that well-typed programs never get stuck, a server may still in some cases reject an update by producing a false response value (for Insert, Modify or Delete). Not all of these cases can be statically verified (e.g. trying to insert a table entry that already exists in the server), but some may be prevented by further typing constraints. For example, instead of using the same P4Entity type for all of the operations that handle table entries, we may adopt distinct formats or restrictions on table entries for distinct operations -- e.g. the Insert operation does not in general accept entries where \(table\_matches="*"\), but the Read operation always does. A solution to this could be to generate a distinct set of match types for each operation: this should not drastically change the formalization nor the proofs.
Network properties like reachability of a node, enforcement of access control lists, and presence of loops cannot be verified for systems with a programmable data plane by looking only at the control plane. In order to verify these properties, we plan to extend our semantics with P4Runtime stream messages and integrate it with existing semantics of P4. We may also need to formalise more detailed P4Runtime server semantics, e.g. to model P4 network elements that perform delayed table updates, have background processes, or communicate with each other. We expect that, thanks to our adoption of an early LTS semantics for clients and servers (Section 5.1), we will be able to adapt the server semantics while reusing most of the current proofs and results involving \(F_{\text{P4R}}\) clients.
###### Acknowledgements.
This work was partially supported by the DTU Nordic Five Tech Alliance grant "Safe and secure software-defined networks in P4" and the Horizon Europe grant no. 101093006 "TaRDIS." | Software定義ネットワーク(SDN)は、スイッチやルーターなどのネットワーク機器のプログラミング、再構成、最適化をSignificantly簡素化します。 プログラミングのデファクト・スタンダードはP4言語です。しかし、P4の柔軟性とSDNの汎用性は重要なリスクをもたらします。大手クラウドプロバイダーの複数の事例から明らかになったように、SDNプログラムのエラーはネットワークの利用可能性を低下させ、非機能状態に置きます。この論文の対象となるのは、P4対応ネットワーク機器と連携する制御プロセスのエラーです。P4Runtime APIを使用するクライアントにとって、P4Runtime APIを使用することで重大な失敗を引き起こす可能性があります。GoogleのProtocol Buffersを使用しているにもかかわらず。この論文では、ScalaにP4Runtime APIの新しい検証されたバージョンであるP4R-Typeを提案します。P4制御プロセスの動作に対する静的 |
2309.04209 | Computable error bounds for quasi-Monte Carlo using points with
non-negative local discrepancy | Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by
Aistleitner and Dick (2015) and let points
$\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative
local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these
properties to get a non-asymptotic and computable upper bound for the integral
of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD)
property provides a computable lower bound. It has been known since Gabai
(1967) that the two dimensional Hammersley points in any base $b\ge2$ have
non-negative local discrepancy. Using the probabilistic notion of associated
random variables, we generalize Gabai's finding to digital nets in any base
$b\ge2$ and any dimension $d\ge1$ when the generator matrices are permutation
matrices. We show that permutation matrices cannot attain the best values of
the digital net quality parameter when $d\ge3$. As a consequence the computable
absolutely sure bounds we provide come with less accurate estimates than the
usual digital net estimates do in high dimensions. We are also able to
construct high dimensional rank one lattice rules that are NNLD. We show that
those lattices do not have good discrepancy properties: any lattice rule with
the NNLD property in dimension $d\ge2$ either fails to be projection regular or
has all its points on the main diagonal. Complete monotonicity is a very strict
requirement that for some integrands can be mitigated via a control variate. | Michael Gnewuch, Peter Kritzer, Art B. Owen, Zexin Pan | 2023-09-08T08:42:23 | http://arxiv.org/abs/2309.04209v2 | # Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy
###### Abstract
Let \(f:[0,1]^{d}\to\mathbb{R}\) be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) have a non-negative local discrepancy (NNLD) everywhere in \([0,1]^{d}\). We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of \(f\) over \([0,1]^{d}\). An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two dimensional Hammersley points in any base \(b\geqslant 2\) have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base \(b\geqslant 2\) and any dimension \(d\geqslant 1\) when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when \(d\geqslant 3\). As a consequence the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high dimensional rank one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension \(d\geqslant 2\) either fails to be projection regular or has all its points on the main diagonal.
**Keywords:** Associated random variables, Digital nets, Rank one lattices
## 1 Introduction
Quasi-Monte Carlo (QMC) sampling [7, 26] can have much better asymptotic accuracy than plain Monte Carlo (MC), but it does not come with the usual statistical error estimates that MC has. Those estimates can be recovered by randomized QMC (RQMC) [21, 29] based on independent replicates of QMC. In this paper we consider an alternative approach to uncertainty quantification for
QMC. For some special sampling points with a non-negative local discrepancy (NNLD) property described later and a suitably monotone integrand \(f\), we can compute upper and lower bounds on the integral \(\mu\) of \(f\) over the unit cube in \(d\) dimensions. Methods based on random replication can provide confidence intervals for \(\mu\) that attain a desired level such as 95% or 99% asymptotically, as the number of replicates diverges. The method we consider attains 100% coverage for finite \(n\).
Unlike the well-known bounds derived via the Koksma-Hlawka inequality [19], these bounds can be computed by practical algorithms. Convex optimization [2] has the notion of a certificate: a computable bound on the minimum value of the objective function. The methods we present here provide certificates for multidimensional integration of a completely monotone function.
This improved uncertainty quantification comes at some cost. Our versions of the method will be more accurate than MC for dimensions \(d\leqslant 3\), as accurate as MC (apart from logarithmic factors) for \(d=4\) and less accurate than MC for \(d\geqslant 5\). They also require some special knowledge of the integrand.
The problem is trivial and the solution is well known for \(d=1\). If \(f:[0,1]\to\mathbb{R}\) is nondecreasing then
\[\frac{1}{n}\sum_{i=0}^{n-1}f\Big{(}\frac{i}{n}\Big{)}\leqslant\int_{0}^{1}f(x )\,\mathrm{d}x\leqslant\frac{1}{n}\sum_{i=1}^{n}f\Big{(}\frac{i}{n}\Big{)}. \tag{1}\]
These bracketing inequalities hold even if some of the quantities in them are \(\pm\infty\). This works because \(f\) is nondecreasing, the evaluation points in the left hand side are 'biased low' and those in the right hand side are 'biased high'.
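As a quick illustration (ours, not from the paper), the bounds in (1) take only a few lines of Python; the function name bracket_1d and the test integrand are made up for this sketch.

```python
import numpy as np

def bracket_1d(f, n):
    """Bracketing bounds (1) for a nondecreasing f on [0, 1]."""
    grid = np.arange(n + 1) / n               # 0, 1/n, ..., 1
    vals = f(grid)
    lower = vals[:-1].mean()                   # averages f(i/n), i = 0, ..., n-1: biased low
    upper = vals[1:].mean()                    # averages f(i/n), i = 1, ..., n:   biased high
    return lower, upper

lo, hi = bracket_1d(np.square, 1000)           # the true integral of x**2 is 1/3
assert lo <= 1 / 3 <= hi and np.isclose(hi - lo, 1 / 1000)
```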
To get a multivariate version of (1), we generalize the notion of points biased low to points biased towards the origin in terms of a non-negative local discrepancy (NNLD) property of the points. This property was shown to hold for two dimensional Hammersley points by Gabai [12] in 1967. We couple the NNLD property with a multivariate notion of monotonicity called complete monotonicity [1].
This paper is organized as follows. Section 2 gives some notation and then defines the properties of point sets and functions that we need. Theorem 1 there establishes the bracketing property we need. Section 3 gives fundamental properties of NNLD point sets with an emphasis on projection regular point sets. Only very trivial lattice rules, confined to the diagonal in \([0,1]^{d}\), can be both projection regular and NNLD. Cartesian products preserve the NNLD property as well as an analogous non-positive local discrepancy property. Section 4 compares our bounds to those obtainable from the Koksma-Hlawka inequality. Section 5 shows that digital nets whose generator matrices are permutation matrices produce NNLD point sets. Section 6 gives a construction of rank one lattice rules that are NNLD. We conclude with a discussion and some additional references in Section 7.
## 2 Definitions and a bound
Here we define a non-negative local discrepancy (NNLD) property of the points we use as well as a complete monotonicity criterion for the integrand. We then establish bounds analogous to (1). First we introduce some notation.
### Notation
For integer \(b\geqslant 1\), let \(\mathbb{Z}_{b}=\{0,1,\ldots,b-1\}\). The set \(\{1,2,\ldots,d\}\) of variable indices is denoted by \([d]\). For \(u\subseteq[d]\), we use \(|u|\) for the cardinality of \(u\) and \(-u\) for the complement \([d]\setminus u\), especially in subscripts and superscripts. The singleton \(\{j\}\) may be abbreviated to just \(j\) and \(-\{j\}\) to \(-j\). For points \(\mathbf{x},\mathbf{z}\in[0,1]^{d}\) and a set \(u\subseteq[d]=\{1,2,\ldots,d\}\) let \(\mathbf{x}_{u}\colon\mathbf{z}_{-u}\) be the hybrid point with \(j\)'th component \(x_{j}\) for \(j\in u\) and \(j\)'th component \(z_{j}\) for \(j\not\in u\).
The points with all coordinates \(0\) or all coordinates \(1\) are denoted by \(\mathbf{0}\) and \(\mathbf{1}\) respectively. When it is necessary to specify their dimension we use \(\mathbf{0}_{d}\) and \(\mathbf{1}_{d}\). The notation \(\mathbb{1}\{A\}\) is for an indicator variable equal to \(1\) when \(A\) is true and \(0\) otherwise.
For integer \(d\geqslant 1\) we will use the following precedence notion on \([0,1]^{d}\). For \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\) we say that \(\mathbf{x}\leqslant\mathbf{z}\) when \(x_{j}\leqslant z_{j}\) holds for all \(j=1,\ldots,d\).
### Non-negative local discrepancy
A QMC rule is given by a list of points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) and it yields the estimate
\[\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{x}_{i})\]
of \(\mu\). We refer to these points as a point set, \(P_{n}\), though in any setting where some \(\mathbf{x}_{i}\) are duplicated we actually treat \(P_{n}\) as a multiset, counting multiplicity of the points. The local discrepancy of \(P_{n}\) at \(\mathbf{z}\in[0,1]^{d}\) is given by
\[\delta(\mathbf{z})=\delta(\mathbf{z};P_{n})=\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))- \mathrm{VOL}([\mathbf{0},\mathbf{z}))\]
where \(\mathrm{VOL}\) is Lebesgue measure and \(\widehat{\mathrm{VOL}}\) is the empirical measure with
\[\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))=\frac{1}{n}\sum_{i=0}^{n-1}1_{\mathbf{x}_{ i}\in[\mathbf{0},\mathbf{z})}.\]
That is, \(\mathrm{VOL}\) is \(\mathbb{U}[0,1]^{d}\) while \(\widehat{\mathrm{VOL}}\) is \(\mathbb{U}(P_{n})\). The quantity \(D_{n}^{*}=\sup_{\mathbf{z}\in[0,1]^{d}}|\delta(\mathbf{z})|\) is called the star discrepancy of the point set \(P_{n}\).
**Definition 1**.: The point set \(P_{n}\) with points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\) has non-negative local discrepancy (NNLD) if
\[\delta(\mathbf{z})\geqslant 0 \tag{2}\]
for all \(\mathbf{z}\in[0,1]^{d}\).
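For small point sets, Definition 1 can be verified numerically. The brute-force sketch below (ours; helper names are illustrative) relies on the observation that, between consecutive coordinate values of the points, the empirical measure of \([\mathbf{0},\mathbf{z})\) stays constant while the volume grows, so it suffices to evaluate \(\delta\) at corners whose coordinates are point coordinates or \(1\); the cost is exponential in \(d\), so this is only intended for tiny examples.

```python
import itertools
import numpy as np

def local_discrepancy(z, pts):
    """delta(z): empirical measure of [0, z) minus its volume."""
    pts = np.asarray(pts)
    return np.mean(np.all(pts < z, axis=1)) - np.prod(z)

def is_nnld(pts, tol=1e-12):
    """Brute-force check of Definition 1 on the grid of critical corners."""
    pts = np.asarray(pts)
    grids = [np.unique(np.append(pts[:, j], 1.0)) for j in range(pts.shape[1])]
    return all(local_discrepancy(np.array(z), pts) >= -tol
               for z in itertools.product(*grids))

# Sanity check: the diagonal set {0, 1/n, ..., (n-1)/n} replicated in every
# coordinate is a (trivial) NNLD point set.
n, d = 8, 3
diag = np.tile(np.arange(n)[:, None] / n, (1, d))
assert is_nnld(diag)
```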
A distribution for \(\mathbf{x}\in\mathbb{R}^{d}\) is positively lower orthant dependent [32] if
\[\Pr(\mathbf{x}\leqslant\mathbf{z})\geqslant\prod_{j=1}^{d}\Pr(x_{j}\leqslant z_{j})\]
for all \(\mathbf{z}\in\mathbb{R}^{d}\). A sufficient condition for NNLD is that the \(\mathbb{U}(P_{n})\) distribution on \([0,1]^{d}\) is positively lower orthant dependent and that the marginal distributions \(\mathbb{U}\{x_{0,j},\ldots,x_{n-1,j}\}\) for each \(j=1,\ldots,d\) are stochastically smaller than \(\mathbb{U}[0,1]\). The random variable \(X\) is stochastically smaller than the random variable \(Y\) if \(\Pr(X\leqslant z)\geqslant\Pr(Y\leqslant z)\) for all \(z\in\mathbb{R}\) and in that case we also say that the distribution of \(X\) is stochastically smaller than that of \(Y\). There is a related notion of positive upper orthant dependence as well as two related notions of negative orthant dependence, both upper and lower.
In one dimension, the points \(0,1/n,\ldots,(n-1)/n\) are NNLD. As mentioned earlier, \(n=b^{m}\) Hammersley points in base \(b\geqslant 2\) and dimension \(d=2\) are NNLD [12]. Those Hammersley points are constructed as follows. For \(0\leqslant i<n\) write \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\) for digits \(a_{i}(k)\in\{0,1,\ldots,b-1\}\) and set \(i^{\prime}=\sum_{k=1}^{m}a_{i}(m-k+1)b^{k-1}\). Then the \(i\)'th such Hammersley point is \(\mathbf{x}_{i}=\big{(}i/n,i^{\prime}/n\big{)}\) for \(i=0,1,\ldots,n-1\). Some further properties of the Hammersley points, related to the work of [12], are given by [3].
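The digit-reversal construction above is straightforward to code. The sketch below (ours, with an illustrative function name) generates the \(n=b^{m}\) two dimensional Hammersley points; they can then be passed to a checker such as the one sketched after Definition 1.

```python
import numpy as np

def hammersley_2d(m, b=2):
    """The n = b**m points (i/n, i'/n) where i' reverses the base-b digits of i."""
    n = b ** m
    i = np.arange(n)
    rev = np.zeros(n, dtype=np.int64)
    for k in range(m):                                   # digit of b**k in i ...
        rev += ((i // b ** k) % b) * b ** (m - 1 - k)    # ... goes to position b**(m-1-k)
    return np.column_stack([i / n, rev / n])

pts = hammersley_2d(m=4, b=2)                            # 16 points, e.g. (1/16, 8/16) for i = 1
```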
We will also make use of a complementary property: non-positive local discrepancy.
**Definition 2**.: The point set \(P_{n}\) with points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\) has non-positive local discrepancy (NPLD) if
\[\delta(\mathbf{z})\leqslant 0 \tag{3}\]
for all \(\mathbf{z}\in[0,1]^{d}\).
One of our techniques is to take NNLD points \(\mathbf{x}_{i}\) and reflect them to \(\mathbf{1}-\mathbf{x}_{i}\) to get points that oversample rectangular regions near \(\mathbf{1}\). In doing so we will need to take care of two issues. One is that for \(d\geqslant 2\), the complement of a hyperrectangle \([\mathbf{0},\mathbf{a})\) under this transformation is not another hyperrectangle. The other is that even for \(d=1\), the complement of a half open interval \([0,a)\) is a closed interval \([a,1]\).
To handle these issues we make two observations below. First, for an \(n\)-point set \(P_{n}\subset[0,1]^{d}\) let us additionally define the local discrepancy with respect to closed boxes:
\[\overline{\delta}(\mathbf{z})=\overline{\delta}(\mathbf{z};P_{n})=\widehat{\mathrm{ VOL}}([\mathbf{0},\mathbf{z}])-\mathrm{VOL}([\mathbf{0},\mathbf{z}]).\]
**Observation 1**.: _The point set \(P_{n}\) has the NNLD property if and only if_
\[\overline{\delta}(\mathbf{z})\geqslant 0\quad\text{ for all }\mathbf{z}\in[0,1]^{d}. \tag{4}\]
_This is due to the following reasoning: First, we always have \(\overline{\delta}(\mathbf{z})\geqslant\delta(\mathbf{z})\) for all \(\mathbf{z}\in[0,1]^{d}\). Thus the NNLD property of \(P_{n}\) implies (4). For the converse, we
_assume that \(P_{n}\) satisfies (4) and consider two cases. If \(z_{j}=0\) for some \(j\in[d]\) then \(\delta(\mathbf{z})=0\). If instead \(\min_{j\in[d]}z_{j}>0\) then_
\[\delta(\mathbf{z})=\lim_{\varepsilon\downarrow 0}\overline{\delta}(\mathbf{z}- \varepsilon\mathbf{1}).\]
_Either way, (2) holds, i.e., \(P_{n}\) is NNLD._
**Observation 2**.: _The condition_
\[\overline{\delta}(\mathbf{z})\leqslant 0\quad\text{ for all }\mathbf{z}\in[0,1]^{d} \tag{5}\]
_implies that \(P_{n}\) has the NPLD property, since \(\delta(\mathbf{z})\leqslant\overline{\delta}(\mathbf{z})\) for all \(\mathbf{z}\in[0,1]^{d}\). As a partial converse, if \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), then the NPLD property also implies condition (5). Indeed, in that case we have \(\overline{\delta}(\mathbf{1})=0\) and_
\[\overline{\delta}(\mathbf{z})=\lim_{\varepsilon\downarrow 0}\delta(\mathbf{z}+ \varepsilon\mathbf{1})\leqslant 0\quad\text{ for all }\mathbf{z}\in[0,1)^{d}.\]
_Now consider for any \(\mathbf{z}\in[0,1)^{d}\) and any \(\varnothing\neq u\subsetneq[d]\) the closed anchored box \([\mathbf{0},(\mathbf{z}_{u}{:}\mathbf{1}_{-u})]\). Due to \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), it contains exactly the same number of points from \(P_{n}\) as the anchored box \([\mathbf{0},(\mathbf{z}_{u}{:}\mathbf{z}_{-u}^{*})]\), where \(\mathbf{z}^{*}\) is defined by \(z_{j}^{*}:=\max(\{x_{0,j},\ldots,x_{n-1,j}\}\setminus\{1\})\) for \(j=1,\ldots,d\) taking \(z_{j}^{*}=0\) in case it is \(\max(\varnothing)\). Consequently, we have_
\[\overline{\delta}(\mathbf{z}_{u}{:}\mathbf{1}_{-u})\leqslant\overline{ \delta}(\mathbf{z}_{u}{:}\mathbf{z}_{-u}^{*})\leqslant 0.\]
_Hence for \(d=1\) we have equivalence of (5) and NPLD for all \(P_{n}\subset[0,1]\). But if \(d\geqslant 2\), then for arbitrary \(P_{n}\subset[0,1]^{d}\) not contained in \([0,1)^{d}\cup\{\mathbf{1}\}\) the NPLD property does not necessarily imply condition (5), as a trivial example with \(d=2\), \(n=1\), \(P_{n}=\{(1,1/2)\}\) shows: \(\delta(\mathbf{z})=-\mathrm{VOL}([\mathbf{0},\mathbf{z}))\leqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\), but \(\overline{\delta}((1,1/2))=1-1/2=1/2>0\)._
For \(d=1\) if the points in \(\tilde{P}_{n}\) are \(1-x_{i}\) for the points \(x_{i}\) of \(P_{n}\), then
\[\overline{\delta}(z;P_{n})+\delta(1-z;\tilde{P}_{n})=0,\]
i.e., \(\overline{\delta}(z;P_{n})=-\delta(1-z;\tilde{P}_{n})\) for all \(z\in[0,1]\). Then due to Observations 1 and 2, reflections of NNLD points are NPLD points and vice versa for \(d=1\).
In addition to reflection, we consider another useful transformation. Let \(\tilde{\mathbf{x}}_{i}\) be the base \(b\) Hammersley points for \(i=0,\ldots,n-1\) where \(n=b^{m}\) and \(d=2\). Then [4] show that
\[\mathbf{x}_{i}=(1/n+\tilde{x}_{i,1},1-\tilde{x}_{i,2}) \tag{6}\]
are NPLD.
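In code, the transformation (6) is a one-liner; this small sketch (ours) assumes the Hammersley points are stored as an \((n,2)\) array.

```python
import numpy as np

def npld_from_hammersley(pts):
    """Transformation (6): shift the first coordinate by 1/n and reflect the second."""
    n = pts.shape[0]
    return np.column_stack([1.0 / n + pts[:, 0], 1.0 - pts[:, 1]])
```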
### Completely monotone functions
Here we define completely monotone functions, describing them in words before giving the formal definition. If \(\mathbf{x}\leqslant\mathbf{z}\), then a completely monotone function can increase but not decrease if any \(x_{j}\) is replaced by \(z_{j}\). That is \(f(\mathbf{x}_{-j}{:}\mathbf{z}_{j})-f(\mathbf{x})\geqslant 0\) always holds. Next, the size of this difference can only be increasing as some other component \(x_{k}\) is increased to \(z_{k}\), so certain differences of differences must also be non-negative. This condition must hold for anywhere from \(1\) to \(d\) applications of differencing. The \(|u|\)-fold differences of differences are alternating sums of the form
\[\Delta_{u}(\mathbf{x},\mathbf{z})=\sum_{v\subseteq u}(-1)^{|u-v|}f(\mathbf{x}_{-v}{:}\mathbf{z }_{v}).\]
Note that the coefficient of \(f(\mathbf{x}_{-u}{:}\mathbf{z}_{u})\) in \(\Delta_{u}(\mathbf{x},\mathbf{z})\) is positive.
**Definition 3**.: The function \(f:[0,1]^{d}\to\mathbb{R}\) is completely monotone if \(\Delta_{u}(\mathbf{x},\mathbf{z})\geqslant 0\) for all non-empty \(u\) and all \(\mathbf{x},\mathbf{z}\in[0,1]^{d}\) with \(\mathbf{x}_{u}\leqslant\mathbf{z}_{u}\).
In [1], Aistleitner and Dick use completely monotone functions to analyze the total variation of \(f\) in the sense of Hardy and Krause, denoted by \(V_{\rm HK}(f)\). See [28] for an account. From Theorem 2 of [1], if \(V_{\rm HK}(f)<\infty\) then we can write
\[f(\mathbf{x})=f(\mathbf{0})+f^{+}(\mathbf{x})-f^{-}(\mathbf{x})\]
where \(f^{+}\) and \(f^{-}\) are completely monotone functions with \(f^{+}(\mathbf{0})=f^{-}(\mathbf{0})=0\). They call \(f^{+}-f^{-}\) the Jordan decomposition of \(f\). The functions \(f^{\pm}\) are uniquely determined.
If \(f\) is right-continuous and \(V_{\rm HK}(f)<\infty\) then \(f(\mathbf{x})=\nu([\mathbf{0},\mathbf{x}])\) for a uniquely determined signed Borel measure \(\nu\), by Theorem 3 of [1]. Let this signed measure have Jordan decomposition \(\nu=\nu^{+}-\nu^{-}\) for ordinary (unsigned) Borel measures \(\nu^{\pm}\). Then \(f^{\pm}(\mathbf{x})=\nu^{\pm}([\mathbf{0},\mathbf{x}]\setminus\{\mathbf{0}\})\).
The completely monotone functions that we study take the form
\[f(\mathbf{x})=f(\mathbf{0})+\lambda\,\nu([\mathbf{0},\mathbf{x}]) \tag{7}\]
where \(\nu\) is an arbitrary probability measure on \([0,1]^{d}\) (or, more precisely, on the Borel \(\sigma\)-algebra of \([0,1]^{d}\)) and \(\lambda\geqslant 0\). Note that every right-continuous completely monotone function \(f\) on \([0,1]^{d}\) can be represented in that way, see, e.g., [10, II.5.11 Korrespondenzsatz, p. 67].
If \(\nu\) is absolutely continuous with respect to the Lebesgue measure, then we may represent \(f\), due to the Radon-Nikodym theorem, as
\[f(\mathbf{x})=f(\mathbf{0})+\lambda\int_{[\mathbf{0},\mathbf{x}]}g(\mathbf{z})\,\mathrm{d}\mathbf{z} \tag{8}\]
where \(g\) is a probability density on \([0,1]^{d}\), i.e., a non-negative Lebesgue integrable function on \([0,1]^{d}\) with integral equal to one.
### Basic result
Here we present the basic integration bounds. To bracket \(\mu\) we use up to \(2n\) function evaluations using \(n\) each for the lower and upper limits. For some constructions it is possible that some function evaluations might be usable in both limits, reducing the cost of computation. For \(d=1\) we only need \(n+1\) evaluations.
**Theorem 1**.: _Let \(f\) be a completely monotone function of the form (7). Let \(P_{n}=\{\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\}\subset[0,1]^{d}\), and put \(\widetilde{P}_{n}=\{\mathbf{1}-\mathbf{x}_{0},\ldots,\mathbf{1}-\mathbf{x}_{n-1}\}\)._
1. _Let_ \(\widetilde{P}_{n}\) _have non-negative local discrepancy. Then_ \[\overline{\mu}=\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{x}_{i})\geqslant \int_{[0,1]^{d}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}.\] (9)
2. _Let_ \(P_{n}\) _have non-positive local discrepancy. If additionally either_ \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\) _or_ \(\nu\) _is absolutely continuous with respect to the Lebesgue measure, then_ \[\underline{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{1}-\mathbf{x}_{i})\leqslant\int_{ [0,1]^{d}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}.\] (10)
Proof.: Without loss of generality take \(f(\mathbf{0})=0\) and \(\lambda=1\). Consequently, \(f(\mathbf{x})=\nu([\mathbf{0},\mathbf{x}])\) for all \(\mathbf{x}\in[0,1]^{d}\). We obtain
\[\mu=\int_{[0,1]^{d}}\nu([\mathbf{0},\mathbf{x}])\,\mathrm{d}\mathbf{x}=\int_{[0,1]^{d}} \int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}}\,\mathrm{d}\nu(\mathbf{z})\,\mathrm{d} \mathbf{x}.\]
Reversing the order of integration,
\[\mu=\int_{[0,1]^{d}}\int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}}\, \mathrm{d}\mathbf{x}\,\mathrm{d}\nu(\mathbf{z})=\int_{[0,1]^{d}}\mathrm{VOL}([\mathbf{z}, \mathbf{1}])\,\mathrm{d}\nu(\mathbf{z}). \tag{11}\]
Similarly,
\[\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}\nu([\mathbf{0},\mathbf{x}_{i}])=\frac{1}{n}\sum_ {i=0}^{n-1}\int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}_{i}}\,\mathrm{d}\nu(\mathbf{z})\]
from which
\[\hat{\mu}=\int_{[0,1]^{d}}\frac{1}{n}\sum_{i=0}^{n-1}1_{\mathbf{z} \leqslant\mathbf{x}_{i}}\,\mathrm{d}\nu(\mathbf{z})=\int_{[0,1]^{d}}\widehat{\mathrm{ VOL}}([\mathbf{z},\mathbf{1}])\,\mathrm{d}\nu(\mathbf{z}). \tag{12}\]
Combining (11) and (12) the integration error now satisfies
\[\hat{\mu}-\mu =\int_{[0,1]^{d}}\Bigl{(}\widehat{\mathrm{VOL}}([\mathbf{z},\mathbf{1}])-\mathrm{VOL}([\mathbf{z},\mathbf{1}])\Bigr{)}\,\mathrm{d}\nu(\mathbf{z})\] \[=\int_{[0,1]^{d}}\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\,\mathrm{d}\nu(\mathbf{z}), \tag{13}\]
where \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) is the local discrepancy of \(\widetilde{P}_{n}\) with respect to the anchored closed box \([\mathbf{0},\mathbf{1}-\mathbf{z}]\). Recall that \(\nu\) is a positive measure.
For part (i), let \(\widetilde{P}_{n}\) have the NNLD property. Due to Observation 1 we have \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\geqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\). Hence \(\hat{\mu}\geqslant\mu\), establishing (9).
For part (ii), let \(\widetilde{P}_{n}\) have the NPLD property. If additionally \(\widetilde{P}_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), then Observation 2 ensures that \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\leqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\), establishing \(\hat{\mu}\leqslant\mu\). If instead \(\nu\) is absolutely continuous with respect to the Lebesgue measure, then we can replace \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) in (13) by \(\delta(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) without changing the integral. Hence we get again \(\hat{\mu}\leqslant\mu\). In any case, exchanging the roles of \(P_{n}\) and \(\widetilde{P}_{n}\) establishes (10).
Theorem 1 provides an upper bound for \(\mu\) when sampling from reflected NNLD points. This bound will approach \(\mu\) as \(n\to\infty\) if those points also satisfy \(D_{n}^{*}\to 0\) as \(n\to\infty\). To get a lower bound we can use reflected NPLD points, provided that either \(\nu\) is absolutely continuous or those points all belong to \([0,1)^{d}\cup\{\mathbf{1}\}\). The NPLD points could be those given by equation (6). We find in Section 5 that NPLD points are not as simple to construct as NNLD points.
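The bounds (9) and (10) translate directly into code. In the sketch below (ours; the names are illustrative), \(f\) is assumed to accept an \((n,d)\) array of points and return one value per point; the reflections \(\mathbf{1}-\mathbf{x}_{i}\) are applied inside the helpers, so the arguments are the NNLD and NPLD point sets themselves.

```python
import numpy as np

def upper_bound(f, nnld_pts):
    """Theorem 1(i): average f over the reflections 1 - x of an NNLD point set."""
    return float(np.mean(f(1.0 - np.asarray(nnld_pts))))

def lower_bound(f, npld_pts):
    """Theorem 1(ii): average f over the reflections 1 - x of an NPLD point set.

    Valid when the NPLD points lie in [0,1)^d union {1} or when the measure nu
    defining f is absolutely continuous.
    """
    return float(np.mean(f(1.0 - np.asarray(npld_pts))))
```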
### Example
Here is a simple example to illustrate these bounds. The integrand is known to be completely monotone because it is a multivariate cumulative distribution function (CDF). For \(\mathbf{x}\in[0,1]^{2}\) we take
\[f(\mathbf{x})=\Pr(X_{1}\leqslant x_{1},X_{2}\leqslant x_{2}) \tag{14}\]
for \(\mathbf{X}\sim\mathcal{N}(0,\Sigma)\) with \(\Sigma=\left(\begin{smallmatrix}1&\rho\\ \rho&1\end{smallmatrix}\right)\) using \(\rho=0.7\). Due to (9), we can compute an upper bound for \(\mu=\int_{[0,1]^{2}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}\) by sampling at points \(\mathbf{1}-\mathbf{x}_{i}\) where \(\mathbf{x}_{i}\in[0,1]^{2}\) are the first \(n=2^{m}\) Hammersley points in any base \(b\geqslant 2\). We can compute a lower bound for \(\mu\) by first transforming Hammersley points via (6) to get NPLD points \(\mathbf{x}_{i}\) and then sampling at \(\mathbf{1}-\mathbf{x}_{i}\). Note that the point sets in these bounds are not extensible in that the points for \(n=b^{m}\) are not necessarily reused for \(n=b^{m+1}\).
Figure 1 shows the results for \(n=2^{m}\) and \(1\leqslant m\leqslant 13\). Over the given range, \(n(\overline{\mu}-\underline{\mu})\) increases with \(n\) while \(n(\overline{\mu}-\underline{\mu})/\log(n)\) decreases with \(n\). The computed upper and lower bounds for \(n=2^{13}\) show that
\[0.5618735\leqslant\mu\leqslant 0.5619890.\]
This function is so smooth and the dimension is so small that comparable accuracy could be attained by standard low dimensional integration methods with many fewer function evaluations. However, these computations took approximately five seconds in R on a MacBook Air M2 laptop, using the mvtnorm package [13, 14] to compute \(f\). A more efficient integration could save only about five seconds and it would not come with guaranteed bounds.
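The same computation can be reproduced in Python instead of R; the sketch below is ours and assumes SciPy's multivariate_normal.cdf (available in recent SciPy releases) for the rectangle probability (14). With \(m=13\) it runs in a few seconds and should closely reproduce the bounds quoted above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def hammersley_2d(m, b=2):
    n = b ** m
    i = np.arange(n)
    rev = np.zeros(n, dtype=np.int64)
    for k in range(m):
        rev += ((i // b ** k) % b) * b ** (m - 1 - k)
    return np.column_stack([i / n, rev / n])

rho = 0.7
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def f(pts):
    """The completely monotone integrand (14), evaluated point by point."""
    return np.array([mvn.cdf(p) for p in pts])

ham = hammersley_2d(m=13)                          # NNLD points, n = 2**13
n = ham.shape[0]
npld = np.column_stack([1.0 / n + ham[:, 0], 1.0 - ham[:, 1]])   # NPLD points via (6)

upper = f(1.0 - ham).mean()                        # Theorem 1(i): reflect the NNLD points
lower = f(1.0 - npld).mean()                       # Theorem 1(ii): reflect the NPLD points
print(lower, upper)                                # should bracket mu as reported above
```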
Figure 1: The top panel shows upper and lower bounds for \(\mu=\int_{[0,1]^{2}}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}\) using transformations of the Hammersley points and \(n=2^{m}\) for \(1\leqslant m\leqslant 13\). The bottom panel plots the difference between those upper and lower bounds versus \(n\), on a logarithmic scale.
## 3 More about NNLD points
Here we collect some observations about properties that any \(n\geqslant 1\) NNLD points in \([0,1]^{d}\) must necessarily have. Then we use those properties to describe constraints that the NNLD property imposes on customary QMC constructions (lattices and digital nets). Finally we show that the NNLD and NPLD properties are preserved by tensor products.
The first and most obvious property of NNLD points is that \(\mathbf{0}\) must be one of those points or else there is a box \(B=[\mathbf{0},\boldsymbol{a})\) with \(0=\widehat{\mathrm{VOL}}(B)<\mathrm{VOL}(B)\) so that \(\delta(\boldsymbol{a})<0\). Next it must be true that all \(n\) points belong to \([0,1-1/n]^{d}\). Suppose to the contrary that \(x_{i1}>1-1/n\) for some \(0\leqslant i<n\). Then for some \(\epsilon>0\) there exists \(B=[0,1-1/n+\epsilon)\times[0,1]^{d-1}\) with \(\widehat{\mathrm{VOL}}(B)\leqslant(n-1)/n<\mathrm{VOL}(B)\) so that \(\mathbf{x}_{i}\) are not NNLD. The same argument applies if \(x_{ij}>1-1/n\) for any \(i\) and any \(j\).
Trivial constructions of NNLD points have \(\mathbf{x}_{i}=(i/n)\mathbf{1}\in[0,1]^{d}\) for \(0\leqslant i<n\). We observe that these points as well as the Hammersley points for \(d=2\) have variables that are positively correlated. We will use a general positive dependence property in Sections 5 and 6 to construct more NNLD point sets. The NPLD construction in (6) creates a negative lower orthant dependence property for the components of \(\mathbf{x}_{i}\in[0,1]^{2}\).
Many of the constructions \(P_{n}\) we consider are projection regular by which we mean that the projections of \(P_{n}\) onto each single coordinate are equal to the full set \(\{0,1/n,2/n,\ldots,(n-1)/n\}\). Projection regularity is usually considered advantageous in QMC, as it guarantees a certain structure and even distribution of the integration node set, and simplifies the derivation of error bounds. However, combined with the NNLD property, it imposes a constraint on the point set that we will use to rule out certain constructions.
**Proposition 1**.: _Let \(P_{n}\) be a point set with \(n\) points in \([0,1)^{d}\) that is projection regular. If \(P_{n}\) has the NNLD property, then \(P_{n}\) must contain the point_
\[\mathbf{x}_{*}=\left(\frac{n-1}{n},\frac{n-1}{n},\ldots,\frac{n-1}{n}\right).\]
Proof.: Suppose that \(P_{n}\) is projection regular and does not contain \(\mathbf{x}_{*}\). Then there must exist at least one two dimensional projection \(Q_{n}\) of \(P_{n}\) which does not contain the point \(\mathbf{y}_{*}:=(\frac{n-1}{n},\frac{n-1}{n})\). Without loss of generality, assume that \(Q_{n}\) is the projection of \(P_{n}\) onto the first and second coordinates.
This implies, due to projection regularity, that at least two points of \(Q_{n}\) do not lie in the box \([\mathbf{0},\mathbf{y}_{*})\). Thus,
\[\delta(\mathbf{y}_{*})=\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{y}_{*}))-\mathrm{VOL}( [\mathbf{0},\mathbf{y}_{*}))\leqslant\frac{n-2}{n}-\frac{(n-1)^{2}}{n^{2}}=-\frac{1}{ n^{2}}.\]
Therefore, \(P_{n}\) has negative local discrepancy for the box \([\mathbf{0},\mathbf{y}_{*})\times[0,1)^{d-2}\).
Proposition 1 has some consequences for well known QMC points. We will consider digital nets and integration lattices. The most widely used and studied integration lattices are rank one lattices. Given a generating vector \(\mathbf{g}=(g_{1},\ldots,g_{d})\in\mathbb{N}^{d}\) and a sample size \(n\geqslant 1\), a rank one lattice uses points
\[\mathbf{x}_{i}=\left(\frac{g_{1}i}{n},\frac{g_{2}i}{n},\ldots,\frac{g_{d}i}{n} \right)\,\mathrm{mod}\ 1\]
for \(0\leqslant i<n\) where the modulus operation above takes the fractional part of its argument. These \(n\) points form a group under addition modulo \(1\). More general integration lattices having ranks between \(1\) and \(d\) can also be constructed [6, 26, 33]. Lattice rules with ranks larger than \(1\) are seldom used. They also have the group structure.
**Corollary 1**.: _For fixed \(d,n\geqslant 1\) there is only one projection regular lattice point set in \([0,1)^{d}\) that consists of \(n\) points and has the NNLD property, namely the lattice point set_
\[\left\{\mathbf{0},\frac{1}{n}\mathbf{1},\frac{2}{n}\mathbf{1},\ldots,\frac{n- 1}{n}\mathbf{1}\right\},\]
_whose points all lie on the main diagonal of the \(d\)-dimensional unit cube \([0,1)^{d}\)._
Proof.: Let \(P_{n}\) be a projection regular lattice point set, consisting of \(n\) points in \([0,1)^{d}\), that has NNLD. Due to Proposition 1, \(P_{n}\) has to contain the point \(\boldsymbol{x}_{*}=\frac{n-1}{n}\mathbf{1}\). Due to the additive group structure of \(P_{n}\), we have
\[k\boldsymbol{x}_{*}\bmod 1=\frac{n-k}{n}\mathbf{1}\in P_{n}\quad\text{ for }k=0,1, \ldots,n-1.\]
The set above has \(n\) distinct points, so they must be all of \(P_{n}\).
From Corollary 1 we see, in particular, that the only projection regular rank one lattices that are NNLD are trivial, and equivalent to taking all \(g_{j}=1\). If we also consider lattices that are not projection regular, then we can find constructions that are NNLD and do not only consist of points on the main diagonal of the unit cube \([0,1)^{d}\). See Theorem 3.
Now we look at \((t,m,d)\)-nets [7, 26]. The most widely used \((t,m,d)\)-nets are those of Sobol' in base \(b=2\). Sobol' points require one to choose parameters known as direction numbers, with those of [20] being especially prominent. By considering the point \(\boldsymbol{x}_{*}=\mathbf{1}(1-1/n)\), we often find that such Sobol' points cannot be NNLD. The first and third components of \(\boldsymbol{x}_{i}\in[0,1]^{d}\) for \(d\geqslant 3\) are projection regular but, for \(2\leqslant m\leqslant 20\), they fail to contain \((1-1/n,1-1/n)\). Therefore the projection of the Sobol' points onto those two dimensions fails to be NNLD and hence the \(d\) dimensional point set is not NNLD either.
Like lattice point sets, digital \((t,m,d)\)-nets in base \(b\geqslant 2\) have a group structure; this time it is based on the digitwise addition modulo \(b\), which is performed in each component separately. Using this group structure and Proposition 1, we obtain a corollary with a similar flavor to Corollary 1, although with less dramatic consequences.
**Corollary 2**.: _Let \(d,m\geqslant 1\) and \(b\geqslant 2\). Let_
\[\alpha_{b,m}=\sum_{\nu=1}^{m}b^{-\nu}=\frac{1-b^{-m}}{b-1}.\]
_On the one hand, any digital \((t,m,d)\)-net in base \(b\geqslant 2\) that is projection regular and has the NNLD property contains the cyclic subgroup_
\[\{\mathbf{0},\alpha_{b,m}\mathbf{1},2\alpha_{b,m}\mathbf{1},\ldots,(b-1)\alpha _{b,m}\mathbf{1}\},\]
_which consists of \(b\) points on the main diagonal._
_On the other hand, any \((t,m,d)\)-net in base \(b\geqslant 2\) has at most \(b^{t+\lceil\frac{m-t}{d}\rceil}\) points on the main diagonal._
Proof.: Let \(n=b^{m}\), and let \(P_{n}\) be a projection regular digital \((t,m,d)\)-net, consisting of \(n\) points in \([0,1)^{d}\), that has NNLD. Due to Proposition 1, \(P_{n}\) has to contain the point \(\mathbf{x}_{*}=\frac{n-1}{n}\mathbf{1}=(b-1)\alpha_{b,m}\mathbf{1}\). Using the specific commutative group addition of \(P_{n}\), we see that adding up \(k\) copies of \(\mathbf{x}_{*}\) yields
\[k\mathbf{x}_{*}=(b-k)\alpha_{b,m}\mathbf{1}\in P_{n}\]
for \(k=0,1,\ldots,b-1\).
Now let \(P_{n}\) be an arbitrary \((t,m,d)\)-net in base \(b\). Put \(k:=\lceil\frac{m-t}{d}\rceil\). We may partition the half-open unit cube \([0,1)^{d}\) into \(b^{m-t}\) half-open axis-parallel boxes (of the same shape and of volume \(b^{t-m}\)) with side length \(b^{-k}\) and, possibly, side length \(b^{1-k}\). Due to the net property, each of these boxes contains exactly \(b^{t}\) points of \(P_{n}\), and at most \(b^{k}\) of the boxes have a non-trivial intersection with the main diagonal.
The next result shows that Cartesian products of finitely many NNLD (or NPLD) point sets are also NNLD (respectively NPLD).
**Lemma 1**.: _For positive integers \(d_{1}\), \(d_{2}\), \(n_{1}\) and \(n_{2}\), let \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n_{1}-1}\in[0,1]^{d_{1}}\) and \(\tilde{\mathbf{x}}_{0},\ldots,\tilde{\mathbf{x}}_{n_{2}-1}\in[0,1]^{d_{2}}\) be NNLD point sets. Let \(\mathbf{z}_{0},\ldots,\mathbf{z}_{N-1}\in[0,1]^{d_{1}+d_{2}}\) for \(N=n_{1}n_{2}\) be the Cartesian product of those two point sets. Then \(\mathbf{z}_{0},\ldots,\mathbf{z}_{N-1}\) are NNLD points. If both \(\mathbf{x}_{i}\) and \(\tilde{\mathbf{x}}_{i}\) are NPLD then \(\mathbf{z}_{i}\) are also NPLD._
Proof.: For any \(\mathbf{z}\in[0,1]^{d_{1}+d_{2}}\) define \(\mathbf{x}=\mathbf{z}_{[d_{1}]}\) and \(\tilde{\mathbf{x}}=\mathbf{z}_{-[d_{1}]}\). Let \(\mathrm{VOL}_{1}\), \(\mathrm{VOL}_{2}\) and \(\mathrm{VOL}\) denote Lebesgue measure on \([0,1]^{d_{1}}\), \([0,1]^{d_{2}}\) and \([0,1]^{d}\) for \(d=d_{1}+d_{2}\), respectively. Let \(\widehat{\mathrm{VOL}}_{1}\), \(\widehat{\mathrm{VOL}}_{2}\) and \(\widehat{\mathrm{VOL}}\) be empirical measures for \(\mathbf{x}_{i}\), \(\tilde{\mathbf{x}}_{i}\) and \(\mathbf{z}_{i}\) respectively. If \(\mathbf{x}_{i}\) and \(\tilde{\mathbf{x}}_{i}\) are NNLD then
\[\widehat{\mathrm{VOL}}([\mathbf{0}_{d},\mathbf{z})) =\widehat{\mathrm{VOL}}_{1}([\mathbf{0}_{d_{1}},\mathbf{x}))\widehat{ \mathrm{VOL}}_{2}([\mathbf{0}_{d_{2}},\tilde{\mathbf{x}}))\] \[\geqslant\mathrm{VOL}_{1}([\mathbf{0}_{d_{1}},\mathbf{x}))\mathrm{VOL}_{2 }([\mathbf{0}_{d_{2}},\tilde{\mathbf{x}}))\] \[=\mathrm{VOL}([\mathbf{0}_{d},\mathbf{z})).\]
Therefore \(\delta(\mathbf{z})\geqslant 0\) and \(\mathbf{z}_{i}\) are NNLD. The same argument, with the inequalities reversed, applies to the NPLD case.
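Lemma 1 gives a cheap way to assemble higher dimensional NNLD or NPLD sets from lower dimensional ones. A minimal sketch (ours, with made-up names) of the Cartesian product of two point sets stored as arrays follows.

```python
import numpy as np

def product_points(pts_a, pts_b):
    """All n1*n2 concatenations of a row of pts_a with a row of pts_b."""
    n1, n2 = pts_a.shape[0], pts_b.shape[0]
    left = np.repeat(pts_a, n2, axis=0)        # each row of pts_a repeated n2 times
    right = np.tile(pts_b, (n1, 1))            # pts_b stacked n1 times
    return np.hstack([left, right])

# For example, the product of two 2-dimensional NNLD sets is a 4-dimensional
# NNLD set with n1*n2 points, by Lemma 1.
```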
## 4 Comparison to Koksma-Hlawka bounds
The Koksma-Hlawka inequality is
\[|\hat{\mu}-\mu|\leqslant D_{n}^{*}V_{\mathrm{HK}}(f) \tag{15}\]
where \(D_{n}^{*}\) denotes again the star discrepancy and \(V_{\mathrm{HK}}(f)\) is the total variation of \(f\) in the sense of Hardy and Krause. We can be sure that
\[\hat{\mu}-D_{n}^{*}V_{\mathrm{HK}}(f)\leqslant\mu\leqslant\hat{\mu}+D_{n}^{* }V_{\mathrm{HK}}(f)\]
but the endpoints of this interval are in general far harder to compute than \(\mu\) is. One difficulty is that \(V_{\mathrm{HK}}(f)\) is a sum of \(2^{d}-1\) Vitali variations (see [28])
that in general are harder to compute than \(f\) itself is. However when \(\tilde{f}\), defined by \(\tilde{f}(\boldsymbol{x})=f(\boldsymbol{1}-\boldsymbol{x})\) for every \(\boldsymbol{x}\), is completely monotone then it is useful to work with an alternative definition of total variation \(V_{\mathrm{HK}\boldsymbol{0}}\) (see [1]). For this definition, \(V_{\mathrm{HK}\boldsymbol{0}}(\tilde{f})=V_{\mathrm{HK}}(f)\), and \(V_{\mathrm{HK}\boldsymbol{0}}(\tilde{f})=\tilde{f}(\boldsymbol{1})-\tilde{f}( \boldsymbol{0})=f(\boldsymbol{0})-f(\boldsymbol{1})\), see [1].
With an expression for total variation we still need a value or a bound for \(D_{n}^{*}\). The computation of \(D_{n}^{*}\) is expensive, but in some instances it might be worth doing, and for a given set of points we could pre-compute \(D_{n}^{*}\). It is possible to compute \(D_{n}^{*}\) exactly at cost \(O(n^{d/2+1})\) for fixed \(d\) as \(n\to\infty\), see [8]. The cost to compute \(D_{n}^{*}\) is exponential in the dimension \(d\). If \(n=d\to\infty\) together then computation of \(D_{n}^{*}\) is NP-complete, see [16, 15]. Nevertheless, there are algorithms known that provide either upper and lower bounds for \(D_{n}^{*}\) in moderate dimension, see [34], or lower bounds for \(D_{n}^{*}\) even in high dimensions, see [17]. For these and other facts about computing \(D_{n}^{*}\), cf. [9].
If we have computed a value \(\varepsilon\geqslant D_{n}^{*}(P_{n})\), we then get an interval
\[\hat{\mu}\pm\varepsilon(f(\boldsymbol{0})-f(\boldsymbol{1}))\]
that is sure to contain \(\mu\), when \(f(\boldsymbol{1}-\boldsymbol{x})\) is completely monotone, whether or not \(P_{n}\) is NNLD.
## 5 Digital net constructions
The NNLD points of [3, 12] are two dimensional Hammersley points which are a special kind of digital nets [7] in which the generator matrices are permutation matrices. In this section we show that digital nets constructed with permutation matrices can be used to get NNLD points with \(n=b^{m}\) points for any integer base \(b\geqslant 2\) in any dimension \(d\geqslant 1\). This generalizes the result of [3, 12] which holds for \(d=2\). We obtain this generalization by a probabilistic argument using the notion of associated random variables from reliability theory [11]. We also show that there is a limit to how good digital nets can be when their generator matrices are permutation matrices.
### Permutation digital nets
Here we describe how permutation digital nets are constructed. We won't need the more general definition of digital nets until we study them more closely in Section 5.3.
For a dimension \(d\geqslant 1\), an integer base \(b\geqslant 2\) and an integer \(m\geqslant 1\) we choose \(d\) matrices \(C^{(j)}\in\mathbb{Z}_{b}^{m\times m}\). For \(n=b^{m}\) and indices \(i=0,1,\ldots,n-1\), write \(i=\sum_{k=1}^{m}a_{i,k}b^{k-1}\) for \(a_{i,k}\in\mathbb{Z}_{b}\) and put \(\vec{i}=(a_{i,1},\ldots,a_{i,m})^{\mathsf{T}}\). Now let
\[\vec{x}_{ij}=C^{(j)}\vec{i}\mod b\]
have components \(\vec{x}_{ij}(k)\in\mathbb{Z}_{b}\). Then \(\boldsymbol{x}_{i}\) has j'th component
\[x_{ij}=\sum_{k=1}^{m}\vec{x}_{ij}(k)b^{-k}\in[0,1).\]
Here we use arithmetic modulo \(b\) to define the digital nets. It is customary to only use arithmetic modulo \(b\) when \(b\) is a prime number and to use a generalization based on finite fields when \(b=p^{r}\) for a prime number \(p\) and some power \(r\geqslant 2\). Our proofs of NNLD properties exploit a monotonicity of integers modulo \(b\) whether or not \(b\) is a prime.
As an illustration, the first 16 Hammersley points in base \(b\geqslant 2\) for \(d=2\) are constructed this way with
\[C^{(1)}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\quad\text{and}\quad C^{(2)}=\begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}. \tag{16}\]
Hammersley points for \(d=2\) and general \(m\geqslant 1\) are constructed similarly, with \(C^{(1)}=I_{m}\) and \(C^{(2)}\) a 'reversed' identity matrix as in (16). The Hammersley points for \(d\geqslant 3\) are constructed using different bases for different components [18].
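A sketch (ours; the function name is illustrative) of the generator matrix construction above follows; the usage line takes the two \(m=4\) matrices of (16), and the resulting 16 points coincide, as a set, with the base 2 Hammersley points for \(m=4\).

```python
import numpy as np

def digital_net(gen_matrices, b=2):
    """Digital net points from d generator matrices, each m x m over Z_b."""
    C = [np.asarray(M, dtype=int) % b for M in gen_matrices]
    m = C[0].shape[0]
    n = b ** m
    weights = float(b) ** -np.arange(1, m + 1)           # b**-1, ..., b**-m
    pts = np.empty((n, len(C)))
    for i in range(n):
        digits = (i // b ** np.arange(m)) % b             # (a_{i,1}, ..., a_{i,m})
        for j, Cj in enumerate(C):
            pts[i, j] = weights @ ((Cj @ digits) % b)
    return pts

I4 = np.eye(4, dtype=int)
pts = digital_net([I4, I4[::-1]])    # matrices of (16); as a set, the 16 Hammersley points
```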
### Associated random variables
The settings with \(d=1\) or with \(n=1\) are trivial so we work with \(d\geqslant 2\) and \(n>1\). The key ingredient in constructing a short proof of the NNLD property is the notion of associated random variables [11] that originated in reliability theory.
**Definition 4**.: Random variables \(T_{1},\ldots,T_{m}\) are associated if, for \(\boldsymbol{T}=(T_{1},\ldots,T_{m})\) we have \(\operatorname{Cov}(g_{1}(\boldsymbol{T}),g_{2}(\boldsymbol{T}))\geqslant 0\) for all pairs of functions \(g_{1},g_{2}:\mathbb{R}^{m}\to\mathbb{R}\) that are nondecreasing in each argument individually and for which \(\mathbb{E}(g_{1}(\boldsymbol{T}))\), \(\mathbb{E}(g_{2}(\boldsymbol{T}))\) and \(\mathbb{E}(g_{1}(\boldsymbol{T})g_{2}(\boldsymbol{T}))\) all exist.
The next theorem uses points that are a digital net with permutation matrix generators, followed by shifting every component of each point to the right by a distance \(1/n\). It shows that they oversample sets of the form \((\boldsymbol{z},\boldsymbol{1}]\).
**Theorem 2**.: _For integers \(m\geqslant 1\), \(b\geqslant 2\) and \(d\geqslant 2\), let \(\pi_{1},\ldots,\pi_{d}\) be permutations of \(\{1,\ldots,m\}\), not necessarily distinct. For \(n=b^{m}\) and \(i=0,\ldots,n-1\) and \(k=1,\ldots,m\) define \(a_{i}(k)\in\mathbb{Z}_{b}\) via \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\). If \(\boldsymbol{x}_{i}\in(0,1]^{d}\) has components_
\[x_{ij}=\frac{1}{n}+\sum_{k=1}^{m}b^{-k}a_{i}(\pi_{j}(k)),\quad j=1,\ldots,d \tag{17}\]
_then for any \(\boldsymbol{z}\in[0,1]^{d}\)_
\[\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1}^{d}\mathbbm{1}\left\{x_{ij}>1-z_{j} \right\}\geqslant\prod_{j=1}^{d}z_{j}. \tag{18}\]
Proof.: We define a random index \(i\sim\mathbb{U}\{0,1,\ldots,n-1\}\) which then implies that for each index \(j\) the digits \(a_{i}(\pi_{j}(k))\sim\mathbb{U}(\mathbb{Z}_{b})\) independently for \(k=1,\ldots,m\). For each \(j=1,\ldots,d\) we have \(x_{ij}\sim\mathbb{U}\{1/n,2/n,\ldots,1\}\). Therefore for any \(z_{j}\in[0,1]\), \(\Pr(x_{ij}>1-z_{j})\geqslant z_{j}\).
Let \(T_{j}\) be the value of the random variable \(x_{ij}\) where \(i\) is random and \(j\) is not. Letting \(\gamma_{j}\) be the inverse of the permutation \(\pi_{j}\), we may write
\[T_{j}=x_{ij}=\frac{1}{n}+\sum_{k=1}^{m}b^{-\gamma_{j}(k)}a_{i}(k).\]
Independent random variables \(a_{i}(k)\) are associated by Theorem 2.1 of [11]. Then \(T_{1},\ldots,T_{d}\) are associated by result P4 of [11] because they are nondecreasing functions of \(a_{i}(1),\ldots,a_{i}(m)\).
For \(d=2\), let \(g_{1}(\mathbf{T})=\mathbb{1}\{x_{i1}>1-z_{1}\}\) and \(g_{2}(\mathbf{T})=\mathbb{1}\{x_{i2}>1-z_{2}\}\). These are nondecreasing functions of associated random variables and so by the definition of associated random variables
\[\Pr(x_{i1}>1-z_{1},x_{i2}>1-z_{2})\geqslant\Pr(x_{i1}>1-z_{1})\Pr(x_{i2}>1-z_ {2}).\]
Next, for \(2<r\leqslant d\) let \(g_{1}(\mathbf{T})=\prod_{j=1}^{r-1}\mathbb{1}\{x_{ij}>1-z_{j}\}\) and \(g_{2}(\mathbf{T})=\mathbb{1}\{x_{ir}>1-z_{r}\}\). Using induction we conclude that with our random \(i\),
\[\Pr(x_{ij}>1-z_{j},\ j=1,\ldots,d)\geqslant\prod_{j=1}^{d}\Pr(x_{ij}>1-z_{j}) \geqslant\prod_{j=1}^{d}z_{j}\]
which is equivalent to (18).
**Corollary 3**.: _For integer \(b\geqslant 2\) and dimension \(d\geqslant 2\) let \(\tilde{\mathbf{x}}_{0},\ldots,\tilde{\mathbf{x}}_{n-1}\in[0,1]^{d}\) be points of a digital net constructed in base \(b\) using permutation matrices as generators. Then the points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) with \(x_{ij}=1-(1/n+\tilde{x}_{ij})\) are NNLD._
Proof.: Pick \(\mathbf{z}\in[0,1]^{d}\). Now \(\mathbb{1}\{x_{ij}<z_{j}\}=\mathbb{1}\{\tilde{x}_{ij}+1/n>1-z_{j}\}\) and so
\[\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))=\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1} ^{d}\mathbb{1}\{x_{ij}<z_{j}\}=\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1}^{d} \mathbb{1}\{\tilde{x}_{ij}+1/n>1-z_{j}\}\geqslant\prod_{j=1}^{d}z_{j}\]
by Theorem 2.
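The sketch below (ours; helper names are made up) instantiates Corollary 3 for \(b=2\), \(m=4\), \(d=3\): it builds a permutation digital net directly from three permutations, applies \(x_{ij}=1-(1/n+\tilde{x}_{ij})\), and confirms the NNLD property by brute force over the corner grid, which is feasible only for such small examples.

```python
import itertools
import numpy as np

def permutation_net(perms, b=2):
    """tilde{x}_{ij} = sum_k b**-k * a_i(pi_j(k)) for permutations pi_j of {1, ..., m}."""
    m = len(perms[0])
    n = b ** m
    digits = (np.arange(n)[:, None] // b ** np.arange(m)) % b   # row i holds a_i(1..m)
    weights = float(b) ** -np.arange(1, m + 1)
    return np.column_stack([digits[:, np.asarray(p) - 1] @ weights for p in perms])

def is_nnld(pts, tol=1e-12):
    grids = [np.unique(np.append(pts[:, j], 1.0)) for j in range(pts.shape[1])]
    return all(np.mean(np.all(pts < np.array(z), axis=1)) - np.prod(z) >= -tol
               for z in itertools.product(*grids))

perms = [(1, 2, 3, 4), (2, 3, 4, 1), (4, 1, 2, 3)]    # an arbitrary choice of permutations
tilde = permutation_net(perms)                         # n = 16 points in [0, 1)^3
n = tilde.shape[0]
pts = 1.0 - (1.0 / n + tilde)                          # the map of Corollary 3
assert is_nnld(pts)
```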
For \(d=2\) it was possible to turn an NNLD point set into an NPLD point set in (6) which includes a reflection \(x_{i,2}=1-\tilde{x}_{i,2}\). If we were to reflect two or more components of an NNLD point set, then those components would take on a positive upper orthant dependence, which does not generally provide the negative lower orthant dependence we want for NPLD points. For projection regular NNLD points the reflection of \(s\geqslant 2\) components will contain \(\mathbf{1}_{s}/n\) and there will be a box \(B=[\mathbf{0}_{s},\mathbf{1}_{s}(1/n+\epsilon))\) with \(\delta(B)=1/n-(1/n+\epsilon)^{s}>0\) for small enough \(\epsilon>0\).
### Quality of permutation digital nets
It is clear on elementary grounds that a permutation digital net with two identical permutations among \(\pi_{1},\ldots,\pi_{d}\) would be very bad. The resulting points would satisfy \(x_{ij}=x_{ij^{\prime}}\) for \(0\leqslant i<n\) and some \(1\leqslant j<j^{\prime}\leqslant d\). Here we show that our restriction to permutation digital nets rules out the best digital nets when \(d\geqslant 3\). We begin with the definitions of these nets.
**Definition 5**.: For integers \(d\geqslant 1\), \(b\geqslant 2\), and vectors \(\boldsymbol{k},\boldsymbol{a}\in\mathbb{N}^{d}\) with \(a_{j}\in\mathbb{Z}_{b^{k_{j}}}\) for \(j=1,\ldots,d\) the Cartesian product
\[\mathcal{E}(\boldsymbol{k},\boldsymbol{a})=\prod_{j=1}^{d}\Bigl{[}\frac{a_{j} }{b^{k_{j}}},\frac{a_{j}+1}{b^{k_{j}}}\Bigr{)}\]
is an elementary interval in base \(b\).
**Definition 6**.: For integers \(b\geqslant 2\), \(d\geqslant 1\) and \(0\leqslant t\leqslant m\), the \(n\) points \(\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{n-1}\) are a \((t,m,d)\)-net in base \(b\) if
\[\widehat{\mathrm{VOL}}(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))=\mathrm{VOL }(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))\]
holds for all elementary intervals in base \(b\) for which \(\sum_{j=1}^{d}k_{j}\leqslant m-t\).
Digital nets are \((t,m,d)\)-nets. Other things being equal, smaller values of \(t\) denote better equidistribution of the points \(\boldsymbol{x}_{i}\) which translates into a lower bound on \(D_{n}^{*}\) and hence a smaller upper bound in the Koksma-Hlawka inequality. From Theorem 4.10 of [26]
\[D_{n}^{*}=O\Bigl{(}\frac{b^{t}\log(n)^{d-1}}{n}\Bigr{)}+O\Bigl{(}\frac{\log(n) ^{d-2}}{n}\Bigr{)} \tag{19}\]
where the implied constants depend only on \(d\) and \(b\). The powers of \(\log(n)\) are not negligible but they are also not seen in examples of integration errors [30].
The quality parameter of a permutation digital net can be very bad. For \(d=2\), taking the Hammersley construction yields \(t=0\) which is the best possible value. Here we show that for \(d\geqslant 3\), the best available values of \(t\) are far from optimal.
The following definition and result are based on [24, Sect. 2.3].
**Construction 1** (Digital Construction of \((t,m,d)\)-Nets).: _For prime \(b\), and \(C^{(1)},\ldots,C^{(d)}\in(\mathbb{F}_{b})^{m\times m}\), let \(\mathcal{C}=\{C^{(1)},\ldots,C^{(d)}\}\). For \(h\in\mathbb{F}_{b}^{m}\) define \(p(h)\in[0,1)^{d}\) componentwise by its \(b\)-adic digit expansion_
\[p(h)_{j}=\delta_{1}^{(j)}(h)b^{-1}+\delta_{2}^{(j)}(h)b^{-2}+\cdots+\delta_{m }^{(j)}(h)b^{-m}\in[0,1),\ \ \ \ j=1,\ldots,d,\]
_where \(\delta^{(j)}(h)=(\delta_{1}^{(j)}(h),\ldots,\delta_{m}^{(j)}(h))\) is simply the vector \(C^{(j)}h\in\mathbb{F}_{b}^{m}\). We define the point set_
\[P(\mathcal{C})=(p(h))_{h\in\mathbb{F}_{b}^{m}}. \tag{20}\]
_Clearly, \(|P(\mathcal{C})|=b^{m}\)._
_To assess the quality of \(P(\mathcal{C})\), we define the quality criterion \(\rho(\mathcal{C})\): For \(\mathbf{m}=(m_{1},m_{2},\ldots,m_{d})\in\{0,1,\ldots,m\}^{d}\) with \(|\mathbf{m}|=\sum_{j=1}^{d}m_{j}\) let_
\[\mathcal{C}^{(\mathbf{m})}=\begin{pmatrix}C^{(1)}(1{:}m_{1},\cdot)\\ C^{(2)}(1{:}m_{2},\cdot)\\ \vdots\\ C^{(d)}(1{:}m_{d},\cdot)\end{pmatrix}\in\mathbb{F}_{b}^{|\mathbf{m}|\times m}\]
_where \(C^{(j)}(1{:}m_{j},\cdot)\in\mathbb{F}_{b}^{m_{j}\times m}\) represents the first \(m_{j}\) rows of \(C^{(j)}\). Now \(\rho(\mathcal{C})\) is the maximum number \(\rho\in\{0,1,\ldots,m\}\) such that for all \(\mathbf{m}\in\{0,1,\ldots,m\}^{d}\) with \(|\mathbf{m}|=\rho\) we have \(\operatorname{rank}(\mathcal{C}^{(\mathbf{m})})=\rho\)._
**Proposition 2**.: _Let \(b,m,\mathcal{C}\), and \(P(\mathcal{C})\) be as in Construction 1. Then \(P(\mathcal{C})\) is a \((t,m,d)\)-net for \(t=m-\rho(\mathcal{C})\)._
**Observation 3**.: _The proposition shows that the best possible \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) is at most \(m-\rho(\mathcal{C})\). But similar arguments as in the corresponding proof of [24, Proposition 2.7] show that actually_
\[t(\mathcal{C})=m-\rho(\mathcal{C}).\]
**Proposition 3**.: _Let \(V:=\{v_{1},\ldots,v_{m}\}\) be a set of linearly independent vectors in \(\mathbb{F}_{b}^{m}\). Let \(m=\ell d+r\), where \(\ell\in\mathbb{N}_{0}\) and \(0\leqslant r<d\). If the rows \(C_{k}^{(j)}\), \(k=1,\ldots,m\), of the matrices \(C^{(j)}\), \(j=1,\ldots,d\), are all contained in \(V\), then \(\rho(\mathcal{C})\leqslant 2\lfloor m/d\rfloor+1\). Therefore, the smallest \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) satisfies_
\[t(\mathcal{C})\geqslant(d-2)\lfloor m/d\rfloor+r-1.\]
Proof.: Consider the \(m\) row vectors
\[C_{1}^{(1)},C_{1}^{(2)},\ldots,C_{1}^{(d)},\quad C_{2}^{(1)},C_{2}^{(2)}, \ldots,C_{2}^{(d)},\quad\ldots\quad,C_{\ell+1}^{(1)},C_{\ell+1}^{(2)},\ldots,C_ {\ell+1}^{(r)}.\]
_Case 1_: Two of these row vectors are equal. Assume these rows are \(C_{k}^{(j)}\) and \(C_{k^{\prime}}^{(j^{\prime})}\). If \(j=j^{\prime}\), then we consider the matrix \(C:=\mathcal{C}^{(\mathbf{m})}\) with \(m_{j}=\max\{k,k^{\prime}\}\) and \(m_{\nu}=0\) for all \(\nu\neq j\). Obviously, \(\operatorname{rank}(C)\leqslant\max\{k,k^{\prime}\}-1\). Hence it follows that \(\rho(\mathcal{C})\leqslant\max\{k,k^{\prime}\}-1\leqslant\lceil m/d\rceil-1\). If \(j\neq j^{\prime}\), then we consider the matrix \(C:=\mathcal{C}^{(\mathbf{m})}\) with \(m_{j}=k\), \(m_{j^{\prime}}=k^{\prime}\), and \(m_{\nu}=0\) for all \(\nu\notin\{j,j^{\prime}\}\). Obviously, \(\operatorname{rank}(C)\leqslant k+k^{\prime}-1\). Hence it follows that \(\rho(\mathcal{C})\leqslant k+k^{\prime}-1\leqslant 2\lceil m/d\rceil-1\).
_Case 2_: All of these row vectors are different. Consider \(C_{\ell+1}^{(d)}\). Then there exist \(1\leqslant j<d\) and \(1\leqslant h\leqslant\ell+1\) or \(j=d\) and \(1\leqslant h\leqslant\ell\) such that \(C_{\ell+1}^{(d)}=C_{h}^{(j)}\).
Now we argue similarly as in case 1: If \(j=d\), then it is easy to see that \(\rho(\mathcal{C})\leqslant\ell=\lfloor m/d\rfloor\). If \(j\neq d\), then \(\rho(\mathcal{C})\leqslant h+\ell\leqslant 2\ell+1\leqslant 2\lfloor m/d\rfloor+1\).
In any case, we have shown that \(\rho(\mathcal{C})\leqslant 2\lfloor m/d\rfloor+1\).
**Corollary 4**.: _Let \(m=\ell d+r\), where \(\ell\in\mathbb{N}\) and \(0\leqslant r<d\). If \(C^{(1)},\ldots,C^{(d)}\in\mathbb{F}_{b}^{m\times m}\) are all permutation matrices, then the smallest \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) satisfies_
\[t(\mathcal{C})\geqslant(d-2)\lfloor m/d\rfloor+r-1.\]
Proof.: This follows directly from Proposition 3, since the rows of the matrices \(C^{(1)},\ldots,C^{(d)}\) are all in \(\{e_{1},\ldots,e_{m}\}\), where \(e_{i}\) denotes the \(i\)-th standard unit vector of \(\mathbb{F}_{b}^{m}\).
Let us represent the permutation matrix where row \(k\) has a one in column \(\pi(k)\) as simply the column vector with entries \(\pi(k)\). Then we can represent our permutation nets with an \(m\times d\) matrix \(\Pi\) with \(j\)'th column \(\pi_{j}\). For example the Hammersley points with generator matrices \(I_{m}\) and reversed \(I_{m}\) are represented this way by
\[\Pi=\begin{pmatrix}1&m\\ 2&m-1\\ \vdots&\vdots\\ m&1\end{pmatrix}. \tag{21}\]
For \(d=3\) we want \(\Pi\in\{1,\ldots,m\}^{m\times 3}\) with the largest possible value of
\[\rho=\min\bigl{\{}k+k^{\prime}\mid\Pi_{k,j}=\Pi_{k^{\prime},j^{\prime}},1 \leqslant j<j^{\prime}\leqslant 3\bigr{\}}-1.\]
Then we get quality parameter \(t=m-\rho\). If we simply adjoin a third column to \(\Pi\) in (21) the best \(\rho\) we can get is \(m/2\) if \(m\) is even and \((m+1)/2\) if \(m\) is odd. These lead to \(t\geqslant m/2\) if \(m\) is even and \(t\geqslant(m-1)/2\) if \(m\) is odd, which is much worse than the bound in Corollary 4. For \(t=m/2\) the first term in (19) is \(O(b^{m/2}\log(n)^{2}/n)=O(\log(n)^{2}/\sqrt{n})\) because \(b=n^{1/m}\).
If \(m=3\ell\), then we can choose the first \(\ell\) rows of \(\Pi\) to be
\[\begin{pmatrix}1&2&3\\ 4&5&6\\ \vdots&\vdots&\vdots\\ 3\ell-2&3\ell-1&3\ell\end{pmatrix}.\]
Let us label these first \(\ell\) rows of \(\Pi\) by \(\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell}\in\mathbb{N}^{3}\). Now, for \(\mathbf{r}=(a,b,c)\) let \(\mathbf{r}^{\prime}=(b,c,a)\) and \(\mathbf{r}^{\prime\prime}=(c,a,b)\) be one and two rotations of the elements of \(\mathbf{r}\) to the left with wraparound. By taking the rows of \(\Pi\) in this order
\[\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell},\mathbf{r}^{ \prime}_{\ell-1},\ldots,\mathbf{r}^{\prime}_{1},\ \ \mathbf{r}^{\prime\prime}_{\ell},\mathbf{r}^{\prime\prime}_{\ell-1},\ldots,\mathbf{r}^{ \prime\prime}_{1}\]
we get \(\rho=2\ell\) and hence \(t=m/3\). This is very close to the bound \(\lfloor m/d\rfloor+0-1=m/3-1\) from Corollary 4. We prefer the ordering
\[\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell},\mathbf{r}^{ \prime\prime}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell-1},\mathbf{r}^{\prime\prime}_{\ell-1},\ \ \mathbf{r}^{\prime}_{\ell-2},\mathbf{r}^{\prime\prime}_{\ell-2},\ \ \ \ldots\ \ \mathbf{r}^{\prime}_{2},\mathbf{r}^{\prime\prime}_{2},\ \ \mathbf{r}^{ \prime}_{1},\mathbf{r}^{\prime\prime}_{1}\]
because while it attains the same value of \(t\) it has fewer pairs of columns for which \(k+k^{\prime}=2\ell+1\). With \(t=m/3\) for \(d=3\) the first term in (19) is \(O(b^{t}\log(n)^{2}/n)=O(n^{-2/3}\log(n)^{2})\).
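The claimed value \(\rho=2\ell\) for this ordering can be confirmed numerically. The sketch below (ours, with illustrative names) builds \(\Pi\) for \(d=3\) and \(m=3\ell\) and computes \(\rho\) directly from its definition.

```python
import numpy as np

def pi_matrix(ell):
    """Pi for d = 3, m = 3*ell: rows r_1, ..., r_ell, then r'_ell, r''_ell, ..., r'_1, r''_1."""
    base = [(3 * h - 2, 3 * h - 1, 3 * h) for h in range(1, ell + 1)]
    rows = list(base)
    for a, b, c in reversed(base):
        rows += [(b, c, a), (c, a, b)]     # one and two left rotations with wraparound
    return np.array(rows)

def rho(Pi):
    """rho = min{k + k' : Pi[k, j] == Pi[k', j'], j < j'} - 1, with 1-based k, k'."""
    m = Pi.shape[0]
    hits = (k + kp
            for j, jp in [(0, 1), (0, 2), (1, 2)]
            for k in range(1, m + 1)
            for kp in range(1, m + 1)
            if Pi[k - 1, j] == Pi[kp - 1, jp])
    return min(hits) - 1

ell = 3
assert rho(pi_matrix(ell)) == 2 * ell      # hence t = m - rho = m / 3 for this construction
```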
Using the same method for \(d=4\) and \(m=4\ell\) we can get \(\rho=2\ell=m/2\), implying that \(t=m/2\), and yielding a rate of \(O(b^{t}\log(n)^{3}/n)=O(n^{-1/2}\log(n)^{3})\). This result for \(d=4\) matches the rate for plain MC apart from the power of \(\log(n)\). So the \(100\%\) error bounds available from NNLD sampling come with a logarithmic accuracy penalty in comparison to plain MC.
A second choice for \(d=4\) is to use a Cartesian product of two Hammersley point sets with \(\sqrt{n}\) points each. The error of such a Cartesian product would ordinarily be the same as that of the individual Hammersley rules in two dimensions with their reduced sample sizes. That is \(O(n^{-1/2}\log(n))\) which is then a better logarithmic factor than the 4 dimensional permutation nets attain.
For \(d=3\) we could also use a Cartesian product of Hammersley points with \(n=b^{2}\) points and a one dimensional grid \(\{0,1/n,\ldots,1-1/n\}\). This then uses \(N=n^{2}\) points and we expect an error of \(O(\log(n)/n)=O(\log(N)/N^{1/2})\) which is a worse rate than we can get with the permutation net in \([0,1]^{3}\).
### Other generator matrices
Permutation matrices are not the only generator matrices that can produce points with the NNLD property. For digital nets in base 2, we know from Proposition 1 that if \(C^{(1)}=I_{m}\) then we must have \(C^{(j)}\mathbf{1}_{m}=\mathbf{1}_{m}\bmod 2\). This in turn implies that every row of \(C^{(j)}\) must have an odd number of 1s in it. A numerical search shows there are 221 choices of nonsingular \(C^{(2)}\) when \(m=4\) and \(C^{(1)}=I_{4}\). Below are some examples:
\[C^{(2)}=\begin{pmatrix}1&0&0&0\\ 1&1&0&1\\ 0&1&1&1\\ 1&1&1&0\end{pmatrix}\quad\text{or}\quad\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 1&0&1&1\\ 1&1&1&0\end{pmatrix}\quad\text{or}\quad\begin{pmatrix}0&0&1&0\\ 1&0&0&0\\ 1&1&0&1\\ 0&1&0&0\end{pmatrix}.\]
Nevertheless, it is hard to find an example where non-permutation matrices perform better than permutation matrices with respect to the \(t\)-value. When \(d=3\), one can verify, either by lengthy reasoning or brute-force enumeration, that NNLD digital nets constructed by non-permutation matrices cannot attain a better \(t\)-value than those constructed by permutation matrices for \(m\leqslant 7\) and \(b=2\).
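As a small illustration, the Python sketch below checks, for the example matrices above, the necessary condition from Proposition 1 (every row of \(C^{(2)}\) has an odd number of 1s, i.e. \(C^{(2)}\mathbf{1}_{m}=\mathbf{1}_{m}\bmod 2\)) together with nonsingularity over \(\mathbb{F}_{2}\); passing both checks is necessary but not sufficient for the NNLD property, and the helper names are illustrative only.

```python
# Sketch: odd row weights and nonsingularity over F_2 for candidate C^(2) matrices.
import numpy as np

def rank_mod2(M):
    """Rank over F_2 by Gaussian elimination."""
    A = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move a pivot row into place
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                  # eliminate the other rows in column c
        rank += 1
    return rank

examples = [
    [[1, 0, 0, 0], [1, 1, 0, 1], [0, 1, 1, 1], [1, 1, 1, 0]],
    [[0, 1, 0, 0], [1, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 0]],
    [[0, 0, 1, 0], [1, 0, 0, 0], [1, 1, 0, 1], [0, 1, 0, 0]],
]
for C2 in examples:
    odd_rows = all(sum(row) % 2 == 1 for row in C2)
    nonsingular = rank_mod2(C2) == len(C2)
    print(f"odd row weights: {odd_rows}, nonsingular over F_2: {nonsingular}")
```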
## 6 Non-trivial Rank 1 lattices that are NNLD
Here we consider special cases of rank-1 lattice rules that are suboptimal in terms of discrepancy, but produce NNLD points. While they can be defined in any dimension \(d\geqslant 2\) it is only for dimension 1 that they are projection regular. Therefore the conclusions from Proposition 1 and Corollary 1 do not hold for them when \(d>1\).
**Theorem 3**.: _For integers \(m\geqslant d\) and \(b\geqslant 2\) and \(0\leqslant i<n=b^{m}\), let_
\[\boldsymbol{x}_{i}=\Big{(}\frac{i}{n},\frac{ib}{n},\ldots,\frac{ib^{j-1}}{n}, \ldots,\frac{ib^{d-1}}{n}\Big{)}\mod 1.\]
_Then points \(\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{n-1}\) are NNLD._
Before proving this theorem we note that these points are quite poor for integration; however, the structure of the points can be useful for showing good integration bounds in suitably weighted spaces, see [5]. There are only \(b^{m-j+1}\) unique values of \(x_{ij}\). Further, when \(|j-j^{\prime}|\) is small the points \((x_{ij},x_{ij^{\prime}})\) lie within at most \(b^{|j-j^{\prime}|}\) lines in \([0,1)^{2}\) and have a large discrepancy.
Proof.: We write \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\) and then
\[nx_{ij}=b^{j-1}\sum_{k=1}^{m}a_{i}(k)b^{k-1}\mod b^{m}=\sum_{k=1}^{m+1-j}a_{i} (k)b^{j+k-2}.\]
For \(i\sim\mathbb{U}\{0,1,\ldots,n-1\}\) the digits \(a_{i}(1),\ldots,a_{i}(m)\) are independent \(\mathbb{U}(\mathbb{Z}_{b})\) random variables. Hence they are associated random variables which makes \(nx_{i1},\ldots,nx_{id}\) and hence \(x_{i1},\ldots,x_{id}\) into associated random variables. Finally, \(x_{ij}\) has the uniform distribution on \(\{0,1/n_{j},2/n_{j},\ldots,1-1/n_{j}\}\) where \(n_{j}=n/b^{j-1}\). This distribution is stochastically smaller than \(\mathbb{U}[0,1]\) and so \(\boldsymbol{x}_{i}\) are NNLD.
The values \(x_{ij}\) for \(0\leqslant i<b^{m}\) in these lattices take \(n_{j}=b^{m-j+1}\) distinct values \(\ell/n_{j}\) for \(0\leqslant\ell<n_{j}\) with each of those values appearing \(n/n_{j}\) times. As such they constitute a left endpoint integration rule on \(n_{j}\) points and so for nonperiodic smooth integrands we anticipate an error rate of \(O(n_{j}^{-1})\). For this to be better than plain MC we require \(n_{j}\geqslant\sqrt{n}\) or \(j\leqslant m/2\). While a better rate is available for periodic integrands, those cannot be completely monotone unless they are constant.
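To make the construction of Theorem 3 concrete, the sketch below generates the points for a small case and spot-checks non-negative local discrepancy on a grid of anchor boxes. It uses the closed-box empirical CDF \(\hat{F}_{n}(\boldsymbol{a})=\frac{1}{n}\#\{i:\boldsymbol{x}_{i}\leqslant\boldsymbol{a}\}\); we assume this boundary convention matches the definition of local discrepancy used earlier, and the choices of \(b\), \(m\), \(d\) and the anchor grid are illustrative only.

```python
# Sketch: rank-1 lattice points of Theorem 3 for b = 2, m = 5, d = 3, with an empirical
# spot-check that hat F_n(a) - prod(a) >= 0 over a grid of anchors a.
import itertools
import numpy as np

b, m, d = 2, 5, 3
n = b ** m
i = np.arange(n)
# x_{ij} = i * b^{j-1} / n mod 1
X = np.stack([(i * b ** (j - 1) / n) % 1.0 for j in range(1, d + 1)], axis=1)

grid = np.linspace(0.05, 1.0, 20)
min_disc = np.inf
for a in itertools.product(grid, repeat=d):
    a = np.array(a)
    hat_F = np.mean(np.all(X <= a, axis=1))   # closed-box empirical CDF at anchor a
    min_disc = min(min_disc, hat_F - np.prod(a))
print(f"smallest local discrepancy over the grid: {min_disc:.6f} (expected >= 0)")
```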
## 7 Discussion and further references
We find that it is possible to get computable bounds on some integrals by using points with a suitable bias property (non-negative local discrepancy (NNLD)) on integrands with a suitable monotonicity property (complete monotonicity). The method of associated random variables is useful for showing that a given point set is NNLD.
There are several generalizations of multivariate monotonicity in [25]. They include the complete monotonicity discussed here as well as the more commonly considered monotonicity in each of the \(d\) inputs one at a time. The complexity of integrating coordinate-wise monotone functions has been studied by [27, 31]. Scrambled \((t,m,d)\)-nets have been shown to be negatively orthant dependent if and only if \(t=0\)[35]. Similarly, it was shown in [36] that randomly shifted and
jittered (RSJ) rank-1 lattices based on a random generator are also negatively orthant dependent and that, in some sense, one cannot achieve this result by employing less randomness. Using the NLOD property of the distribution of these RQMC points, it follows from [23] that for functions which are monotone in each variable scrambled nets and RSJ rank-1 lattices cannot increase variance over plain Monte Carlo in any dimension \(d\).
While complete monotonicity is a very special property, its applicability can be widened by the method of control variates. If \(h(\cdot)\) is completely monotone with known integral \(\theta\), we will in some settings be able to find \(\lambda_{+}>0\) for which \(f+\lambda_{+}h\) is a completely monotone function of \(\mathbf{x}\). Then by Theorem 1 we can compute an upper bound \(B_{+}\geqslant\mu+\lambda_{+}\theta\) and conclude that \(\mu\leqslant B_{+}-\lambda_{+}\theta\). Similarly a lower bound can be found by choosing \(\lambda_{-}\) such that \(\lambda_{-}h-f\) is a completely monotone function of \(\mathbf{x}\), using Theorem 1 to get an upper bound \(\lambda_{-}\theta-\mu\leqslant B_{-}\) and then concluding that \(\mu\geqslant\lambda_{-}\theta-B_{-}\). Details on how to choose \(h\) and find \(\lambda_{\pm}\) are beyond the scope of this article.
The customary way to quantify uncertainty in QMC is to use RQMC replicates with statistically derived asymptotic confidence intervals. For a recent thorough empirical evaluation of RQMC, see [22], who found the usual confidence intervals based on the central limit theorem to be even more reliable than sophisticated bootstrap methods. Here we have found an alternative computable non-asymptotic approach with 100% coverage, but so far it does not give very good accuracy for high dimensions.
## Acknowledgments
We thank Josef Dick, David Krieg, Frances Kuo, Dirk Nuyens and Ian Sloan for discussions. Much of this work took place at the MATRIX Institute's location in Creswick Australia as part of their research program on 'Computational Mathematics for High-Dimensional Data in Statistical Learning', in February 2023, and the paper was finalized during the Dagstuhl Seminar 23351 'Algorithms and Complexity for Continuous Problems', in Schloss Dagstuhl, Wadern, Germany, in August 2023. We are grateful to MATRIX and to the Leibniz Center Schloss Dagstuhl. The contributions of ABO and ZP were supported by the U.S. National Science Foundation under grant DMS-2152780. Peter Kritzer is supported by the Austrian Science Fund (FWF) Project P34808. For the purpose of open access, the authors have applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
| ```
Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by Aistleitner and Dick (2015), and let the points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}$ have non-negative local discrepancy (NNLD) everywhere in $[0,1]^d$. These properties can be used to obtain a non-asymptotic and computable upper bound for the integral of $f$ over $[0,1]^d$. The non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two-dimensional Hammersley points in any base $b\ge2$ have non-negative local discrepancy. Using the probabilistic notion of associated random variables |
2309.13775 | The Rashomon Importance Distribution: Getting RID of Unstable, Single
Model-based Variable Importance | Quantifying variable importance is essential for answering high-stakes
questions in fields like genetics, public policy, and medicine. Current methods
generally calculate variable importance for a given model trained on a given
dataset. However, for a given dataset, there may be many models that explain
the target outcome equally well; without accounting for all possible
explanations, different researchers may arrive at many conflicting yet equally
valid conclusions given the same data. Additionally, even when accounting for
all possible explanations for a given dataset, these insights may not
generalize because not all good explanations are stable across reasonable data
perturbations. We propose a new variable importance framework that quantifies
the importance of a variable across the set of all good models and is stable
across the data distribution. Our framework is extremely flexible and can be
integrated with most existing model classes and global variable importance
metrics. We demonstrate through experiments that our framework recovers
variable importance rankings for complex simulation setups where other methods
fail. Further, we show that our framework accurately estimates the true
importance of a variable for the underlying data distribution. We provide
theoretical guarantees on the consistency and finite sample error rates for our
estimator. Finally, we demonstrate its utility with a real-world case study
exploring which genes are important for predicting HIV load in persons with
HIV, highlighting an important gene that has not previously been studied in
connection with HIV. Code is available at
https://github.com/jdonnelly36/Rashomon_Importance_Distribution. | Jon Donnelly, Srikar Katta, Cynthia Rudin, Edward P. Browne | 2023-09-24T23:09:48 | http://arxiv.org/abs/2309.13775v4 | The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance
###### Abstract
Quantifying variable importance is essential for answering high-stakes questions in fields like genetics, public policy, and medicine. Current methods generally calculate variable importance for a given model trained on a given dataset. However, for a given dataset, there may be many models that explain the target outcome equally well; without accounting for all possible explanations, different researchers may arrive at many conflicting yet equally valid conclusions given the same data. Additionally, even when accounting for all possible explanations for a given dataset, these insights may not generalize because not all good explanations are stable across reasonable data perturbations. We propose a new variable importance framework that quantifies the importance of a variable across the set of all good models and is stable across the data distribution. Our framework is extremely flexible and can be integrated with most existing model classes and global variable importance metrics. We demonstrate through experiments that our framework recovers variable importance rankings for complex simulation setups where other methods fail. Further, we show that our framework accurately estimates the _true importance_ of a variable for the underlying data distribution. We provide theoretical guarantees on the consistency and finite sample error rates for our estimator. Finally, we demonstrate its utility with a real-world case study exploring which genes are important for predicting HIV load in persons with HIV, highlighting an important gene that has not previously been studied in connection with HIV. Code is available here.
## 1 Introduction
Variable importance analysis enables researchers to gain insight into a domain or a model. It is particularly important in high stakes real world domains such as genetics [47; 35], finance [6; 40], and criminal justice [16; 25]. Variable importance would ideally be measured as the importance of each variable to the data generating process. However, the data generating process is never known in practice, so prior work generally draws insight by analyzing variable importance for a surrogate model, treating that model and its variable importance as truth.
This approach can be misleading because there may be many good models for a given dataset - a phenomenon referred to as the Rashomon effect [8; 43] -- and variables that are important for one good model on a given dataset are _not_ necessarily important for others. As such, any insights drawn from a single model need not reflect the underlying data distribution or even the consensus among good models. Recently, researchers have sought to overcome the Rashomon effect by computing _Rashomon sets_, the set of all good (i.e., low loss) models for a given dataset [16; 13]. However, _the set of all good models is not stable across reasonable perturbations (e.g., bootstrap or jackknife) of a single dataset_, with stability defined as in [53]. This concept of stability is one of the three pillars of vertical data science [54; 14; 55]. In order to ensure trustworthy analyses, variable importance measures must account for both the Rashomon effect and stability.
Figure 1 provides a demonstration of this problem: across 500 bootstrap replicates from _the same_ data set, the Rashomon set varies wildly - ranging from ten models to over _ten thousand_ -- suggesting that we should account for its instability in any computed statistics. This instability is further highlighted when considering the Model Class Reliance (MCR) variable importance, which is the range of model reliance (i.e., variable importance) values across the Rashomon set for the given dataset [16] (we define MCR and the Rashomon set more rigorously in Sections 2 and 3 respectively). In particular, for variable \(X_{2}\), one interval -- ranging from -0.1 to 0.33 -- suggests that there exist good models that do not depend on this variable at all (0 indicates the variable is not important); on the other hand, another MCR from a bootstrapped dataset ranges from 0.33 to 0.36, suggesting that this variable is essential to all good models. Because of this instability, different researchers may draw very different conclusions about the same data distribution even when using the same method.
Figure 1: Statistics of Rashomon sets computed across 500 bootstrap replicates of a given dataset sampled from the Monk 3 data generation process [45]. The original dataset consisted of 124 observations, and the Rashomon set was calculated using its definition in Equation 1, with parameters specified in Section E of the supplement. The Rashomon set size is the number of models with loss below a threshold. Model reliance is a measure of variable importance for a single variable — in this case, \(X_{2}\) — and Model Class Reliance (MCR) is its range over the Rashomon set. Both the Rashomon set size and model class reliance are unstable across bootstrap iterations.
Figure 2: An overview of our framework. **Step 1:** We bootstrap multiple datasets from the original. **Step 2:** We show the loss values over the model class for each bootstrapped dataset, differentiated by color. The dotted line marks the Rashomon threshold; all models whose loss is under the threshold are in the Rashomon set for that bootstrapped dataset. On top, we highlight the number of bootstrapped datasets for which the corresponding model is in the Rashomon set. **Step 3:** We then compute the distribution of model reliance values for a single variable \(j\) across the Rashomon set for each bootstrapped dataset. **Step 4:** We then average the corresponding CDF across bootstrap replicates into a single CDF (in purple). **Step 5:** Using the CDF, we can compute the marginal distribution (PDF) of variable importance for variable \(j\) across the Rashomon sets of bootstrapped datasets.
In this work, we present a framework unifying concepts from classical nonparametric estimation with recent developments on Rashomon sets to overcome the limitations of traditional variable importance measurements. We propose a stable, model- and variable-importance-metric-agnostic estimand that quantifies variable importance across all good models for the empirical data distribution and a corresponding bootstrap-style estimation strategy. Our method creates a cumulative density function (CDF) for variable importance over all variables via the framework shown in Figure 2. Using the CDF, we can compute a variety of statistics (e.g., expected variable importance, interquartile range, and credible regions) that can summarize the variable importance distribution.
The rest of this work is structured as follows. After more formally introducing our variable importance framework, we theoretically guarantee the convergence of our estimation strategy and derive error bounds. We also demonstrate experimentally that our estimand captures the true variable importance for the data generating process more accurately than previous work. Additionally, we illustrate the generalizability of our variable importance metric by analyzing the reproducibility of our results given new datasets from the same data generation process. Lastly, we use our method to analyze which transcripts and chromatin patterns in human T cells are associated with high expression of HIV RNA. Our results suggest an unexplored link between the LINC00486 gene and HIV load.
## 2 Related Work
The key difference between our work and most others is the way it incorporates model uncertainty, also called the Rashomon effect [8]. The Rashomon effect is the phenomenon in which many different models explain a dataset equally well. It has been documented in high stakes domains including healthcare, finance, and recidivism prediction [15; 31; 16]. The Rashomon effect has been leveraged to create uncertainty sets for robust optimization [46], to perform responsible statistical inference [12], and to gauge whether simple yet accurate models exist for a given dataset [43]. One barrier to studying the Rashomon effect is the fact that _Rashomon sets_ are computationally hard to calculate for non-trivial model classes. Only within the past year has code been made available to solve for (and store) the full Rashomon set for any nonlinear function class - that of decision trees [52]. This work enables us to revisit the study of variable importance with a new lens.
A classical way to determine the importance of a variable is to leave it out and see if the loss changes. This is called algorithmic reliance [16] or leave one covariate out (LOCO) inference [26; 39]. The problem with these approaches is that the performance of the model produced by an algorithm will not change if there exist other variables correlated with the variable of interest.
Model reliance (MR) methods capture the global variable importance (VI) of a given feature _for a specific model_ [16]. (Note that MR is limited to refer to permutation importance in [16], while we use the term MR to refer to any metric capturing global variable importance of a given feature and model. We use VI and MR interchangeably when the relevant model is clear from context.) Several methods for measuring the MR of a model from a specific model class exist, including the variable importance measure from random forest which uses out-of-bag samples [8] and Lasso regression coefficients [21]. Lundberg et al. [30] introduce a way of measuring MR in tree ensembles using SHAP [29]. Williamson et al. [51] develop MR based on the change in performance between the optimal model and the optimal model using a subset of features.
In addition to the metrics tied to a specific model class, many MR methods can be applied to models _from any model class_. Traditional correlation measures [21] can measure the linear relationship (Pearson correlation) or general monotonic relationship (Spearman correlation) between a feature and predicted outcomes for a model from any model class. Permutation model reliance, as discussed by [1; 16; 23], describes how much worse a model performs when the values of a given feature are permuted such that the feature becomes uninformative. Shapley-based measures of MR, such as those of [50; 29], calculate the average marginal contribution of each feature to a model's predictions. A complete overview of the variable importance literature is beyond the scope of this work; for a more thorough review, see, for example, [2; 32; 33]. Rather than calculating the importance of a variable for a single model, our framework finds the importance of a variable for all models within a Rashomon set, although our framework is applicable to _all_ of these model reliance metrics.
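For concreteness, the following is a minimal sketch of permutation model reliance, in which the increase in a model's loss after permuting feature \(j\) measures how much the model relies on that feature; it is an illustration rather than the exact estimator of any cited work, and the toy model, loss, and data are stand-ins.

```python
# Sketch of permutation model reliance: loss increase when feature j is permuted
# so that it becomes uninformative. Illustration only.
import numpy as np

def permutation_reliance(predict, X, y, j, loss, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))
    scores = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break the link between feature j and y
        scores.append(loss(y, predict(Xp)))
    return float(np.mean(scores) - base)

# Toy model f(x) = 1[x_0 + x_1 > 1] with 0-1 loss; feature 2 is irrelevant.
zero_one = lambda y, yhat: np.mean(y != yhat)
predict = lambda X: (X[:, 0] + X[:, 1] > 1).astype(int)
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 3))
y = predict(X)
print([round(permutation_reliance(predict, X, y, j, zero_one), 3) for j in range(3)])
```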
In contrast, model class reliance (MCR) methods describe how much a _class of models_ (e.g., decision trees) relies on a variable. Fisher et al. [16] uses the Rashomon set to provide bounds on the possible _range_ of model reliance for good models of a given class. Smith et al. [44] analytically find the range
of model reliance for the model class of random forests. Zhang and Janson [56] introduce a way to compute confidence bounds for a specific variable importance metric over arbitrary models, which Aufiero and Janson [3] extend so that it is applicable to a broad class of surrogate models in pursuit of computational efficiency. These methods report MCR as a range, which gives no estimate of variable importance - only a range of what values are possible. In contrast, Dong and Rudin [13] compute and visualize the variable importance for every member of a given Rashomon set in projected spaces, calculating a set of points; however, these methods have no guarantees of stability to reasonable data perturbations. In contrast, our framework overcomes these finite sample biases, supporting stronger conclusions about the underlying data distribution.
Related to our work from the stability perspective, Duncan et al. [14] developed a software package to evaluate the stability of permutation variable importance in random forest methods; we perform a similar exercise to demonstrate that current variable importance metrics computed for the Rashomon set are not stable. Additionally, Basu et al. [5] introduced iterative random forests by iteratively reweighting trees and bootstrapping to find _stable_ higher-order interactions from random forests. Further, theoretical results have demonstrated that bootstrapping stabilizes many machine learning algorithms and reduces the variance of statistics [19; 9]. We also take advantage of bootstrapping's flexibility and properties to ensure stability for our variable importance.
## 3 Methods
### Definitions and Estimands
Let \(\mathcal{D}^{(n)}=\{(X_{i},Y_{i})\}_{i=1}^{n}\) denote a dataset of \(n\) independent and identically distributed tuples, where \(Y_{i}\in\mathbb{R}\) denotes some outcome of interest and \(X_{i}\in\mathbb{R}^{p}\) denotes \(p\) covariates. Let \(g^{*}\) represent the data generating process (DGP) producing \(\mathcal{D}^{(n)}\). Let \(f\in\mathcal{F}\) be a model in a model class (e.g., a tree in the set of all possible sparse decision trees), and let \(\phi_{j}\left(f,\mathcal{D}^{(n)}\right)\) denote a function that measures the importance of variable \(j\) for a model \(f\) over a dataset \(\mathcal{D}^{(n)}\). This can be any of the functions described earlier (e.g., permutation importance, SHAP). Our framework is flexible with respect to the user's choice of \(\phi_{j}\) and enables practitioners to use the variable importance metric best suited for their purpose; for instance, conditional model reliance [16] is best-suited to measure only the unique information carried by the variable (that cannot be constructed using other variables), whereas other metrics like subtractive model reliance consider the unconditional importance of the variable. Our framework is easily integrable with either of these. We only assume that the variable importance function \(\phi\) has a bounded range, which holds for a wide class of metrics like SHAP [29], permutation model reliance, and conditional model reliance. Finally, let \(\ell(f,\mathcal{D}^{(n)};\lambda)\) represent a loss function given \(f,\mathcal{D}^{(n)},\) and loss hyperparameters \(\lambda\) (e.g., regularization). We assume that our loss function is bounded above and below, which is true for common loss functions like 0-1 classification loss, as well as for differentiable loss functions with covariates from a bounded domain.
In an ideal setting, we would measure variable importance using \(g^{*}\) and the whole population, but this is impossible because \(g^{*}\) is unknown and data is finite. In practice, scientists instead use the empirical loss minimizer for a specific dataset \(\hat{f}^{*}\in\arg\min_{f\in\mathcal{F}}\ell(f,\mathcal{D}^{(n)})\) as a surrogate for \(g^{*}\); however, several models could explain the same dataset equally well (i.e., the Rashomon effect). Rather than using a single model as a surrogate for \(g^{*}\), we propose using the entire Rashomon set to quantify variable importance. Given a single dataset \(\mathcal{D}^{(n)}\), we define the **Rashomon set** for a model class \(\mathcal{F}\) and parameter \(\varepsilon\) as the set of all models in \(\mathcal{F}\) whose empirical losses are within some bound \(\varepsilon>0\) of the empirical loss minimizer:
\[\mathcal{R}(\epsilon,\mathcal{F},\ell,\mathcal{D}^{(n)},\lambda)=\left\{f\in \mathcal{F}:\ell(f,\mathcal{D}^{(n)};\lambda)\leq\min_{f^{\prime}\in\mathcal{ F}}\ell(f^{\prime},\mathcal{D}^{(n)};\lambda)+\varepsilon\right\}. \tag{1}\]
We denote this Rashomon set by \(\mathcal{R}^{\varepsilon}_{\mathcal{D}^{(n)}}\) or "Rset" (this assumes a fixed \(\mathcal{F}\), \(\ell\) and \(\lambda\)). As discussed earlier, the Rashomon set can be fully computed and stored for non-linear models (e.g., sparse decision trees [52]). For notational simplicity, we often omit \(\lambda\) from the loss function.
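For a finite model class whose per-model empirical losses are available, the Rashomon set in Equation (1) is straightforward to compute; the sketch below illustrates this with losses stored in an array, whereas real model classes such as sparse decision trees require specialized enumeration [52]. The function name and toy losses are illustrative only.

```python
# Sketch: Rashomon set of Equation (1) for a finite model class represented by
# a vector of per-model empirical losses.
import numpy as np

def rashomon_set(losses, epsilon):
    """Indices of models whose loss is within epsilon of the empirical loss minimizer."""
    losses = np.asarray(losses)
    return np.flatnonzero(losses <= losses.min() + epsilon)

losses = [0.10, 0.11, 0.12, 0.25, 0.30]
print(rashomon_set(losses, epsilon=0.02))   # -> [0 1 2]
```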
While the Rashomon set describes the set of good explanations for a _single_ dataset, Rashomon sets vary across permutations (e.g., subsampling and resampling schemes) of the given data. We define a stable quantity for variable importance that accounts for all good models and all permutations from the data, the **Rashomon Importance Distribution (_RID_)**, whose cumulative distribution function
(CDF) is defined as follows:
\[\mathbb{P}(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda) \leq k) :=\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left[\frac{|\{f \in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}:\phi_{j}(f,\mathcal{D}^{(n)}) \leq k\}|}{|\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}|}\right] \tag{2}\] \[=\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left[\frac{ \text{volume of Rset s.t. variable $j$'s importance is at most $k$}}{\text{volume of Rset}}\right],\]
where \(\phi_{j}\) denotes the variable importance metric being computed on variable \(j\), \(k\in[\phi_{\min},\phi_{\max}]\). For a continuous model class (e.g., linear regression models), the cardinality in the above definition becomes the volume under a measure on the function class, usually \(\ell_{2}\) on parameter space. _RID_ constructs the cumulative distribution function (CDF) for the distribution of variable importance across Rashomon sets; as \(k\) increases, the value of \(\mathbb{P}(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)\) becomes closer to 1. The probability and expectation are taken with respect to datasets of size \(n\) sampled from the empirical distribution \(\mathcal{P}_{n}\), which is the same as considering all possible resamples of size \(n\) from the originally observed dataset \(\mathcal{D}^{(n)}\). Equation (2) weights the contribution of \(\phi_{j}(f,\mathcal{D}^{(n)})\) for each model \(f\) by the proportion of datasets for which this model is a good explanation (i.e., in the Rashomon set). Intuitively, this provides greater weight to the importance of variables for stable models.
We now define an analogous metric for the loss function \(\ell\); we define the **Rashomon Loss Distribution (_RLD_)** as the expected fraction of functions in the Rashomon set with loss below \(k\).
\[\mathbb{P}\left(\textit{RLD}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k\right) :=\mathbb{E}_{\mathcal{D}^{(n)}}\left[\frac{|\{f\in\mathcal{R}_{ \mathcal{D}^{(n)}}^{\varepsilon}:\ell(f,\mathcal{D}^{(n)})\leq k\}|}{| \mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}|}\right]\] \[=\mathbb{E}_{\mathcal{D}^{(n)}}\left[\frac{|\mathcal{R}\left(k- \min_{f\in\mathcal{F}}\ell(f,\mathcal{D}^{(n)}),\mathcal{F},\ell;\lambda \right)|}{|\mathcal{R}(\varepsilon,\mathcal{F},\ell;\lambda)|}\right].\]
This quantity shows how quickly the Rashomon set "fills up" on average as loss changes. If there are many near-optimal functions, this will grow quickly with \(k\).
In order to connect _RID_ for model class \(\mathcal{F}\) to the unknown DGP \(g^{*}\), we make a Lipschitz-continuity-style assumption on the relationship between _RLD_ and _RID_ relative to a general model class \(\mathcal{F}\) and \(\{g^{*}\}\). To draw this connection, we define the loss CDF for \(g^{*}\), called \(LD^{*}\), over datasets of size \(n\) as:
\[\mathbb{P}(LD^{*}(\ell,n;\lambda)\leq k):=\mathbb{E}_{\mathcal{D}^{(n)}}\left[ \mathds{1}[\ell(g^{*},\mathcal{D}^{(n)})\leq k]\right].\]
One could think of \(LD^{*}\) as measuring how quickly the DGP's Rashomon set fills up as loss changes. Here, \(LD^{*}\) is the analog of _RLD_ for the data generation process.
**Assumption 1**.: _If_
\[\rho\left(\textit{RLD}(\varepsilon,\mathcal{F},\ell;\lambda),LD^{ *}(\ell,n;\lambda)\right) \leq\gamma\text{ then}\] \[\rho\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda), \textit{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\right) \leq d(\gamma)\]
_for a monotonically increasing function \(d:[0,\ell_{\max}-\ell_{\min}]\rightarrow[0,\phi_{\max}-\phi_{\min}]\) such that \(d(0)=0\). Here, \(\rho\) represents any distributional distance metric (e.g., 1-Wasserstein)._
Assumption 1 says that a Rashomon set consisting of good approximations for \(g^{*}\) in terms of loss will also consist of good approximations for \(g^{*}\) in terms of variable importance; i.e., loss is smooth in variable importance. More formally, from this assumption, we know that as \(\rho(LD^{*},\textit{RLD})\to 0\), the variable importance distributions will converge: \(\rho\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda),\textit{RID} _{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\right)\to 0\). We demonstrate that this assumption is realistic for a variety of model classes like linear models and generalized additive models in Section C of the supplement.
### Estimation
We estimate the CDF of \(\textit{RID}_{j}\) for each variable \(j\) by leveraging bootstrap sampling to draw new datasets from the empirical data distribution: we sample observations from an observed dataset, construct its Rashomon set, and compute the \(j\)-th variable's importance for each model in the Rashomon set. After repeating this process for \(B\) bootstrap iterations, we estimate _RID_'s CDF by weighting each model \(f\)'s realized variable importance score by the proportion of the bootstrapped
datasets for which \(f\) is in the Rashomon set _and_ the size of each Rashomon set in which \(f\) appears. Specifically, let \(\mathcal{D}_{b}^{(n)}\) represent the dataset sampled with replacement from \(\mathcal{D}^{(n)}\) in iteration \(b=1,\ldots,B\) of the bootstrap procedure. For each dataset \(\mathcal{D}_{b}^{(n)}\), we find the Rashomon set \(\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}\). Finally, we compute an **empirical estimate \(\widehat{\boldsymbol{RID}}_{j}\)** of \(\text{\it RID}_{j}\) by computing:
\[\mathbb{P}(\widehat{\boldsymbol{RID}}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k)=\frac{1}{B}\sum_{b=1}^{B}\left(\frac{|\{f\in\mathcal{R}_{ \mathcal{D}_{b}^{(n)}}^{\varepsilon}:\phi_{j}(f,\mathcal{D}_{b}^{(n)})\leq k \}|}{|\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}|}\right).\]
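A minimal sketch of this estimator follows; `models`, `loss`, and `phi` are user-supplied stand-ins for the model class, the loss function, and the variable importance metric, and the explicit loop over models is an assumption made for illustration, since in practice the Rashomon set is obtained by exact enumeration (e.g., [52]).

```python
# Sketch of the bootstrap estimator of P(RID_j <= k) (Steps 1-5 of Figure 2).
import numpy as np

def rid_cdf(models, loss, phi, X, y, j, epsilon, ks, B=500, seed=0):
    """For each threshold k in `ks`, average over B bootstrap resamples the fraction
    of that resample's Rashomon set whose importance phi_j is at most k."""
    rng = np.random.default_rng(seed)
    n = len(y)
    cdf = np.zeros(len(ks))
    for _ in range(B):
        idx = rng.integers(0, n, size=n)                           # Step 1: bootstrap resample
        Xb, yb = X[idx], y[idx]
        losses = np.array([loss(f, Xb, yb) for f in models])
        keep = np.flatnonzero(losses <= losses.min() + epsilon)    # Step 2: Rashomon set
        vi = np.array([phi(models[i], Xb, yb, j) for i in keep])   # Step 3: importances
        cdf += np.array([(vi <= k).mean() for k in ks])            # Step 4: per-bootstrap CDF
    return cdf / B                                                 # Step 5: averaged CDF
```

Because each bootstrap replicate contributes equally, a model's importance values are implicitly weighted by how often it falls in the Rashomon set and by the size of the Rashomon sets containing it, as in the definition above.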
Under Assumption 1, we can directly connect our estimate \(\widehat{\boldsymbol{RID}}(\varepsilon,\mathcal{F},\ell;\lambda)\) to the DGP's variable importance distribution \(\text{\it RID}(\ell_{\text{max}},\{g^{*}\},\ell;\lambda)\), which Theorem 1 formalizes.
**Theorem 1**.: _Let Assumption 1 hold for distributional distance \(\rho(A_{1},A_{2})\) between distributions \(A_{1}\) and \(A_{2}\). For any \(t>0\), \(j\in\{0,\ldots,p\}\), as \(\rho\left(LD^{*}(\ell,n;\lambda),\text{\it RLD}(\varepsilon,\mathcal{F},\ell;\lambda)\right)\to 0\) and \(B\to\infty\),_
\[\mathbb{P}\left(\Big{|}\mathbb{P}(\widehat{\boldsymbol{RID}_{j}}(\varepsilon, \mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}(\text{\it RID}_{j}(\varepsilon, \{g^{*}\},\ell;\lambda)\leq k)\Big{|}\geq t\right)\to 0.\]
For a set of models that performs sufficiently well in terms of loss, \(\widehat{\boldsymbol{RID}}_{j}\) thus recovers the CDF of variable importance for the true model across all reasonable perturbations. Further, we can provide a finite sample bound for the estimation of a marginal distribution between \(\widehat{\boldsymbol{RID}}_{j}\) and \(\text{\it RID}_{j}\) for the model class \(\mathcal{F}\), as stated in Theorem 2. Note that this result does not require Assumption 1.
**Theorem 2**.: _Let \(t>0\) and \(\delta\in(0,1)\) be some pre-specified values. Then, with probability at least \(1-\delta\) with respect to bootstrap samples of size \(n\),_
\[\Big{|}\mathbb{P}\Big{(}\widehat{\boldsymbol{RID}}_{j}(\varepsilon,\mathcal{ F},\ell;\lambda)\leq k\Big{)}-\mathbb{P}\Big{(}\text{\it RID}_{j}(\varepsilon, \mathcal{F},\ell;\lambda)\leq k\Big{)}\Big{|}\leq t \tag{3}\]
_with number of bootstrap samples \(B\geq\frac{1}{2t^{2}}\ln\left(\frac{2}{\delta}\right)\) for any \(k\in[\phi_{\min},\phi_{\max}]\)._
Because we use a bootstrap procedure, we can control the number of bootstrap iterations to ensure that the difference between \(\text{\it RID}_{j}\) and \(\widehat{\boldsymbol{RID}}_{j}\) is within some pre-specified error. (As defined earlier, \(\text{\it RID}_{j}\) is the expectation over infinite bootstraps, whereas \(\widehat{\boldsymbol{RID}}_{j}\) is the empirical average over \(B\) bootstraps.) For example, after 471 bootstrap iterations, we find that \(\mathbb{P}(\widehat{\boldsymbol{RID}}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k)\) is within 0.075 of \(\mathbb{P}(\text{\it RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)\) for any given \(k\) with 90% confidence. It also follows that as \(B\) tends to infinity, the estimated \(\text{\it RID}_{j}\) will converge to the true value.
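For instance, the bootstrap budget implied by Theorem 2 can be computed directly; a small sketch follows, and with \(t=0.05\) and \(\delta=0.05\) it gives the 738 iterations used for the case study reported in Figure 5. The function name is illustrative only.

```python
# Sketch: bootstrap iterations required by Theorem 2, B >= ln(2/delta) / (2 t^2).
import math

def required_bootstraps(t, delta):
    return math.ceil(math.log(2 / delta) / (2 * t ** 2))

print(required_bootstraps(t=0.05, delta=0.05))   # 738
```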
Since we stably estimate the entire distribution of variable importance values, we can create (1) stable point estimates of variable importance (e.g., expected variable importance) that account for the Rashomon effect (2), interquantile ranges of variable importance, and (3) confidence regions that characterize uncertainty around a point estimate of variable importance. We prove exponential rates of convergence for these statistics estimated using our framework in Section B of the supplement.
Because our estimand and our estimation strategy (1) enable us to manage instability, (2) account for the Rashomon effect, and (3) are completely model-agnostic and flexibly work with most existing variable importance metrics, \(\text{\it RID}\) is a valuable quantification of variable importance.
## 4 Experiments With Known Data Generation Processes
### _Rid_ Distinguishes Important Variables from Extraneous Variables
There is no generally accepted ground truth measure for variable importance, so we first evaluate whether a variety of variable importance methods can correctly distinguish between variables used to generate the outcome (in a known data generation process) versus those that are not. We consider the following four data generation processes (DGPs). **Chen's** DGP [10]: \(Y=\mathbb{1}[-2\sin(X_{1})+\max(X_{2},0)+X_{3}+\exp(-X_{4})+\varepsilon\geq 2.048],\) where \(X_{1},\ldots,X_{10},\varepsilon\sim\mathcal{N}(0,1).\) Here, only \(X_{1},\ldots,X_{4}\) are relevant. **Friedman's** DGP [18]: \(Y=\mathbb{1}[10\sin(\pi X_{1}X_{2})+20(X_{3}-0.5)^{2}+10X_{4}+5X_{5}+ \varepsilon\geq 15],\) where \(X_{1},\ldots,X_{6}\sim\mathcal{U}(0,1),\varepsilon\sim\mathcal{N}(0,1).\) Here, only \(X_{1},\ldots,X_{5}\) are relevant. The **Monk 1** DGP [45]: \(Y=\max\left(\mathbb{1}[X_{1}=X_{2}],\mathbb{1}[X_{5}=1]\right),\) where the variables
\(X_{1},\ldots,X_{6}\) have domains of 2, 3, or 4 unique integer values. Only \(X_{1},X_{2},X_{5}\) are important. The **Monk 3** DGP [45]: \(Y=\max\left(\mathbb{1}[X_{5}=3\text{ and }X_{4}=1],\mathbb{1}[X_{5}\neq 4\text{ and }X_{2}\neq 3]\right)\) for the same covariates in Monk 1. Also, \(5\%\) label noise is added. Here, \(X_{2},X_{4},\) and \(X_{5}\) are relevant.
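For reference, a minimal sketch of sampling from Chen's DGP as written above; the sample size and seed are arbitrary illustrative choices.

```python
# Sketch: draw (X, Y) from Chen's DGP. Only X_1,...,X_4 (columns 0-3) affect the label.
import numpy as np

def chen_dgp(n, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 10))
    eps = rng.standard_normal(n)
    signal = (-2 * np.sin(X[:, 0]) + np.maximum(X[:, 1], 0.0)
              + X[:, 2] + np.exp(-X[:, 3]) + eps)
    return X, (signal >= 2.048).astype(int)

X, y = chen_dgp(1000)
print(X.shape, y.mean())
```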
We compare the ability of _RID_ to identify extraneous variables with that of the following baseline methods, whose details are provided in Section E of the supplement: subtractive model reliance \(\phi^{\text{sub}}\) of a random forest (RF) [7], LASSO [21], boosted decision trees [17], and generalized optimal sparse decision trees (GOSDT) [27]; conditional model reliance (CMR) [16]; the impurity based model reliance metric for RF from [8]; the LOCO algorithm reliance [26] for RF and Lasso; the Pearson and Spearman correlation between each feature and the outcome; the mean of the partial dependency plot (PDP) [20] for each feature; the SHAP value [30] for RF; and mean of variable importance clouds (VIC) [13] for the Rashomon set of GOSDTs [52]. If we do not account for instability and simply learn a model and calculate variable importance, baseline models generally perform poorly, as shown in Section D of the supplement. Thus, we chose to account for instability in a way that benefits the baselines. We evaluate each baseline method for each variable across \(500\) bootstrap samples and compute the _median VI across bootstraps_, with the exception of VIC -- for VIC, we take the _median VI value across the Rashomon set_ for the original dataset, as VIC accounts for Rashomon uncertainty. Here, we aim to see whether we can identify extraneous (i.e., unimportant variables). For a DGP with \(C\) extraneous variables, we classify the \(C\) variables with the \(C\) smallest median variable importance values as extraneous. We repeat this experiment with different values for the Rashomon threshold \(\varepsilon\) in Section D of the supplement.
Figure 3 (top) reports the proportion of variables that are correctly classified for each simulation setup as a stacked barplot. _RID_ identifies all important and unimportant variables for these complex simulations. Note that four other baseline methods - MR RF, RF Impurity, RF Permute, and VIC - also differentiated all important from unimportant variables. Motivated by this finding, we next
Figure 3: (Top) The proportion of features ranked correctly by each method on each data set represented as a _stacked_ barplot. The figures are ordered according to method performance across the four simulation setups. (Bottom) The proportion of the 500 independent DGP \(\phi^{(sub)}\) calculations on _new datasets from the DGP_ that were close to the distribution _over bootstraps from a single dataset_ for each method for each variable in each simulation. Underneath each method’s label, the first row shows the percentage of times across all 500 independently generated datasets and variables that the DGP’s variable importance was inside of that method’s interval. The second row shows the percentage of pairwise rankings correct for each method (from the top plot). Higher is always better.
explore how well methods recover the true value for subtractive model reliance on the DGP, allowing us to distinguish between the best performing methods on the classification task.
### _Rid_ Captures Model Reliance for the true data generation process
_Rid_ allows us to quantify uncertainty in variable importance due to _both_ the Rashomon effect and instability. We perform an ablation study investigating how accounting for both stability and the Rashomon effect compares to having one without the other. We evaluate what proportion of subtractive model reliances calculated for the DGP on 500 test sets are contained within our uncertainty intervals generated using only one training dataset. This experiment tells us whether the intervals created on a single dataset will generalize.
To create the uncertainty interval on the training dataset and for each method, we first find the subtractive model reliance \(\phi^{(sub)}\) across 500 bootstrap iterations of a given dataset for the four algorithms shown in Figure 3 (bottom) (baseline results without bootstrapping are in Section D of the supplementary material). Additionally, we find the VIC for the Rashomon set of GOSDTs on the original dataset. We summarize these model reliances (500 bootstraps \(\times\) 28 variables across datasets \(\times\) 4 algorithms + 8,247 models in VIC's + 10,840,535 total models across Rsets \(\times\) 28 variables from _RID_) by computing their box-and-whisker ranges (1.5 \(\times\) Interquartile range [49]). To compare with "ground truth," we sample 500 test datasets from the DGP and calculate \(\phi^{(sub)}\) for the DGP for that dataset. For example, assume the DGP is \(Y=X^{2}+\varepsilon\). We would then use \(f(X)=X^{2}\) as our machine learning model and evaluate \(\phi^{(sub)}(f,\mathcal{D}^{(n)})\) on \(f\) for each of the 500 test sets. We then check if the box-and-whisker range of each method's interval constructed on the training set contains the computed \(\phi^{(sub)}\) for the DGP for each test dataset. Doing this allows us to understand whether our interval contains the _true_\(\phi^{(sub)}\) for each test set.
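For clarity, the interval construction and coverage check can be sketched as follows; we assume the Tukey convention of whiskers at 1.5 times the interquartile range beyond the quartiles, and the reliance values below are synthetic stand-ins rather than values from our experiments.

```python
# Sketch: box-and-whisker range of bootstrapped model reliances and the proportion of
# test-set DGP reliances that it covers.
import numpy as np

def box_whisker_range(values):
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def coverage(train_reliances, test_dgp_reliances):
    lo, hi = box_whisker_range(train_reliances)
    test = np.asarray(test_dgp_reliances)
    return float(np.mean((test >= lo) & (test <= hi)))

rng = np.random.default_rng(0)
train = rng.normal(0.30, 0.02, size=500)   # stand-in for bootstrapped reliances
test = rng.normal(0.30, 0.02, size=500)    # stand-in for DGP reliances on test sets
print(round(coverage(train, test), 3))
```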
Figure 3 (bottom) illustrates the proportion of times that the test variable importance values fell within the uncertainty intervals from training. These baselines fail to capture the test \(\phi^{(sub)}\) _entirely_ for at least one variable (\(<0.05\%\) recovery proportion). **Only _RID_ both recovers important/unimportant classifications perfectly and achieves a strong recovery proportion at 95%.**
### _Rid_ is Stable
Our final experiment investigates the stability of VIC and MCR (which capture only Rashomon uncertainty but not stability) to _RID_, which naturally considers data perturbations. We generate 50 independent datasets from each DGP and compute the box-and-whisker ranges (BWR) of each uncertainty metric for each dataset; for every pair of BWRs for a given method, we then calculate the Jaccard similarity between BWR's. For each generated dataset, we then average the Jaccard similarity across variables.
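The similarity computation can be sketched as follows, treating each box-and-whisker range as an interval so that the Jaccard similarity is intersection length over union length; the convention for degenerate intervals is our own assumption.

```python
# Sketch: Jaccard similarity between two intervals [lo, hi].
def interval_jaccard(a, b):
    lo_a, hi_a = a
    lo_b, hi_b = b
    inter = max(0.0, min(hi_a, hi_b) - max(lo_a, lo_b))
    union = (hi_a - lo_a) + (hi_b - lo_b) - inter
    return inter / union if union > 0 else 1.0   # convention for two identical points

print(interval_jaccard((0.0, 0.4), (0.2, 0.5)))  # 0.4
```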
Figure 4 displays the similarity scores between the box and whisker ranges of MCR, VIC, and _RID_ across the 50 datasets for each DGP. Note that Monk 1 has no noise added, so instability should not be a concern for any method. For real datasets, **MCR and VIC achieve median similarity below 0.55; _RID_'s median similarity is 0.69; it is much more stable.**
Figure 4: Median Jaccard similarity scores across 50 independently generated MCR, VIC, and _RID_ box and whisker ranges for each DGP except Monk 1 (which has no noise); 1 is perfect similarity. Error bars show 95% confidence interval around the median.
## 5 Case Study
Having validated _RID_ on synthetic datasets, we demonstrate its utility in a real world application: studying which host cell transcripts and chromatin patterns are associated with high expression of Human Immunodeficiency Virus (HIV) RNA. We analyzed a dataset that combined single cell RNAseq/ATACseq profiles for 80,000 individual HIV infected cells from two different donors, with the aim of finding new cellular cofactors for HIV expression that could be targeted to reactivate the latent HIV reservoir in people with HIV (PWH). A longer description of the data is in [38]. Finding results on this type of data allows us to create new hypotheses for which genes are important for HIV load prediction and might generalize to large portions of the population.
To identify which genes are stably important across good models, we evaluated this dataset using _RID_ over the model class of sparse decision trees using subtractive model reliance. We selected 14,614 samples (all 7,307 high HIV load samples and 7,307 random low HIV load samples) from the overall dataset in order to balance labels, and filtered the complete profiles down to the top 100 variables by individual AUC. For full experimental details, see Section E of the supplement.
Figure 5 illustrates the probability that _RID_ is greater than 0 for the 10 highest probability variables (0 is when the variable is not important at all). **We find that LINC00486 - a less explored gene - is the most important variable**, with \(\mathbb{P}(RID_{LINC00486}>0)=78.4\%\). LINC00486 is a long non-coding RNA (i.e., it functions as an RNA molecule but does not encode a protein like most genes), and there is no literature on this gene and HIV, making this association a novel one. However, recent work [48] has shown that LINC00486 can enhance EBV (Epstein-Barr virus) infection by activating NF-\(\kappa\)B. It is well established that NF-\(\kappa\)B can regulate HIV expression [34; 24; 4], suggesting a possible mechanism and supporting future study. Notably, _RID_ highlighted PDE4D, which interacts with the Tat protein and thereby HIV transcription [42]; HNRNPA2B1, which promotes HIV expression by altering the structure of the viral promoter [41]; and MALAT1, which has recently been shown to be an important regulator of HIV expression [37]. These three findings validate prior work and show that _RID_ can uncover variables that are known to interact with HIV.
**Note that previous methods - even those that account for the Rashomon effect - could not produce this result**. MCR and VIC do not account for instability. For example, after computing MCR for 738 bootstrap iterations, we find that the MCR for the LINC00486 gene has overlap with 0 in \(96.2\%\) of bootstrapped datasets, meaning MCR would not allow us to distinguish whether LINC00486 is important or not \(96.2\%\) of the time. Without _RID_, we would not have strong evidence that LINC00486 is necessary for good models. By explicitly accounting for instability, we increase trust in our analyses.
Critically, _RID_ also found _very low_ importance for the majority of variables, allowing researchers to dramatically reduce the number of possible directions for future experiments designed to test a gene's functional role. Such experiments are time consuming and cost tens of thousands of dollars _per donor_, so narrowing possible future directions to a small set of genes is of the utmost importance. **Our analysis provides a manageable set of clear directions for future work studying the functional roles of these genes in HIV.**
Figure 5: Probability of each gene’s model reliance being greater than 0 across Rashomon sets across bootstrapped datasets for the ten genes with the highest \(\mathbb{P}(RID_{j}>0)\). We ran 738 bootstrap iterations to ensure that \(\mathbb{P}(\widehat{RID}_{j}>0)\) is within 0.05 of \(\mathbb{P}(RID_{j}>0)\) with 95% confidence (from Theorem 2).
## 6 Conclusion and Limitations
We introduced _RID_, a method for recovering the importance of variables in a way that accounts for both instability and the Rashomon effect. We showed that _RID_ distinguishes between important and extraneous variables, and that _RID_ better captures the true variable importance for the DGP than prior methods. We showed through a case study in HIV load prediction that _RID_ can provide insight into complicated real world problems. Our framework overcomes instability and the Rashomon effect, moving beyond variable importance for a single model and increasing reproducibility.
A limitation is that currently, there are relatively few model classes for which the Rashomon set can be computed. Therefore, future work should aim to compute and store the Rashomon set of a wider variety of model classes. Future work may investigate incorporating Rashomon sets that may be well-approximated (e.g., GAMs, [11]), but not computed exactly, into the _RID_ approach. Nonetheless, sparse trees are highly flexible, and using them with _RID_ improves the trustworthiness and transparency of variable importance measures, enabling researchers to uncover important, reproducible relationships about complex processes without being misled by the Rashomon effect.
## 7 Acknowledgements
We gratefully acknowledge support from NIH/NIDA R01DA054994, NIH/NIAID R01AI143381, DOE DE-SC0023194, NSF IIS-2147061, and NSF IIS-2130250.
## Appendix A Proofs
First, recall the following assumption from the main paper:
**Assumption 1**.: _If_
\[\rho\left(\textit{RLD}(\varepsilon,\mathcal{F},\ell;\lambda),LD^{*}(\ell,n;\lambda)\right) \leq\gamma\text{ then}\] \[\rho\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda),\textit{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\right) \leq d(\gamma)\]
_for a monotonically increasing function \(d:[0,\ell_{\max}-\ell_{\min}]\rightarrow[0,\phi_{\max}-\phi_{\min}]\) such that \(d(0)=0\). Here, \(\rho\) represents any distributional distance metric (e.g., 1-Wasserstein)._
**Theorem 1**.: _Let Assumption 1 hold for distributional distance \(\rho(A_{1},A_{2})\) between distributions \(A_{1}\) and \(A_{2}\). Also, let \(k\in[\phi_{min},\phi_{max}]\). For all \(t>0\), as \(\rho\left(LD^{*}(\ell,n;\lambda),\text{RLD}(\varepsilon,\mathcal{F},\ell;\lambda)\right)\to 0\) and \(B\rightarrow\infty\),_
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\widehat{\text{RID}}_{j}(\varepsilon, \mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}(\text{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\leq k)\right|\geq t \right)\to 0.\]
Proof.: Let \(\mathcal{D}^{(n)}\) be a dataset of \(n\)\((x_{i},y_{i})\) tuples independently and identically distributed according to the empirical distribution \(\mathcal{P}_{n}\). Let \(k\in[\phi_{\min},\phi_{\max}]\).
Then, we know that
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\widehat{\text{RID}}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\leq k)\right|\geq t\right)\]
\[=\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\widehat{\text{RID}}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)+\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\leq k)\right|\geq t\right)\quad\text{(by adding 0)}\]
\[\leq\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\widehat{\text{RID}}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)\right|\geq\frac{t}{2}\right)+\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\leq k)\right|\geq\frac{t}{2}\right)\quad\text{(by the triangle inequality and a union bound)}\]
Now, let \(\rho\) be the pointwise absolute difference between the CDFs of the distributions. Recall that, in the theorem statement, we have assumed \(\rho\left(LD^{*}(\ell,n;\lambda),\text{RLD}(\varepsilon,\mathcal{F},\ell;\lambda)\right)\to 0\). Therefore, by Assumption 1,
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{ RID}_{j}(\varepsilon,\{g^{*}\},\ell;\lambda)\leq k)\right|\geq\frac{t}{2} \right)\to 0.\]
Additionally, we will show in Corollary 1 that as \(B\rightarrow\infty,\)
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\widehat{\text{RID}}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k)\right|\geq\frac{t}{2}\right)\to 0.\]
Therefore, as \(B\rightarrow\infty\) and \(\rho\left(LD^{*}(\ell,n;\lambda),\text{RLD}(\varepsilon,\mathcal{F},\ell;\lambda)\right)\to 0,\) the _estimated_ Rashomon importance distribution for model class \(\mathcal{F}\) converges to the true Rashomon importance distribution for the DGP \(g^{*}\).
**Theorem 2**.: _Let \(\mathcal{D}^{(n)}\) be a dataset of \(n\) \((x_{i},y_{i})\) tuples independently and identically distributed according to the empirical distribution \(\mathcal{P}_{n}\). Let \(k\in[\phi_{\min},\phi_{\max}]\). Then, with probability \(1-\delta\), with \(B\geq\frac{1}{2t^{2}}\ln\left(\frac{2}{\delta}\right)\) bootstrap replications,_
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\text{RID} }_{j}\leq k\right)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RID} _{j}\leq k)\right|<t.\]
Proof.: First, let us restate the definition of \(\widehat{\text{RID}}_{j}\) and \(\widehat{\text{RID}}_{j}\). Let \(n\in\mathbb{N}\). Let \(\varepsilon\) be the Rashomon threshold, and let the Rashomon set for some dataset \(\mathcal{D}^{(n)}\) and some fixed model class \(\mathcal{F}\) be denoted as \(\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}\). Without loss
of generality, assume \(\mathcal{F}\) is a finite model class. Then, for a given \(k\in[\phi_{\min},\phi_{\max}]\),
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)=\mathbb{E}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}\left[\frac{\sum_{f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon}}\mathbb{1}[\phi_{j}(f,\mathcal{D}^{(n)})\leq k]}{|\mathcal{R}_{ \mathcal{D}^{(n)}}^{\varepsilon}|}\right].\]
Note that the expectation is over all datasets of size \(n\) sampled with replacement from the originally observed dataset, represented by \(\mathcal{P}_{n}\); we are taking the expectation over bootstrap samples.
We then sample datasets of size \(n\) with replacement from the _empirical_ CDF \(\mathcal{P}_{n}\), find the Rashomon set for the replicate dataset, and compute the variable importance metric for each model in the discovered Rashomon set. For the same \(k\in[\phi_{\min},\phi_{\max}]\),
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID}_{j}}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)=\frac{1}{B}\sum_{b=1}^{B}\left[\frac{\sum_{f\in\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}}\mathbb{1}[\phi_{j}(f,\mathcal{D}_{b}^{(n)})\leq k]}{|\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}|}\right],\]
where \(B\) represents the number of size \(n\) datasets sampled from \(\mathcal{P}_{n}\).
Notice that
\[0\leq\frac{\sum_{f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}}\mathbb{1} [\phi_{j}(f,\mathcal{D}^{(n)})\leq k]}{|\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon}|}\leq 1. \tag{4}\]
Because \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID }_{j}}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)\) is an Euclidean average of the quantity in Equation (4) and \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)\) is the expectation of the quantity in Equation (4), we can use Hoeffding's inequality to show that
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left| \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID }}_{j}(\varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)-\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}(\varepsilon, \mathcal{F},\ell;\lambda)\leq k\right)\right|>t\right)\] \[\leq 2\exp\left(-2Bt^{2}\right)\]
for some \(t>0\).
Now, we can manipulate Hoeffding's inequality to discover a finite sample bound. Instead of setting \(B\) and \(t,\) we will now find the \(B\) necessary to guarantee that
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID}}_{j}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)-\mathbb{P}_{\mathcal{D}^{( n)}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k\right)\right|\geq t\right)\leq\delta \tag{5}\]
for some \(\delta,t>0\).
Let \(\delta>0\). From Hoeffding's inequality, we see that if we choose \(B\) such that \(2\exp\left(-2Bt^{2}\right)\leq\delta\), then
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID}}_{j}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)-\mathbb{P}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k\right)\right|\geq t\right)\leq 2\exp\left(-2Bt^{2}\right)\leq\delta.\]
Notice that \(2\exp\left(-2Bt^{2}\right)\leq\delta\) if and only if \(B\geq\frac{1}{2t^{2}}\ln\left(\frac{2}{\delta}\right).\)
Therefore, with probability \(1-\delta\),
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{ RID}}_{j}\leq k\right)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left( \textit{RID}_{j}\leq k\right)\right|\leq t\]
with \(B\geq\frac{1}{2t^{2}}\ln\left(\frac{2}{\delta}\right)\) bootstrap iterations.
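As a quick numerical illustration of this sample-size requirement (not part of the original proof), the smallest admissible number of bootstrap replications can be computed directly from \(t\) and \(\delta\):

```python
import math

def min_bootstrap_replications(t, delta):
    """Smallest integer B satisfying B >= ln(2/delta) / (2 t^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * t ** 2))

# For example, t = 0.05 and delta = 0.01 require B = 1060 replications.
print(min_bootstrap_replications(0.05, 0.01))
```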
**Corollary 1**.: _Let \(t>0\), \(k\in[\phi_{\min},\phi_{\max}],\) and assume that \(\mathcal{D}^{(n)}\sim\mathcal{P}_{n}.\) As \(B\to\infty,\)_
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left|\mathbb{P}_{ \mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID}_{j}}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)-\mathbb{P}_{\mathcal{D}^{( n)}\sim\mathcal{P}_{n}}\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k\right)\right|\geq t\right)\to 0.\]
Proof.: Recall the results of Theorem 2:
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\left| \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left(\widehat{\textit{RID}_{j}}( \varepsilon,\mathcal{F},\ell;\lambda)\leq k\right)-\mathbb{P}_{\mathcal{D}^{(n)} \sim\mathcal{P}_{n}}\left(\textit{RID}_{j}(\varepsilon,\mathcal{F},\ell; \lambda)\leq k\right)\right|\geq t\right)\] \[\leq 2\exp\left(-2Bt^{2}\right)\] \[\to 0\text{ as B }\to\infty.\]
## Statistics Derived From _RID_
**Corollary 2**.: _Let \(\varepsilon,B>0\) and let \(\varepsilon_{E}>0\). Then,_
\[\mathbb{P}\left(\left|\mathbb{E}_{RID_{j}}[RID_{j}]-\mathbb{E}_{ \widehat{RID_{j}}}[\widehat{RID_{j}}]\right|\geq\varepsilon_{E}\right)\leq 2 \exp\left(\frac{-2B\varepsilon_{E}^{2}}{(\phi_{\max}-\phi_{\min})^{2}}\right). \tag{6}\]
_Therefore, the expectation of \(\widehat{RID_{j}}\) converges exponentially quickly to the expectation of \(RID_{j}\)._
Proof.: Let \(\phi_{\min},\phi_{\max}\) represent the bounds of the variable importance metric \(\phi\). Assume that \(0\leq\phi_{\min}\leq\phi_{\max}<\infty\). If \(\phi_{\min}<0\), then we can modify the variable importance metric to be strictly positive; for example, if \(\phi\) is Pearson correlation - which has a range between -1 and 1 - we can define a new variable importance metric that is the absolute value of the Pearson correlation _or_ define another metric that is the Pearson correlation plus 1 so that the range is now bounded below by 0.
Now, recall that for any nonnegative random variable \(X\), we can calculate its expectation as \(\mathbb{E}_{X}[X]=\int_{0}^{\infty}(1-\mathbb{P}(X\leq x))dx\). Because \(\phi_{\min}\geq 0\), we know that
\[\mathbb{E}_{RID_{j}}[RID_{j}]\] \[= \int_{\phi_{\min}}^{\phi_{\max}}\left(1-\mathbb{P}(RID_{j}\leq k) \right)dk\] \[= \int_{\phi_{\min}}^{\phi_{\max}}\left(1-\mathbb{E}_{D^{(n)}} \left[\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n )}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)})\leq k]}{\sum_{f\in\mathcal{F} }\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]}\right]\right)dk\] \[= \int_{\phi_{\min}}^{\phi_{\max}}dk-\int_{\phi_{\min}}^{\phi_{\max }}\mathbb{E}_{D^{(n)}}\left[\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in \mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)}) \leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}} ^{\varepsilon}]}\right]dk\] \[= \left(\phi_{\max}-\phi_{\min}\right)-\mathbb{E}_{D^{(n)}}\left[ \int_{\phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in \mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)} )\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)} }^{\varepsilon}]}dk\right]\text{ by Fubini's theorem}.\]
Using similar logic we can show that
\[\mathbb{E}_{\widehat{RID_{j}}}[\widehat{RID_{j}}] =\int_{\phi_{\min}}^{\phi_{\max}}\left(1-\frac{1}{B}\sum_{b=1}^{ B}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)}_{b})\leq k]}{\sum_{f\in\mathcal{F}} \mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]}\right)dk\] \[=\int_{\phi_{\min}}^{\phi_{\max}}dk-\int_{\phi_{\min}}^{\phi_{ \max}}\frac{1}{B}\sum_{b=1}^{B}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in \mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)}_{b} )\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)} }^{\varepsilon}]}dk\] \[=\left(\phi_{\max}-\phi_{\min}\right)-\frac{1}{B}\sum_{b=1}^{B} \left(\int_{\phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f \in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)} _{b})\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n )}}^{\varepsilon}]}dk\right).\]
We can then rewrite \(\left|\mathbb{E}_{RID_{j}}[RID_{j}]-\mathbb{E}_{\widehat{RID_{j}}}[\widehat{RID _{j}}]\right|\) using the calculations above:
\[\left|\mathbb{E}_{RID_{j}}[RID_{j}]-\mathbb{E}_{\widehat{RID_{j} }}[\widehat{RID_{j}}]\right|\] \[= \Bigg{|}\left(\phi_{\max}-\phi_{\min}\right)-\mathbb{E}_{D^{(n)} \sim\mathcal{P}_{n}}\left[\int_{\phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F }}\frac{\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}] \mathbb{1}[\phi_{j}(f,D^{(n)}_{b})\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f \in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]}dk\right]\] \[-\left(\left(\phi_{\max}-\phi_{\min}\right)-\frac{1}{B}\sum_{b=1} ^{B}\left(\int_{\phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f \in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)} _{b})\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n )}}^{\varepsilon}]}dk\right)\Bigg{)}\Bigg{|}\] \[=\Bigg{|}-\mathbb{E}_{D^{(n)}\sim\mathcal{P}_{n}}\left[\int_{ \phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in\mathcal{R} _{\mathcal{D}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)}_{b})\leq k]}{ \sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon}]}dk\right]\] \[+\frac{1}{B}\sum_{b=1}^{B}\left(\int_{\phi_{\min}}^{\phi_{\max}} \sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon}]\mathbb{1}[\phi_{j}(f,D^{(n)}_{b})\leq k]}{\sum_{f\in\mathcal{F}} \mathbb{1}[f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}]}dk\right)\Bigg{|}.\]
Because \(0\leq\mathbb{P}(RID_{j}\leq k),\mathbb{P}(\widehat{RID_{j}}\leq k)\leq 1\) for all \(k\in\mathbb{R}\),
\[\int_{\phi_{\min}}^{\phi_{\max}}0dk\leq\int_{\phi_{\min}}^{\phi_{ \max}}\mathbb{P}(RID_{j}\leq k)dk,\int_{\phi_{\min}}^{\phi_{\max}}\mathbb{P}( \widehat{RID_{j}}\leq k)dk\leq\int_{\phi_{\min}}^{\phi_{\max}}1dk\] \[0\leq\int_{\phi_{\min}}^{\phi_{\max}}\mathbb{P}(RID_{j}\leq k)dk, \int_{\phi_{\min}}^{\phi_{\max}}\mathbb{P}(\widehat{RID_{j}}\leq k)dk\leq(\phi _{\max}-\phi_{\min}),\]
suggesting that \(\left(\int_{\phi_{\min}}^{\phi_{\max}}\frac{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}]\mathbb{1}[\phi_{j}(f,D_{b}^{(n)})\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}_{b}^{(n)}}^{\varepsilon}]}dk\right)\) is bounded.
Then, by Hoeffding's inequality, we know that
\[\mathbb{P}\left(\left|\mathbb{E}_{RID_{j}}[RID_{j}]-\mathbb{E}_{ \widehat{RID_{j}}}[\widehat{RID_{j}}]\right|>\varepsilon_{E}\right)\] \[= \mathbb{P}\Bigg{(}\Bigg{|}\mathbb{E}_{D^{(n)}\mathcal{P}_{n}} \left[\int_{\phi_{\min}}^{\phi_{\max}}\sum_{f\in\mathcal{F}}\frac{\mathbb{1} [f\in\mathcal{R}_{\mathcal{D}(n)}^{\epsilon}]\mathbb{1}[\phi_{j}(f,D_{b}^{(n) })\leq k]}{\sum_{f\in\mathcal{F}}\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}(n)} ^{\epsilon}]}dk\right]\] \[-\frac{1}{B}\sum_{b=1}^{B}\left(\int_{\phi_{\min}}^{\phi_{\max}} \sum_{f\in\mathcal{F}}\frac{\mathbb{1}[f\in\mathcal{R}_{\mathcal{D}(n)}^{ \epsilon}]\mathbb{1}[\phi_{j}(f,D_{b}^{(n)})\leq k]}{\sum_{f\in\mathcal{F}} \mathbb{1}[f\in\mathcal{R}_{\mathcal{D}(n)}^{\epsilon}]}dk\right)\Bigg{|}> \varepsilon_{E}\Bigg{)}\] \[\leq 2\exp\left(\frac{-2B\varepsilon_{E}^{2}}{(\phi_{\max}-\phi_{\min })^{2}}\right).\]
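The proof above repeatedly uses the identity \(\mathbb{E}[X]=\phi_{\min}+\int_{\phi_{\min}}^{\phi_{\max}}(1-F(k))\,dk\) for a bounded, nonnegative random variable. The short check below, with simulated values that are purely illustrative, compares the sample mean against a numerical version of that integral.

```python
import numpy as np

rng = np.random.default_rng(0)
phi_min, phi_max = 0.0, 1.0
samples = rng.uniform(phi_min, phi_max, size=10_000) ** 2   # any values in [phi_min, phi_max]

grid = np.linspace(phi_min, phi_max, 2001)
cdf = np.array([np.mean(samples <= k) for k in grid])       # empirical CDF on the grid
dk = grid[1] - grid[0]
layer_cake = phi_min + np.sum(1.0 - cdf[:-1]) * dk          # phi_min + integral of (1 - F)

print(np.mean(samples), layer_cake)                         # both are close to 1/3
```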
**Corollary 3**.: _Let \(F_{\widehat{RID_{j}}}(k)=\mathbb{P}(\widehat{RID_{j}}\leq k)\) and \(F_{RID_{j}}(k)=\mathbb{P}(RID_{j}\leq k)\) represent the CDFs of \(\widehat{RID_{j}}\) and \(RID_{j}\) respectively. Assume \(F_{\widehat{RID_{j}}}(k)\) and \(F_{RID_{j}}(k)\) are strictly increasing in \(k\in[\phi_{\min},\phi_{\max}].\) Then, the interquartile range (IQR) of \(\widehat{RID_{j}}\) will converge in probability to the IQR of \(RID_{j}\) exponentially quickly._
Proof.: Let \(k_{0.25}\) be the \(k\) such that \(F_{RID_{j}}(k_{0.25})=0.25,\) and let \(k_{0.75}\) be the \(k\) such that \(F_{RID_{j}}(k_{0.75})=0.75.\) Similarly, let \(\hat{k}_{0.25}\) be the \(k\) such that \(F_{\widehat{RID_{j}}}(\hat{k}_{0.25})=0.25,\) and let \(\hat{k}_{0.75}\) be the \(k\) such that \(F_{\widehat{RID_{j}}}(\hat{k}_{0.75})=0.75.\) The IQR of \(\widehat{RID_{j}}\) converges to the IQR of \(RID_{j}\) if \(\hat{k}_{0.25}\to k_{0.25}\) and \(\hat{k}_{0.75}\to k_{0.75}.\)
Because \(F_{\widehat{RID_{j}}}(k)\) and \(F_{RID_{j}}(k)\) are increasing in \(k\), we know that if \(\mathbb{P}\left(\widehat{RID_{j}}\leq k_{0.25}\right)=0.25,\) then \(\hat{k}_{0.25}=k_{0.25}.\) An analogous statement holds for \(\hat{k}_{0.75}.\)
So, we will bound how far \(F_{\widehat{RID_{j}}}(k_{0.25})\) is from \(0.25=F_{RID_{j}}(k_{0.25})\) and how far \(F_{\widehat{RID_{j}}}(k_{0.75})\) is from \(0.75=F_{RID_{j}}(k_{0.75}).\)
Let \(t>0.\) Then,
\[\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|+\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>t\right)\] \[\leq \mathbb{P}\left(\left\{\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|>\frac{t}{2}\right\}\cup\left\{\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>\frac{t}{2}\right\}\right)\] \[\leq \mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|>\frac{t}{2}\right)\] \[+\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>\frac{t}{2}\right)\text{ by the union bound.}\]
Then, by Theorem 2,
\[\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|>\frac{t}{2}\right)\leq 2\exp\left(-2B\frac{t^{2}}{4}\right).\]
So,
\[\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|+\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>t\right)\] \[\leq \mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|>\frac{t}{2}\right)\] \[+\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>\frac{t}{2}\right)\] \[\leq 2\exp\left(-2B\frac{t^{2}}{4}\right)+2\exp\left(-2B\frac{t^{2}}{4}\right)\] \[= 4\exp\left(-2B\frac{t^{2}}{4}\right).\]
So, as \(B\to\infty,\) \(\mathbb{P}\left(\left|F_{\widehat{RID_{j}}}(k_{0.25})-F_{RID_{j}}(k_{0.25})\right|+\left|F_{\widehat{RID_{j}}}(k_{0.75})-F_{RID_{j}}(k_{0.75})\right|>t\right)\) decreases exponentially quickly, ultimately converging to 0.
Therefore, the IQR of \(\widehat{RID_{j}}\) converges to the IQR of \(RID_{j}\) exponentially quickly.
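In computations, the quartiles of \(\widehat{\textit{RID}}_{j}\) can be read off from the pooled importance values across bootstrap replicates, with each replicate's Rashomon set weighted uniformly. A minimal sketch of that bookkeeping (the weighting is an assumption for illustration, not the released implementation) is:

```python
import numpy as np

def rid_quartiles(per_bootstrap_importances):
    """per_bootstrap_importances: one array per bootstrap replicate, holding phi_j for
    every model in that replicate's Rashomon set. Each replicate gets weight 1/B,
    split uniformly over its models; returns (q25, q75, IQR) of the pooled distribution."""
    B = len(per_bootstrap_importances)
    values, weights = [], []
    for vals in per_bootstrap_importances:
        vals = np.asarray(vals, dtype=float)
        values.append(vals)
        weights.append(np.full(len(vals), 1.0 / (B * len(vals))))
    values, weights = np.concatenate(values), np.concatenate(weights)
    order = np.argsort(values)
    values, cum = values[order], np.cumsum(weights[order])
    q25 = values[np.searchsorted(cum, 0.25)]
    q75 = values[np.searchsorted(cum, 0.75)]
    return q25, q75, q75 - q25
```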
## Example Model Classes for Which _RID_ Converges
First, recall the following assumption from the main paper:
**Assumption 1**.: _If_
\[\rho\left(\text{RLD}(\varepsilon,\mathcal{F},\ell;\lambda),\text{LD}^{\star}( \ell,n;\lambda)\right)\leq\gamma\text{ then}\]
\[\rho\left(\text{RID}_{j}(\varepsilon,\mathcal{F},\ell;\lambda),\text{RID}_{j} (\varepsilon,\{g^{\star}\},\ell;\lambda)\right)\leq d(\gamma)\]
_for a monotonically increasing function \(d:[0,\ell_{\max}-\ell_{\min}]\to[0,\phi_{\max}-\phi_{\min}]\) such that \(d(0)=0\). Here, \(\rho\) represents any distributional distance metric (e.g., 1-Wasserstein)._
In this section, we highlight two simple examples of model classes and model reliance metrics for which Assumption 1 holds. First we show that Assumption 1 holds for the class of linear regression models with the model reliance metric being the coefficient assigned to each variable in Proposition 1; Proposition 2 presents a similar result for generalized additive models. We begin by presenting two lemmas which will help prove Proposition 1:
**Lemma 1**.: _Let \(\ell\) be unregularized mean square error, used as the objective for estimating optimal models in some class of continuous models \(\mathcal{F}\). Assume that the DGP's noise \(\epsilon\) is centered at 0: \(\mathbb{E}[\epsilon]=0\). Define the function \(m:[0,\ell_{\max}]\to[0,\ell_{\max}-\ell_{\min}]\) as:_
\[m(\varepsilon):=\lim_{n\to\infty}\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{ P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{LD}^{\star}(\ell)\leq k)- \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RLD}(\varepsilon, \mathcal{F},\ell)\leq k)\right|\,dk.\]
_The function \(m\) is a strictly increasing function of \(\varepsilon\); \(m\) simply measures the integrated absolute error between the CDF of \(g^{\star}\)'s loss distribution and the CDF of the Rashomon set's loss distribution. Further, if \(g^{\star}\in\mathcal{F}\), then \(m(0)=0\)._
Proof.: Let \(\ell\) be unregularized mean square error, used as the objective for estimating optimal models in some class of continuous models \(\mathcal{F}\). Let \(g^{\star}\) denote the unknown DGP. Throughout this proof, we consider the setting with \(n\to\infty\), although we often omit this notation for simplicity.
First, we restate the definition of _RLD_ and \(\text{LD}^{\star}\) for reference:
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RLD}(\varepsilon, \mathcal{F},\ell;\lambda)\leq k):=\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}\left[\frac{\nu(\{f\in\mathcal{R}_{\mathcal{D}^{(n)}}^{\varepsilon}: \ell(f,\mathcal{D}^{(n)})\leq k\})}{\nu(\mathcal{R}_{\mathcal{D}^{(n)}}^{ \varepsilon})}\right]\]
and
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{LD}^{\star}(\ell,n; \lambda)\leq k):=\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left[ \mathbb{1}[\ell(g^{\star},\mathcal{D}^{(n)})\leq k]\right].\]
Because \(g^{\star}\) is the DGP, we know that its expected loss should be lower than the expected loss for any other model in the model class: \(\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}[\ell(g^{\star},\mathcal{D}^ {(n)})]\leq\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}[\ell(f,\mathcal{ D}^{(n)})]\) for any \(f\in\mathcal{F}\) such that \(f\neq g^{\star}\), as we have assumed that any noise has expectation 0. For simplicity, we denote \(\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}[\ell(g^{\star},\mathcal{D}^ {(n)})]\) by \(\ell^{\star}\). We first show that \(m\) is monotonically increasing in \(\varepsilon\) by showing that, for any \(\varepsilon>\varepsilon^{\prime}\geq 0\):
\[\lim_{n\to\infty}\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{ P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{LD}^{\star}(\ell)\leq k)- \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RLD}(\varepsilon, \mathcal{F},\ell)\leq k)\right|\,dk\] \[>\lim_{n\to\infty}\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{ P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{LD}^{\star}(\ell)\leq k)- \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{RLD}(\varepsilon^{ \prime},\mathcal{F},\ell)\leq k)\right|\,dk\]
by demonstrating that the inequality holds for each individual value of \(k\). First, note that:
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\text{LD}^{\star}(\ell)\leq k )=\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left[\mathbb{1}[\ell(g^{ \star},\mathcal{D}^{(n)})\leq k]\right].\]
As \(n\to\infty\), this quantity approaches
\[\mathbb{E}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}\left[\mathbb{1}[\ell(g^{ \star},\mathcal{D}^{(n)})\leq k]\right]=\mathbb{1}[\ell(g^{\star},\mathcal{D}^ {(n)})\leq k].\]
We will consider three cases: first, we consider \(\ell^{\star}>k_{1}\geq 0\), followed by \(\varepsilon^{\prime}+\ell^{\star}>k_{2}\geq\ell^{\star}\), and finally \(k_{3}\geq\varepsilon^{\prime}+\ell^{\star}\). Figure 6 provides a visual overview of these three cases and the broad idea within each case.
**Case 1: \(\ell^{\star}>k_{1}\geq 0\)**
For any \(k_{1}\) such that \(\ell^{*}>k_{1}\geq 0\), it holds that
\[\mathbb{1}[\ell(g^{*},\mathcal{D}^{(n)})\leq k_{1}]=0,\]
since \(\ell^{*}>k_{1}\) by definition. Further, because \(\ell(g^{*},\mathcal{D}^{(n)})\leq\ell(f,\mathcal{D}^{(n)})\) for mean squared error in the infinite data setting,
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RLD}(\varepsilon^{ \prime},\mathcal{F},\ell)\leq k_{1})=\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(\textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k_{1})=0\]
**Case 2:**\(\varepsilon^{\prime}+\ell^{*}\geq k_{2}\geq\ell^{*}\)
For any \(k_{2}\) such that \(\varepsilon^{\prime}+\ell^{*}\geq k_{2}\geq\ell^{*}\),
\[\mathbb{1}[\ell(g^{*},\mathcal{D}^{(n)})\leq k_{2}]=1,\]
since \(\ell(g^{*},\mathcal{D}^{(n)})\leq k_{2}\) by the definition of \(k_{2}\). Let \(\nu\) denote a volume function over the target model class. Recalling that \(\varepsilon>\varepsilon^{\prime}\), we know that:
\[\nu(\mathcal{R}^{\varepsilon})>\nu(\mathcal{R}^{\varepsilon^{ \prime}}) \iff\frac{1}{\nu(\mathcal{R}^{\varepsilon})}<\frac{1}{\nu( \mathcal{R}^{\varepsilon^{\prime}})}\] \[\iff\frac{\nu(\{f\in\mathcal{R}^{\varepsilon}:\ell(f,\mathcal{D} ^{(n)})\leq k_{2}\})}{\nu(\mathcal{R}^{\varepsilon})}<\frac{\nu(\{f\in \mathcal{R}^{\varepsilon}:\ell(f,\mathcal{D}^{(n)})\leq k_{2}\})}{\nu( \mathcal{R}^{\varepsilon^{\prime}})},\]
Figure 6: A visual overview of the proof of Lemma 1. In **Case 1**, we consider loss values that are achieved by no models in the model class, so each loss distribution has 0 mass below \(k\) in this case. **Case 2** covers each value of \(k\) such that \(k\) is larger than \(\ell^{*}\), so \(\mathbb{P}(LD^{*}(\ell)\leq k)=1\). The _RLD_ for the \(\varepsilon^{\prime}\) Rashomon set is closer to 1 than the \(\varepsilon\) Rashomon set because a larger proportion of this set falls below \(k\). Under **Case 3**, all models in the \(\varepsilon^{\prime}\) Rashomon set fall below \(k\).
since the set of models in the \(\varepsilon\) Rashomon set with loss less than \(k_{2}\) is the same set as the set of models in the \(\varepsilon^{\prime}\) Rashomon set with loss less than \(k_{2}\) for \(k_{2}\leq\varepsilon^{\prime}+\ell^{*}\). We can further manipulate this quantity to show:
\[\frac{\nu(\{f\in\mathcal{R}^{\varepsilon}:\ell(f,\mathcal{D}^{(n)} )\leq k_{2}\})}{\nu(\mathcal{R}^{\varepsilon})}<\frac{\nu(\{f\in\mathcal{R}^{ \varepsilon^{\prime}}:\ell(f,\mathcal{D}^{(n)})\leq k_{2}\})}{\nu(\mathcal{R} ^{\varepsilon^{\prime}})}\] \[\iff 1-\frac{\nu(\{f\in\mathcal{R}^{\varepsilon}:\ell(f, \mathcal{D}^{(n)})\leq k_{2}\})}{\nu(\mathcal{R}^{\varepsilon})}>1-\frac{\nu( \{f\in\mathcal{R}^{\varepsilon^{\prime}}:\ell(f,\mathcal{D}^{(n)})\leq k_{2} \})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\] \[\iff\left|1-\frac{\nu(\{f\in\mathcal{R}^{\varepsilon}:\ell(f, \mathcal{D}^{(n)})\leq k_{2}\})}{\nu(\mathcal{R}^{\varepsilon})}\right|> \left|1-\frac{\nu(\{f\in\mathcal{R}^{\varepsilon^{\prime}}:\ell(f,\mathcal{D}^ {(n)})\leq k_{2}\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\right|\] \[\iff\left|1-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k_{2})\right|>\left|1-\mathbb{P }_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RLD}(\varepsilon^{\prime}, \mathcal{F},\ell)\leq k_{2})\right|\] \[\iff\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{LD}^{*}(\ell)\leq k_{2})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}(\textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k_{2})\right|\] \[\qquad>\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{LD}^{*}(\ell)\leq k_{2})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}(\textit{RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k_{2})\right|,\]
because \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{LD}^{*}(\ell)\leq k _{2})=1\).
**Case 3:**\(k_{3}>\varepsilon^{\prime}+\ell^{*}\)
For any \(k_{3}>\varepsilon^{\prime}+\ell^{*}\), we have
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RLD}( \varepsilon^{\prime},\mathcal{F},\ell)\leq k_{3}) =\frac{\nu(\{f\in\mathcal{R}^{\varepsilon^{\prime}}:\ell(f, \mathcal{D}^{(n)})\leq k_{3}\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\] \[=\frac{\nu(\mathcal{R}^{\varepsilon^{\prime}})}{\nu(\mathcal{R} ^{\varepsilon^{\prime}})}\qquad\qquad\qquad\qquad\text{ because }k_{3}>\varepsilon^{\prime}+\ell^{*}\] \[=1.\]
This immediately gives that
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{LD}^{*}( \ell)\leq k_{3})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k_{3})\right| =\left|1-1\right|\] \[=0,\]
the minimum possible value for this quantity. We can then use the fact that the absolute value is greater than or equal to zero to show that
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ LD}^{*}(\ell)\leq k_{3})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k_{3})\right|\] \[\geq 0=\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{LD}^{*}(\ell)\leq k_{3})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}(\textit{RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k_{3})\right|\]
In summary, under cases 1 and 3,
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ LD}^{*}(\ell)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ RLD}(\varepsilon,\mathcal{F},\ell)\leq k)\right|\] \[\geq\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{LD}^{*}(\ell)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k)\right|;\]
under case 2,
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ LD}^{*}(\ell)\leq k_{2})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k_{2})\right|\] \[>\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{LD}^{*}(\ell)\leq k_{2})-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P} _{n}}(\textit{RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k_{2})\right|.\]
Since there is some range of values \(k\in[\ell^{*},\varepsilon^{\prime}+\ell^{*})\) for which the inequality above is strict, it follows that
\[\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{P}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}(\textit{LD}^{*}(\ell)\leq k)-\mathbb{P}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}(\textit{RLD}(\varepsilon,\mathcal{F},\ell)\leq k)\right|dk\] \[>\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{P}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}(\textit{LD}^{*}(\ell)\leq k)-\mathbb{P}_{\mathcal{D}^{(n )}\sim\mathcal{P}_{n}}(\textit{RLD}(\varepsilon^{\prime},\mathcal{F},\ell)\leq k )\right|dk,\]
showing that \(\varepsilon>\varepsilon^{\prime}\) is a _sufficient_ condition for \(m(\varepsilon)>m(\varepsilon^{\prime})\). Observe that, for a loss function with no regularization and a fixed model class, _RLD_ is a function of only \(\varepsilon\). As such, varying \(\varepsilon\) is the only way to vary _RLD_, making \(\varepsilon>\varepsilon^{\prime}\) a _necessary_ condition for the above. Therefore, we have shown that \(\varepsilon>\varepsilon^{\prime}\iff m(\varepsilon)>m(\varepsilon^{\prime})\), i.e. \(m\) is strictly increasing.
Further, if \(g^{*}\in\mathcal{F}\), the Rashomon set with \(\varepsilon=0\) will contain only \(g^{*}\) as \(n\) approaches infinity, immediately yielding that
\[m(0)=\int_{\ell_{\min}}^{\ell_{\max}}\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(\textit{LD}^{*}(\ell)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)} \sim\mathcal{P}_{n}}(\textit{RLD}(0,\mathcal{F},\ell)\leq k)\right|dk=0.\]
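Since \(m(\varepsilon)\) is just the integrated absolute gap between two CDFs, it is straightforward to evaluate numerically; the sketch below does so on a grid of loss values. The grid, the point-mass CDF for \(g^{\star}\), and the uniform weighting over Rashomon-set models are illustrative assumptions.

```python
import numpy as np

def integrated_cdf_gap(loss_star, rashomon_losses, grid):
    """Integral over the grid of |1[loss_star <= k] - F_RLD(k)|, where F_RLD is the
    empirical CDF of losses over the Rashomon set."""
    rashomon_losses = np.asarray(rashomon_losses, dtype=float)
    cdf_star = (loss_star <= grid).astype(float)            # CDF of the point mass at loss_star
    cdf_rld = np.array([np.mean(rashomon_losses <= k) for k in grid])
    dk = grid[1] - grid[0]
    return float(np.sum(np.abs(cdf_star - cdf_rld)) * dk)

grid = np.linspace(0.0, 2.0, 401)
print(integrated_cdf_gap(0.5, [0.5, 0.6, 0.7], grid))       # grows as the Rashomon set spreads out
```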
Lemma 1 provides a mechanism through which _RLD_ will approach \(\mathit{LD}^{*}\) in the infinite data setting. The following lemma states that each level set of the quadratic loss surface is a hyper-ellipsoid, providing another useful tool for the propositions given in this section.
**Lemma 2**.: _The level set of the quadratic loss at \(\varepsilon\) is a hyper-ellipsoid defined by:_
\[(\theta-\theta^{*})^{T}X^{T}X(\theta-\theta^{*})=\varepsilon-c,\]
_which is centered at \(\theta^{*}\) and of constant shape in terms of \(\varepsilon\), where \(c=\ell(\theta^{*})\) denotes the minimum value of the quadratic loss._
Proof.: Recall that the quadratic loss for some parameter vector \(\theta\) is given by:
\[\ell(\theta)=\|y-X\theta\|^{2}\]
and that the optimal vector \(\theta^{*}\) is given by:
\[\theta^{*}=(X^{T}X)^{-1}X^{T}y\]
\[\Longleftrightarrow\ X^{T}X\theta^{*}=X^{T}y\]
With these facts, we show that the level set for the quadratic loss at some fixed value \(\varepsilon\) takes on the standard form for a hyper-ellipsoid. This is shown as:
\[\ell(\theta) =\|y-X\theta\|^{2}=y^{T}y-2y^{T}X\theta+\theta^{T}X^{T}X\theta\qquad\text{(expand the quadratic)}\] \[=y^{T}y-2\theta^{*T}X^{T}X\theta+\theta^{T}X^{T}X\theta\qquad\text{(since }X^{T}y=X^{T}X\theta^{*})\] \[=\left(\theta^{*T}X^{T}X\theta^{*}-2\theta^{*T}X^{T}X\theta+\theta^{T}X^{T}X\theta\right)+\left(y^{T}y-\theta^{*T}X^{T}X\theta^{*}\right)\qquad\text{(add }0)\] \[=(\theta-\theta^{*})^{T}X^{T}X(\theta-\theta^{*})+y^{T}(y-X\theta^{*})\qquad\text{(since }\theta^{*T}X^{T}X\theta^{*}=y^{T}X\theta^{*}).\]
Writing \(c:=y^{T}(y-X\theta^{*})=\ell(\theta^{*})\), the level set \(\ell(\theta)=\varepsilon\) is exactly \((\theta-\theta^{*})^{T}X^{T}X(\theta-\theta^{*})=\varepsilon-c\), a hyper-ellipsoid centered at \(\theta^{*}\) whose shape is determined by \(X^{T}X\) and does not depend on \(\varepsilon\).
**Proposition 1**.: _Assume the DGP is a linear model. Then, Assumption 1 is guaranteed to hold for the function class of linear regression models where our variable importance metric is the coefficient on each variable, \(\phi_{j}=\theta_{j}\)._
Proof.: For each \(\varepsilon\geq 0\), let \(r_{j}(\varepsilon):=\int_{\phi_{\min}}^{\phi_{\max}}\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon)\leq k)\right|dk\) denote the integrated absolute difference between the two CDFs. We first show that \(r_{j}(\varepsilon)>r_{j}(\varepsilon^{\prime})\)
if and only if \(\varepsilon>\varepsilon^{\prime}\) by showing that, for any \(k\),
\[\left|\mathbb{P}_{\,\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{ RID}_{j}(\{g^{*}\},0)\leq k)-\mathbb{P}_{\,\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RID}_{j}(\mathcal{F},\varepsilon)\leq k)\right|\] \[\geq\left|\mathbb{P}_{\,\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RID}_{j}(\{g^{*}\},0)\leq k)-\mathbb{P}_{\,\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon^{\prime})\leq k) \right|.\]
For simplicity of notation, we denote the linear regression model parameterized by some coefficient vector \(\mathbf{\theta}\in\mathbb{R}^{p}\) simply as \(\mathbf{\theta}\). Let \(\mathbf{\theta}^{*}\in\mathbb{R}^{p}\) denote the coefficient vector for the optimal model. Additionally, we define the following quantities to represent the most extreme values for \(\theta_{j}\) (i.e., the coefficient along the \(j\)-th axis) for each Rashomon set. Let \(a_{j}\) and \(b_{j}\) be the two values defined as:
\[a_{j} :=\min_{\mathbf{v}\in\mathbb{R}^{p}}(\mathbf{\theta}^{*}+\mathbf{v}) _{j}\text{ s.t. }\ell(\mathbf{\theta}^{*}+\mathbf{v},\mathcal{D}^{(n)})=\ell^{*}+\varepsilon\] \[b_{j} :=\max_{\mathbf{v}\in\mathbb{R}^{p}}(\mathbf{\theta}^{*}+\mathbf{v}) _{j}\text{ s.t. }\ell(\mathbf{\theta}^{*}+\mathbf{v},\mathcal{D}^{(n)})=\ell^{*}+\varepsilon.\]
Similarly, let \(a^{\prime}_{j}\) and \(b^{\prime}_{j}\) be the two values defined as:
\[a^{\prime}_{j} :=\min_{\mathbf{v}\in\mathbb{R}^{p}}(\mathbf{\theta}^{*}+\mathbf{v}) _{j}\text{ s.t. }\ell(\mathbf{\theta}^{*}+\mathbf{v},\mathcal{D}^{(n)})=\ell^{*}+\varepsilon^{\prime}\] \[b^{\prime}_{j} :=\max_{\mathbf{v}\in\mathbb{R}^{p}}(\mathbf{\theta}^{*}+\mathbf{v}) _{j}\text{ s.t. }\ell(\mathbf{\theta}^{*}+\mathbf{v},\mathcal{D}^{(n)})=\ell^{*}+\varepsilon^{ \prime}.\]
Intuitively, these values represent the most extreme values of \(\mathbf{\theta}\) along dimension \(j\) that are still included in their respective Rashomon sets. Figure 7 provides a visual explanation of each of these quantities. Finally, recall that:
\[\mathbb{P}_{\,\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)=\begin{cases}1&\text{ if }\theta^{*}_{j}\leq k\\ 0&\text{ otherwise},\end{cases}\]
since \(\mathbf{\theta}^{*}\) is a deterministic quantity given infinite data.
Without loss of generality, we will consider two cases:
1. The case where \(\theta^{*}_{j}\leq k\),
2. The case where \(k<\theta^{*}_{j}\).
Figures 8 and 9 give an intuitive overview of the mechanics of this proof. As depicted in Figure 8, we will show that the proportion of the volume of the \(\varepsilon^{\prime}\)-Rashomon set with \(\phi_{j}\) below \(k\) is closer to 1 than that of the \(\varepsilon\)-Rashomon set under case 1. We will then show that the opposite holds under case 2, as depicted in Figure 9.
Figure 7: A visualization of the \(\varepsilon\) and \(\varepsilon^{\prime}\) Rashomon sets for linear regression with two input features. We highlight the extrema of each Rashomon set along axis 1 (\(a_{1}\) and \(b_{1}\) for the \(\varepsilon\) Rashomon set, \(a^{\prime}_{1}\) and \(b^{\prime}_{1}\) for the \(\varepsilon^{\prime}\) Rashomon set).
**Case 1: \(\theta_{j}^{*}\leq k\)**
Define two functions \(h:[a_{j},b_{j}]\rightarrow[0,1]\) and \(h^{\prime}:[a_{j}^{\prime},b_{j}^{\prime}]\rightarrow[0,1]\) as:
\[h(c) =\frac{c-a_{j}}{b_{j}-a_{j}}\] \[h^{\prime}(c) =\frac{c-a_{j}^{\prime}}{b_{j}^{\prime}-a_{j}^{\prime}}.\]
These functions map each value \(c\) in the original space of \(\theta_{j}\) to its _relative position_ along each axis of the \(\varepsilon\)-Rashomon set and the \(\varepsilon^{\prime}\)-Rashomon set respectively, with \(h(b_{j})=h^{\prime}(b_{j}^{\prime})=1\) and \(h(a_{j})=h^{\prime}(a_{j}^{\prime})=0\).
Define \(\delta\in[0,b_{j}-\theta_{j}^{*}]\) to be the value such that \(k=\theta_{j}^{*}+\delta\). Since in this case \(\theta_{j}^{*}\leq k\), it follows that \(\delta\geq 0\). As such, we can then quantify the proportion of the \(\varepsilon\)-Rashomon set along the j-th axis such that \(\theta_{j}^{*}\leq\theta_{j}\leq k\) as:
\[h(\theta_{j}^{*}+\delta)-h(\theta_{j}^{*}) =\frac{(\theta_{j}^{*}+\delta)-a_{j}}{b_{j}-a_{j}}-\frac{(\theta_ {j}^{*}-a_{j})}{(b_{j}-a_{j})}\] \[=\frac{\theta_{j}^{*}+\delta-a_{j}-\theta_{j}^{*}+a_{j}}{b_{j}-a_ {j}}\] \[=\frac{\delta}{b_{j}-a_{j}}\]
Similarly, we can quantify the proportion of the \(\varepsilon^{\prime}\)-Rashomon set along the \(j\)-th axis with \(\theta_{j}\) between \(k\) and \(\theta_{j}^{*}\) as:
\[h^{\prime}(\delta+\theta_{j}^{*})-h^{\prime}(\theta_{j}^{*}) =\frac{\theta_{j}^{*}+\delta-a_{j}^{\prime}-\theta_{j}^{*}+a_{j}^ {\prime}}{b_{j}^{\prime}-a_{j}^{\prime}}\] \[=\frac{\delta}{b_{j}^{\prime}-a_{j}^{\prime}}.\]
Recalling that, by definition, \(a_{j}<a_{j}^{\prime}<b_{j}^{\prime}<b_{j}\), as well as the fact that \(\delta\geq 0\) we can see that:
\[b_{j}-a_{j}>b_{j}^{\prime}-a_{j}^{\prime} \iff\frac{1}{b_{j}-a_{j}}<\frac{1}{b_{j}^{\prime}-a_{j}^{\prime}}\] \[\iff\frac{\delta}{b_{j}-a_{j}}\leq\frac{\delta}{b_{j}^{\prime}-a_ {j}^{\prime}}\] \[\iff h(\theta_{j}^{*}+\delta)-h(\theta_{j}^{*})\leq h^{\prime}(\theta_ {j}^{*}+\delta)-h^{\prime}(\theta_{j}^{*})\] \[\iff h(k)-h(\theta_{j}^{*})\leq h^{\prime}(k)-h^{\prime}(\theta_{j}^{*}).\]
Figure 8: A simple illustration of the key idea in case 1 of the proof of Proposition 1. For two concentric ellipsoids of the same shape, the proportion of each ellipsoid’s volume falling below some point greater than the center along axis \(j\) is greater for the smaller ellipsoid than for the larger ellipsoid.
That is, the proportion of the \(\varepsilon\)-Rashomon set along the \(j\)-th axis with \(\theta_{j}\) between \(k\) and \(\theta_{j}^{*}\) is _less than or equal to_ the proportion of the \(\varepsilon^{\prime}\)-Rashomon set along the \(j\)-th axis with \(\theta_{j}\) between \(k\) and \(\theta_{j}^{*}\). By Lemma 2, recall that the \(\varepsilon\)-Rashomon set and the \(\varepsilon^{\prime}\)-Rashomon set are concentric (centered at \(\theta^{*}\)) and similar (with shape defined by \(X^{T}X\)). Let \(\nu\) denote the volume function for some subsection of a hyper-ellipsoid. We then have
\[h(k)-h(\theta_{j}^{*})\leq h^{\prime}(k)-h^{\prime}(\theta_{j}^{*})\] \[\iff\frac{\nu(\{\boldsymbol{\theta}\in\mathcal{R}^{\varepsilon}:\theta_{j}^{*}\leq\theta_{j}\leq k\})}{\nu(\mathcal{R}^{\varepsilon})}\leq\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in\mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{*}\leq\theta_{j}^{\prime}\leq k\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\] \[\iff\frac{1}{2}+\frac{\nu(\{\boldsymbol{\theta}\in\mathcal{R}^{\varepsilon}:\theta_{j}^{*}\leq\theta_{j}\leq k\})}{\nu(\mathcal{R}^{\varepsilon})}\leq\frac{1}{2}+\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in\mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{*}\leq\theta_{j}^{\prime}\leq k\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\] \[\iff\frac{\nu(\{\boldsymbol{\theta}\in\mathcal{R}^{\varepsilon}:\theta_{j}\leq\theta_{j}^{*}\})}{\nu(\mathcal{R}^{\varepsilon})}+\frac{\nu(\{\boldsymbol{\theta}\in\mathcal{R}^{\varepsilon}:\theta_{j}^{*}\leq\theta_{j}\leq k\})}{\nu(\mathcal{R}^{\varepsilon})}\] \[\qquad\leq\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in\mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{\prime}\leq\theta_{j}^{*}\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}+\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in\mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{*}\leq\theta_{j}^{\prime}\leq k\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}\] \[\iff\frac{\nu(\{\boldsymbol{\theta}\in\mathcal{R}^{\varepsilon}:\theta_{j}\leq k\})}{\nu(\mathcal{R}^{\varepsilon})}\leq\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in\mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{\prime}\leq k\})}{\nu(\mathcal{R}^{\varepsilon^{\prime}})}.\]
Recalling that, by definition, \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}(\mathcal{F}, \varepsilon^{\prime})\leq k)=\frac{\nu(\{\boldsymbol{\theta}^{\prime}\in \mathcal{R}^{\varepsilon^{\prime}}:\theta_{j}^{*}\leq k\})}{\nu(\mathcal{R}^{ \varepsilon^{\prime}})}\), it follows that:
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon)\leq k)\leq\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{ P}_{n}}(RID_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)\] \[\iff 1-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon)\leq k)\geq 1-\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(RID_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)\] \[\iff |1-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon)\leq k)|\geq|1-\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(RID_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)|.\]
Recalling that \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}(\{g^{*}\},0)\leq k)=1\), since \(k\geq\theta_{j}^{*}\), the above gives:
\[|1- \mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon)\leq k)|\geq|1-\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(RID_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)|\] \[\iff |\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}(\{g^{*}\}, \varepsilon)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon)\leq k)|\] \[\qquad\geq|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{ j}(\{g^{*}\},\varepsilon)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(RID_{j}( \mathcal{F},\varepsilon^{\prime})\leq k)|\]
for all \(\theta_{j}^{*}\leq k\).
Figure 9: A simple illustration of the key idea in case 2 of the proof of Proposition 1. For two concentric ellipsoids of the same shape, the proportion of each ellipsoid’s volume falling below some point less than the center along axis \(j\) is smaller for the smaller ellipsoid than for the larger ellipsoid.
**Case 2: \(k<\theta_{j}^{*}\)**
Let \(h\) and \(h^{\prime}\) be defined as in Case 1. Define \(\delta\in[a_{j}-\theta_{j}^{*},0]\) to be the quantity such that \(k=\theta_{j}^{*}+\delta\). In this case, \(k<\theta_{j}^{*}\), so it follows that \(\delta<0\). Repeating the derivation from Case 1, we then have:
\[b_{j}-a_{j}>b_{j}^{\prime}-a_{j}^{\prime} \iff\frac{1}{b_{j}-a_{j}}<\frac{1}{b_{j}^{\prime}-a_{j}^{\prime}}\] \[\iff\frac{\delta}{b_{j}-a_{j}}>\frac{\delta}{b_{j}^{\prime}-a_{j }^{\prime}}\] \[\iff h(\theta_{j}^{*}+\delta)-h(\theta_{j}^{*})>h^{\prime}(\theta_ {j}^{*}+\delta)-h^{\prime}(\theta_{j}^{*})\] \[\iff h(k)-h(\theta_{j}^{*})>h^{\prime}(k)-h^{\prime}(\theta_{j}^{ *}).\]
That is, the proportion of the \(\varepsilon\)-Rashomon set along the \(j\)-th axis with \(\theta_{j}\) between \(k\) and \(\theta_{j}^{*}\) is _greater than_ the proportion of the \(\varepsilon^{\prime}\)-Rashomon set along the \(j\)-th axis with \(\theta_{j}\) between \(k\) and \(\theta_{j}^{*}\). By similar reasoning as in Case 1, it follows that:
\[\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\mathcal{ F},\varepsilon)\leq k)>\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}( \textit{RID}_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)\]
\[\iff|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}( \mathcal{F},\varepsilon)\leq k)-0|>|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P }_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)-0|\]
Recalling that \(\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*} \},\varepsilon)\leq k)=0\), since \(k<\theta_{j}^{*}\), the above gives:
\[|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}( \mathcal{F},\varepsilon)\leq k)-0|>|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{ P}_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)-0|\]
\[\iff|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}( \mathcal{F},\varepsilon)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_ {n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)|\]
\[>|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}( \mathcal{F},\varepsilon^{\prime})\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim \mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)|\]
for all \(a_{j}\leq k<\theta_{j}^{*}\). As such, for any \(k\), we have that:
\[\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon)\leq k)\right|\] \[\geq\left|\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\{g^{*}\},0)\leq k)-\mathbb{P}_{\mathcal{D}^{(n)}\sim\mathcal{P}_{n}}(\textit{RID}_{j}(\mathcal{F},\varepsilon^{\prime})\leq k)\right|,\]
showing that \(\varepsilon>\varepsilon^{\prime}\) is a _sufficient_ condition for the above. Since _RID_ is a function of only \(\varepsilon\), varying \(\varepsilon\) is the only way to vary _RID_, making \(\varepsilon>\varepsilon^{\prime}\) a _necessary_ condition for the above, yielding that \(r_{j}(\varepsilon)>r_{j}(\varepsilon^{\prime})\iff\varepsilon>\varepsilon^{\prime}\) and \(r_{j}\) is monotonically increasing.
Let \(m\) be defined as in Lemma 1, and let \(\gamma\) be some value such that \(m(\varepsilon)\leq\gamma\). Define the function \(d:=r_{j}\circ m^{-1}\) (note that \(m^{-1},\) the inverse of \(m,\) is guaranteed to exist and be strictly increasing because \(m\) is strictly increasing). The function \(d\) is monotonically increasing as the composition of two monotonically increasing functions, and:
\[m(\varepsilon)\leq\gamma\] \[\iff \varepsilon\leq m^{-1}(\gamma)\] \[\iff r_{j}(\varepsilon)\leq d(\gamma)\]
as required.
Further, Lemma 1 states that \(m(0)=0\) if \(g^{*}\in\mathcal{F}\). Note also that the Rashomon set with \(\varepsilon=0\) contains only \(g^{*}\), and as such \(r_{j}(0)=d(m^{-1}(0))=0\), meaning \(d(0)=0\).
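For intuition, the endpoints \(a_{j}\) and \(b_{j}\) used in this proof have a closed form for linear regression: by Lemma 2 the \(\varepsilon\)-Rashomon set is the ellipsoid \((\theta-\theta^{*})^{T}X^{T}X(\theta-\theta^{*})\leq\varepsilon\), whose extent along coordinate \(j\) is \(\theta_{j}^{*}\pm\sqrt{\varepsilon\,[(X^{T}X)^{-1}]_{jj}}\). The sketch below is an illustration with synthetic data, where \(\varepsilon\) is an absolute slack on the unregularized squared error; it computes these endpoints and shows that the interval widens monotonically with \(\varepsilon\).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

theta_star, *_ = np.linalg.lstsq(X, y, rcond=None)   # optimal coefficient vector
A_inv = np.linalg.inv(X.T @ X)

def coef_range(j, eps):
    """Extremes (a_j, b_j) of theta_j over the eps-Rashomon set for squared error."""
    half_width = np.sqrt(eps * A_inv[j, j])
    return theta_star[j] - half_width, theta_star[j] + half_width

for eps in (0.5, 1.0, 2.0):
    print(eps, coef_range(0, eps))                    # the interval around theta*_0 grows with eps
```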
**Proposition 2**.: _Assume the DGP is a generalized additive model (GAM). Then, Assumption 1 is guaranteed to hold for the function class of GAM's where our variable importance metric is the coefficient on each bin._
Proof.: Recall from Proposition 1 that Assumption 1 holds for the class of linear regression models with the model reliance metric \(\phi_{j}=\theta_{j}\). A generalized additive model (GAM) [22] over \(p\) variables is generally represented as:
\[g(\mathbb{E}[Y])=\omega+f_{1}(x_{1})+\ldots+f_{p}(x_{p}),\]
where \(g\) is some link function, \(\omega\) is a bias term, and \(f_{1},\ldots,f_{p}\) denote the shape functions associated with each of the variables. In practice, each shape function \(f_{j}\) generally takes the form of a linear function over binned variables [28]:
\[f_{j}(x_{i})=\sum_{j^{\prime}=0}^{\beta_{j}-1}\theta_{j^{\prime}}\mathbb{1}[b_{j ^{\prime}}\leq x_{ij}\leq b_{j^{\prime}+1}],\]
where \(\beta_{j}\) denotes the number of possible bins associated with variable \(X_{j}\), \(b_{j^{\prime}}\) denotes the \(j^{\prime}\)-th cutoff point associated with \(X_{j}\), and \(\theta_{j^{\prime}}\) denotes the weight associated with the \(j^{\prime}\)-th bin on variable \(X_{j}\). With the above shape function, a GAM is a linear regression over a binned dataset; as such, for the variable importance metric \(\phi_{j^{\prime}}=\theta_{j^{\prime}}\) on the complete, binned dataset, Assumption 1 holds by the same reasoning as Proposition 1.
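To see concretely that such a GAM is a linear regression on a binned design matrix, the sketch below expands a single feature into indicator columns and fits per-bin weights by least squares; the bin edges and data are made up for illustration, and each fitted coefficient plays the role of a \(\theta_{j^{\prime}}\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=500)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=500)

edges = np.linspace(0.0, 1.0, 11)                    # 10 bins for this feature
bins = np.clip(np.digitize(x, edges) - 1, 0, 9)      # bin index of each sample
design = np.eye(10)[bins]                            # indicator (one-hot) column per bin

theta, *_ = np.linalg.lstsq(design, y, rcond=None)   # per-bin weights = shape-function heights
print(theta)                                         # these are the bin coefficients in Prop. 2
```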
## Additional Experiments
### Recovering MR without Bootstrapping Baseline Methods
In this section, we evaluate the ability of each baseline method to recover the value of subtractive model reliance for the data generation process _without bootstrapping_. For this comparison, we use one training set to find the model reliance of each variable for each of the following algorithms: GOSDT, AdaBoost, Lasso, and Random Forest. Because _RID_ and VIC produce distributions/samples, we instead estimate the _median_ model reliance across _RID_ and VIC's model reliance distributions.
We then sample 500 test sets independently for each DGP and calculate the model reliance for each test set using the DGP as if it were a predictive model (that is, if the DGP were \(Y=X+\varepsilon\) for some Gaussian noise \(\varepsilon\), our predictive model would simply be \(f(X)=X\)). Finally, we compute the mean absolute error between the test model reliance values for the DGP and the train model reliance values for each algorithm.
Figure 10 shows the results of this experiment. As Figure 10 illustrates, _RID_ produces more accurate point estimates than baseline methods even though this is not the goal of _RID_ - the goal of _RID_ is to produce _the entire distribution_ of model reliance across good models over bootstrap datasets, not a single point estimate.
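For reference, subtractive model reliance for any fitted predictor can be computed by comparing the loss on a copy of the data with feature \(j\) permuted against the loss on the intact data. The generic sketch below uses squared error and a single random permutation, both of which are simplifying assumptions rather than the exact protocol used in these experiments.

```python
import numpy as np

def subtractive_model_reliance(predict, X, y, j, seed=0):
    """phi_sub = loss with feature j permuted minus loss on the intact data."""
    rng = np.random.default_rng(seed)
    base_loss = np.mean((y - predict(X)) ** 2)
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])     # break the association between X_j and y
    perm_loss = np.mean((y - predict(X_perm)) ** 2)
    return perm_loss - base_loss
```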
### Width of Box and Whisker Ranges
When evaluating whether the box and whisker range (BWR) for each method captures the MR value for the DGP across test sets, a natural question is whether \(RID\) outperforms other methods simply because it produces wider BWR's. Figure 11 demonstrates the width of the BWR produced by each evaluated method across variables and datasets. As shown in Figure 11, _RID_ consistently produces BWR widths on par with baseline methods.
### The Performance of _RID_ is Stable Across Reasonable Values for \(\varepsilon\)
The parameter \(\varepsilon\) controls the maximum possible loss that a model in the Rashomon set can have. To investigate whether this choice of \(\varepsilon\) significantly alters the performance of _RID_, we repeat the coverage experiment from Section 4.2 of the main paper for three different values of \(\varepsilon\) for each dataset on VIC and _RID_ (the two methods affected by \(\varepsilon\)). In particular, we construct the BWR over 100 bootstrap iterations for _RID_ and over models for VIC for three different values of \(\varepsilon\) on each training dataset. These values are chosen as \(0.75\varepsilon^{*}\), \(\varepsilon^{*}\), and \(1.25\varepsilon^{*}\), where \(\varepsilon^{*}\) denotes the value of \(\varepsilon\) used in the experiments presented in the main paper. We then generate 500 test datasets for each DGP and evaluate the subtractive model reliance for the DGP on each variable; we then measure what proportion of these test model reliance values are contained in each BWR. We refer to this proportion as the "recovery percentage".
Figure 12 illustrates that _RID_ is almost entirely invariant to reasonable choices of \(\varepsilon\): the recovery proportion for _RID_ ranges from \(90.38\%\) to \(90.64\%\) on Chen's DGP, \(100\%\) to \(100\%\) on Monk 1, \(99.43\%\) to \(99.93\%\) on
Figure 10: Boxplot over variables of the mean absolute error over test sets between the MR value produced by each method without bootstrapping (except _RID_) and the model reliance of the DGP for 500 test sets.
Figure 11: Width of the box and whisker range produced by each baseline method by dataset and variable. Gray subplots represent DGPs for which such a variable does not exist. Friedman’s, Monk 1, and Monk 3 only have six variables.
Monk 3 DGP, and from \(87.23\%\) to \(88.8\%\) on Friedman's DGP. We find that VIC is somewhat more sensitive to choices of \(\varepsilon\): the recovery proportion for VIC ranges from \(83.44\%\) to \(89.62\%\) on Chen's DGP, \(100\%\) to \(100\%\) on Monk 1, \(75.30\%\) to \(79.17\%\) on Monk 3 DGP, and from \(60.53\%\) to \(75.57\%\) on Friedman's DGP.
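The recovery percentage itself is a simple quantity: the share of test-set model reliance values that land inside a method's box-and-whisker range. An illustrative helper is:

```python
import numpy as np

def recovery_percentage(test_mr_values, bwr_low, bwr_high):
    """Fraction of DGP model-reliance values across test sets inside [bwr_low, bwr_high]."""
    v = np.asarray(test_mr_values, dtype=float)
    return float(np.mean((v >= bwr_low) & (v <= bwr_high)))
```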
### Full Stability Results
In this section, we demonstrate each interval produced by MCR, the BWR of VIC, and the BWR of _RID_ over 50 datasets generated from each DGP. We construct _RID_ using 50 bootstraps from each of the 50 generated datasets. Figures 13, 14, 15, and 16 illustrate the 50 resulting intervals produced by each method for each non-extraneous variable on each DGP. If a method produces generalizable results, we would expect it to produce overlapping intervals across datasets drawn from the same DGP. As shown in Figures 13, 15, and 16, both MCR and the BWR for VIC produced completely non-overlapping intervals between datasets for at least one variable on each of Chen's DGP, Monk 3, and Friedman's DGP, which means their results are not generalizable. In contrast, **the BWR range for _RID never_ has zero overlap between the ranges produced for different datasets**. This highlights that _RID_ is more likely to generalize than existing Rashomon-based methods.
Figure 12: Box and whiskers plot over variables of the proportion test MR values for the DGP captured by the BWR range for _RID_ and VIC at different loss thresholds \(\varepsilon\). We find that the performance of _RID_ is invariant to reasonable changes in \(\varepsilon\).
### Timing Experiments
Finally, we perform an experiment studying how well the runtime of _RID_ scales with respect to the number of samples and the number of features in the input dataset using the HIV dataset [38]. The complete dataset used for the main paper consists of 14,742 samples measuring 100 features each. We compute _RID_ using 30 bootstrap iterations for each combination of the following sample and feature subset sizes: 14,742 samples, 7,371 samples, and 3,686 samples; 100 features, 50 features, and 25 features.
Note that, in our implementation of _RID_, any number of bootstrap datasets may be handled in parallel; as such, we report the mean runtime per bootstrap iteration in Table 1, as this quantity is independent of how many machines are in use. As shown in Table 1, _RID_ scales fairly well in the number of samples included, and
Figure 14: We generate 50 independent datasets from the Monk 1 DGP and calculate MCR, BWRs for VIC, and BWRs for RID. The above plot shows the interval for each dataset for each non-null variable in Monk 1 DGP. All red-colored intervals (there are none in this plot) do not overlap with at least one of the remaining 49 intervals.
Figure 13: We generate 50 independent datasets from Chen’s DGP and calculate MCR, BWRs for VIC, and BWRs for RID. The above plot shows the interval for each dataset for each non-null variable in Chen’s DGP. All red-colored intervals do not overlap with at least one of the remaining 49 intervals.
somewhat less well in the number of features. This is because the number of possible decision trees grows rapidly with the number of input features, making finding the Rashomon set a more difficult problem and leading to larger Rashomon sets. Nonetheless, even for a large number of samples and features, _RID_ can be computed in a tractable amount of time: with 100 features and 14,742 samples, we found an average time per bootstrap of about 52 minutes.
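Because bootstrap replicates are independent, the per-replicate times in Table 1 translate directly into wall-clock time once spread across workers. A minimal parallel driver might look like the sketch below, where `run_one_bootstrap` is a hypothetical placeholder for resampling the data and fitting the Rashomon set on one replicate.

```python
from concurrent.futures import ProcessPoolExecutor

def run_one_bootstrap(seed):
    """Placeholder: resample the data with this seed, fit the Rashomon set,
    and return the importance values for that replicate."""
    ...

def run_rid(num_bootstraps, max_workers):
    # Replicates are independent, so they can be farmed out to separate processes.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one_bootstrap, range(num_bootstraps)))

if __name__ == "__main__":
    results = run_rid(num_bootstraps=100, max_workers=8)
```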
Figure 16: We generate 50 independent datasets from Friedman's DGP and calculate MCR, BWRs for VIC, and BWRs for RID. The above plot shows the interval for each dataset for each non-null variable in Friedman's DGP. All red-colored intervals do not overlap with at least one of the remaining 49 intervals.
Figure 15: We generate 50 independent datasets from the Monk 3 DGP and calculate MCR, BWRs for VIC, and BWRs for RID. The above plot shows the interval for each dataset for each non-null variable in the Monk 3 DGP. All red-colored intervals do not overlap with at least one of the remaining 49 intervals.
## Appendix E Detailed Experimental Setup
In this work, we considered the following four simulation frameworks:
* Chen's [10]: \(Y=\mathbb{1}[-2\sin(X_{1})+\max(X_{2},0)+X_{3}+\exp(-X_{4})+\varepsilon\geq 2.048],\) where \(X_{1},\ldots,X_{10},\varepsilon\sim\mathcal{N}(0,1).\) Here, only \(X_{1},\ldots,X_{4}\) are relevant.
* Friedman's [18]: \(Y=\mathbb{1}[10\sin(\pi X_{1}X_{2})+20(X_{3}-0.5)^{2}+10X_{4}+5X_{5}+ \varepsilon\geq 15],\) where \(X_{1},\ldots,X_{6}\sim\mathcal{U}(0,1),\varepsilon\sim\mathcal{N}(0,1).\) Here, only \(X_{1},\ldots,X_{5}\) are relevant.
* Monk 1 [45]: \(Y=\max\left(\mathbb{1}[X_{1}=X_{2}],\mathbb{1}[X_{5}=1]\right),\) where the variables \(X_{1},\ldots,X_{6}\) have domains of 2, 3, or 4 unique integer values. Only \(X_{1},X_{2},X_{5}\) are important.
* Monk 3 [45]: \(Y=\max\left(\mathbb{1}[X_{5}=3\text{ and }X_{4}=1],\mathbb{1}[X_{5}\neq 4\text{ and }X_{2}\neq 3]\right)\) for the same covariates in Monk 1. Here, \(X_{2},X_{4},\) and \(X_{5}\) are relevant, and \(5\%\) label noise is added.
For our experiments in Sections 4.1 and 4.2 of the main paper, we trained and evaluated all models using the standard training set provided by [45] for Monk 1 and Monk 3. We generated 200 samples following the above process for Friedman's DGP, and 1000 samples following the above process for Chen's DGP. Table 2 summarizes the size of each dataset we considered. In all cases, we used random seed 0 for dataset generation, model training, and evaluation unless otherwise specified.
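For concreteness, the two continuous DGPs above can be simulated as in the sketch below; the formulas follow the definitions listed here, while the seeding scheme is an illustrative choice rather than the exact generation script.

```python
import numpy as np

def sample_chens(n=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 10))                      # only the first four features matter
    eps = rng.normal(size=n)
    signal = -2 * np.sin(X[:, 0]) + np.maximum(X[:, 1], 0) + X[:, 2] + np.exp(-X[:, 3]) + eps
    return X, (signal >= 2.048).astype(int)

def sample_friedmans(n=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, 6))                      # X_6 is extraneous
    eps = rng.normal(size=n)
    signal = (10 * np.sin(np.pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2
              + 10 * X[:, 3] + 5 * X[:, 4] + eps)
    return X, (signal >= 15).astype(int)
```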
We compared the rankings produced by _RID_ with the following baseline methods:
* Subtractive model reliance \(\phi^{\text{sub}}\) of a random forest (RF) [7] using scikit-learn's implementation [36] of RF
* Subtractive model reliance \(\phi^{\text{sub}}\) of an L1 regularized logistic regression model (Lasso) using scikit-learn's implementation [36] of Lasso
* Subtractive model reliance \(\phi^{\text{sub}}\) of boosted decision trees [17] using scikit-learn's implementation [36] of AdaBoost
* Subtractive model reliance \(\phi^{\text{sub}}\) of a generalized optimal sparse decision tree (GOSDT) [27] using the implementation from [52]
* Subtractive conditional model reliance (CMR) [16] - a metric designed to capture only the unique information of a variable - of RF using scikit-learn's implementation [36] of RF
* Subtractive conditional model reliance (CMR) [16] of Lasso using scikit-learn's implementation [36] of Lasso
* The impurity based model reliance metric for RF from [8] using scikit-learn's implementation [36] of RF
* The LOCO algorithm reliance [26] value for RF and for Lasso using scikit-learn's implementation [36] of both models
* The Pearson correlation between each feature and the outcome
* The Spearman correlation between each feature and the outcome
| Num Samples | 25 Variables | 50 Variables | 100 Variables |
| --- | --- | --- | --- |
| 3,686 | 19.3 (0.9) | 64.2 (6.2) | 164.0 (14.6) |
| 7,371 | 40.5 (2.5) | 177.7 (18.8) | 723.1 (106.4) |
| 14,742 | 92.9 (6.8) | 431.4 (39.9) | 3128.7 (281.9) |

Table 1: Average runtime in seconds per bootstrap for _RID_ as a function of the number of variables and number of samples included from the HIV dataset. The standard error about each average is reported in parentheses.
| DGP | Num Samples | Num Features | Num Extraneous Features |
| --- | --- | --- | --- |
| Chen's | 1,000 | 10 | 6 |
| Friedman's | 200 | 6 | 1 |
| HIV | 14,742 | 100 | Unknown |
| Monk 1 | 124 | 6 | 3 |
| Monk 3 | 124 | 6 | 3 |

Table 2: Overview of the size of each dataset considered (or generated from a DGP) in this paper.
* The mean of the partial dependency plot (PDP) [20] for each feature using scikit-learn's implementation [36]
* The SHAP value [30] for RF using scikit-learn's implementation [36] of RF
* The mean of variable importance clouds (VIC) [13] for the Rashomon set of sparse decision trees, computed using TreeFarms [52].
We used the default parameters in scikit-learn's implementation [36] of each baseline model. The parameters used for _RID_, VIC, and GOSDT for each dataset are summarized in Table 3. In all cases, we constructed each of _RID_, VIC, and GOSDT using the code from [52].
### Computational Resources
All experiments for this work were performed on an academic institution's cluster computer. We used up to 40 machines in parallel, selected from the specifications below:
* 2 Dell R610's with 2 E5540 Xeon Processors (16 cores)
* 10 Dell R730's with 2 Intel Xeon E5-2640 Processors (40 cores)
* 10 Dell R610's with 2 E5640 Xeon Processors (16 cores)
* 10 Dell R620's with 2 Xeon(R) CPU E5-2695 v2's (48 cores)
* 8 Dell R610's with 2 E5540 Xeon Processors (16 cores)
We did not use GPU acceleration for this work.
Quantifying variable importance is essential for answering important questions in fields such as genetics, public policy, and medicine. Current methods compute importance for a given model, but for a given dataset many models may explain the target outcome equally well. Unless all of those explanations are considered, different researchers may draw conflicting conclusions from the same data. Furthermore, even if all explanations for a given dataset are considered, the resulting insights may not generalize, because not every good explanation is stable under realistic data perturbations. We propose a new variable importance framework that quantifies variable importance over the set of all good models and is stable with respect to the data distribution. The framework is highly flexible and can be used with existing model classes and global variable importance
2306.17396 | Koopman operator learning using invertible neural networks | In Koopman operator theory, a finite-dimensional nonlinear system is
transformed into an infinite but linear system using a set of observable
functions. However, manually selecting observable functions that span the
invariant subspace of the Koopman operator based on prior knowledge is
inefficient and challenging, particularly when little or no information is
available about the underlying systems. Furthermore, current methodologies tend
to disregard the importance of the invertibility of observable functions, which
leads to inaccurate results. To address these challenges, we propose the
so-called FlowDMD, aka Flow-based Dynamic Mode Decomposition, that utilizes the
Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages
the intrinsically invertible characteristics of the CF-INN to learn the
invariant subspaces of the Koopman operator and accurately reconstruct state
variables. Numerical experiments demonstrate the superior performance of our
algorithm compared to state-of-the-art methodologies. | Yuhuang Meng, Jianguo Huang, Yue Qiu | 2023-06-30T04:26:46 | http://arxiv.org/abs/2306.17396v2 | # Physics-informed invertible neural network for the Koopman operator learning 1
###### Abstract
In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose the so-called FlowDMD, a Flow-based Dynamic Mode Decomposition that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.
keywords: Koopman operator, Generative models, Invertible neural networks
## 1 Introduction
Nonlinear dynamic systems are prevalent in both theory and engineering applications. Since the governing equations are generally unknown in many situations, it can be challenging to study such systems directly from first principles. Fortunately, data about the systems of interest are often available from experiments or observations, so one can instead seek to understand the behavior of a nonlinear system through data-driven approaches [1; 2; 3; 4; 5].
The Koopman operator [6], which embeds the nonlinear system of interest into an infinite dimensional linear space by observable functions has attracted lots of attention. The Koopman operator acts on the infinite dimensional Hilbert space and aims to capture the full representations of the nonlinear systems. Dynamic mode decomposition (DMD) calculates the spectral decomposition of the Koopman operator numerically by extracting dynamic information from the collected data. Concretely, DMD devises a procedure to extract the spectral information directly from a data sequence without an explicit formulation of the Koopman operator, which is efficient for handling high dimensional data [7]. Variants of DMD are proposed to address challenges in different scenarios [8; 9; 10; 11; 12; 13; 14; 15].
The selection of observable functions plays an essential role in the DMD algorithm. Exact DMD [8] exploits the identity mapping as the observables. This implies that one uses a linear system to approximate a nonlinear system with given data [16]. This would yield inaccurate or even completely mistaken outcomes. Furthermore, the short-term prediction of Exact DMD might be acceptable for some cases, but the long-term prediction is probably unreliable. Typically, prior knowledge is required to select the observable functions that span the invariant subspace of the Koopman operator. However, the invariant subspace is not simply available. In order to overcome the limitations of the Exact DMD algorithm and capture the full feature of the nonlinear system, several data-driven selection strategies for observable functions have been proposed. Extended DMD (EDMD) [17] lifts the state variables from the original space into a higher dimensional space using the dictionary functions. The accuracy and rate of convergence of EDMD depend on the choice of the dictionary functions. Therefore, EDMD needs as many dictionary functions as possible. This implies that the set of dictionary functions (nonlinear transformations) should be sufficiently complex, which results in enormous computational cost. Kernel based DMD (KDMD) [18] differs from EDMD in that it utilizes the kernel trick to exploit the implicit expression of dictionary functions, whereas EDMD uses the explicit expression of dictionary functions. Nonetheless, both EDMD and KDMD are prone to overfitting [19], which leads to large generalization error. How to efficiently choose the observable functions that span the invariant subspace of the Koopman operator
becomes a significant challenge.
In contrast to EDMD and KDMD, observable functions can be represented by neural networks. Dictionary learning [20] couples EDMD with a set of trainable dictionary functions, where the dictionary functions are represented by a fully connected neural network together with an untrainable component. Fixing part of the dictionary facilitates the reconstruction of the state variables; however, this setting implicitly assumes that the linear term lies in the invariant subspace of the Koopman operator. Yeung et al. [21] select low-dimensional dictionary functions more efficiently using deep neural networks.
Autoencoder (AE) neural networks have been widely applied to learn the optimal observable functions and reconstruction functions in Koopman embedding [19; 22; 23; 24; 25; 26]. Concretely, the invariant subspace of the Koopman operator and reconstruction functions are represented by the encoder and decoder network in AE, respectively. Lusch et al. [23] utilize neural networks to identify the Koopman eigenfunctions and introduced an auxiliary network to cope with the dynamic systems with continuous spectrum. Azencot et al. [24] propose the Consistent Koopman AE model that combines the forward-backward DMD method [27] with the AE model. This approach extracts the latent representation of high-dimensional non-linear data and eliminates the effect of noise in the data simultaneously. Pan and Duraisamy [25] parameterize the structure of the transition matrix in linear space and construct an AE model to learn the residual of the DMD. Li and Jiang [26] utilize deep learning and the Koopman operator to model the nonlinear multiscale dynamical problems, where coarse-scale data is used to learn the fine-scale information through a set of multiscale basis functions. Wang et al. [28] propose Koopman Neural Forecaster (KNF) combining AE with Koopman operator theory to predict the data with distributional shifts.
Representing the Koopman embedding by dictionary learning or AE networks has several drawbacks. Firstly, the reconstruction in dictionary learning partially fixes the dictionary functions, which leads to a low level of interpretability of the model. Secondly, the encoder and decoder in an AE model are trained simultaneously, but neither of them is invertible, cf. [29] for more details. Moreover, due to the structural noninvertibility of the encoder and decoder, a large amount of training data is typically required to obtain accurate representations, which makes the AE model prone to overfitting. Alford-Lago et al. [29] analyze the properties of both the encoder and decoder in AE and propose the deep learning dynamic mode decomposition (DLDMD). Bevanda et al. [30] construct a conjugate map between the nonlinear system and its Jacobian linearization, which is learned by a diffeomorphic neural network.
In this paper, we develop a novel architecture that incorporates physical knowledge to learn the Koopman embedding. Specifically, we apply coupling flow invertible neural networks (CF-INN) to learn the observable functions and reconstruction functions. The invertibility of the learned observable functions makes our method more flexible than dictionary learning or AE learning. Our contributions are threefold:
1. We utilize a structurally invertible mapping to reconstruct state variables, which increases the interpretability of the neural network and alleviates the overfitting of AE.
2. The difficulty of learning the observable functions and reconstruction functions is reduced by exploiting the structural invertibility of the neural network. Therefore, the reconstruction error in the loss function can be eliminated.
3. As the physical information is embedded into the model, the number of parameters is reduced to achieve comparable accuracy with other methods. Additionally, the parameters to be optimized are reduced dramatically since the learned mappings and their inverse share the same parameters.
This paper is organized as follows. In Section 2, we briefly review the Koopman operator theory and DMD. In Section 3, we present the structure of CF-INN and introduce how to learn the invariant subspace of the Koopman operator and the reconstruction functions. In Section 4, several numerical experiments are performed to demonstrate the performance of our method, and we summarize our work in Section 5.
## 2 Preliminaries
### Koopman operator theory
Consider the nonlinear autonomous system in discrete form,
\[\mathbf{x}_{k+1}=f(\mathbf{x}_{k}),\quad\mathbf{x}_{k}\in\mathcal{M}\subset \mathbb{R}^{m}, \tag{1}\]
where \(\mathcal{M}\) represents the set of state space, \(f:\mathcal{M}\rightarrow\mathcal{M}\) is an unknown nonlinear map, and \(k\) is the time index.
**Definition 1** (Koopman operator [16]).: _For the nonlinear system (1), the Koopman operator \(\mathcal{K}\) is an infinite-dimensional linear operator that acts on all observable functions \(g:\mathcal{M}\rightarrow\mathbb{C}\) such that_
\[\mathcal{K}g(\mathbf{x})=g(f(\mathbf{x})).\]
_Here, \(g(x)\in\mathcal{H}\) and \(\mathcal{H}\) represents the infinite dimensional Hilbert space._
Through the observable functions, the nonlinear system (1) could be transformed into an infinite-dimensional linear system using the Koopman operator,
\[g(\mathbf{x}_{k+1})=g(f(\mathbf{x}_{k}))=\mathcal{K}g(\mathbf{x}_{k}). \tag{2}\]
Note that the Koopman operator is linear, _i.e._, \(\mathcal{K}(\alpha_{1}g_{1}(\mathbf{x})+\alpha_{2}g_{2}(\mathbf{x}))=\alpha_{1} g_{1}(f(\mathbf{x}))+\alpha_{2}g_{2}(f(\mathbf{x}))\), with \(g_{1}(\mathbf{x}),g_{2}(\mathbf{x})\in\mathcal{H}\) and \(\alpha_{1},\alpha_{2}\in\mathbb{R}\). As \(\mathcal{K}\) is an infinite-dimensional operator, we denote its eigenfunctions and eigenvalues by \(\{\lambda_{i},\varphi_{i}(x)\}_{i=0}^{\infty}\) such that \(\mathcal{K}\varphi_{i}(\mathbf{x})=\lambda_{i}\varphi_{i}(\mathbf{x})\), where \(\varphi_{i}(\mathbf{x}):\mathcal{M}\rightarrow\mathbb{R}\), \(\lambda_{i}\in\mathbb{C}\).
The Koopman eigenfunctions define a set of intrinsic measurement coordinates, then a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\) could be written in terms of the Koopman eigenfunctions,
\[\mathbf{g}(\mathbf{x}_{k})=\begin{bmatrix}g_{1}(\mathbf{x}_{k})\\ \vdots\\ g_{n}(\mathbf{x}_{k})\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{ k})\begin{bmatrix}<\varphi_{i},g_{1}>\\ \vdots\\ <\varphi_{i},g_{n}>\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k}) \mathbf{v}_{i}, \tag{3}\]
where \(\mathbf{v}_{i}\) refers to the \(i\)-th Koopman mode with respect to the Koopman eigenfunction \(\varphi_{i}(\mathbf{x})\). Combining (2) and (3), we have the decomposition of a vector-valued observable functions
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathcal{K}\mathbf{g}(\mathbf{x}_{k})=\mathcal{K }\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}=\sum_{i=1}^{ \infty}\lambda_{i}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}.\]
Furthermore, the decomposition could be rewritten as
\[\mathbf{g}(\mathbf{x}_{k})=\sum_{i=1}^{\infty}\lambda_{i}^{k}\varphi_{i}( \mathbf{x}_{0})\mathbf{v}_{i}.\]
In practice, we need a finite-dimensional representation of the infinite-dimensional Koopman operator. Denote the \(n\)-dimensional invariant subspace of the Koopman operator \(\mathcal{K}\) by \(\mathcal{H}_{g}\), _i.e._, \(\forall g(\mathbf{x})\in\mathcal{H}_{g},\mathcal{K}g(\mathbf{x})\in\mathcal{H }_{g}\). Let \(\{g_{i}(\mathbf{x})\}_{i=1}^{n}\) be one set of basis of \(\mathcal{H}_{g}\), this induces a finite-dimensional linear operator \(\mathbf{K}\)[16], which projects the Koopman operator \(\mathcal{K}\) onto \(\mathcal{H}_{g}\), _i.e._, for the \(n\)-dimensional vector-valued observable functions \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), we have
\[\mathbf{g}(x_{k+1})=\begin{bmatrix}g_{1}(x_{k+1})\\ \vdots\\ g_{n}(x_{k+1})\end{bmatrix}=\begin{bmatrix}\mathcal{K}g_{1}(x_{k})\\ \vdots\\ \mathcal{K}g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\begin{bmatrix}g_{1}(x_{k})\\ \vdots\\ g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\mathbf{g}(x_{k}) \tag{4}\]
### Dynamic mode decomposition
DMD approximates the spectral decomposition of the Koopman operator numerically. Given the state variables \(\{\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{p}\}\) and a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), then we get the sequence \(\{\mathbf{g}(\mathbf{x}_{0}),\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}( \mathbf{x}_{p})\}\), where each \(\mathbf{g}(\mathbf{x}_{k})\in\mathbb{R}^{n}\) is the observable snapshot of the \(k\)-th time step. According to (4), we have
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathbf{K}\mathbf{g}(\mathbf{x}_{k}),\]
where \(\mathbf{K}\in\mathbb{R}^{n\times n}\) is the matrix form of the finite-dimensional operator. For the two data matrices, \(\mathbf{X}=[\mathbf{g}(\mathbf{x}_{0}),\cdots,\mathbf{g}(\mathbf{x}_{p-1})]\) and \(\mathbf{Y}=[\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}(\mathbf{x}_{p})]\), where \(\mathbf{X}\) and \(\mathbf{Y}\) are both in \(\mathbb{R}^{n\times p}\), which satisfies \(\mathbf{Y}=\mathbf{K}\mathbf{X}\). Therefore, \(\mathbf{K}\) can be represented by
\[\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger},\]
where \(\mathbf{X}^{\dagger}\) denotes the Moore-Penrose inverse of \(\mathbf{X}\).
The Exact DMD algorithm developed by Tu et al. [8] computes dominant eigen-pairs (eigenvalues and eigenvectors) of \(\mathbf{K}\) without the explicit formulation of \(\mathbf{K}\). In Algorithm 1, we present the DMD algorithm on the observable space, which is a general form of the Exact DMD algorithm. When using the identity mapping as the observable function, _i.e._, \(\mathbf{g}(\mathbf{x})=\mathbf{x}\), Algorithm 1 is identical to the Exact DMD algorithm.
```
1. Compute the (reduced) SVD of \(\mathbf{X}\), \(\mathbf{X}=\mathbf{U}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}\mathbf{V}_{ \mathbf{r}}^{*}\), where \(\mathbf{U}_{\mathbf{r}}\in\mathbb{C}^{n\times r}\), \(\mathbf{\Sigma}_{\mathbf{r}}\in\mathbb{R}^{r\times r}\), \(\mathbf{V}_{\mathbf{r}}\in\mathbb{C}^{p\times r}\).
2. Compute \(\tilde{\mathbf{K}}=\mathbf{U}_{\mathbf{r}}^{*}\mathbf{Y}\mathbf{V}_{\mathbf{r} }\mathbf{\Sigma}_{\mathbf{r}}^{-1}\).
3. Compute the eigen-pairs of \(\tilde{\mathbf{K}}\): \(\tilde{\mathbf{K}}\mathbf{W}=\mathbf{W}\mathbf{\Lambda}\).
4. Reconstruct the eigen-pairs of \(\mathbf{K}\), where eigenvalues of \(\mathbf{K}\) are diagonal entries of \(\Lambda\), the corresponding eigenvectors of \(\mathbf{K}\)(DMD modes) are columns of \(\mathbf{\Phi}=\mathbf{Y}\mathbf{V}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}^{ -1}\mathbf{W}\).
5. Approximate the observation data via DMD, \(\hat{\mathbf{g}}(\mathbf{x}_{k})=\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\), where \(\mathbf{b}=\mathbf{\Phi}^{\dagger}\mathbf{g}(\mathbf{x}_{0})\).
6. Reconstruct the state variables \(\hat{\mathbf{x}}_{k}=\mathbf{g}^{-1}(\hat{\mathbf{g}}(\mathbf{x}_{k}))= \mathbf{g}^{-1}\left(\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\right)\).
```
**Algorithm 1** DMD on observable space [16; 31]
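For concreteness, a minimal NumPy sketch of Algorithm 1 is given below; this is our illustration, not the authors' implementation, and it assumes the observable snapshot matrices \(\mathbf{X}\) and \(\mathbf{Y}\) have already been formed.

```python
import numpy as np

def dmd_observable_space(X, Y, r):
    """Sketch of Algorithm 1: DMD on snapshot matrices of observables.

    X, Y : (n, p) arrays with columns g(x_0),...,g(x_{p-1}) and g(x_1),...,g(x_p).
    r    : truncation rank for the reduced SVD.
    Returns DMD eigenvalues Lam, DMD modes Phi, and amplitudes b.
    """
    # Step 1: reduced SVD of X, truncated to rank r.
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T

    # Step 2: projected operator K_tilde = Ur^* Y Vr Sr^{-1}.
    K_tilde = Ur.conj().T @ Y @ Vr @ np.diag(1.0 / Sr)

    # Step 3: eigen-decomposition of K_tilde.
    Lam, W = np.linalg.eig(K_tilde)

    # Step 4: DMD modes Phi = Y Vr Sr^{-1} W.
    Phi = Y @ Vr @ np.diag(1.0 / Sr) @ W

    # Step 5: amplitudes b = Phi^dagger g(x_0).
    b = np.linalg.pinv(Phi) @ X[:, 0]
    return Lam, Phi, b

def dmd_predict(Lam, Phi, b, k):
    """Approximate g(x_k) = Phi Lam^k b (take the real part for real-valued data)."""
    return Phi @ (Lam**k * b)
```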
### State reconstruction
Koopman operator theory utilizes observable functions \(\mathbf{g}\) to transform the nonlinear system (1) into a linear system while preserving the nonlinearity. Evolving the nonlinear system (1) is computationally expensive or even impossible when \(f\) is
unknown, whereas evolving through the Koopman operator (2) offers a promising and computationally efficient approach.
Figure 1 illustrates the relation between the nonlinear evolution \(f\) and the Koopman operator evolution where the system evolves linearly in the observation space \(\mathcal{H}\). By computing the Koopman eigenvalues and modes, we can make predictions of the observable functions \(\mathbf{g}(\mathbf{x})\). We could reconstruct the state \(\mathbf{x}\) by the inverse of the observable functions \(\mathbf{g}^{-1}(\mathbf{x})\) provided that \(\mathbf{g}(\mathbf{x})\) is invertible. The invertibility of observable functions is essential to ensure the reconstruction accuracy and the interpretability of the outcomes.
The observable functions \(\mathbf{g}(\mathbf{x})\) are typically selected manually based on prior knowledge. Exact DMD takes the identity mapping, while EDMD utilizes a set of pre-defined functions such as polynomials, Fourier modes, radial basis functions, and so forth [17]. However, these methods can be inaccurate and inefficient for learning Koopman embeddings. Deep neural networks, as efficient global nonlinear approximators, can be applied to represent the observable function \(\mathbf{g}(\mathbf{x})\) and the reconstruction function \(\mathbf{g}^{-1}(\mathbf{x})\). Several studies have demonstrated that the encoder and decoder networks in AE correspond to \(\mathbf{g}(\mathbf{x})\) and \(\mathbf{g}^{-1}(\mathbf{x})\), respectively [19; 22; 23; 24; 25; 26].
In practical applications, it is not always guaranteed that \(\mathbf{g}(\mathbf{x})\) is invertible. When learning the Koopman embedding via AE, the invertibility of \(\mathbf{g}(\mathbf{x})\) is enforced through numerical constraints, _i.e._, the reconstruction error \(\|\mathbf{x}-\mathbf{g}^{-1}(\mathbf{g}(\mathbf{x}))\|_{2}^{2}\), which tends to result in overfitting and suboptimal performance [29]. Moreover, the reconstruction error is trained simultaneously with the prediction error and the linearity error [23], and the weights assigned to each loss term are hyperparameters that can be challenging to tune. In this paper, we propose a structurally invertible mapping learning framework, which eliminates the need for the reconstruction term in the loss function and yields more robust and accurate results. We present the details of our method in Section 3.
Figure 1: Koopman operator and inverse of observable functions
## 3 Learning Koopman embedding by invertible neural networks
In this section, we first briefly review the AE neural network and demonstrate the limitation of this class of neural networks in the Koopman embedding learning. Then, we introduce our method to overcome this limitation.
### Drawback of AE in the Koopman embedding learning
Most existing works use autoencoder (AE) neural networks as the backbone to learn the invariant subspace of the Koopman operator and reconstruct the state variables. AE, a frequently used unsupervised neural network architecture, consists of two parts, _i.e._, the encoder \(\mathcal{E}\) and the decoder \(\mathcal{D}\). AE learns the two mappings (functions) \(\mathcal{E}\) and \(\mathcal{D}\) by optimizing
\[\min_{\mathcal{E},\mathcal{D}}\mathbb{E}_{x\sim m(x)}[\text{loss}(x,\mathcal{ D}\circ\mathcal{E}(x))]. \tag{5}\]
Here \(m(x)\) denotes the distribution of the input data, \(\text{loss}(x,y)\) describes the difference between \(x\) and \(y\), and \(\mathbb{E}(\cdot)\) represents the expectation.
**Definition 2**.: _Let \(f_{1}:S\to S^{\prime}\) be an arbitrary mapping, and it is said to be invertible if there exists a mapping \(f_{2}:S^{\prime}\to S\) such that_
\[f_{1}\circ f_{2}=\mathcal{I},f_{2}\circ f_{1}=\mathcal{I},\]
_where \(\mathcal{I}\) is the identity mapping. Then, \(f_{2}\) is said to be the inverse mapping of \(f_{1}\)._
Let \(\mathcal{E}\) and \(\mathcal{D}\) be two mappings learned by AE such that \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\). However, the reverse composition \(\mathcal{E}\circ\mathcal{D}\) is not always a good approximation to the identity mapping; moreover, \(\mathcal{E}\) and \(\mathcal{D}\) are generally not invertible [29]. The main reason is that while AE strives to reach \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\), it omits the additional constraint \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\), which would require latent variable data to train. Unfortunately, the latent variables are not accessible, thus rendering it impossible for AE to satisfy \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\) and \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\) simultaneously.
AE learns an identity mapping \(\mathcal{I}\) from a training data set \(\mathcal{S}\), _i.e._, for any \(\mathbf{x}\in\mathcal{S},\mathcal{D}\circ\mathcal{E}(\mathbf{x})\approx\mathbf{x}\). For data out of the set \(\mathcal{S}\), the mapping learned by AE may perform badly. In other words, AE may have poor generalization capability. Next, we use a preliminary experiment to demonstrate this limitation. The details of this numerical example are given in Section 4.1. We use the structure of AE defined in [26] and randomly generate 120 trajectories to train the AE, and the results are depicted by Figure 2.
Figure 2 compares input data points outside the distribution of the training data with the corresponding reconstructed data points using the trained AE model. Figure 2(a) shows the density distribution of the training data set \(\mathcal{S}\), which provides a rough illustration of the data space \(\mathcal{S}\). For the reconstruction test of AE, we generate three types of data, _i.e._, sin-shaped scatters, S-shaped scatters, and scatters from the standard 2-d normal distribution. We plot the corresponding input points (blue) and reconstructed data points (red) of the AE. The results shown in the next three subfigures illustrate that AE can reconstruct the input data points near the training data set \(\mathcal{S}\) very well, but for the data points far away from \(\mathcal{S}\), AE performs badly. The same situation happens in learning the Koopman embedding. Specifically, in the training process of AE, one aims to find the Koopman invariant space by minimizing the error of the Koopman embedding learning and the reconstruction error. However, minimizing the error between latent variables and their corresponding reconstruction, denoted by \(\text{loss}(\mathbf{x},\mathcal{E}\circ\mathcal{D}(\mathbf{x}))\), is intractable. This results in poor stability and generalization capability.
### Structure of CF-INN
We have shown that the mapping learned by AE performs poorly, which suggests that invertibility can greatly reduce computational complexity and yield better generalization capability. Next, we introduce an invertible neural network to overcome the drawback of AE. Let \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}):\mathbf{X}\rightarrow\mathbf{Y}\) denote the input-output mapping of the invertible neural network, where \(\boldsymbol{\theta}\) represents the parameters of the neural network. Let \(\mathbf{f}_{\boldsymbol{\theta}}\) be the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\), which shares the same parameters with \(\mathbf{g}_{\boldsymbol{\theta}}\). Then we can reconstruct \(x\) in the backward direction by \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{y}):\mathbf{Y}\rightarrow\mathbf{X}\). In generative tasks of machine learning, the forward generating direction is called the flow direction and the backward direction is called the normalizing direction. Next, we introduce the concept of coupling flows, which belong to the class of invertible neural networks.
Figure 2: Generalization capability test of AE. (a) the training data distribution. (b) the \(sin(x)\) test function. (c) S-shaped scatters test. (d) random scatters from 2-d standard normal distribution.
**Definition 3** (Coupling flow [32]).: _Let \(m\in\mathbb{N}\) and \(m\geq 2\), for a vector \(\mathbf{z}\in\mathbb{R}^{m}\) and \(2\leq q\leq m-1\), we define \(\mathbf{z}_{up}\) as the vector \((z_{1},\ldots,z_{q})^{\top}\in\mathbb{R}^{q}\) and \(\mathbf{z}_{low}\) as the vector \((z_{q+1},\ldots,z_{m})^{\top}\in\mathbb{R}^{m-q}\). A coupling flow (CF), denoted by \(h_{q,\tau}\), has the following form,_
\[h_{q,\tau}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau(\mathbf{z}_ {low},\sigma(\mathbf{z}_{up}))),\]
_where \(\sigma:\mathbb{R}^{q}\rightarrow\mathbb{R}^{l}\), and \(\tau(\cdot,\sigma(\mathbf{y})):\mathbb{R}^{m-q}\times\mathbb{R}^{l}\rightarrow \mathbb{R}^{m-q}\) is a bijection mapping for any \(\mathbf{y}\in\mathbb{R}^{q}\)._
A coupling flow defined in _Definition 3_ is invertible if and only if \(\tau\) is invertible, and its inverse is \(h_{q,\tau}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau^{-1}(\mathbf{z}_{low},\sigma(\mathbf{z}_{up})))\) [33]. The key point in making the CF invertible is the invertibility of \(\tau\). One of the most commonly used CFs is the affine coupling function (ACF) [34, 35, 36], where \(\tau\) is an invertible element-wise function.
**Definition 4** (Affine coupling function [33]).: _Define an affine coupling function by the mapping \(\Psi_{q,s,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},(\mathbf{z}_{ low}+t(\mathbf{z}_{up}))\odot s(\mathbf{z}_{up})), \tag{6}\]
_where \(\odot\) is the Hadamard product, \(s,t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) are two arbitrary vector-valued mappings._
Definition 4 defines the forward direction of computations, and the backward direction of computations is given by \(\Psi_{q,s,t}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low}\oslash s(\mathbf{z}_{up})-t(\mathbf{z}_{up}))\), where \(\oslash\) denotes the element-wise division of vectors. The mappings \(s\) and \(t\) in Definition 4 can be any nonlinear functions; neural networks such as fully connected neural networks (FNNs) are typically used to parameterize \(t\) and \(s\).
Let \(\Psi_{1},\ldots,\Psi_{L}\) be a sequence of \(L\) affine coupling functions and define \(\mathbf{g}_{\boldsymbol{\theta}}=\Psi_{L}\circ\Psi_{L-1}\circ\cdots\circ\Psi_{1}\), where \(\boldsymbol{\theta}\) represents the parameters of \(\{\Psi_{i}\}_{i=1}^{L}\). The resulting vector-valued function \(\mathbf{g}_{\boldsymbol{\theta}}\) is an invertible neural network and is called a coupling flow invertible neural network (CF-INN) in this paper. Moreover, for any \(\Psi_{i}\), the division index \(q\) of the input vector \(x\) is user-guided. In this paper, we set \(q=\lceil m/2\rceil\), where \(\lceil\cdot\rceil\) is the ceiling function. Furthermore, in order to mix the information sufficiently, we can flip the ACF by using the form \(\bar{\Psi}_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=((\mathbf{z}_{up}+t(\mathbf{z}_{low}))\odot s(\mathbf{z}_{low}),\mathbf{z}_{low})\). We plot the computation process of an ACF and a flipped ACF in Figure 3, where the left diagram shows the forward direction and the right diagram shows the backward direction. The red area is an ACF block and consists of a standard ACF and a flipped ACF, which is a CF-INN of depth 2.
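As a concrete illustration (not the authors' code), a single ACF of Definition 4 can be written in PyTorch as follows; the hidden width, the \(\tanh\) activations, and the use of \(\exp\) to keep the scaling \(s\) strictly positive are our own illustrative choices.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One ACF: forward (z_up, (z_low + t(z_up)) * s(z_up));
    backward (z_up, z_low / s(z_up) - t(z_up)).  Sizes are illustrative."""

    def __init__(self, dim, hidden=32):
        super().__init__()
        self.q = (dim + 1) // 2                      # split index q = ceil(dim / 2), as in the text
        out = dim - self.q
        self.t = nn.Sequential(nn.Linear(self.q, hidden), nn.Tanh(), nn.Linear(hidden, out))
        # we parameterize log s so that s > 0 and the division in the inverse is well defined
        self.log_s = nn.Sequential(nn.Linear(self.q, hidden), nn.Tanh(), nn.Linear(hidden, out))

    def forward(self, z):
        z_up, z_low = z[..., :self.q], z[..., self.q:]
        s = torch.exp(self.log_s(z_up))
        return torch.cat([z_up, (z_low + self.t(z_up)) * s], dim=-1)

    def inverse(self, z):
        z_up, z_low = z[..., :self.q], z[..., self.q:]
        s = torch.exp(self.log_s(z_up))
        return torch.cat([z_up, z_low / s - self.t(z_up)], dim=-1)

# A flipped ACF transforms the upper part conditioned on the lower part instead;
# stacking an ACF and a flipped ACF gives the depth-2 ACF block shown in Figure 3.
```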
When the depth \(L\) of a CF-INN is large, its training becomes challenging. The main cause is that the divisor term \(s\) can be very small in \(\Psi\) in the backward direction computations. This can be alleviated by replacing the affine coupling functions with residual coupling functions. A similar idea is used in the residual connections of ResNet.
**Definition 5** (Residual coupling functions [37]).: _Define a residual coupling function (RCF) by the mapping \(\Psi_{q,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low }+t(\mathbf{z}_{up})),\]
_where \(t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) is a nonlinear mapping._
RCFs are simplifications of ACFs, and when we connect an RCF with a flipped RCF, we obtain an RCF block, which is a simplified version of the ACF block in Figure 3.
Figure 3: The illustration of the forward and backward direction in an ACF block.
### Loss function for Koopman embedding
In this paper, we use the CF-INN to learn the Koopman invariant subspace and the reconstructions simultaneously, where the forward direction of the CF-INN is represented by \(\mathbf{g}_{\boldsymbol{\theta}}\) and its backward direction is represented by \(\mathbf{f}_{\boldsymbol{\theta}}\). The observable functions evolve linearly in the Koopman invariant subspace. Hence, the linearity-constrained loss function that represents the DMD approximation error is given by
\[\mathcal{L}_{\text{linear}}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{0})||^{2}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})||^{2},\]
where \(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})=\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}_{0})\) is the DMD approximation of the observable functions \(\{\mathbf{g}(\mathbf{x}_{t})\}_{t=1}^{T}\) obtained by Algorithm 1. To reconstruct the states \(\mathbf{x}_{t}\), the inverse mapping of \(\mathbf{g}\), _i.e._, \(\mathbf{f}_{\boldsymbol{\theta}}\), corresponds to the backward direction of the CF-INN. \(\mathbf{f}_{\boldsymbol{\theta}}\) shares the same network structure and parameters with \(\mathbf{g}_{\boldsymbol{\theta}}\). Therefore, the computational cost is greatly reduced compared with AE, where another neural network is required to parameterize the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\). The reconstruction loss due to the DMD approximation error is given by
\[\mathcal{L}_{\text{rec}}=\sum_{t=1}^{T}||\mathbf{x}_{t}-\mathbf{f}_{ \boldsymbol{\theta}}(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})) ||^{2}.\]
The optimal parameters \(\boldsymbol{\theta}^{*}\) are given by
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}} \mathcal{L}_{\text{linear}}+\alpha\mathcal{L}_{\text{rec}},\]
where \(\alpha\) is a user-defined hyperparameter.
Compared with other Koopman embedding learning frameworks, the loss function in our approach is much simpler. We summarize our CF-INN framework for Koopman embedding learning in Figure 4. Our method is called FlowDMD since this framework uses a flow-model-based Dynamic Mode Decomposition to compute the finite-dimensional Koopman operator approximation and reconstruct system states.
Figure 4: The general framework of FlowDMD.
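For illustration only (this is not the authors' implementation), the two loss terms above can be assembled as follows once the DMD approximations \(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\) have been computed; how gradients are propagated through the DMD step is a design choice that is not shown here.

```python
import torch

def flowdmd_loss(g, g_hat, x, f_inverse, alpha=1.0):
    """Sketch of the FlowDMD objective L_linear + alpha * L_rec.

    g         : (T, n) tensor of observables g_theta(x_t) from the forward pass
    g_hat     : (T, n) tensor of DMD approximations Phi Lam^t Phi^+ g_theta(x_0)
    x         : (T, m) tensor of true states
    f_inverse : callable implementing the backward pass of the CF-INN
                (shares its parameters with g_theta)
    """
    loss_linear = ((g - g_hat) ** 2).sum(dim=-1).sum()           # sum_t ||g_theta(x_t) - g_hat_t||^2
    loss_rec = ((x - f_inverse(g_hat)) ** 2).sum(dim=-1).sum()   # sum_t ||x_t - f_theta(g_hat_t)||^2
    return loss_linear + alpha * loss_rec
```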
## 4 Numerical experiments
In this section, we use three numerical examples to demonstrate the efficiency of our method for learning the Koopman embedding and compare its performance with LIR-DMD [26] and Exact DMD. We use the Python library _FEniCS_ [38] to compute the numerical solutions of PDEs, the Python library _PyDMD_ [39] to complete the calculations of Exact DMD, and the Python library _PyTorch_ [40] to train the neural networks. Besides, the Xavier normal initialization scheme [41] is utilized to initialize the weights of all neural networks, while the biases of all nodes are set to zero. All the networks are trained by the Adam optimizer [42] with an initial learning rate of \(10^{-3}\). In order to find the optimal parameters of the network, we use _ReduceLROnPlateau_ [43] to adjust the learning rate during the training process for all numerical examples. For fairness, all the methods share the same training strategies. Denote \(x\) as the "true" value of the states and \(\hat{x}\) as its reconstruction. We use three metrics to evaluate the different methods, i.e., the relative \(L_{2}\) error
\[\text{RL2E}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}}{||x_{t}||_{2}},\]
the mean squared error (MSE),
\[\text{MSE}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}^{2}}{m},\]
and the total relative \(L_{2}\) error
\[\text{TRL2E}=\sqrt{\frac{\sum_{t=1}^{T}||\hat{x}_{t}-x_{t}||_{2}^{2}}{\sum_{i= 1}^{T}||x_{t}||_{2}^{2}}}.\]
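In code, these three metrics amount to the following NumPy helpers (ours, with illustrative array shapes).

```python
import numpy as np

def relative_l2_error(x_hat, x):
    """RL2E(t): per-snapshot relative L2 error; x_hat, x have shape (m,)."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def mean_squared_error(x_hat, x):
    """MSE(t) = ||x_hat - x||_2^2 / m."""
    return np.sum((x_hat - x) ** 2) / x.size

def total_relative_l2_error(X_hat, X):
    """TRL2E over a trajectory; X_hat, X have shape (T, m)."""
    num = np.sum(np.linalg.norm(X_hat - X, axis=1) ** 2)
    den = np.sum(np.linalg.norm(X, axis=1) ** 2)
    return np.sqrt(num / den)
```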
### Fixed-point attractor
The fixed-point attractor example [23] is given by
\[\begin{cases}x_{t+1,1}=\lambda x_{t,1},\\ x_{t+1,2}=\mu x_{t,2}+(\lambda^{2}-\mu)x_{t,1}^{2}.\end{cases}\]
The initial state is chosen randomly by \(x_{0,1}\sim U(0.2,4.2)\), \(x_{0,2}\sim U(0.2,4.2)\) and \(\lambda=0.9,\mu=0.5\). We divide the data set into three parts where the ratio of training, validation, and test is \(60\%,20\%\), and \(20\%\), respectively. The number of neurons of each layer for the encoder network in LIR-DMD is \(2,10,10,3\) and the number of neurons of decoder network is \(3,10,10,2\). This results in \(345\) trainable parameters for
LIR-DMD. We use three ACFs for this problem. The mappings \(t\) and \(s\) are parameterized by FNN with three layers and the width of each layer is 1,8,2, respectively. This results in 102 trainable parameters in total.
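A minimal data-generation sketch for this example is given below; the trajectory length and the number of trajectories are illustrative choices of ours, as they are not fixed explicitly in this paragraph.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, T = 0.9, 0.5, 60            # T time steps per trajectory (illustrative)

def simulate(x0, steps=T):
    """Iterate the fixed-point attractor map of Section 4.1."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x1, x2 = traj[-1]
        traj.append(np.array([lam * x1, mu * x2 + (lam**2 - mu) * x1**2]))
    return np.stack(traj)             # shape (steps + 1, 2)

# random trajectories with x0 ~ U(0.2, 4.2)^2, as described in the text
data = np.stack([simulate(rng.uniform(0.2, 4.2, size=2)) for _ in range(120)])
```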
We randomly choose one example from the test set and plot its results in Figure 5. Both Figure 5(a) and Figure 5(b) show that the reconstructions computed by LIR-DMD and FlowDMD are better than that of Exact DMD, and that the difference between the trajectories of LIR-DMD and FlowDMD is very small. Figure 5(c) and Figure 5(d) illustrate that the reconstruction error of FlowDMD is the smallest. In the first 30 time steps, LIR-DMD has a similar error to FlowDMD. The error of FlowDMD increases much more slowly than that of LIR-DMD for the following 30 time steps. We conclude that FlowDMD has better generalization ability than LIR-DMD.
We test FlowDMD, LIR-DMD and Exact DMD using 40 randomly generated examples and the results are depicted by Figure 6. We use the total relative \(L_{2}\) error to evaluate the reconstruction results of trajectories. For FlowDMD, the reconstruction error is the lowest among almost all of the test examples, and the average total relative \(L_{2}\) error is only 0.3%. Compared with LIR-DMD, FlowDMD has better generalization ability and learning ability of the Koopman invariant subspace.
Figure 5: Comparison of three methods for Example 4.1. The total relative \(L_{2}\) error of the Exact DMD, LIR-DMD, and FlowDMD are 0.2448, 0.0111 and 0.0018, respectively.
### Burgers' equation
The 1-D Burgers' equation [44] is given by
\[\begin{cases}\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\frac{0.01}{\pi}\frac{\partial^{2}u}{\partial x^{2}},\quad x\in(-1,1),\,t\in(0,1],\\ u(1,t)=u(-1,t)=0,\\ u(x,0)=-\xi\sin(\pi x),\end{cases} \tag{7}\]
where \(\xi\) is a random variable that follows the uniform distribution \(U(0.2,1.2)\). We use the finite element method with 30 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of 0.01 for the temporal discretization. We generate 100 samples of \(\xi\) for the initial state and compute the corresponding solutions. The examples are then divided into three parts, with proportions 60% for training, 20% for validation, and 20% for test. We test the performance of Exact DMD, LIR-DMD, and FlowDMD. The rank of Exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity. The structure of the encoder network for LIR-DMD is \([30,40,50,40]\) and the decoder network is \([40,50,40,30]\), where the numbers in the brackets represent the width of each layer. For FlowDMD, we use RCFs to replace ACFs, which results in an invertible neural network of depth 3 with one RCF block and one RCF. In each RCF, the width of each layer in the FNN used to parameterize the mapping \(t\) is 15, 40, 15, which results in 7530 parameters in FlowDMD, whereas LIR-DMD has 10650 parameters.
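To make the data-generation step concrete, the following is a simplified explicit finite-difference sketch of problem (7); it is only an illustration and differs from the paper's setup, which uses a finite element discretization in _FEniCS_ with implicit Euler time stepping.

```python
import numpy as np

nu = 0.01 / np.pi
x = np.linspace(-1.0, 1.0, 30)       # 30 spatial grid points, as in the text
dx, dt = x[1] - x[0], 0.01           # time step 0.01, as in the text

def burgers_trajectory(xi, n_steps=100):
    """Explicit finite-difference sketch of problem (7) on t in (0, 1]."""
    u = -xi * np.sin(np.pi * x)
    snaps = [u.copy()]
    for _ in range(n_steps):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central difference
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (-u * ux + nu * uxx)
        u[0] = u[-1] = 0.0                                       # Dirichlet boundary conditions
        snaps.append(u.copy())
    return np.stack(snaps)

sols = [burgers_trajectory(xi) for xi in np.random.uniform(0.2, 1.2, size=100)]
```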
Figure 7 shows that FlowDMD has the smallest absolute reconstruction error and total relative reconstruction error. Figure 8(a) and Figure 8(b) show that the reconstruction errors of Exact DMD and LIR-DMD increase with time, while FlowDMD maintains a very low level. Figure 9 summarizes the TRL2E of the reconstruction on all test examples and shows that FlowDMD has the smallest error on almost all test examples, with an average TRL2E of 1.5%. For some test examples, Exact DMD has the same TRL2E as FlowDMD, but for most test examples, FlowDMD performs better than Exact DMD. The TRL2E of LIR-DMD is larger than that of FlowDMD over all the test examples and is slightly better than that of Exact DMD for some test examples.
Figure 6: Total relative \(L_{2}\) error in Example 4.1.
### Allen-Cahn equation
The 1-D Allen-Cahn equation [44] is given by
\[\begin{cases}\dfrac{\partial u}{\partial t}-\gamma_{1}\dfrac{\partial^{2}u}{\partial x^{2}}+\gamma_{2}\left(u^{3}-u\right)=0,\quad x\in(-1,1),\,t\in(0,1],\\ u(0,x)=\xi x^{2}\cos(2\pi x),\\ u(t,-1)=u(t,1),\end{cases} \tag{8}\]
where \(\gamma_{1}=0.0001\), \(\gamma_{2}=5\), and \(\xi\sim\mathcal{N}(-0.1,0.04)\). We use the finite element method with 20 equidistant grid points for the spatial discretization and the implicit Euler with a step size of 0.02 for the temporal discretization. Furthermore, we generate 100 samples of \(\xi\) and use _FEniCS_ to compute the numerical solutions. The data set is segmented according to a ratio of 60%, 20%, 20%, respectively to be used as
the training set, the validation set, and the test set. The structure of the encoder network for LIR-DMD is [20, 30, 40, 30] and the decoder network is [30, 40, 30, 20], where the numbers in the brackets indicate the width of each layer. This results in 6190 parameters for LIR-DMD. For FlowDMD, we also use RCFs to replace the ACFs. The neural network for FlowDMD consists of one RCF block and one RCF, which results in a network with depth \(L=3\). In each RCF, the width of each layer of the FNN to parameterize \(t\) is 10, 20, 10. Finally, we obtain 2580 parameters for FlowDMD. The rank of Exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity.
Figure 7: Comparison of three methods in Example 4.2. The total relative \(L_{2}\) errors for exact DMD, LIR-DMD, and FlowDMD are 0.08, 0.119, and 0.017, respectively.
Figure 10 clearly shows that FlowDMD reconstructs the original state most accurately. It also reveals that the absolute errors of both Exact DMD and LIR-DMD increase over time, whereas FlowDMD maintains a low error level throughout. These numerical results show that FlowDMD is more robust and generalizes better than Exact DMD and LIR-DMD. The error of the state reconstruction for the three methods is given in Figure 11. At early times, FlowDMD has the largest relative error because the norm of the true state variables is very small, which leads to a large relative error. As time evolves, the error of FlowDMD reaches the lowest level among all three methods. In Figure 12, we use the test data set to evaluate the generalization ability. FlowDMD has the smallest total relative \(L_{2}\) error in almost all examples, and the average total relative \(L_{2}\) error is 9%. The figure also shows that the fluctuation of the error for FlowDMD is smaller than that of LIR-DMD, which demonstrates that FlowDMD has better generalization ability and is more robust than LIR-DMD.
Figure 8: Relative error of three methods for Example 4.2.
Figure 9: Total relative \(L_{2}\) error in Example 4.2.
### Sensitivity study
Here, we study the sensitivity of FlowDMD systematically with respect to the following four aspects:
1. The neural network initialization.
2. The hyperparameter \(\alpha\) in the loss function.
3. The structure of neural networks.
4. The rank \(r\) used by DMD in Algorithm 1.
We study the sensitivity of FlowDMD using the Allen-Cahn equation in Section 4.3.
Figure 10: Comparison of three methods in Example 4.3. The total relative \(L_{2}\) error for exact DMD, LIR-DMD, and FlowDMD are 0.6129, 0.4038, and 0.0725, respectively.
#### 4.4.1 Sensitivity with respect to the neural network initialization
In order to quantify the sensitivity of FlowDMD with respect to the initialization, we consider the same data set as in Section 4.3. We fix the structure of FlowDMD to include only one RCF block and one RCF. Each RCF has an FNN to parameterize \(t\), where the width of each layer is \(10,20,10\). Moreover, all FNNs use the rectified linear unit (ReLU) as the activation function. We use \(15\) random seeds to initialize the models and train all of them with the same setting. In Figure 13, we report the total relative \(L_{2}\) error between the reconstructed states and the "true" states. Evidently, the TRL2E remains stable for different initializations of the neural networks, as demonstrated by the consistent results obtained within the following interval,
\[[\mu_{TRL2E}-\sigma_{TRL2E},\mu_{TRL2E}+\sigma_{TRL2E}]=[6.5\times 10^{-2}-1.6 \times 10^{-2},6.5\times 10^{-2}+1.6\times 10^{-2}]\]
#### 4.4.2 Sensitivity with respect to \(\alpha\)
We utilize the same training set as in Section 4.3 and select \(\alpha\) from the list \([0.01,0.1,1,10,100]\). As shown in Table 1, the different weights \(\alpha\) in the loss function have little influence on the final results. We observe that the error is minimized when \(\alpha=10\), which suggests the use of an adaptive weight selection algorithm. The gradient flow provided by the neural tangent kernel [45] can be employed to adjust the weight \(\alpha\) and accelerate the training process, and we leave this for future work.
Figure 11: Relative error of three methods for Example 4.3.
Figure 12: Total relative \(L_{2}\) error in Example 4.3.
#### 4.4.3 Sensitivity with respect to the structure of neural networks
We study the impact of the number of RCFs and the number of neurons in the FNN used to parameterize the mapping \(t\) on the performance of FlowDMD. Specifically, the sensitivity of FlowDMD is quantified with respect to two parameters: the number of RCFs (\(N_{f}\)) and the number of neurons (\(N_{n}\)) in the middle layer of the FNN. Here, the FNN used to parameterize \(t\) is restricted to a three-layer structure \([10,N_{n},10]\). The results are summarized in Table 2 and indicate that the reconstruction accuracy of FlowDMD is not sensitive to its structure; adding more neurons or more RCFs does not improve the final results substantially.
#### 4.4.4 Sensitivity with respect to the rank of DMD
As we increase the rank \(r\) used for the DMD computations in Algorithm 1, we include more physical information, but the computation time also increases. In this study, we investigate how the DMD rank affects the model and its reconstruction. The results in Table 3 show that as we increase the rank \(r\), the corresponding error decreases rapidly.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(\alpha\) & 0.01 & 0.1 & 1 & 10 & 100 \\ \hline TRL2E & 6.2e-02 & 6.8e-02 & 8.2e-02 & 3.2e-02 & 6.9e-02 \\ \hline \end{tabular}
\end{table}
Table 1: Total relative \(L_{2}\) error for different \(\alpha\).
Figure 13: Total relative \(L_{2}\) error for different neural network initializations.
## 5 Conclusion
In this paper, we propose a coupling flow invertible neural network approach to learn both the observable functions and the reconstruction functions for Koopman operator learning. Our method generates a more accurate Koopman embedding model and better approximations of the Koopman operator than state-of-the-art methods. Our FlowDMD is structurally invertible, which simplifies the loss function and improves the accuracy of the state reconstruction. Numerical experiments show that our approach is more accurate, efficient, and interpretable than the state-of-the-art methods.
## Acknowledgments
The authors would like to thank Mengnan Li and Lijian Jiang for sharing their code.
| Koopman演算理論において、有限次元非線形系は、可視可能な関数セットを用いて、無限次元でも線形系に変換されます。しかし、 Koopman演算子のinvariant subspaceを基盤となる知識に基づいて可視可能な関数を手動で選択することは、効率的で挑戦的であり、特に、その基礎システムに関する情報が限られている場合に顕著です。さらに、現在の方法論は、可視可能な関数の可逆性についての重要性を軽視しており、これにより、不正確な結果が得られる可能性があります。これらの課題に対処するため、私たちはFlowDMD(Flow-based Dynamic Mode Decomposition)を提案しました。FlowDMDはCoupling Flow Invertible Neural Network(CF-INN)フレームワークを用いて動作します。FlowDMDは、CF-INNの可逆性という内在性を活用して、Koopman演算子のinvariant subspaceを学習し、状態変数の正確な再 |
2309.07700 | On Supmodular Matrices | We consider the problem of determining which matrices are permutable to be
supmodular. We show that for small dimensions any matrix is permutable by a
universal permutation or by a pair of permutations, while for higher dimensions
no universal permutation exists. We raise several questions including to
determine the dimensions in which every matrix is permutable. | Shmuel Onn | 2023-09-14T13:18:28 | http://arxiv.org/abs/2309.07700v1 | # On Supmodular Matrices
###### Abstract
We consider the problem of determining which matrices are permutable to be supmodular. We show that for small dimensions any matrix is permutable by a universal permutation or by a pair of permutations, while for higher dimensions no universal permutation exists. We raise several questions including to determine the dimensions in which every matrix is permutable.
**Keywords:** submodular, totally positive, permutation, transportation
**MSC:** 15A39, 90B06, 05A05, 68R05, 90C05
## 1 Introduction
A real \(m\times n\) matrix is _supmodular_ if for all \(1\leq i<r\leq m\) and \(1\leq j<s\leq n\), we have \(A_{i,j}+A_{r,s}\geq A_{i,s}+A_{r,j}\). Such matrices arise in discrete optimization: if \(A\) is the utility matrix of a transportation problem then an optimal transportation matrix \(X\) is quickly obtained by the greedy algorithm that increases its entries in the order \(X_{1,1},\ldots,X_{1,n},\ldots,X_{m,1},\ldots,X_{m,n}\), each from zero to the maximum possible value not exceeding prescribed column sums (demands) and row sums (supplies), see [4]. See also [2] for related matrix properties under which the greedy algorithm works.
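As an illustration of the greedy procedure described above (our sketch; variable and function names are ours), each entry is raised in row-major order to the largest value compatible with the remaining supplies and demands.

```python
import numpy as np

def greedy_transport(supply, demand):
    """Greedy filling in the order X[0,0], ..., X[0,n-1], X[1,0], ..., X[m-1,n-1]:
    each entry is raised to the largest value not exceeding the remaining row and
    column totals. This is optimal when the utility matrix is supmodular (see [4])."""
    s, d = np.array(supply, dtype=float), np.array(demand, dtype=float)
    m, n = len(s), len(d)
    X = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            X[i, j] = min(s[i], d[j])
            s[i] -= X[i, j]
            d[j] -= X[i, j]
    return X

# tiny usage example with balanced supplies and demands
print(greedy_transport([3, 2], [1, 2, 2]))
```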
Supmodular matrices are also known as _anti-Monge matrices_, see [3]. They are also related to total positivity: a matrix \(A\) is supmodular if and only if the matrix \(P:=\exp(A)\) defined by \(P_{i,j}:=\exp(A_{i,j})\) is \(2\)_-totally-positive_, namely, all its minors of order up to \(2\) are nonnegative; see [5] for this theory and its applications.
Here we are interested in studying which matrices have the following property.
**Definition 1.1**: We say that a real \(m\times n\) matrix \(A\) is _permutable_ if its entries can be permuted in such a way that the permuted matrix is a supmodular matrix.
If an \(m\times n\) matrix \(A\) is permutable then so is \(A^{T}\) and so we will assume \(m\leq n\). For an \(m\times n\) matrix \(\sigma\) whose entries form a permutation of \(1,2,\ldots,mn\), let \(A^{\sigma}\) be obtained from \(A\) by permuting its entries such that \(\sigma_{i,j}<\sigma_{r,s}\) implies \(A^{\sigma}_{i,j}\leq A^{\sigma}_{r,s}\).
For instance,
\[A\ =\ \left(\begin{array}{rrr}1&1&3\\ 10&3&7\\ 8&10&6\end{array}\right),\quad\sigma\ =\ \left(\begin{array}{rrr}8&7&1\\ 4&5&3\\ 2&6&9\end{array}\right),\quad A^{\sigma}\ =\ \left(\begin{array}{rrr}10&8&1\\ 3&6&3\\ 1&7&10\end{array}\right).\]
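To make the notation concrete, a small script (ours, for illustration) that builds \(A^{\sigma}\) from \(A\) and \(\sigma\) and checks supmodularity reproduces the example above.

```python
import numpy as np

def permute_by_sigma(A, sigma):
    """Place the sorted entries of A according to the ranking sigma:
    sigma[i,j] < sigma[r,s] implies A_sigma[i,j] <= A_sigma[r,s]."""
    A, sigma = np.asarray(A, dtype=float), np.asarray(sigma)
    out = np.empty_like(A)
    out[np.unravel_index(np.argsort(sigma, axis=None), A.shape)] = np.sort(A, axis=None)
    return out

def is_supmodular(B, tol=1e-12):
    """Check A[i,j] + A[r,s] >= A[i,s] + A[r,j] for all i < r and j < s;
    by Lemma 2.1 it suffices to check consecutive rows and columns."""
    B = np.asarray(B, dtype=float)
    return bool(np.all(B[:-1, :-1] + B[1:, 1:] + tol >= B[:-1, 1:] + B[1:, :-1]))

A = np.array([[1, 1, 3], [10, 3, 7], [8, 10, 6]])
sigma = np.array([[8, 7, 1], [4, 5, 3], [2, 6, 9]])
print(permute_by_sigma(A, sigma))                  # reproduces the matrix A^sigma shown above
print(is_supmodular(permute_by_sigma(A, sigma)))   # True
```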
We prove the following theorem.
**Theorem 1.2**: _Let \(A\) be any \(m\times n\) real matrix with \(m\leq n\). Then we have:_
1. _If_ \(m=1\) _then_ \(A^{\sigma}\) _is trivially supmodular for any_ \(\sigma\)_._
2. _If_ \(m=2\) _then_ \(A^{\sigma}\) _is supmodular, where_ \[\sigma\ =\ \left(\begin{array}{rrr}n&n-1&\cdots&2&1\\ n+1&n+2&\cdots&2n-1&2n\end{array}\right).\]
3. _If_ \(m=n=3\) _then_ \(A^{\sigma}\) _is supmodular, where_ \[\sigma\ =\ \left(\begin{array}{rrr}8&7&1\\ 4&5&3\\ 2&6&9\end{array}\right).\]
4. _If_ \(m=3,n=4\) _then either_ \(A^{\sigma}\) _or_ \(A^{\tau}\) _is supmodular, where_ \[\sigma\ =\ \left(\begin{array}{rrr}9&8&7&3\\ 2&6&5&4\\ 1&10&11&12\end{array}\right),\quad\tau\ =\ \left(\begin{array}{rrr}12&3&2&1\\ 11&7&8&9\\ 4&5&6&10\end{array}\right).\]
Theorem 1.2 asserts that for \(m=1,2\) and any \(n\geq m\), for \(m=n=3\), and for \(m=3,n=4\), any real \(m\times n\) matrix \(A\) is permutable to a supmodular one. But moreover, for all cases but the last one, the theorem provides a _universal_\(\sigma\), that is, one such that \(A^{\sigma}\) is supmodular for every \(m\times n\) real matrix. Note that the universal permutations are not unique: for \(m=1\) all \(n!\) permutations are universal, and for \(m=2\) and, say, \(n=2\), and for \(m=n=3\), the following are universal as well,
\[\sigma\ =\ \left(\begin{array}{rrr}4&3\\ 1&2\end{array}\right),\quad\sigma\ =\ \left(\begin{array}{rrr}9&6&2\\ 3&5&4\\ 1&7&8\end{array}\right).\]
These permutations, as well as those appearing in the theorem, were obtained by using the notion of goodness of a permutation defined and used in the next section.
Next we show these are the only values of \(m,n\) for which a universal \(\sigma\) exists.
**Theorem 1.3**: _For \(m\leq n\), a universal \(\sigma\) exists if and only if \(m=1,2\) or \(m=n=3\)._
So already for \(m=3,n=4\), none of the \(12!=479,001,600\) potential \(\sigma\) is universal.
Consider any \(m\leq n\). If every real \(m\times n\) matrix \(A\) is permutable then let \(p(m,n)\leq(mn)!\) be the smallest positive integer for which there are \(\sigma_{1},\ldots,\sigma_{p(m,n)}\) such that, for every real \(m\times n\) matrix \(A\), some \(A^{\sigma_{i}}\) is supmodular. And if not every \(A\) is permutable then let \(p(m,n)=\infty\). Theorem 1.2 and Theorem 1.3 show that \(p(m,n)=1\) if and only if either \(m=1,2\) or \(m=n=3\), and that \(p(3,4)=2\).
Theorem 1.2 and Theorem 1.3 suggest the following question.
**Question 1.4**:
1. _What is_ \(p(m,n)\) _for all_ \(m\leq n\) _and in particular for which_ \(m\leq n\) _is it finite?_
2. _What is the smallest_ \(m+n\) _admitting a non permutable_ \(m\times n\) _matrix if any?_
3. _What is the complexity of deciding if a given integer matrix is permutable?_
The problem of studying which matrices are permutable is interesting in its own right, but one possible application is the following. Suppose we need to solve very quickly, in real time, repeated \(m\times n\) transportation problems, with arbitrarily varying demands and supplies satisfying an upper bound \(u\). Suppose we have access to \(mn\) transporters, where each transporter \(k\) charges \(p_{k}\) per unit flow and can transport at most \(u\) units of flow, so that we cannot simply use only the cheapest. Our primary objective is to solve the repeated problems very quickly in real time, and a secondary objective is to solve each with minimum cost. With preprocessing done once and for all, we try to assign each transporter \(k\) to a pair of supplier \(i\) and consumer \(j\) to transport the flow from \(i\) to \(j\), so that the resulting utility matrix is supmodular, and so the repeated problems could be solved very quickly by the greedy algorithm. This preprocessing is reduced to the problem studied here as follows. We arrange the negations \(-p_{k}\) of the \(mn\) costs arbitrarily in an \(m\times n\) matrix \(A\), and if \(A\) is permutable, search for a permutation \(\sigma\) such that \(A^{\sigma}\) is supmodular, and assign the transporters to pairs \(i,j\) according to this permutation. The permuted matrix \(A^{\sigma}\) is then the utility matrix of all the transportation problems that we solve (maximizing the utility, so the cost represented by \(-A^{\sigma}\) is minimized), and all problems can be solved very quickly in real time using the greedy algorithm.
## 2 Proofs
**Lemma 2.1**: _An \(m\times n\) real matrix \(A\) is supmodular if and only if we have that, for every \(1\leq i<m\) and \(1\leq j<n\), the inequality \(A_{i,j}+A_{i+1,j+1}\geq A_{i,j+1}+A_{i+1,j}\) holds._
_Proof._ Clearly if \(A\) is supmodular then the above condition holds. For the converse, we prove that if the condition holds then, for all \(1\leq i<r\leq m\) and \(1\leq j<s\leq n\)
we have \(A_{i,j}+A_{r,s}\geq A_{i,s}+A_{r,j}\), by induction on \(t=(r-i)+(s-j)\). If \(t=2\) this holds by the condition. Suppose \(t>2\) and, say, \(s-j>1\). By induction,
\[(A_{i,j}+A_{r,s})-(A_{i,s}+A_{r,j})=\big((A_{i,j}+A_{r,s-1})-(A_{i,s-1}+A_{r,j})\big)+\big((A_{i,s-1}+A_{r,s})-(A_{i,s}+A_{r,s-1})\big)\ \geq\ 0+0\ =\ 0.\]
Proof of Theorem 1.2.: Consider any \(m\leq n\), any \(m\times n\) real matrix \(A\), and any \(m\times n\) matrix \(\sigma\) whose entries form a permutation of \(1,2,\ldots,mn\). Part 1 with \(m=1\) holds since any \(1\times n\) real matrix is trivially supmodular.
So assume \(m\geq 2\). By Lemma 2.2, if \(\sigma\) is good on \(i,j\) for all \(1\leq i<m\) and \(1\leq j<n\), that is, among the four entries \(\sigma_{i,j},\sigma_{i,j+1},\sigma_{i+1,j},\sigma_{i+1,j+1}\) the maximum lies on the main diagonal and the minimum on the opposite diagonal, then \(A^{\sigma}_{i,j}+A^{\sigma}_{i+1,j+1}\geq A^{\sigma}_{i,j+1}+A^{\sigma}_{i+1,j}\) holds for all such \(i,j\), and then \(A^{\sigma}\) is supmodular by Lemma 2.1. Part 2 therefore follows since
\[\sigma\ =\ \left(\begin{array}{cccc}n&n-1&\cdots&2&1\\ n+1&n+2&\cdots&2n-1&2n\end{array}\right)\]
is good on \(1,j\) for all \(1\leq j<n\) since the maximum among \(\sigma_{1,j},\sigma_{1,j+1},\sigma_{2,j},\sigma_{2,j+1}\) is \(\sigma_{2,j+1}\) and the minimum among these entries is \(\sigma_{1,j+1}\). Part 3 also follows since
\[\sigma\ =\ \left(\begin{array}{cccc}8&7&1\\ 4&5&3\\ 2&6&9\end{array}\right)\]
is good on \(i=1,2\) and \(j=1,2\) as can be verified by direct inspection.
Finally, we prove Part 4. Let \(a_{1}\leq a_{2}\leq\cdots\leq a_{12}\) be the entries of \(A\) arranged in nondecreasing order. Surely either \(a_{8}+a_{5}\geq a_{7}+a_{6}\) or \(a_{8}+a_{5}\leq a_{7}+a_{6}\) (or both).
First, suppose that \(a_{8}+a_{5}\geq a_{7}+a_{6}\) and consider \(A^{\sigma}\) where
\[\sigma\ =\ \left(\begin{array}{cccc}9&8&7&3\\ 2&6&5&4\\ 1&10&11&12\end{array}\right)\.\]
We claim that \(A^{\sigma}_{i,j}+A^{\sigma}_{i+1,j+1}\geq A^{\sigma}_{i,j+1}+A^{\sigma}_{i+1,j}\) for \(i=1,2\) and \(j=1,2,3\) and therefore \(A^{\sigma}\) is supmodular by Lemma 2.1. Indeed, for \(i=1,j=2\) this holds since
\[(A^{\sigma}_{1,2}+A^{\sigma}_{2,3})-(A^{\sigma}_{1,3}+A^{\sigma}_{2,2})\ =\ (a_{8}+a_{5})-(a_{7}+a_{6})\ \geq\ 0\,\]
and for every other \(i,j\), this follows from Lemma 2.2 since \(\sigma\) is good on \(i,j\) as can be verified by inspection. So the claim follows.
Second, suppose that \(a_{8}+a_{5}\leq a_{7}+a_{6}\) and consider \(A^{\tau}\) where
\[\tau\ =\ \left(\begin{array}{cccc}12&3&2&1\\ 11&7&8&9\\ 4&5&6&10\end{array}\right)\.\]
We claim that \(A^{\tau}_{i,j}+A^{\tau}_{i+1,j+1}\geq A^{\tau}_{i,j+1}+A^{\tau}_{i+1,j}\) for \(i=1,2\) and \(j=1,2,3\) and therefore \(A^{\tau}\) is supmodular by Lemma 2.1. Indeed, for \(i=2,j=2\) this holds since
\[(A^{\tau}_{2,2}+A^{\tau}_{3,3})-(A^{\tau}_{2,3}+A^{\tau}_{3,2})\ =\ (a_{7}+a_{6})-(a_{8}+a_{5})\ \geq\ 0\,\]
and for every other \(i,j\), this follows from Lemma 2.2 since \(\tau\) is good on \(i,j\) as can be verified by inspection. So the claim follows.
This completes the proof of Part 4 and the proof of the theorem.
Proof of Theorem 1.3.: By Theorem 1.2 just proved, there exists a universal \(\sigma\) for \(m=1,2\) and \(m=n=3\). So we need only prove that for all other \(m\leq n\) there is no universal \(\sigma\). Suppose for a contradiction that for some \(m\geq 3\), \(n\geq 4\) there exists a universal \(\sigma\), that is, an \(m\times n\) matrix whose entries form a permutation of \(1,2,\ldots,mn\), such that \(A^{\sigma}\) is supmodular for every real \(m\times n\) matrix \(A\). Let the restriction of \(\sigma\) to its top left \(3\times 4\) submatrix be
\[\left(\begin{array}{cccc}a&b&c&d\\ e&f&g&h\\ i&j&k&l\end{array}\right)\.\]
Since \(\sigma\) is assumed to be universal, by Lemma 2.2 it must be good on any \(2\times 2\) submatrix consisting of consecutive rows and consecutive columns, that is, among the four entries of such a submatrix, the maximum must be on the main diagonal and the minimum on the opposite diagonal.
First, suppose \(f>g\). Considering \(b,c,f,g\), with \(f>g\), we obtain \(c<g\) and \(b>f\). Considering \(c,d,g,h\), with \(c<g\), we obtain \(h>g\). Considering \(a,b,e,f\), with \(b>f\), we obtain \(e<f\). Considering \(e,f,i,j\), with \(e<f\), we obtain \(j>f\). Considering \(f,g,j,k\), with \(j>f\), we obtain \(k>g\). Considering \(g,h,k,l\), with \(k>g\), we obtain \(g>h\). So we obtain the contradiction \(g<h<g\).
Second, suppose \(f<g\). Considering \(f,g,j,k\), with \(f<g\), we obtain \(j<f\) and \(k>g\). Considering \(e,f,i,j\), with \(j<f\), we obtain \(f<e\). Considering \(a,b,e,f\), with \(f<e\), we obtain \(b<f\). Considering \(b,c,f,g\), with \(b<f\), we obtain \(c<g\). Considering \(c,d,g,h\), with \(c<g\), we obtain \(g<h\). Considering \(g,h,k,l\), with \(g<h\), we obtain \(k<g\). So we obtain the contradiction \(g<k<g\).
## Acknowledgments
Shmuel Onn thanks Steffen Borgwardt for useful related conversations [1]. He was supported by a grant from the Israel Science Foundation and by the Dresner chair.
どの行列が可換であるかを判定する問題について検討しています。これは、小規模な次元であれば、任意の行列は普遍的置換または一対の置換によって可換になることを示しています。一方、高い次元では普遍的置換が存在しません。いくつかの質問も提起しました。それは、どの次元ですべての行列が可換であるかを判定する問題です。 |
2305.19833 | Homogenization of nondivergence-form elliptic equations with
discontinuous coefficients and finite element approximation of the
homogenized problem | We study the homogenization of the equation
$-A(\frac{\cdot}{\varepsilon}):D^2 u_{\varepsilon} = f$ posed in a bounded
convex domain $\Omega\subset \mathbb{R}^n$ subject to a Dirichlet boundary
condition and the numerical approximation of the corresponding homogenized
problem, where the measurable, uniformly elliptic, periodic and symmetric
diffusion matrix $A$ is merely assumed to be essentially bounded and (if $n>2$)
to satisfy the Cordes condition. In the first part, we show existence and
uniqueness of an invariant measure by reducing to a Lax--Milgram-type problem,
we obtain $L^2$-bounds for periodic problems in double-divergence-form, we
prove homogenization under minimal regularity assumptions, and we generalize
known corrector bounds and results on optimal convergence rates from the
classical case of H\"{o}lder continuous coefficients to the present case. In
the second part, we suggest and rigorously analyze an approximation scheme for
the effective coefficient matrix and the solution to the homogenized problem
based on a finite element method for the approximation of the invariant
measure, and we demonstrate the performance of the scheme through numerical
experiments. | Timo Sprekeler | 2023-05-31T13:21:30 | http://arxiv.org/abs/2305.19833v1 | Homogenization of nondivergence-form elliptic equations with discontinuous coefficients and finite element approximation of the homogenized problem
###### Abstract.
We study the homogenization of the equation \(-A(\frac{\cdot}{\varepsilon}):D^{2}u_{\varepsilon}=f\) posed in a bounded convex domain \(\Omega\subset\mathbb{R}^{n}\) subject to a Dirichlet boundary condition and the numerical approximation of the corresponding homogenized problem, where the measurable, uniformly elliptic, periodic and symmetric diffusion matrix \(A\) is merely assumed to be essentially bounded and (if \(n>2\)) to satisfy the Cordes condition. In the first part, we show existence and uniqueness of an invariant measure by reducing to a Lax-Milgram-type problem, we obtain \(L^{2}\)-bounds for periodic problems in double-divergence-form, we prove homogenization under minimal regularity assumptions, and we generalize known corrector bounds and results on optimal convergence rates from the classical case of Holder continuous coefficients to the present case. In the second part, we suggest and rigorously analyze an approximation scheme for the effective coefficient matrix and the solution to the homogenized problem based on a finite element method for the approximation of the invariant measure, and we demonstrate the performance of the scheme through numerical experiments.
Key words and phrases: Homogenization, nondivergence-form elliptic PDE, Cordes condition, finite element methods. 2010 Mathematics Subject Classification: 35B27, 35J15, 65N12, 65N30.
## 1. Introduction
In this paper, we discuss the homogenization of the linear elliptic nondivergence-form problem
\[-A\left(\frac{\cdot}{\varepsilon}\right):D^{2}u_{\varepsilon} =f\quad\text{in }\Omega, \tag{1.1}\] \[u_{\varepsilon} =g\quad\text{on }\partial\Omega,\]
where \(\varepsilon>0\) is a small parameter, \(\Omega\subset\mathbb{R}^{n}\) is a bounded convex domain, \(f\in L^{2}(\Omega)\), \(g\in H^{2}(\Omega)\), and \(A\in L^{\infty}(\mathbb{R}^{n};\mathbb{R}^{n\times n}_{\text{sym}})\) is \(\mathbb{Z}^{n}\)-periodic, uniformly elliptic, and (if \(n>2\)) satisfies the Cordes condition (2.2) which dates back to [10]. In this setting, it is known that there exists a unique solution \(u_{\varepsilon}\in H^{2}(\Omega)\) to (1.1); see [27, 29]. We study homogenization of the problem (1.1) as well as the numerical approximation of the corresponding homogenized problem.
The theory of periodic homogenization for (1.1) is classical and well-understood if \(A\) is Holder continuous and \(f,g\) and \(\partial\Omega\) are sufficiently regular. In this case, it is known that as \(\varepsilon\searrow 0\) we have that \((u_{\varepsilon})_{\varepsilon>0}\) converges in a suitable sense to the solution of the homogenized problem, i.e., the linear elliptic constant-coefficient problem
\[-\bar{A}:D^{2}u =f\quad\text{in }\Omega, \tag{1.2}\] \[u =g\quad\text{on }\partial\Omega,\]
where the effective coefficient matrix \(\bar{A}:=\int_{Y}rA\) is obtained by integrating \(A\) against an invariant measure \(r\), defined as the solution to the periodic problem
\[-D^{2}:(rA)=0\quad\text{in $Y$},\qquad r\text{ is $Y$-periodic},\qquad\int_{Y}r=1, \tag{1.3}\]
where \(Y:=(0,1)^{n}\); see e.g., [4, 6, 11, 22]. Further, optimal convergence rates and corrector bounds in suitable norms are available in the literature; see e.g., [17, 20, 25, 28]. For recent developments in stochastic homogenization of nondivergence-form problems, we refer to [1, 2, 3, 18, 19] and the references therein. It seems that the case of measurable diffusion matrices that are merely essentially bounded has not been studied yet and our first goal of this paper is to generalize various qualitative and quantitative homogenization results to this setting.
Concerning the development of numerical methods to approximate the solution to (1.1), it is well-known that, for small positive \(\varepsilon\), standard finite element methods require a very fine mesh that resolves the oscillations of the coefficients to deliver accurate approximations. Therefore, multiscale finite element methods for nondivergence-form problems have been developed in recent years; see [9, 13] for linear problems and [16, 24] for Hamilton-Jacobi-Bellman (HJB) and Isaacs problems. For some results regarding finite difference schemes we refer to [8, 12, 14].
In particular, [13] suggests a multiscale finite element scheme based on the methodology of localized orthogonal decomposition (LOD) for linear nondivergence-form equations with essentially bounded measurable coefficients satisfying the Cordes condition. In [9], a multiscale finite element scheme for (1.1) with \(A\in W^{1,q}_{\rm per}(Y;\mathbb{R}^{n\times n}_{\rm sym})\), \(q>n\), and satisfying the Cordes condition has been suggested which is based on an approximation of the effective coefficient matrix via a finite element approximation of the invariant measure, relying on rewriting (1.3) in divergence-form. Since the latter is not possible in our setting, the second goal of this paper is to provide and analyze a finite element method for the approximation of the invariant measure in our case of merely essentially bounded measurable coefficients that satisfy the Cordes condition.
The structure of this paper is as follows.
In Section 2, we provide the framework, covering the main assumptions, well-posedness of (1.1), and a uniform \(H^{2}\)-bound for the solution to (1.1). In Section 3, we study homogenization of (1.1). We begin by introducing a bilinear form, a coercivity property, and useful estimates that are used throughout the paper in Section 3.1. In Section 3.2, we analyze linear periodic problems in double-divergence-form and nondivergence-form. In particular, we find that any diffusion matrix \(A\) that satisfies the assumptions stated below (1.1) has a unique invariant measure \(r\in L^{2}_{\rm per}(Y)\) and that
\[r=C\frac{\operatorname{tr}(A)}{|A|^{2}}(1-\Delta\psi), \tag{1.4}\]
where \(C\) is a positive constant and \(\psi\) is the unique solution to a Lax-Milgram-type problem in the subspace of \(H^{2}_{\rm per}(Y)\) consisting of functions with mean zero (see Theorem 3.1). In Section 3.3, we show that \((u_{\varepsilon})_{\varepsilon>0}\) converges weakly in \(H^{2}(\Omega)\) to the unique solution \(u\in H^{2}(\Omega)\) to the homogenized problem (1.2), using the transformation argument from [4]. Thereafter, in Sections 3.4 and 3.5, we obtain \(H^{2}\) corrector estimates and generalize some results on type-\(\varepsilon^{2}\) diffusion matrices (see Definition 3.3) obtained in [17] for Holder continuous diffusion matrices to our setting.
In Section 4, we study the numerical approximation of the homogenized problem via a novel finite element method for the approximation of the invariant measure based on (1.4). We perform a rigorous error analysis and demonstrate the theoretical results in numerical experiments provided in Section 5. In Section 6, we collect the proofs of the results contained in this work.
## 2. Framework
Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded convex domain. Let \(f\in L^{2}(\Omega)\) and \(g\in H^{2}(\Omega)\). For a small parameter \(\varepsilon>0\), we consider the problem
\[\begin{split} L_{\varepsilon}u_{\varepsilon}:=-A^{\varepsilon}:D^{ 2}u_{\varepsilon}&=f\quad\text{in }\Omega,\\ u_{\varepsilon}&=g\quad\text{on }\partial\Omega,\end{split} \tag{2.1}\]
where \(A^{\varepsilon}:=A\left(\frac{\cdot}{\varepsilon}\right)\) and \(A\in\mathcal{M}(\lambda,\Lambda)\) for some constants \(\lambda,\Lambda>0\). Here, we define
\[\mathcal{M}(\lambda,\Lambda):=\left\{A\in L^{\infty}(\mathbb{R}^{n};\mathbb{R }^{n\times n}_{\text{sym}})\ \bigg{|}\text{$A$ is $Y$-periodic and }\forall\xi\in\mathbb{R}^{n}\backslash\{0\}:\lambda\leq\frac{A\xi\cdot\xi}{| \xi|^{2}}\leq\Lambda\text{ a.e. in }\mathbb{R}^{n}\right\},\]
where \(Y:=(0,1)^{n}\). We further assume that \(A\) satisfies the Cordes condition (dating back to [10]), that is,
\[\exists\,\delta\in(0,1]:\qquad\frac{|A|^{2}}{(\text{tr}(A))^{2}}\leq\frac{1}{ n-1+\delta}\quad\text{a.e. in }\mathbb{R}^{n}, \tag{2.2}\]
where \(|A|:=\sqrt{A:A}\). Note that this is no restriction for dimensions \(n\leq 2\):
**Remark 2.1**.: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\). If \(n\in\{1,2\}\), the Cordes condition (2.2) holds:_
1. _If_ \(n=1\)_, we have that (_2.2_) holds with_ \(\delta=1\)_._
2. _If_ \(n=2\)_, we have that (_2.2_) holds with_ \(\delta=\frac{\lambda}{\Lambda}\)_. Indeed, this can be seen by writing_ \(|A|^{2}\) _as the sum of the squared eigenvalues of_ \(A\) _and_ \(\text{tr}(A)\) _as the sum of the eigenvalues of_ \(A\)_, and using that_ \(\frac{s^{2}+t^{2}}{(s+t)^{2}}\leq(1+\frac{s}{t})^{-1}\) _for any_ \(0<s\leq t\) _(spelled out below)._
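Spelled out, with \(s\leq t\) denoting the eigenvalues of \(A(y)\) at a fixed point \(y\) (so that \(\lambda\leq s\leq t\leq\Lambda\)), the claim in (ii) follows from

\[\frac{|A|^{2}}{(\text{tr}(A))^{2}}=\frac{s^{2}+t^{2}}{(s+t)^{2}}\leq\frac{t}{s+t}=\left(1+\frac{s}{t}\right)^{-1}\leq\left(1+\frac{\lambda}{\Lambda}\right)^{-1}=\frac{1}{n-1+\delta}\qquad\text{for }n=2,\ \delta=\frac{\lambda}{\Lambda},\]

where the first inequality is equivalent to \(s^{2}+t^{2}\leq t(s+t)\), i.e., to \(s\leq t\).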
The Cordes condition (2.2) guarantees the existence of a function \(\gamma\in L^{\infty}_{\text{per}}(Y)\) such that \(\gamma>0\) and \(|\gamma A-I_{n}|\leq\sqrt{1-\delta}\) almost everywhere, where \(I_{n}\) denotes the identity matrix in \(\mathbb{R}^{n\times n}\).
**Remark 2.2**.: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Then, the function_
\[\gamma:=\frac{\text{tr}(A)}{|A|^{2}}\in L^{\infty}(\mathbb{R}^{n})\]
_is \(Y\)-periodic, satisfies \(\frac{\lambda}{\Lambda^{2}}\leq\gamma\leq\frac{\Lambda}{\lambda^{2}}\) almost everywhere, and there holds \(|\tilde{A}-I_{n}|^{2}=n-\frac{(\text{tr}(A))^{2}}{|A|^{2}}\leq 1-\delta\) almost everywhere, where \(\tilde{A}:=\gamma A\). Note that \(\tilde{A}\in\mathcal{M}(\frac{\lambda^{2}}{\Lambda^{2}},\frac{\Lambda^{2}}{\lambda^{2}})\)._
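As a quick numerical sanity check of these algebraic facts, the following minimal Python sketch (the random symmetric positive definite matrices, the eigenvalue range \([0.5,2]\), and the use of NumPy are illustrative assumptions; the matrices are generic stand-ins for pointwise values \(A(y)\), not tied to any particular coefficient field) verifies the identity \(|\gamma A-I_{n}|^{2}=n-\frac{(\text{tr}(A))^{2}}{|A|^{2}}\) and prints the ratio \(\frac{|A|^{2}}{(\text{tr}(A))^{2}}\) entering the Cordes condition (2.2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_spd(n):
    # Random symmetric positive definite matrix: a stand-in for A(y) at a fixed point y.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = rng.uniform(0.5, 2.0, size=n)          # eigenvalues between lambda = 0.5 and Lambda = 2
    return Q @ np.diag(eigs) @ Q.T

for _ in range(5):
    A = random_spd(n)
    frob2 = np.sum(A * A)                          # |A|^2 = A : A
    gamma = np.trace(A) / frob2                    # gamma = tr(A)/|A|^2 as in Remark 2.2
    lhs = np.sum((gamma * A - np.eye(n)) ** 2)     # |gamma*A - I_n|^2
    rhs = n - np.trace(A) ** 2 / frob2             # n - (tr A)^2 / |A|^2
    ratio = frob2 / np.trace(A) ** 2               # the quantity bounded by 1/(n-1+delta) in (2.2)
    print(f"identity error = {abs(lhs - rhs):.1e},  |A|^2/(tr A)^2 = {ratio:.4f},  1/(n-1) = {1/(n-1):.4f}")
```

Whether a sampled matrix actually satisfies (2.2) depends on the spread of its eigenvalues; the printed ratio can be compared against \(1/(n-1+\delta)\) for a chosen \(\delta\in(0,1]\).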
It is then known (see [27, 29]) that (2.1) has a unique solution in \(H^{2}(\Omega)\). Further, we can obtain a uniform bound on the \(H^{2}\)-norm of the solution.
**Theorem 2.1** (well-posedness and uniform \(H^{2}\)-bound).: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded convex domain. Let \(A\in\mathcal{M}(\lambda,\Lambda)\), \(f\in L^{2}(\Omega)\), \(g\in H^{2}(\Omega)\), and suppose that (2.2) holds. Then, for any \(\varepsilon>0\) there exists a unique solution \(u_{\varepsilon}\in H^{2}(\Omega)\) to (2.1), and we have the bound_
\[\|u_{\varepsilon}\|_{H^{2}(\Omega)}\leq C(\|f\|_{L^{2}(\Omega)}+\|g\|_{H^{2}( \Omega)}) \tag{2.3}\]
_for some constant \(C=C(\text{diam}(\Omega),\lambda,\Lambda,n,\delta)>0\)._
## 3. Homogenization
### Preliminaries
Before we start with the main discussion we briefly highlight an important observation which will be crucial for this work. We write \(L^{2}_{\mathrm{per},0}(Y):=\{\varphi\in L^{2}_{\mathrm{per}}(Y):\int_{Y}\varphi=0\}\) and \(H^{k}_{\mathrm{per},0}(Y):=\{\varphi\in H^{k}_{\mathrm{per}}(Y):\int_{Y}\varphi=0\}\) for \(k\in\mathbb{N}\).
**Lemma 3.1** (the bilinear form \(b_{\mu}\)).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) be defined as in Remark 2.2. Then, for any \(\mu\geq 0\), the bilinear form_
\[b_{\mu}:H^{2}_{\mathrm{per}}(Y)\times H^{2}_{\mathrm{per}}(Y)\to\mathbb{R}, \qquad b_{\mu}(\varphi_{1},\varphi_{2}):=(\mu\varphi_{1}-\gamma A:D^{2}\varphi _{1},\mu\varphi_{2}-\Delta\varphi_{2})_{L^{2}(Y)}\]
_satisfies the bound_
\[\mu^{2}\|\varphi\|^{2}_{L^{2}(Y)}+2\mu\|\nabla\varphi\|^{2}_{L^{2}(Y)}+\|D^{2 }\varphi\|^{2}_{L^{2}(Y)}=\|\mu\varphi-\Delta\varphi\|^{2}_{L^{2}(Y)}\leq C_{ \delta}\,b_{\mu}(\varphi,\varphi)\qquad\forall\varphi\in H^{2}_{\mathrm{per}}( Y),\]
_where \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}>0\). In particular, if \(\mu>0\) we have that \(b_{\mu}\) is coercive on \(H^{2}_{\mathrm{per}}(Y)\), and if \(\mu=0\) we have that \(b_{\mu}\) is coercive on \(H^{2}_{\mathrm{per},0}(Y)\)._
The following inequalities will be used frequently:
**Remark 3.1**.: _For any \(v\in H^{1}_{\mathrm{per},0}(Y)\) we have the Poincare inequality \(\|v\|_{L^{2}(Y)}\leq\frac{\sqrt{n}}{\pi}\|\nabla v\|_{L^{2}(Y)}\); see e.g., [5]. Further, for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\) we have the identity \(\|D^{2}\varphi\|_{L^{2}(Y)}=\|\Delta\varphi\|_{L^{2}(Y)}\). In particular, there holds \(\frac{\pi^{2}}{n}\|\varphi\|_{L^{2}(Y)}\leq\frac{\pi}{\sqrt{n}}\|\nabla \varphi\|_{L^{2}(Y)}\leq\|D^{2}\varphi\|_{L^{2}(Y)}=\|\Delta\varphi\|_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{\mathrm{per},0}(Y)\)._
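For the reader's convenience, the identity \(\|D^{2}\varphi\|_{L^{2}(Y)}=\|\Delta\varphi\|_{L^{2}(Y)}\) follows from two periodic integrations by parts (first in \(y_{i}\), then in \(y_{j}\); boundary terms vanish by periodicity):

\[\|D^{2}\varphi\|_{L^{2}(Y)}^{2}=\sum_{i,j=1}^{n}\int_{Y}\partial_{ij}^{2}\varphi\,\partial_{ij}^{2}\varphi=-\sum_{i,j=1}^{n}\int_{Y}\partial_{j}\varphi\,\partial_{iij}^{3}\varphi=\sum_{i,j=1}^{n}\int_{Y}\partial_{ii}^{2}\varphi\,\partial_{jj}^{2}\varphi=\|\Delta\varphi\|_{L^{2}(Y)}^{2},\]

first for smooth periodic \(\varphi\) and then for \(\varphi\in H^{2}_{\mathrm{per}}(Y)\) by density.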
Using these results, we will now study periodic problems in double-divergence-form and nondivergence-form.
### Periodic problems and invariant measures
In this section, we discuss the periodic double-divergence-form problem
\[-D^{2}:(qA)=f\quad\text{in $Y$},\qquad q\text{ is Y-periodic}, \tag{3.1}\]
and the periodic nondivergence-form problem
\[-A:D^{2}v=f\quad\text{in $Y$},\qquad v\text{ is Y-periodic}, \tag{3.2}\]
where \(f\in L^{2}_{\mathrm{per}}(Y)\), \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), and we assume that the Cordes condition (2.2) holds. We seek solutions \(q\in L^{2}_{\mathrm{per}}(Y)\) to (3.1), i.e., \((q,-A:D^{2}\varphi)_{L^{2}(Y)}=(f,\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\). First, we introduce the notion of an invariant measure; see also [6, 7].
**Definition 3.1** (invariant measure).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\). A function \(r\in L^{2}_{\mathrm{per}}(Y)\) is called an invariant measure to \(A\) if \(\int_{Y}r=1\) and \((r,-A:D^{2}\varphi)_{L^{2}(Y)}=0\) for all \(\varphi\in H^{2}_{\mathrm{per}}(Y)\)._
Our first result is the existence and uniqueness of an invariant measure \(r\in L^{2}_{\mathrm{per}}(Y)\) to any \(A\in\mathcal{M}(\lambda,\Lambda)\) satisfying the Cordes condition, and an \(L^{2}\)-bound.
**Theorem 3.1** (existence, uniqueness and properties of invariant measures).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) be defined as in Remark 2.2 and set \(\tilde{A}:=\gamma A\). Let \(b_{0}:H^{2}_{\mathrm{per}}(Y)\times H^{2}_{\mathrm{per}}(Y)\to\mathbb{R}\) and \(C_{\delta}>0\) be defined as in Lemma 3.1. Then, we have the following:_
1. _There exists a unique_ \(\psi\in H^{2}_{\mathrm{per},0}(Y)\) _such that_ \(b_{0}(\varphi,\psi)=\int_{Y}\tilde{A}:D^{2}\varphi\) _for any_ \(\varphi\in H^{2}_{\mathrm{per},0}(Y)\) _and we have the bound_ \(\|\Delta\psi\|_{L^{2}(Y)}\leq\sqrt{n}\frac{\Lambda}{\lambda}C_{\delta}\)_._
2. _The function_ \(\tilde{r}:=1-\Delta\psi\in L^{2}_{\rm per}(Y)\) _is the unique invariant measure to_ \(\tilde{A}\)_, and there holds_ \(\tilde{r}\geq 0\) _almost everywhere._
3. _The function_ \(r:=(\gamma,\tilde{r})^{-1}_{L^{2}(Y)}\gamma\tilde{r}\in L^{2}_{\rm per}(Y)\) _is the unique invariant measure to_ \(A\)_, and there holds_ \(r\geq 0\) _almost everywhere. Further, we have the bound_ \[\|r\|_{L^{2}(Y)}\leq\frac{\Lambda^{3}}{\lambda^{3}}\|\tilde{r}\|_{L^{2}(Y)} \leq\frac{\Lambda^{3}}{\lambda^{3}}\left(\sqrt{n}\frac{\Lambda}{\lambda}C_{ \delta}+1\right).\] (3.3)
With Theorem 3.1 at hand, we can now state the main result on problems of the form (3.1).
**Theorem 3.2** (analysis of the problem (3.1)).: _Let \(f\in L^{2}_{\rm per}(Y)\), \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), and suppose that (2.2) holds. Let \(r\in L^{2}_{\rm per}(Y)\) denote the unique invariant measure to \(A\), and let \(C_{\delta}>0\) be defined as in Lemma 3.1. Then, we have the following:_
1. _There exists a solution_ \(q\in L^{2}_{\rm per}(Y)\) _to the problem (_3.1_) if, and only if,_ \(f\in L^{2}_{\rm per,0}(Y)\)_._
2. _If_ \(f\in L^{2}_{\rm per,0}(Y)\) _and_ \(q_{1},q_{2}\in L^{2}_{\rm per}(Y)\) _are solutions to (_3.1_), then_ \(q_{1}-q_{2}=cr\) _with_ \(c:=\int_{Y}(q_{1}-q_{2})\)_. In particular, the problem (_3.1_) has a unique solution if_ \(\int_{Y}q\) _is prescribed_.
3. _If_ \(f\in L^{2}_{\rm per,0}(Y)\)_, then the unique solution_ \(q_{0}\in L^{2}_{\rm per,0}(Y)\) _to (_3.1_) subject to_ \(\int_{Y}q=0\)_, whose existence and uniqueness follows from (i)-(ii), satisfies the bound_ \[\|q_{0}\|_{L^{2}(Y)}\leq\frac{n\Lambda}{\pi^{2}\lambda^{2}}C_{\delta}\left(1+ \|r\|_{L^{2}(Y)}\right)\|f\|_{L^{2}(Y)}.\]
Next, we turn to the analysis of the problem (3.2).
**Theorem 3.3** (analysis of the problem (3.2)).: _Let \(f\in L^{2}_{\rm per}(Y)\), \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), and suppose that (2.2) holds. Let \(\gamma\in L^{\infty}_{\rm per}(Y)\) be defined as in Remark 2.2 and let \(b_{0}:H^{2}_{\rm per}(Y)\times H^{2}_{\rm per}(Y)\to\mathbb{R}\) and \(C_{\delta}>0\) be defined as in Lemma 3.1. Let \(r\in L^{2}_{\rm per}(Y)\) denote the unique invariant measure to \(A\). Then, we have the following:_
1. _There exists a solution_ \(v\in H^{2}_{\rm per}(Y)\) _to the problem (_3.2_) if, and only if,_ \((f,r)_{L^{2}(Y)}=0\)_. Moreover, if_ \((f,r)_{L^{2}(Y)}=0\)_, we have that_ \(v\in H^{2}_{\rm per}(Y)\) _is a solution to (_3.2_) if, and only if,_ \(b_{0}(v,\varphi)=-(\gamma f,\Delta\varphi)_{L^{2}(Y)}\) _for any_ \(\varphi\in H^{2}_{\rm per}(Y)\)_._
2. _If_ \((f,r)_{L^{2}(Y)}=0\) _and_ \(v_{1},v_{2}\in H^{2}_{\rm per}(Y)\) _are solutions to (_3.2_), then_ \(v_{1}-v_{2}={\rm const}.\) _almost everywhere. In particular, the problem (_3.2_) has a unique solution if_ \(\int_{Y}v\) _is prescribed._
3. _If_ \((f,r)_{L^{2}(Y)}=0\)_, then the unique solution_ \(v_{0}\in H^{2}_{\rm per,0}(Y)\) _to (_3.2_) subject to_ \(\int_{Y}v_{0}=0\)_, whose existence and uniqueness follows from (i)-(ii), satisfies the bound_ \[\|\Delta v_{0}\|_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}C_{\delta}\|f\|_{L^{2 }(Y)}.\]
We conclude this section by noting that the results regarding existence and uniqueness of solutions to (3.1) and (3.2) can also be obtained via the Fredholm alternative, observing that \(K:L^{2}_{\rm per}(Y)\to L^{2}_{\rm per}(Y)\), \(f\mapsto(-\gamma A:D^{2}+\mu\operatorname{id})^{-1}f\) is a compact linear operator for \(\mu>0\) due to Lemma 3.1. Our approach however is a constructive one and forms the basis for a simple construction of finite element methods for the numerical approximation of (3.1) and (3.2).
### The convergence result
Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded convex domain, \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), \(f\in L^{2}(\Omega)\), \(g\in H^{2}(\Omega)\), and suppose that (2.2) holds. We consider the problem
\[L_{\varepsilon}u_{\varepsilon}:=-A^{\varepsilon}:D^{2}u_{\varepsilon} =f\quad\text{in }\Omega, \tag{3.4}\] \[u_{\varepsilon} =g\quad\text{on }\partial\Omega,\]
where \(\varepsilon>0\) is a small positive parameter. We recall from Theorem 2.1 that there exists a unique solution \(u_{\varepsilon}\in H^{2}(\Omega)\) to (3.4) and that we have the uniform \(H^{2}\)-bound (2.3). Thus, there exists a function \(u\in H^{2}(\Omega)\) with \(u-g\in H^{1}_{0}(\Omega)\) such that, upon passing to a subsequence,
\[u_{\varepsilon}\rightharpoonup u\quad\text{weakly in $H^{2}(\Omega)$ as $\varepsilon \searrow 0$},\qquad u_{\varepsilon}\to u\quad\text{strongly in $H^{1}(\Omega)$ as $\varepsilon \searrow 0$}. \tag{3.5}\]
We denote the invariant measure to \(A\) given by Theorem 3.1 by \(r\in L^{2}_{\text{per}}(Y)\) and multiply the equation \(L_{\varepsilon}u_{\varepsilon}=f\), which holds a.e. in \(\Omega\), by \(r^{\varepsilon}:=r(\frac{\cdot}{\varepsilon})\) to obtain
\[-[rA]^{\varepsilon}:D^{2}u_{\varepsilon}=r^{\varepsilon}f\quad\text{a.e. in $\Omega$}, \tag{3.6}\]
where \([rA]^{\varepsilon}:=r^{\varepsilon}A^{\varepsilon}\). Next, we apply the transformation argument from [4] (see also [23]) to obtain a divergence-form equation for \(u_{\varepsilon}\).
Noting that \(rA\in L^{2}_{\text{per}}(Y;\mathbb{R}^{n\times n}_{\text{sym}})\), we introduce \(\phi_{j}\in H^{1}_{\text{per},0}(Y)\), \(1\leq j\leq n\), to be the unique solution in \(H^{1}_{\text{per},0}(Y)\) to the problem
\[-\Delta\phi_{j}=\nabla\cdot(rAe_{j})\quad\text{in $Y$},\qquad\phi_{j}\text{ is $Y$-periodic},\qquad\int_{Y}\phi_{j}=0, \tag{3.7}\]
where \(e_{j}\in\mathbb{R}^{n}\) denotes the \(j\)-th column of \(I_{n}\). We set \(\phi:=(\phi_{1},\ldots,\phi_{n})\in H^{1}_{\text{per},0}(Y;\mathbb{R}^{n})\).
**Remark 3.2**.: _We make the following observations:_
* _For any_ \(L\in\mathbb{N}\) _and_ \(1\leq j\leq n\)_, the function_ \(\phi_{j,L}:=\phi_{j}\) _is the unique solution in_ \(H^{1}_{\text{per},0}(Y_{L})\) _to_ \[-\Delta\phi_{j,L}=\nabla\cdot(rAe_{j})\quad\text{in $Y_{L}$},\qquad\phi_{j,L}\text{ is $Y_{L}$-periodic},\qquad\int_{Y_{L}}\phi_{j,L}=0,\] (3.8) _where_ \(Y_{L}:=(0,L)^{n}\)_. Indeed, note_ \(\phi_{j,L}=\phi_{j,L}(\cdot+k)\) _for any_ \(k\in\mathbb{Z}^{n}\) _by uniqueness of solutions to (_3.8_). Hence,_ \(\phi_{j,L}\in H^{1}_{\text{per},0}(Y)\) _solves (_3.7_) and thus,_ \(\phi_{j,L}=\phi_{j}\)_._
* _There holds_ \(\nabla\cdot\phi=0\) _almost everywhere. Indeed, this follows from_ \(\nabla\cdot\phi\in L^{2}_{\text{per},0}(Y)\) _and_ \[(\nabla\cdot\phi,\Delta\varphi)_{L^{2}(Y)}=\sum_{j=1}^{n}(\nabla\phi_{j}, \nabla(\partial_{j}\varphi))_{L^{2}(Y)}=-\sum_{j=1}^{n}(rAe_{j},\nabla( \partial_{j}\varphi))_{L^{2}(Y)}=0\] _for any_ \(\varphi\in H^{2}_{\text{per}}(Y)\)_, where we have used in the last step that_ \((r,-A:D^{2}\varphi)_{L^{2}(Y)}=0\)_._
Next, we define the skew-symmetric matrix-valued map \(B=(b_{ij})_{1\leq i,j\leq n}\in L^{2}_{\text{per},0}(Y;\mathbb{R}^{n\times n})\) by \(b_{ij}:=\partial_{i}\phi_{j}-\partial_{j}\phi_{i}\) for \(1\leq i,j\leq n\) and we set
\[M:=rA+B\in L^{2}_{\text{per}}(Y;\mathbb{R}^{n\times n}).\]
We observe that for any \(L\in\mathbb{N}\) and \(1\leq j\leq n\), writing \(Y_{L}:=(0,L)^{n}\), we have that
\[-(Me_{j},\nabla\varphi)_{L^{2}(Y_{L})}=(\nabla\phi_{j}-Be_{j},\nabla\varphi)_ {L^{2}(Y_{L})}=(\partial_{j}\phi,\nabla\varphi)_{L^{2}(Y_{L})}=(\nabla\cdot \phi,\partial_{j}\varphi)_{L^{2}(Y_{L})}=0\]
for any \(\varphi\in H^{1}_{\text{per}}(Y_{L})\), where we have used Remark 3.2(i) in the first equality and Remark 3.2(ii) in the last equality. Hence, we have for any bounded domain \(\omega\subset\mathbb{R}^{n}\) that
\[(Me_{j},\nabla w)_{L^{2}(\omega)}=0\qquad\forall 1\leq j\leq n\quad\forall w \in H^{1}_{0}(\omega). \tag{3.9}\]
Indeed, let \(L\in\mathbb{N}\) be such that \(\omega\subset k_{L}+Y_{L}\) with \(k_{L}:=-\frac{L}{2}(1,\ldots,1)\in\mathbb{R}^{n}\), extend \(w\in H^{1}_{0}(\omega)\) to a function \(\tilde{w}\in H^{1}_{0}(k_{L}+Y_{L})\) by setting \(\tilde{w}=0\) a.e. in \((k_{L}+Y_{L})\backslash\omega\), and define \(\varphi\in H^{1}_{\text{per}}(Y_{L})\) to be the \(Y_{L}\)-periodic extension of \(\tilde{w}\) to see that \((Me_{j},\nabla w)_{L^{2}(\omega)}=(Me_{j},\nabla\tilde{w})_{L^{2}(k_{L}+Y_{L})}=(Me_{j},\nabla\varphi)_{L^{2}(Y_{L})}=0\).
Writing \(M^{\varepsilon}:=M(\frac{\cdot}{\varepsilon})\), we then obtain that
\[(M^{\varepsilon}\nabla u_{\varepsilon},\nabla v)_{L^{2}(\Omega)}=(-M^{ \varepsilon}:D^{2}u_{\varepsilon},v)_{L^{2}(\Omega)}=(r^{\varepsilon}f,v)_{L^{ 2}(\Omega)}\qquad\forall v\in C_{c}^{\infty}(\Omega), \tag{3.10}\]
where the first equality follows from (3.9) and the second equality follows from (3.6) and the skew-symmetry of \(B\). Finally, noting that, since \(r\in L^{2}_{\rm per}(Y)\), \(M\in L^{2}_{\rm per}(Y;\mathbb{R}^{n\times n})\), and \(\int_{Y}B=0\), we have
\[M^{\varepsilon}\rightharpoonup\int_{Y}M=\int_{Y}rA\;\;\text{weakly in }L^{2}( \Omega;\mathbb{R}^{n\times n}),\;\;r^{\varepsilon}\rightharpoonup\int_{Y}r=1\; \;\text{weakly in }L^{2}(\Omega)\;\;\text{as }\varepsilon\searrow 0, \tag{3.11}\]
we can use (3.5) to pass to the limit \(\varepsilon\searrow 0\) in (3.10) to obtain the following convergence result:
**Theorem 3.4** (convergence result).: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded convex domain, \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), \(f\in L^{2}(\Omega)\), \(g\in H^{2}(\Omega)\), and suppose that (2.2) holds. Denoting the invariant measure to \(A\) by \(r\in L^{2}_{\rm per}(Y)\), we introduce \(\bar{A}:=\int_{Y}rA\in\mathbb{R}^{n\times n}_{\rm sym}\). Then, there holds \(\lambda|\xi|^{2}\leq\bar{A}\xi\cdot\xi\leq\Lambda|\xi|^{2}\) for all \(\xi\in\mathbb{R}^{n}\), and the sequence of solutions \((u_{\varepsilon})_{\varepsilon>0}\subset H^{2}(\Omega)\) to (3.4) converges weakly in \(H^{2}(\Omega)\) to the unique solution \(u\in H^{2}(\Omega)\) to the homogenized problem_
\[\bar{L}u:=-\bar{A}:D^{2}u =f\quad\text{in }\Omega, \tag{3.12}\] \[u =g\quad\text{on }\partial\Omega.\]
_We call the symmetric positive definite matrix \(\bar{A}\) the effective coefficient matrix._
### Corrector estimates
It is by now standard how to obtain corrector estimates to any order in the classical \(A\in C^{0,\alpha}\) setting, introducing suitable interior and boundary correctors and assuming that \(u\) is sufficiently regular; see e.g. [25, 28]. For \(A\in L^{\infty}\), using the uniform \(H^{2}\)-bound from Theorem 2.1 and using Theorem 3.3, one obtains the following results by arguing along the lines of [28]. To simplify the statement of the results, we introduce the following notation:
**Definition 3.2** (the notation \(T(A,a)\)).: _Let \(a\in L^{2}_{\rm per}(Y)\), \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\), and suppose that \(A\) satisfies (2.2). Let \(r\in L^{2}_{\rm per}(Y)\) denote the invariant measure to \(A\). Then, we define \(T(A,a):=w\) to be the unique element \(w\in H^{2}_{{\rm per},0}(Y)\) satisfying \(-A:D^{2}w=a-(a,r)_{L^{2}(Y)}\)._
**Theorem 3.5** (an \(\mathcal{O}(\varepsilon)\) corrector estimate in the \(H^{2}\)-norm).: _Suppose that we are in the situation of Theorem 3.4 and assume that \(u\in H^{4}(\Omega)\). Set \(v_{ij}:=T(A,a_{ij})\in H^{2}_{{\rm per},0}(Y)\) for \(1\leq i,j\leq n\) and set \(V:=(v_{ij})_{1\leq i,j\leq n}\). Writing \(\varphi^{\varepsilon}:=\varphi(\frac{\cdot}{\varepsilon})\) for \(\varphi\in\{V,\partial_{i}V\}\), we define \(\eta_{\varepsilon}:=V^{\varepsilon}:D^{2}u\) and_
\[p_{ij,\varepsilon}:=2[\partial_{i}V]^{\varepsilon}:D^{2}(\partial_{j}u)+ \varepsilon V^{\varepsilon}:D^{2}(\partial_{ij}^{2}u),\qquad P_{\varepsilon}: =(p_{ij,\varepsilon})_{1\leq i,j\leq n}.\]
_Then, writing \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}>0\) and assuming that_
\[\eta_{\varepsilon}\in H^{2}(\Omega)\quad\text{and}\quad P_{\varepsilon}\in L^ {2}(\Omega)\text{ with }\|P_{\varepsilon}\|_{L^{2}(\Omega)}=\mathcal{O}(1), \tag{3.13}\]
_there exists a constant \(c_{0}=c_{0}({\rm diam}(\Omega),n)>0\) such that_
\[\big{\|}u_{\varepsilon}-u-\varepsilon^{2}(\eta_{\varepsilon}+\theta_{ \varepsilon})\big{\|}_{H^{2}(\Omega)}\leq\frac{\Lambda}{\lambda}C_{\delta}c_{0} \|P_{\varepsilon}\|_{L^{2}(\Omega)}\,\varepsilon=\mathcal{O}(\varepsilon), \tag{3.14}\]
_where \(\theta_{\varepsilon}\in H^{2}(\Omega)\) denotes the unique solution to \(L_{\varepsilon}\theta_{\varepsilon}=0\) in \(\Omega\), \(\theta_{\varepsilon}=-\eta_{\varepsilon}\) on \(\partial\Omega\)._
**Remark 3.3**.: _The proof of Theorem 3.5 shows that (3.14) holds with_
\[c_{0}:=\sqrt{n}\sup_{v\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\setminus\{0\}} \frac{\|v\|_{H^{2}(\Omega)}}{\|\Delta v\|_{L^{2}(\Omega)}}.\]
**Remark 3.4**.: _If \(n\leq 3\), in view of the Sobolev embeddings, \(u\in H^{4}(\Omega)\) is sufficient to guarantee that the assumption (3.13) in Theorem 3.5 is met._
**Corollary 3.1** (\(L^{\infty}\)-rate in dimension \(n\leq 3\)).: _Suppose that we are in the situation of Theorem 3.4 and \(n\leq 3\). Then, if \(u\in H^{4}(\Omega)\), we have that \(\|u_{\varepsilon}-u\|_{L^{\infty}(\Omega)}=\mathcal{O}(\varepsilon)\)._
Note that the result of Corollary 3.1 follows directly from Theorem 3.5, Remark 3.4, the Sobolev embedding \(H^{2}(\Omega)\hookrightarrow L^{\infty}(\Omega)\), and the fact that by the maximum principle (see e.g., [26]) we have the bound \(\|\theta_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq\|V^{\varepsilon}:D^{2}u\|_ {L^{\infty}(\Omega)}\leq\|V\|_{L^{\infty}(\mathbb{R}^{n})}\|D^{2}u\|_{L^{ \infty}(\Omega)}\) in dimension \(n\leq 3\).
Similarly, we can obtain an \(\mathcal{O}(\varepsilon^{2})\) corrector estimate in the \(H^{2}(\Omega)\)-norm.
**Theorem 3.6** (an \(\mathcal{O}(\varepsilon^{2})\) corrector estimate in the \(H^{2}\)-norm).: _Suppose that we are in the situation of Theorem 3.5 and assume that \(u\in H^{5}(\Omega)\). Set \(\chi_{jkl}:=T(A,Ae_{j}\cdot\nabla v_{kl})\in H^{2}_{\mathrm{per},0}(Y)\) for \(1\leq j,k,l\leq n\) and set \(X:=(\chi_{jkl})_{1\leq j,k,l\leq n}\). Writing \(\varphi^{\varepsilon}:=\varphi(\frac{\cdot}{\varepsilon})\) for \(\varphi\in\{V,X,\partial_{i}X\}\), we define \(\tilde{\eta}_{\varepsilon}:=X^{\varepsilon}:D^{3}u\) and_
\[q_{ij,\varepsilon}:=V^{\varepsilon}:D^{2}(\partial_{ij}^{2}u)+4[\partial_{i}X ]^{\varepsilon}:D^{3}(\partial_{j}u)+2\varepsilon X^{\varepsilon}:D^{3}( \partial_{ij}^{2}u),\qquad Q_{\varepsilon}:=(q_{ij,\varepsilon})_{1\leq i,j \leq n}.\]
_Then, writing \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}\) and assuming that_
\[\tilde{\eta}_{\varepsilon}\in H^{2}(\Omega)\quad\text{and}\quad Q_{\varepsilon }\in L^{2}(\Omega)\text{ with }\|Q_{\varepsilon}\|_{L^{2}(\Omega)}=\mathcal{O}(1), \tag{3.15}\]
_we have the bound_
\[\|u_{\varepsilon}-u+2\varepsilon z_{\varepsilon}-\varepsilon^{2}(\eta_{ \varepsilon}+\theta_{\varepsilon})-2\varepsilon^{3}(\tilde{\eta}_{\varepsilon }+\tilde{\theta}_{\varepsilon})\|_{H^{2}(\Omega)}\leq\frac{\Lambda}{\lambda} C_{\delta}c_{0}\|Q_{\varepsilon}\|_{L^{2}(\Omega)}\,\varepsilon^{2}=\mathcal{O}( \varepsilon^{2}), \tag{3.16}\]
_where \(c_{0}>0\) is as in Remark 3.3 and \(\tilde{\theta}_{\varepsilon},z_{\varepsilon}\in H^{2}(\Omega)\) denote the unique solutions to \(L_{\varepsilon}\tilde{\theta}_{\varepsilon}=0\) in \(\Omega\), \(\tilde{\theta}_{\varepsilon}=-\tilde{\eta}_{\varepsilon}\) on \(\partial\Omega\), and \(L_{\varepsilon}z_{\varepsilon}=-\sum_{j,k,l=1}^{n}(Ae_{j}\cdot\nabla v_{kl},r)_ {L^{2}(Y)}\partial_{jkl}^{3}u\) in \(\Omega\), \(z_{\varepsilon}=0\) on \(\partial\Omega\), respectively._
**Remark 3.5**.: _If \(n\leq 3\), in view of the Sobolev embeddings, \(u\in H^{5}(\Omega)\) is sufficient to guarantee that the assumption (3.15) in Theorem 3.6 is met._
**Corollary 3.2** (\(L^{\infty}\)-bound in dimension \(n\leq 3\)).: _Suppose that we are in the situation of Theorem 3.4 and \(n\leq 3\). Then, if \(u\in H^{5}(\Omega)\), we have that_
\[\|u_{\varepsilon}-u+2\varepsilon z_{\varepsilon}\|_{L^{\infty}(\Omega)}= \mathcal{O}(\varepsilon^{2}), \tag{3.17}\]
_where \(z_{\varepsilon}\in H^{2}(\Omega)\) is defined as in Theorem 3.6._
Similarly to Theorems 3.5 and 3.6, corrector estimates in the \(H^{2}(\Omega)\)-norm can be obtained to any order, assuming that \(u\) is sufficiently regular and constructing suitable corrector functions.
### Type-\(\varepsilon^{2}\) diffusion matrices
First, let us generalize the definition of type-\(\varepsilon^{2}\) diffusion matrices from [17] given for \(A\in C^{0,\alpha}\) to our present situation.
**Definition 3.3** (type-\(\varepsilon^{2}\) and type-\(\varepsilon\) diffusion matrices).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\). We set \(c_{j}^{kl}(A):=(Ae_{j}\cdot\nabla v_{kl},r)_{L^{2}(Y)}\) for \(1\leq j,k,l\leq n\), where \(r\in L^{2}_{\mathrm{per}}(Y)\) denotes the invariant measure to \(A\) and \(v_{kl}:=T(A,a_{kl})\in H^{2}_{\mathrm{per},0}(Y)\). We call \(A\) a type-\(\varepsilon^{2}\) diffusion matrix if \(C_{jkl}(A):=c_{j}^{kl}(A)+c_{k}^{jl}(A)+c_{l}^{jk}(A)=0\) for all \(1\leq j,k,l\leq n\). Otherwise, we call \(A\) a type-\(\varepsilon\) diffusion matrix._
Note that the function \(z_{\varepsilon}\) in (3.16) and (3.17) vanishes for any \(u\in H^{5}(\Omega)\) if and only if the symmetric part of the third-order homogenized tensor \((c_{j}^{kl})_{1\leq j,k,l\leq n}\) vanishes, or equivalently, if and only if \(A\) is type-\(\varepsilon^{2}\). In particular, in the situation of Corollary 3.2, if \(A\) is type-\(\varepsilon^{2}\) we have that \(\|u_{\varepsilon}-u\|_{L^{\infty}(\Omega)}=\mathcal{O}(\varepsilon^{2})\) whereas if \(A\) is type-\(\varepsilon\) we only have that \(\|u_{\varepsilon}-u\|_{L^{\infty}(\Omega)}=\mathcal{O}(\varepsilon)\) and this rate \(\mathcal{O}(\varepsilon)\) is optimal in general.
We can now extend the main results from [17] obtained for \(A\in C^{0,\alpha}\) to our present case. The following results can then be proved by using arguments almost identical to [17].
**Theorem 3.7** (diffusion matrices \(A=C+aM\) are type-\(\varepsilon^{2}\)).: _Let \(C,M\in\mathbb{R}^{n\times n}_{\rm sym}\), let \(a\in L^{\infty}_{\rm per}(Y)\), and suppose that \(A:=C+aM\) is such that \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and (2.2) holds. Set \(w:=T(A,a)\in H^{2}_{{\rm per},0}(Y)\) and \(v_{ij}:=T(A,a_{ij})\in H^{2}_{{\rm per},0}(Y)\) for \(1\leq i,j\leq n\). Then, we have the following results:_
1. _The invariant measure to_ \(A\) _is given by_ \(r=1+M:D^{2}w\)_._
2. _We have that_ \(V:=(v_{ij})_{1\leq i,j\leq n}\) _is given by_ \(V=wM\)_._
3. _There holds_ \(-C:D^{2}w=ra-\bar{a}\) _where_ \(\bar{a}:=(a,r)_{L^{2}(Y)}\)_._
4. _The third-order homogenized tensor vanishes, i.e.,_ \(c^{kl}_{j}(A)=0\) _for all_ \(1\leq j,k,l\leq n\)_._
_In particular, \(A\) is a type-\(\varepsilon^{2}\) diffusion matrix._
**Theorem 3.8** (characterization of type-\(\varepsilon^{2}\) diagonal diffusion matrices for \(n=2\)).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for \(n=2\) and some \(\lambda,\Lambda>0\), and suppose that \(\frac{1}{a}A=I_{2}+bM=:B\) for some \(b\in L^{\infty}_{\rm per}(Y)\), where \(a:=\frac{1}{2}{\rm tr}(A)\) and \(M:={\rm diag}(1,-1)\). Set \(w_{A}:=T(A,a)\in H^{2}_{{\rm per},0}(Y)\), \(w_{B}:=T(B,b)\in H^{2}_{{\rm per},0}(Y)\), and \(v_{ij}:=T(A,a_{ij})\in H^{2}_{{\rm per},0}(Y)\) for \(1\leq i,j\leq n\). Then, we have the following results:_
1. _The invariant measure_ \(r_{B}\) _to_ \(B\) _is given by_ \(r_{B}=1+M:D^{2}w_{B}\)_, and the invariant measure_ \(r_{A}\) _to_ \(A\) _is given by_ \(r_{A}=\frac{\bar{a}}{a}r_{B}\) _where_ \(\bar{a}:=(\frac{1}{a},r_{B})^{-1}_{L^{2}(Y)}=(a,r_{A})_{L^{2}(Y)}\)_._
2. _We have that_ \(V:=(v_{ij})_{1\leq i,j\leq n}\) _is given by_ \(V=w_{A}(I_{2}+\bar{b}M)+w_{B}M\) _where_ \(\bar{b}:=(b,r_{B})_{L^{2}(Y)}\)_._
3. _There holds_ \(-\Delta w_{B}=r_{B}b-\bar{b}\)_._
4. _There holds_ \(c^{kl}_{3-s}(A)=2(-1)^{s+1}\bar{a}(1+m_{kl}\bar{b})(\partial_{3-s}w_{A}, \partial^{2}_{ss}w_{B})_{L^{2}(Y)}\) _for any_ \(s,k,l\in\{1,2\}\)_._
_In particular, \(A\) is type-\(\varepsilon^{2}\) if, and only if, \((\partial_{1}w_{A},\partial^{2}_{22}w_{B})_{L^{2}(Y)}=(\partial_{2}w_{A}, \partial^{2}_{11}w_{B})_{L^{2}(Y)}=0\)._
Note that any diagonal \(A={\rm diag}(a_{11},a_{22})\in\mathcal{M}(\lambda,\Lambda)\) for \(n=2\) can be written in the form specified in Theorem 3.8 by setting \(b:=\frac{a_{11}-a_{22}}{a_{11}+a_{22}}\).
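Indeed, this is nothing but the decomposition of a diagonal matrix into its trace part and its trace-free diagonal part:

\[A=\operatorname{diag}(a_{11},a_{22})=\frac{a_{11}+a_{22}}{2}\,I_{2}+\frac{a_{11}-a_{22}}{2}\,\operatorname{diag}(1,-1)=a\left(I_{2}+\frac{a_{11}-a_{22}}{a_{11}+a_{22}}\,M\right)=a\,(I_{2}+bM),\]

with \(a=\frac{1}{2}\operatorname{tr}(A)\) and \(M=\operatorname{diag}(1,-1)\) as in Theorem 3.8; note that \(b\in L^{\infty}_{\rm per}(Y)\) with \(\|b\|_{L^{\infty}}<1\) since \(A\in\mathcal{M}(\lambda,\Lambda)\).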
**Remark 3.6**.: _As a consequence of Theorem 3.8 we have that any diagonal constant-trace diffusion matrix \(A\in\mathcal{M}(\lambda,\Lambda)\) for \(n=2\) and some \(\lambda,\Lambda>0\) is type-\(\varepsilon^{2}\)._
## 4. Numerical Methods
### Finite element approximation of the invariant measure
Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. In this section, we discuss the finite element approximation of the invariant measure \(r\in L^{2}_{\rm per}(Y)\) to \(A\), i.e., the unique solution \(r\in L^{2}_{\rm per}(Y)\) to
\[-D^{2}:(rA)=0\quad\text{in $Y$,}\qquad r\text{ is Y-periodic},\qquad\int_{Y}r=1.\]
We refer to Section 3.2 for the existence, uniqueness, and \(L^{2}\)-bounds for the invariant measure. We can construct simple finite element methods (FEMs) for the numerical solution of this problem based on our observation from Theorem 3.1 that
\[r=c^{-1}\gamma\tilde{r},\quad\text{where}\quad\tilde{r}:=1-\Delta\psi,\quad c :=(\gamma,\tilde{r})_{L^{2}(Y)}, \tag{4.1}\]
and \(\psi\in H^{2}_{{\rm per},0}(Y)\) is the unique element in \(H^{2}_{{\rm per},0}(Y)\) such that
\[b_{0}(\varphi,\psi)=(\gamma,A:D^{2}\varphi)_{L^{2}(Y)}\qquad\forall\varphi\in H ^{2}_{{\rm per},0}(Y), \tag{4.2}\]
where \(\gamma\in L^{\infty}_{\rm per}(Y)\) is defined as in Remark 2.2 and \(b_{0}:H^{2}_{\rm per}(Y)\times H^{2}_{\rm per}(Y)\to\mathbb{R}\) is defined as in Lemma 3.1. Let us recall that \(\tilde{r}\in L^{2}_{\rm per}(Y)\) is the unique invariant measure to \(\tilde{A}:=\gamma A\in\mathcal{M}(\frac{\lambda^{2}}{\Lambda^{2}},\frac{\Lambda^{2}}{\lambda^{2}})\).
#### 4.1.1. Approximation of \(\tilde{r}\) via \(H^{2}_{\mathrm{per},0}(Y)\)-conforming FEM for \(\psi\)
Noting that \(b_{0}\) defines a coercive (Lemma 3.1) bounded bilinear form on \(H^{2}_{\mathrm{per},0}(Y)\) and \(\varphi\mapsto(\gamma,A:D^{2}\varphi)_{L^{2}(Y)}\) defines a bounded linear functional on \(H^{2}_{\mathrm{per},0}(Y)\), we obtain the following result by the Lax-Milgram theorem and standard conforming finite element theory:
**Lemma 4.1** (approximation of \(\tilde{r}\) via \(H^{2}_{\mathrm{per},0}(Y)\)-conforming FEM for \(\psi\)).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) be defined as in Remark 2.2 and let \(\tilde{r}\in L^{2}_{\mathrm{per}}(Y)\) denote the invariant measure to \(\tilde{A}:=\gamma A\). Let \(b_{0}:H^{2}_{\mathrm{per}}(Y)\times H^{2}_{\mathrm{per}}(Y)\to\mathbb{R}\) and \(C_{\delta}>0\) be defined as in Lemma 3.1, and let \(\Psi_{h}\subset H^{2}_{\mathrm{per},0}(Y)\) be a closed linear subspace of \(H^{2}_{\mathrm{per},0}(Y)\). Then, there exists a unique \(\psi_{h}\in\Psi_{h}\) such that_
\[b_{0}(\varphi_{h},\psi_{h})=(\gamma,A:D^{2}\varphi_{h})_{L^{2}(Y)}\qquad \forall\varphi_{h}\in\Psi_{h}. \tag{4.3}\]
_Further, setting \(\tilde{r}_{h}:=1-\Delta\psi_{h}\in L^{2}_{\mathrm{per}}(Y)\) we have that_
\[\|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}=\|\Delta(\psi-\psi_{h})\|_{L^{2}(Y)} \leq\sqrt{n}\frac{\Lambda}{\lambda}C_{\delta}\inf_{\varphi_{h}\in\Psi_{h}}\| \Delta(\psi-\varphi_{h})\|_{L^{2}(Y)},\]
_where \(\psi\in H^{2}_{\mathrm{per},0}(Y)\) denotes the unique element in \(H^{2}_{\mathrm{per},0}(Y)\) satisfying (4.2)._
In practice, \(H^{2}\)-conforming elements such as the Argyris or the HCT element need to be used on a periodic shape-regular triangulation of the unit cell. A more attractive alternative from an implementation point of view is the approach presented next.
#### 4.1.2. Approximation of \(\tilde{r}\) via \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\)-conforming FEM
As an alternative to the \(H^{2}_{\mathrm{per},0}(Y)\)-conforming finite element method, we propose an approximation scheme for \(\tilde{r}\) based on an \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\)-conforming FEM, using ideas from [15]. To this end, let us introduce the bilinear form \(b:H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\times H^{1}_{\mathrm{per}}(Y;\mathbb{ R}^{n})\to\mathbb{R}\) given by
\[b(w,\tilde{w}):=(\gamma A:Dw,\nabla\cdot\tilde{w})_{L^{2}(Y)}+S(w,\tilde{w}), \quad S(w,\tilde{w}):=\frac{1}{2}(Dw-Dw^{\mathrm{T}},D\tilde{w}-D\tilde{w}^{ \mathrm{T}})_{L^{2}(Y)} \tag{4.4}\]
for \(w,\tilde{w}\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\), where \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) is defined as in Remark 2.2. Note that the stabilization term can be equivalently written as \(S(w,\tilde{w})=(\nabla\times w,\nabla\times\tilde{w})_{L^{2}(Y)}\) if \(w,\tilde{w}\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{3})\), and as \(S((w_{1},w_{2}),(\tilde{w}_{1},\tilde{w}_{2}))=(\partial_{2}w_{1}-\partial_{1 }w_{2},\partial_{2}\tilde{w}_{1}-\partial_{1}\tilde{w}_{2})_{L^{2}(Y)}\) if \((w_{1},w_{2}),(\tilde{w}_{1},\tilde{w}_{2})\in H^{1}_{\mathrm{per}}(Y;\mathbb{ R}^{2})\). We can show the following result:
**Lemma 4.2** (approximation of \(\tilde{r}\) via \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\)-conforming FEM).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) be defined as in Remark 2.2 and let \(\tilde{r}\in L^{2}_{\mathrm{per}}(Y)\) denote the invariant measure to \(\tilde{A}:=\gamma A\). Let \(b:H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\times H^{1}_{\mathrm{per}}(Y;\mathbb{ R}^{n})\to\mathbb{R}\) be defined by (4.4), \(C_{\delta}>0\) be defined as in Lemma 3.1, and let \(P_{h}\subset H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) be a closed linear subspace of \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\). Then, the following assertions hold._
* _There exists a unique_ \(p\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) _such that_ \(b(w,p)=(\gamma,A:Dw)_{L^{2}(Y)}\) _for all_ \(w\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\)_. Further, there holds_ \(\tilde{r}=1-\nabla\cdot p\)_._
* _There exists a unique_ \(p_{h}\in P_{h}\) _such that_ \(b(w_{h},p_{h})=(\gamma,A:Dw_{h})_{L^{2}(Y)}\) _for all_ \(w_{h}\in P_{h}\)_._
* _Setting_ \(\tilde{r}_{h}:=1-\nabla\cdot p_{h}\in L^{2}_{\mathrm{per}}(Y)\) _we have that_ \[\|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}=\|\nabla\cdot(p-p_{h})\|_{L^{2}(Y)}\leq \left(1+\sqrt{n}\frac{\Lambda}{\lambda}\right)C_{\delta}\inf_{w_{h}\in P_{h}}\|D(p- w_{h})\|_{L^{2}(Y)},\] _where_ \(p,p_{h}\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) _are the functions from (i)-(ii)._
#### 4.1.3. Approximation of \(r\)
In view of Lemma 4.1 and Lemma 4.2 we are able to obtain a sequence of approximations \((\tilde{r}_{h})_{h}\subset L^{2}_{\mathrm{per}}(Y)\) to \(\tilde{r}\), i.e., \(\|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}\to 0\) as \(h\searrow 0\), by choosing a suitable finite element space \(\Psi_{h}\subset H^{2}_{\mathrm{per},0}(Y)\) (for the method from Lemma 4.1) or \(P_{h}\subset H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) (for the method from Lemma 4.2) corresponding to a shape-regular periodic triangulation \(\mathcal{T}_{h}\) with mesh-size \(h>0\). It is also standard that the results from Lemmas 4.1 and 4.2 imply convergence rates, depending on the regularity of \(\psi\) (respectively \(p\)), by using interpolation estimates. Then, in view of (4.1), we can obtain an approximation to \(r\).
**Theorem 4.1** (approximation of \(r\)).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(r\in L^{2}_{\mathrm{per}}(Y)\) denote the invariant measure to \(A\) and let \(\tilde{r}\in L^{2}_{\mathrm{per}}(Y)\) denote the invariant measure to \(\tilde{A}:=\gamma A\), where \(\gamma\in L^{\infty}_{\mathrm{per}}(Y)\) is defined as in Remark 2.2. Let \((\tilde{r}_{h})_{h>0}\subset L^{2}_{\mathrm{per}}(Y)\) with \(\|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}\to 0\) as \(h\searrow 0\). Then, for \(h>0\) sufficiently small we have that_
\[r_{h}:=c_{h}^{-1}\gamma\tilde{r}_{h}\in L^{2}_{\mathrm{per}}(Y),\qquad c_{h}:= (\gamma,\tilde{r}_{h})_{L^{2}(Y)} \tag{4.5}\]
_is well-defined and satisfies the bound_
\[\|r-r_{h}\|_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}\left(1+\frac{\Lambda^{2 }}{\lambda}\right)\left(1+\frac{\Lambda}{\lambda^{2}}\|r\|_{L^{2}(Y)}\right) \|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}.\]
### Approximation of the homogenized problem
In view of Lemma 4.1, Lemma 4.2, and Theorem 4.1, we now know how to obtain a sequence of approximations \((r_{h})_{h}\subset L^{2}_{\mathrm{per}}(Y)\) to \(r\) with \(\|r-r_{h}\|_{L^{2}(Y)}\to 0\). Recalling that the effective coefficient matrix \(\bar{A}\in\mathbb{R}^{n\times n}_{\mathrm{sym}}\) is defined as \(\bar{A}:=\int_{Y}rA\), we introduce the approximate effective coefficient matrix \(\bar{A}_{h}\in\mathbb{R}^{n\times n}_{\mathrm{sym}}\) by
\[\bar{A}_{h}:=\int_{Y}r_{h}A. \tag{4.6}\]
We have the following error bound for this approximation of the effective coefficient matrix:
**Lemma 4.3** (approximation of \(\bar{A}\)).: _Let \(A\in\mathcal{M}(\lambda,\Lambda)\) for some \(\lambda,\Lambda>0\) and suppose that (2.2) holds. Let \(r\in L^{2}_{\mathrm{per}}(Y)\) denote the invariant measure to \(A\) and let \((r_{h})_{h>0}\subset L^{2}_{\mathrm{per}}(Y)\) be such that \(\|r-r_{h}\|_{L^{2}(Y)}\to 0\) as \(h\searrow 0\). Let \(\bar{A}\in\mathbb{R}^{n\times n}_{\mathrm{sym}}\) denote the effective coefficient matrix corresponding to \(A\), and let \(\bar{A}_{h}\in\mathbb{R}^{n\times n}_{\mathrm{sym}}\) be defined by (4.6). Then, we have that_
\[|\bar{A}-\bar{A}_{h}|\leq\sqrt{n}\Lambda\|r-r_{h}\|_{L^{1}(Y)}\leq\sqrt{n} \Lambda\|r-r_{h}\|_{L^{2}(Y)}\]
_and \(\bar{A}_{h}\) is positive definite for \(h>0\) sufficiently small._
With the approximate effective coefficient matrix at hand, we can obtain an approximation to the solution of the homogenized problem:
**Lemma 4.4** (approximation of \(u\)).: _Suppose that we are in the situation of Lemma 4.3. Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded convex domain, \(f\in L^{2}(\Omega)\), \(g\in H^{2}(\Omega)\), and let \(u\in H^{2}(\Omega)\) be the solution to the homogenized problem (3.12). Then, for \(h>0\) sufficiently small, there exists a unique solution \(u_{h}\in H^{2}(\Omega)\) to_
\[\begin{split}\bar{L}_{h}u_{h}:=-\bar{A}_{h}:D^{2}u_{h}& =f\quad\text{in }\Omega,\\ u_{h}&=g\quad\text{on }\partial\Omega,\end{split} \tag{4.7}\]
_and we have the bound_
\[\|u-u_{h}\|_{H^{2}(\Omega)}\leq C|\bar{A}-\bar{A}_{h}|\left(\|f\|_{L^{2}( \Omega)}+\|g\|_{H^{2}(\Omega)}\right)\]
_for some constant \(C=C(\mathrm{diam}(\Omega),\lambda,\Lambda,n)>0\)._
Let us note that, since \(\bar{A}_{h}\) is a constant symmetric positive definite matrix for \(h>0\) sufficiently small, the solution to (4.7) can be approximated by a standard \(H^{1}(\Omega)\)-conforming finite element method. If an \(H^{2}\)-approximation of \(u_{h}\) is desired, the function \(\tilde{u}_{h}:=u_{h}-g\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) can be approximated by an \(H^{2}(\Omega)\)-conforming finite element method based on the variational formulation \((\bar{L}_{h}\tilde{u}_{h},\bar{L}_{h}v)_{L^{2}(\Omega)}=(f-\bar{L}_{h}g,\bar{ L}_{h}v)_{L^{2}(\Omega)}\) for all \(v\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\).
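Since \(\bar{A}_{h}\) is a constant matrix, the discretization of (4.7) itself is routine. As a purely illustrative sketch (not the finite element methods discussed above), and under the simplifying assumptions that \(\Omega\) is the unit square, \(g=0\), \(f\equiv 1\), and \(\bar{A}_{h}\) is diagonal (as in the example of Section 5 below; NumPy and SciPy are assumed available), a standard five-point finite-difference scheme already suffices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Diagonal entries of the (approximate) effective matrix; here the exact values from Section 5.
a1, a2 = 5.0 / 8.0, 3.0 / 8.0

N = 64                                   # interior grid points per direction on the unit square
h = 1.0 / (N + 1)
f = np.ones((N, N))                      # right-hand side f = 1 (boundary data g = 0)

# Five-point discretization of u -> -a1*u_xx - a2*u_yy with homogeneous Dirichlet data.
I_N = sp.identity(N, format="csr")
D2 = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr") / h ** 2
L_h = a1 * sp.kron(D2, I_N) + a2 * sp.kron(I_N, D2)

u_h = spla.spsolve(L_h.tocsc(), f.ravel()).reshape(N, N)
print("maximum of the discrete solution:", u_h.max())
```

The grid size \(N\) and the right-hand side are arbitrary choices made only to keep the sketch short; for a general convex \(\Omega\) and a full matrix \(\bar{A}_{h}\), the conforming finite element formulations described above should be used instead.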
## 5. Numerical Experiments
In this section, we perform numerical experiments illustrating the theoretical results from Section 4. We take \(n=2\) and choose \(A\in L^{\infty}(\mathbb{R}^{2};\mathbb{R}^{2\times 2}_{\rm sym})\) as
\[A(y):=\operatorname{diag}(1-a(y),a(y)),\qquad a(y_{1},y_{2}):=\frac{3-2\omega (y_{1})\sin(2\pi y_{2})}{8+(\pi^{2}\theta(y_{1})-2\omega(y_{1}))\sin(2\pi y_{ 2})} \tag{5.1}\]
for \(y=(y_{1},y_{2})\in\mathbb{R}^{2}\), where
\[\omega(t):=\operatorname{sign}(\sin(2\pi t)),\qquad\theta(t):=S(t)(1-\omega(t )S(t)),\qquad S(t):=1-2(t-\lfloor t\rfloor)\]
for \(t\in\mathbb{R}\). Note that \(A\in\mathcal{M}(\frac{1}{10},\frac{9}{10})\) since \(a\in L^{\infty}_{\rm per}(Y)\) and \(\frac{1}{10}\leq a\leq\frac{5}{6}\) a.e. in \(\mathbb{R}^{2}\) (note \(|\pi^{2}\theta-2\omega|\leq 2\) a.e. in \(\mathbb{R}\)). In particular, in view of Remark 2.1, \(A\) satisfies the Cordes condition (2.2) with \(\delta=\frac{1}{9}\). An illustration of the discontinuous function \(a\) is provided in Figure 1.
By Theorem 3.7, noting that \(A=C+aM\) with \(C:=\operatorname{diag}(1,0)\) and \(M:=\operatorname{diag}(-1,1)\), the invariant measure to \(A\) is given by
\[r(y)=1+M:D^{2}[T(A,a)](y)=1+\frac{1}{8}(\pi^{2}\theta(y_{1})-2\omega(y_{1})) \sin(2\pi y_{2}) \tag{5.2}\]
for \(y=(y_{1},y_{2})\in\mathbb{R}^{2}\), where we have used that \(T(A,a)\in H^{2}_{{\rm per},0}(Y)\) (recall Definition 3.2) is given by \([T(A,a)](y)=-\frac{1}{32}\theta(y_{1})\sin(2\pi y_{2})\) for \(y=(y_{1},y_{2})\in\mathbb{R}^{2}\). The effective coefficient matrix \(\bar{A}:=\int_{Y}rA\) is then given by

\[\bar{A}=C+(r,a)_{L^{2}(Y)}M=C+\frac{3}{8}M=\frac{1}{8}\text{diag}(5,3).\]

A plot of the discontinuous function \(r\) is provided in Figure 1.

Figure 1. Plot of the function \(a\) defined in (5.1) and plot of the invariant measure \(r\) to \(A=\operatorname{diag}(1-a,a)\) given by (5.2).
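These closed-form expressions are easy to cross-check numerically. The following minimal Python sketch (a consistency check only, independent of the finite element computation reported below; NumPy and the grid size are illustrative assumptions) evaluates (5.1) and (5.2) on a midpoint quadrature grid and confirms \(\int_{Y}r=1\), \((r,a)_{L^{2}(Y)}=\frac{3}{8}\), and \(\bar{A}=\frac{1}{8}\text{diag}(5,3)\) up to quadrature error:

```python
import numpy as np

def omega(t):
    return np.sign(np.sin(2 * np.pi * t))

def S(t):
    return 1.0 - 2.0 * (t - np.floor(t))

def theta(t):
    return S(t) * (1.0 - omega(t) * S(t))

def a_fun(y1, y2):
    # the coefficient a from (5.1)
    s2 = np.sin(2 * np.pi * y2)
    return (3.0 - 2.0 * omega(y1) * s2) / (8.0 + (np.pi ** 2 * theta(y1) - 2.0 * omega(y1)) * s2)

def r_fun(y1, y2):
    # the invariant measure r from (5.2)
    return 1.0 + 0.125 * (np.pi ** 2 * theta(y1) - 2.0 * omega(y1)) * np.sin(2 * np.pi * y2)

# midpoint quadrature on an M x M grid (the nodes avoid the jump set {y1 = 1/2})
M = 400
t = (np.arange(M) + 0.5) / M
Y1, Y2 = np.meshgrid(t, t, indexing="ij")
w = 1.0 / M ** 2

a, r = a_fun(Y1, Y2), r_fun(Y1, Y2)
print("int_Y r     =", np.sum(r) * w)                                  # should be 1
print("(r, a)_L2   =", np.sum(r * a) * w)                              # should be 3/8 = 0.375
print("diag(A_bar) =", np.sum(r * (1.0 - a)) * w, np.sum(r * a) * w)   # should be 5/8, 3/8
```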
In our numerical experiment, we use FreeFem++[21] to approximate \(r\) by Theorem 4.1 in combination with the method from Lemma 4.2, where we choose \(P_{h}\) to be the finite-dimensional subspace of \(H^{1}_{\text{per}}(Y;\mathbb{R}^{2})\) consisting of vector fields whose components are continuous \(Y\)-periodic piecewise affine functions with zero mean over \(Y\) on a periodic shape-regular triangulation \(\mathcal{T}_{h}\) of \(\overline{Y}\) into triangles with vertices \(\{(ih,jh)\}_{1\leq i,j\leq N}\) where \(N=\frac{1}{h}\in\mathbb{N}\). The integrals in the computation of \(c_{h}\) in (4.5) and \(\bar{A}_{h}\) in (4.6) have been obtained using the default quadrature formula in FreeFem++ on a fine mesh. The results are presented in Figure 2.
By Lemma 4.2 and Theorem 4.1 we have that \(\|r-r_{h}\|_{L^{2}(Y)}\leq C\inf_{w_{h}\in P_{h}}\|D(p-w_{h})\|_{L^{2}(Y)}\) for some constant \(C>0\), where \(p\in H^{1}_{\text{per},0}(Y;\mathbb{R}^{2})\) is the function from Lemma 4.2(i). For the approximation of \(r\), we observe convergence of order \(\mathcal{O}(h^{s})\) in the \(L^{2}(Y)\)-norm for some \(s\in(0,1)\) in Figure 2. This indicates that \(p\in H^{1+s}(Y)\) in which case we have by standard interpolation inequalities that \(\inf_{w_{h}\in P_{h}}\|D(p-w_{h})\|_{L^{2}(Y)}\leq Ch^{s}\|p\|_{H^{1+s}(Y)}\). In Figure 2, we observe the superconvergence \(\|r-r_{h}\|_{L^{2}(Y)}=\mathcal{O}(h)\) for \(h=2^{-i}\), \(i\in\mathbb{N}\), i.e., when there is no element of the triangulation whose interior intersects the line \(\{y_{1}=\frac{1}{2}\}\) along which \(r\) has a jump.
For the approximation of \(\bar{A}\), we observe that \(|\bar{A}-\bar{A}_{h}|=\mathcal{O}(h)\), and the superconvergence \(|\bar{A}-\bar{A}_{h}|=\mathcal{O}(h^{2})\) for \(h=2^{-i}\), \(i\in\mathbb{N}\).
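For readers without access to a finite element library, the pipeline of Theorem 4.1 and (4.6) can also be exercised with the \(H^{2}_{\mathrm{per},0}(Y)\)-conforming formulation of Lemma 4.1, taking \(\Psi_{h}\) to be a space of trigonometric polynomials rather than a mesh-based space (any closed subspace is admissible there). The Python sketch below is only an illustration and is not the \(P_{1}\) method behind Figure 2; the mode cutoff \(K\), the quadrature grid, and the use of NumPy are ad hoc choices, and since \(\tilde{r}\) is discontinuous across \(\{y_{1}=\frac{1}{2}\}\) a trigonometric space of this size resolves it only crudely. It assembles the Galerkin system for \(\psi_{h}\), forms \(\tilde{r}_{h}=1-\Delta\psi_{h}\), normalizes via (4.5), and prints the resulting \(\operatorname{diag}(\bar{A}_{h})\) next to the exact value \((\frac{5}{8},\frac{3}{8})\):

```python
import numpy as np

# Coefficient a from (5.1); A = diag(1-a, a), tr(A) = 1, gamma = tr(A)/|A|^2.
def a_fun(y1, y2):
    om = np.sign(np.sin(2 * np.pi * y1))
    s = 1.0 - 2.0 * (y1 - np.floor(y1))
    th = s * (1.0 - om * s)
    s2 = np.sin(2 * np.pi * y2)
    return (3.0 - 2.0 * om * s2) / (8.0 + (np.pi ** 2 * th - 2.0 * om) * s2)

M = 128                                      # midpoint quadrature grid per direction
t = (np.arange(M) + 0.5) / M
Y1, Y2 = np.meshgrid(t, t, indexing="ij")
w = 1.0 / M ** 2
a = a_fun(Y1, Y2)
A11, A22 = 1.0 - a, a
gamma = 1.0 / (A11 ** 2 + A22 ** 2)          # tr(A) = 1 for this coefficient
At11, At22 = gamma * A11, gamma * A22        # A~ = gamma * A

# Trigonometric subspace Psi_h: cos/sin(2*pi*k.y) over a half-space of nonzero modes.
K = 6
modes = [(k1, k2) for k1 in range(-K, K + 1) for k2 in range(0, K + 1) if k2 > 0 or k1 > 0]
AtD2, Lap = [], []                           # (A~ : D^2 e) and (Delta e) per basis function e
for (k1, k2) in modes:
    ph = 2 * np.pi * (k1 * Y1 + k2 * Y2)
    for e in (np.cos(ph), np.sin(ph)):
        fac = -(2 * np.pi) ** 2              # D^2 e = -(2 pi)^2 k k^T e, Delta e = -(2 pi)^2 |k|^2 e
        AtD2.append((fac * (At11 * k1 ** 2 + At22 * k2 ** 2) * e).ravel())
        Lap.append((fac * (k1 ** 2 + k2 ** 2) * e).ravel())
AtD2, Lap = np.array(AtD2), np.array(Lap)

# Galerkin system of Lemma 4.1: b_0(e_i, psi_h) = (gamma, A : D^2 e_i) for all basis functions e_i.
G = (AtD2 @ Lap.T) * w                       # G[i, j] = (A~ : D^2 e_i, Delta e_j)_{L^2(Y)}
F = AtD2.sum(axis=1) * w                     # F[i]    = int_Y A~ : D^2 e_i
c = np.linalg.solve(G, F)

r_tilde = 1.0 - (c[:, None] * Lap).sum(axis=0).reshape(M, M)    # r~_h = 1 - Delta psi_h
r_h = gamma * r_tilde / (np.sum(gamma * r_tilde) * w)           # normalization (4.5)
A_bar_h = np.array([np.sum(r_h * A11), np.sum(r_h * A22)]) * w  # diagonal of (4.6)
print("diag(A_bar_h) =", A_bar_h, " exact: [0.625, 0.375]")
```

The printed value should be read as a plausibility check of the pipeline \(\tilde{r}_{h}\mapsto r_{h}\mapsto\bar{A}_{h}\), not as a quantitative benchmark comparable to the convergence rates reported in Figure 2.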
Figure 2. Approximation error for the approximation of the invariant measure \(r\) and the effective coefficient matrix \(\bar{A}\) from Section 5. We observe two curves, corresponding to whether or not there are elements of the triangulation whose interior intersects the line \(\{y_{1}=\frac{1}{2}\}\), i.e., the set along which \(r\) exhibits a jump.

## 6. Collection of the Proofs

### Proofs for Section 2
Proof of Theorem 2.1.: First, note that there exists a constant \(\tilde{c}_{0}=\tilde{c}_{0}(\operatorname{diam}(\Omega),n)>0\) such that \(\|v\|_{H^{2}(\Omega)}\leq\tilde{c}_{0}\|\Delta v\|_{L^{2}(\Omega)}\) for any \(v\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\). Since \(\tilde{f}_{\varepsilon}:=f-L_{\varepsilon}g\in L^{2}(\Omega)\), we know from Theorem 3 in [27] that there exists a unique function \(\tilde{u}_{\varepsilon}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) such that \(L_{\varepsilon}\tilde{u}_{\varepsilon}=\tilde{f}_{\varepsilon}\) a.e. in \(\Omega\), and we have that
\[\|\tilde{u}_{\varepsilon}\|_{H^{2}(\Omega)}\leq\tilde{c}_{0}C_{\delta}\|\gamma ^{\varepsilon}\tilde{f}_{\varepsilon}\|_{L^{2}(\Omega)},\]
where \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}\) and \(\gamma^{\varepsilon}:=\gamma(\frac{\cdot}{\varepsilon})\) with \(\gamma\) defined in Remark 2.2. As \(\|\gamma\|_{L^{\infty}(\mathbb{R}^{n})}\leq\frac{\Lambda}{\lambda^{2}}\) and \(\|L_{\varepsilon}g\|_{L^{2}(\Omega)}\leq c_{1}\|D^{2}g\|_{L^{2}(\Omega)}\) with \(c_{1}:=\sqrt{n}\Lambda\), it follows that \(\|\tilde{u}_{\varepsilon}\|_{H^{2}(\Omega)}\leq c_{2}(\|f\|_{L^{2}(\Omega)}+\|g\|_{H^{2}(\Omega)})\) with \(c_{2}:=\tilde{c}_{0}(1+c_{1})C_{\delta}\frac{\Lambda}{\lambda^{2}}\). We find that \(u_{\varepsilon}:=\tilde{u}_{\varepsilon}+g\in H^{2}(\Omega)\) is the unique solution to (2.1) in \(H^{2}(\Omega)\) and we have the bound \(\|u_{\varepsilon}\|_{H^{2}(\Omega)}\leq\|\tilde{u}_{\varepsilon}\|_{H^{2}(\Omega)}+\|g\|_{H^{2}(\Omega)}\leq(c_{2}+1)(\|f\|_{L^{2}(\Omega)}+\|g\|_{H^{2}(\Omega)})\).
### Proofs for Section 3.1
Proof of Lemma 3.1.: In view of Remark 2.2 and Remark 3.1 we have for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\) that
\[b_{\mu}(\varphi,\varphi) =\|\mu\varphi-\Delta\varphi\|_{L^{2}(Y)}^{2}+\big{(}(I_{n}-\gamma A ):D^{2}\varphi,\mu\varphi-\Delta\varphi\big{)}_{L^{2}(Y)}\] \[\geq\|\mu\varphi-\Delta\varphi\|_{L^{2}(Y)}^{2}-\sqrt{1-\delta} \|D^{2}\varphi\|_{L^{2}(Y)}\|\mu\varphi-\Delta\varphi\|_{L^{2}(Y)}.\]
Finally, using that for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\) we have that
\[\|D^{2}\varphi\|_{L^{2}(Y)}^{2}=\|\Delta\varphi\|_{L^{2}(Y)}^{2}\leq\mu^{2}\|\varphi\|_{L^{2}(Y)}^{2}+2\mu\|\nabla\varphi\|_{L^{2}(Y)}^{2}+\|\Delta\varphi\|_{L^{2}(Y)}^{2}=\|\mu\varphi-\Delta\varphi\|_{L^{2}(Y)}^{2},\]
the claimed result follows.
### Proofs for Section 3.2
Proof of Theorem 3.1.: (i) Note that \(b_{0}\) is coercive on \(H^{2}_{\mathrm{per},0}(Y)\) by Lemma 3.1 and that
\[|b_{0}(\varphi_{1},\varphi_{2})|=|(\gamma A:D^{2}\varphi_{1},\Delta\varphi_{2 })_{L^{2}(Y)}|\leq\sqrt{n}\frac{\Lambda}{\lambda}\|\Delta\varphi_{1}\|_{L^{2}( Y)}\|\Delta\varphi_{2}\|_{L^{2}(Y)}\quad\forall\varphi_{1},\varphi_{2}\in H^{2}_{ \mathrm{per},0}(Y), \tag{6.1}\]
where we have used Remark 3.1 and that \(\gamma|A|=\frac{\operatorname{tr}(A)}{|A|}\leq\frac{n\Lambda}{\sqrt{n}\lambda}= \sqrt{n}\frac{\Lambda}{\lambda}\) almost everywhere. Since \(l:H^{2}_{\mathrm{per},0}(Y)\to\mathbb{R}\), \(l(\varphi):=\int_{Y}\tilde{A}:D^{2}\varphi\) is a bounded linear map with \(|l(\varphi)|\leq\sqrt{n}\frac{\Lambda}{\lambda}\|\Delta\varphi\|_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{\mathrm{per},0}(Y)\), we deduce from the Lax-Milgram theorem that there exists a unique \(\psi\in H^{2}_{\mathrm{per},0}(Y)\) such that \(b_{0}(\varphi,\psi)=l(\varphi)\) for any \(\varphi\in H^{2}_{\mathrm{per},0}(Y)\) and we have the bound
\[\|\Delta\psi\|_{L^{2}(Y)}^{2}\leq C_{\delta}\,b_{0}(\psi,\psi)=C_{\delta}\,l( \psi)\leq\sqrt{n}\frac{\Lambda}{\lambda}C_{\delta}\|\Delta\psi\|_{L^{2}(Y)}.\]
(ii) Since \(\tilde{r}:=1-\Delta\psi\in L^{2}_{\mathrm{per}}(Y)\), \(\int_{Y}\tilde{r}=1\), and \((\tilde{r},\tilde{A}:D^{2}\varphi)_{L^{2}(Y)}=l(\varphi)-b_{0}(\varphi,\psi)=0\) for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\), we have that \(\tilde{r}\) is an invariant measure to \(\tilde{A}\). Suppose \(\hat{r}\in L^{2}_{\mathrm{per}}(Y)\) is another invariant measure to \(\tilde{A}\). Then, as \(\int_{Y}\hat{r}=1\), there exists a unique \(\xi\in H^{2}_{\mathrm{per},0}(Y)\) such that \(\Delta\xi=1-\hat{r}\). Since \(b_{0}(\varphi,\xi)=l(\varphi)-(\hat{r},\tilde{A}:D^{2}\varphi)_{L^{2}(Y)}=l(\varphi)\) for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\), we have that \(\xi=\psi\) and thus, \(\hat{r}=1-\Delta\psi=\tilde{r}\). Therefore, \(\tilde{r}\) is the unique invariant measure to \(\tilde{A}\).
It remains to show that \(\tilde{r}\geq 0\) almost everywhere. For \(k\in\mathbb{N}\) we set \(a_{k,ij}:=a_{ij}\ast w_{k}\) for \(1\leq i,j\leq n\), where \(w_{k}:=k^{n}w(k\,\cdot)\) for some \(w\in C^{\infty}_{c}(\mathbb{R}^{n})\) with \(w\geq 0\) in \(\mathbb{R}^{n}\) and \(\int_{\mathbb{R}^{n}}w=1\). Then, \(A_{k}:=(a_{k,ij})_{1\leq i,j\leq n}\in C^{\infty}_{\mathrm{per}}(Y;\mathbb{R}^{n\times n}_{\mathrm{sym}})\cap\mathcal{M}(\lambda,\Lambda)\) for all \(k\in\mathbb{N}\) and we have that \(\lim_{k\to\infty}\|A_{k}-A\|_{L^{2}(Y)}=0\). We set \(\gamma_{k}:=\frac{\operatorname{tr}(A_{k})}{|A_{k}|^{2}}\in C^{\infty}_{\mathrm{per}}(Y)\) and note that \(\frac{\lambda}{\Lambda^{2}}\leq\gamma_{k}\leq\frac{\Lambda}{\lambda^{2}}\) for any \(k\in\mathbb{N}\), as observed in Remark 2.2. Let \(\tilde{r}_{k}\in L^{2}_{\mathrm{per}}(Y)\) be the unique invariant measure to \(\tilde{A}_{k}:=\gamma_{k}A_{k}\). Since \(\tilde{A}_{k}\in C^{\infty}_{\mathrm{per}}(Y)\), we have that \(\tilde{r}_{k}\in C^{\infty}_{\mathrm{per}}(Y)\) and it is known (see e.g., [6]) that \(\tilde{r}_{k}>0\) in \(\mathbb{R}^{n}\). Setting \(q_{k}:=\tilde{r}_{k}\gamma_{k}\in C^{\infty}_{\mathrm{per}}(Y)\), we have
that \(q_{k}\geq\frac{\lambda}{\Lambda^{2}}\tilde{r}_{k}>0\) in \(\mathbb{R}^{n}\) and, by (i) and the first part of (ii), we find \(\|q_{k}\|_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}\|\tilde{r}_{k}\|_{L^{2}(Y)} \leq C\) for all \(k\in\mathbb{N}\) for some constant \(C=C(\lambda,\Lambda,n,\delta)>0\). Therefore, there exists \(q\in L^{2}_{\rm per}(Y)\) such that, upon passing to a subsequence, \(q_{k}\rightharpoonup q\) weakly in \(L^{2}(Y)\). Thus, for any \(\varphi\in C^{\infty}_{\rm per}(Y)\) we have
\[0=(q_{k},A_{k}:D^{2}\varphi)_{L^{2}(Y)}=(q_{k},A:D^{2}\varphi)_{L^{2}(Y)}+(q_{ k},(A_{k}-A):D^{2}\varphi)_{L^{2}(Y)}\underset{k\to\infty}{\longrightarrow}(q,A:D^{2} \varphi)_{L^{2}(Y)}.\]
It follows that \(q\in L^{2}_{\rm per}(Y)\) is a solution to \(-D^{2}:(qA)=0\) in \(Y\). Note that \(q\geq 0\) a.e. in \(\mathbb{R}^{n}\) and \(\int_{Y}q\geq\frac{\lambda}{\Lambda^{2}}>0\) as \(q_{k}>0\) in \(\mathbb{R}^{n}\) and \(\int_{Y}q_{k}\geq\frac{\lambda}{\Lambda^{2}}\int_{Y}\tilde{r}_{k}=\frac{ \lambda}{\Lambda^{2}}\) for any \(k\in\mathbb{N}\). Recalling that \(\gamma>0\) a.e. in \(\mathbb{R}^{n}\), we see that \(\int_{Y}\frac{q}{\gamma}=\|\frac{q}{\gamma}\|_{L^{1}(Y)}>0\). Setting \(\tilde{q}:=(\int_{Y}\frac{q}{\gamma})^{-1}\frac{q}{\gamma}\in L^{2}_{\rm per }(Y)\) and using uniqueness of the invariant measure to \(\tilde{A}\), we see that \(\tilde{r}=\tilde{q}\geq 0\) a.e. in \(\mathbb{R}^{n}\).
(iii) This follows immediately from (ii) and the fact that \(\frac{\lambda}{\Lambda^{2}}\leq\gamma\leq\frac{\Lambda}{\lambda^{2}}\) a.e. in \(\mathbb{R}^{n}\). Note that the bound (3.3) follows from \(\|r\|_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}(\gamma,\tilde{r})_{L^{2}(Y)}^{ -1}\|\tilde{r}\|_{L^{2}(Y)}\) and \((\gamma,\tilde{r})_{L^{2}(Y)}\geq\frac{\lambda}{\Lambda^{2}}\int_{Y}\tilde{r} =\frac{\lambda}{\Lambda^{2}}\), where we have used that \(\tilde{r}\geq 0\) almost everywhere and \(\int_{Y}\tilde{r}=1\).
Proof of Theorem 3.2.: (i) If \(q\in L^{2}_{\rm per}(Y)\) is a solution to (3.1), then \(\int_{Y}f=(f,1)_{L^{2}(Y)}=(q,-A:D^{2}1)_{L^{2}(Y)}=0\), where \(1\) denotes the constant function with value one. Now suppose that \(\int_{Y}f=0\). Then, by Lemma (3.1) and the Lax-Milgram theorem, there exists a unique \(\eta\in H^{2}_{{\rm per},0}(Y)\) such that \(b_{0}(\varphi,\eta)=(f,\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{{\rm per},0}(Y)\). Equivalently, since \(\int_{Y}f=0\), we have \(b_{0}(\varphi,\eta)=(f,\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{{\rm per}}(Y)\). Setting \(q:=-\gamma\Delta\eta\in L^{2}_{\rm per}(Y)\), we have that \((q,-A:D^{2}\varphi)_{L^{2}(Y)}=b_{0}(\varphi,\eta)=(f,\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{{\rm per}}(Y)\), i.e., \(q\) is a solution to (3.1).
(ii) Suppose that \(q_{1},q_{2}\in L^{2}_{\rm per}(Y)\) are solutions to (3.1). Then, \(w:=q_{1}-q_{2}\in L^{2}_{\rm per}(Y)\) satisfies \((w,-A:D^{2}\varphi)_{L^{2}(Y)}=0\) for all \(\varphi\in H^{2}_{{\rm per}}(Y)\) and \(\int_{Y}w=c\). This gives \(w=cr\). Indeed, by uniqueness of \(r\), we see that if \(c\neq 0\) then \(r=c^{-1}w\), and if \(c=0\) then \(r=w+r\), i.e., \(w=0=cr\).
(iii) Note that \(q_{0}=-\gamma\Delta\eta+(\gamma,\Delta\eta)_{L^{2}(Y)}r\), where \(\eta\in H^{2}_{{\rm per},0}(Y)\) is as in the proof of (i). By Lemma 3.1 and Remark 3.1 we have \(\|\Delta\eta\|_{L^{2}(Y)}^{2}\leq C_{\delta}b_{0}(\eta,\eta)=C_{\delta}(f,\eta) _{L^{2}(Y)}\leq\frac{n}{\pi^{2}}C_{\delta}\|f\|_{L^{2}(Y)}\|\Delta\eta\|_{L^{2} (Y)}\). Using that \(\frac{\lambda}{\Lambda^{2}}\leq\gamma\leq\frac{\Lambda}{\lambda^{2}}\) a.e. in \(\mathbb{R}^{n}\), we see that \(\|q_{0}\|_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}\left(1+\|r\|_{L^{2}(Y)} \right)\|\Delta\eta\|_{L^{2}(Y)}\) which yields the claimed result in combination with \(\|\Delta\eta\|_{L^{2}(Y)}\leq\frac{n}{\pi^{2}}C_{\delta}\|f\|_{L^{2}(Y)}\).
Proof of Theorem 3.3.: (i,ii) If \(v\in H^{2}_{\rm per}(Y)\) solves (3.2), then \((f,r)_{L^{2}(Y)}=(-A:D^{2}v,r)_{L^{2}(Y)}=0\). Now suppose \((f,r)_{L^{2}(Y)}=0\). By the Lax-Milgram theorem, in view of Lemma 3.1, there exists a unique \(v\in H^{2}_{{\rm per},0}(Y)\) such that \(b_{0}(v,\varphi)=(\gamma f,-\Delta\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{{\rm per},0}(Y)\). Equivalently, since \(-\Delta:H^{2}_{{\rm per},0}(Y)\to L^{2}_{{\rm per},0}(Y)\) is a bijection, we have that
\[(-\gamma A:D^{2}v,\phi)_{L^{2}(Y)}=(\gamma f,\phi)_{L^{2}(Y)}\qquad\forall\phi \in L^{2}_{{\rm per},0}(Y). \tag{6.2}\]
We claim that this is equivalent to
\[(-\gamma A:D^{2}v,\phi)_{L^{2}(Y)}=(\gamma f,\phi)_{L^{2}(Y)}\qquad\forall\phi \in L^{2}_{{\rm per}}(Y). \tag{6.3}\]
Indeed, to see that (6.2) implies (6.3), we write \(\phi\in L^{2}_{\rm per}(Y)\) as \(\phi=\phi_{1}+\phi_{2}\) with \(\phi_{1}:=\phi-c\tilde{r}\in L^{2}_{{\rm per},0}(Y)\) and \(\phi_{2}:=c\tilde{r}\), where \(c:=\int_{Y}\phi\) and \(\tilde{r}:=(\frac{1}{\gamma},r)_{L^{2}(Y)}^{-1}\frac{r}{\gamma}\in L^{2}_{{\rm per }}(Y)\) is the invariant measure to \(\tilde{A}:=\gamma A\), and use that \((-\tilde{A}:D^{2}v,\phi_{2})_{L^{2}(Y)}=0=c(\frac{1}{\gamma},r)_{L^{2}(Y)}^{-1}(f,r)_{L^{2}(Y)}=(\gamma f,\phi_{2})_{L^{2}(Y)}\). Noting that (6.3) is equivalent to \(-\gamma A:D^{2}v=\gamma f\) a.e. in \(\mathbb{R}^{n}\), which in turn is equivalent to \(-A:D^{2}v=f\) a.e. in \(\mathbb{R}^{n}\) since \(\gamma\geq\frac{\lambda}{\Lambda^{2}}>0\) a.e. in \(\mathbb{R}^{n}\), we immediately obtain (i)-(ii).
(iii) Note that \(v_{0}\) is the unique element in \(H^{2}_{{\rm per},0}(Y)\) such that \(b_{0}(v_{0},\varphi)=-(\gamma f,\Delta\varphi)_{L^{2}(Y)}\) for any \(\varphi\in H^{2}_{{\rm per}}(Y)\). Hence, \(\|\Delta v_{0}\|_{L^{2}(Y)}^{2}\leq C_{\delta}b_{0}(v_{0},v_{0})=-C_{\delta}(\gamma f,\Delta v_{0})_{L^{2}(Y)}\leq\frac{\Lambda}{\lambda^{2}}C_{\delta}\|f\|_{L^{2}(Y)}\|\Delta v_{0}\|_{L^{2}(Y)}\), which yields the claimed bound.
### Proofs for Section 3.3
Proof of Theorem 3.4.: First, recall that by the uniform \(H^{2}\)-bound (2.3) there exists a function \(u\in H^{2}(\Omega)\) with \(u-g\in H^{1}_{0}(\Omega)\) such that, upon passing to a subsequence, (3.5) holds. Passing to the limit in (3.10) and using (3.11) we find that \(u\) is a solution to (3.12). Noting that
\[\lambda|\xi|^{2}=\lambda|\xi|^{2}(r,1)_{L^{2}(Y)}\leq(r,A\xi\cdot\xi)_{L^{2}(Y )}=\bar{A}\xi\cdot\xi\leq\Lambda|\xi|^{2}(r,1)_{L^{2}(Y)}=\Lambda|\xi|^{2}\quad \forall\xi\in\mathbb{R}^{n},\]
where we have used that \(r\geq 0\) almost everywhere and \(A\in\mathcal{M}(\lambda,\Lambda)\), we see that \(u\) is the unique solution in \(H^{2}(\Omega)\) to (3.12). We conclude that the whole sequence \((u_{\varepsilon})_{\varepsilon>0}\) converges weakly in \(H^{2}(\Omega)\) to \(u\).
### Proofs for Section 3.4
Proof of Theorem 3.5.: We have \(r_{\varepsilon}:=u_{\varepsilon}-u-\varepsilon^{2}(\eta_{\varepsilon}+\theta _{\varepsilon})\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) and \(L_{\varepsilon}r_{\varepsilon}=\varepsilon A^{\varepsilon}:P_{\varepsilon}\). By the proof of Theorem 2.1 and using that \(\gamma|A|=\frac{\operatorname{tr}(A)}{|A|}\leq\frac{n\Lambda}{\sqrt{n} \lambda}=\sqrt{n}\frac{\Lambda}{\lambda}\) almost everywhere, we then have the bound
\[\|r_{\varepsilon}\|_{H^{2}(\Omega)}\leq C_{\delta}\tilde{c}_{0}\|\gamma^{ \varepsilon}L_{\varepsilon}r_{\varepsilon}\|_{L^{2}(\Omega)}\leq\varepsilon C _{\delta}\tilde{c}_{0}\|\gamma|A|\|_{L^{\infty}(\mathbb{R}^{n})}\|P_{ \varepsilon}\|_{L^{2}(\Omega)}\leq\varepsilon C_{\delta}\tilde{c}_{0}\sqrt{n} \frac{\Lambda}{\lambda}\|P_{\varepsilon}\|_{L^{2}(\Omega)},\]
where \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}\) and \(\tilde{c}_{0}=\tilde{c}_{0}(\operatorname{diam}(\Omega),n)>0\) is as in the proof of Theorem 2.1. The claim follows with \(c_{0}:=\tilde{c}_{0}\sqrt{n}\).
Proof of Theorem 3.6.: We set \(d_{\varepsilon}:=r_{\varepsilon}+2\varepsilon(z_{\varepsilon}-\varepsilon^{2} (\tilde{\eta}_{\varepsilon}+\tilde{\theta}_{\varepsilon}))\) where \(r_{\varepsilon}:=u_{\varepsilon}-u-\varepsilon^{2}(\eta_{\varepsilon}+\theta _{\varepsilon})\). We have that \(d_{\varepsilon}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) and, writing \(c_{j}^{kl}:=(Ae_{j}\cdot\nabla v_{kl},r)_{L^{2}(Y)}\) for \(1\leq j,k,l\leq n\), there holds
\[L_{\varepsilon}d_{\varepsilon} =\varepsilon A^{\varepsilon}:P_{\varepsilon}+2\varepsilon\left( -\sum_{j,k,l=1}^{n}c_{j}^{kl}\partial_{jkl}^{3}u+\varepsilon^{2}A^{ \varepsilon}:D^{2}\tilde{\eta}_{\varepsilon}\right)\] \[=\varepsilon A^{\varepsilon}:P_{\varepsilon}-2\varepsilon\sum_{j, k,l=1}^{n}[c_{j}^{kl}-A:D^{2}\chi_{jkl}]^{\varepsilon}\partial_{jkl}^{3}u+2 \varepsilon^{2}\sum_{i,j=1}^{n}a_{ij}^{\varepsilon}(2[\partial_{i}X]^{ \varepsilon}:D^{3}(\partial_{j}u)+\varepsilon X^{\varepsilon}:D^{3}(\partial _{ij}^{2}u))\] \[=\varepsilon^{2}A^{\varepsilon}:Q_{\varepsilon},\]
where we have used that \(\chi_{jkl}=T(A,Ae_{j}\cdot\nabla v_{kl})\) and \(L_{\varepsilon}r_{\varepsilon}=\varepsilon A^{\varepsilon}:P_{\varepsilon}\) with \(P_{\varepsilon}\) defined as in Theorem 3.5. By the proof of Theorem 2.1 and using that \(\gamma|A|=\frac{\operatorname{tr}(A)}{|A|}\leq\frac{n\Lambda}{\sqrt{n}\lambda }=\sqrt{n}\frac{\Lambda}{\lambda}\) almost everywhere, we then have the bound
\[\|d_{\varepsilon}\|_{H^{2}(\Omega)}\leq C_{\delta}\tilde{c}_{0}\|\gamma^{ \varepsilon}L_{\varepsilon}d_{\varepsilon}\|_{L^{2}(\Omega)}\leq\varepsilon^{ 2}C_{\delta}\tilde{c}_{0}\|\gamma|A|\|_{L^{\infty}(\mathbb{R}^{n})}\|Q_{ \varepsilon}\|_{L^{2}(\Omega)}\leq\varepsilon^{2}C_{\delta}\tilde{c}_{0}\sqrt{n }\frac{\Lambda}{\lambda}\|Q_{\varepsilon}\|_{L^{2}(\Omega)},\]
where \(C_{\delta}:=(1-\sqrt{1-\delta})^{-1}\) and \(\tilde{c}_{0}=\tilde{c}_{0}(\operatorname{diam}(\Omega),n)>0\) is as in the proof of Theorem 2.1. The claim follows with \(c_{0}:=\tilde{c}_{0}\sqrt{n}\).
### Proofs for Section 3.5
The proofs of Theorem 3.7 and Theorem 3.8 are almost identical to the proofs in [17] and hence omitted.
### Proofs for Section 4
Proof of Lemma 4.1.: In view of Lemma 3.1, we note that existence and uniqueness of \(\psi_{h}\in\Psi_{h}\) satisfying (4.3) follows from the Lax-Milgram theorem. Noting that \(\psi-\psi_{h}\in H^{2}_{\mathrm{per},0}(Y)\) and using Lemma 3.1, Galerkin orthogonality, and (6.1), we have for any \(\varphi_{h}\in\Psi_{h}\) that
\[C_{\delta}^{-1}\|\Delta(\psi-\psi_{h})\|_{L^{2}(Y)}^{2} \leq b_{0}(\psi-\psi_{h},\psi-\psi_{h})\] \[=b_{0}(\psi-\varphi_{h},\psi-\psi_{h})\leq\sqrt{n}\frac{\Lambda}{ \lambda}\|\Delta(\psi-\varphi_{h})\|_{L^{2}(Y)}\|\Delta(\psi-\psi_{h})\|_{L^{ 2}(Y)},\]
which yields the claimed result. Note that \(\tilde{r}-\tilde{r}_{h}=-\Delta(\psi-\psi_{h})\) since \(\tilde{r}_{h}=1-\Delta\psi_{h}\) and \(\tilde{r}=1-\Delta\psi\) by Theorem 3.1.
Proof of Lemma 4.2.: First, we note that for any \(w\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\) there holds
\[\|\nabla\cdot w\|_{L^{2}(Y)}^{2}+\frac{1}{2}\|Dw-Dw^{\mathrm{T}}\|_{L^{2}(Y)}^ {2}=\|Dw\|_{L^{2}(Y)}^{2}+\|\nabla\cdot w\|_{L^{2}(Y)}^{2}-(Dw,Dw^{\mathrm{T}} )_{L^{2}(Y)}=\|Dw\|_{L^{2}(Y)}^{2},\]
where the second equality follows from integration by parts and a density argument. We find that
\[b(w,w) =\|Dw\|_{L^{2}(Y)}^{2}+((\tilde{A}-I_{n}):Dw,\nabla\cdot w)_{L^{2} (Y)} \tag{6.4}\] \[\geq\|Dw\|_{L^{2}(Y)}^{2}-\sqrt{1-\delta}\|\nabla\cdot w\|_{L^{2}( Y)}\|Dw\|_{L^{2}(Y)}\geq C_{\delta}^{-1}\|Dw\|_{L^{2}(Y)}^{2}\]
for any \(w\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\), where \(C_{\delta}=(1-\sqrt{1-\delta})^{-1}\). Further, using that \(\|\gamma|A|\|_{L^{\infty}(\mathbb{R}^{n})}\leq\sqrt{n}\frac{\Lambda}{\lambda}\) and setting \(C_{n,\lambda,\Lambda}:=1+\sqrt{n}\frac{\Lambda}{\lambda}\), we have for any \(w\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\) that
\[|b(w,\tilde{w})| \leq\sqrt{n}\frac{\Lambda}{\lambda}\|Dw\|_{L^{2}(Y)}\|\nabla \cdot\tilde{w}\|_{L^{2}(Y)}+\frac{1}{2}\|Dw-Dw^{\mathrm{T}}\|_{L^{2}(Y)}\|D \tilde{w}-D\tilde{w}^{\mathrm{T}}\|_{L^{2}(Y)} \tag{6.5}\] \[\leq C_{n,\lambda,\Lambda}\|Dw\|_{L^{2}(Y)}\|D\tilde{w}\|_{L^{2}( Y)},\]
where we have used in the second inequality that \(\frac{1}{2}\|Dw-Dw^{\mathrm{T}}\|_{L^{2}(Y)}^{2}\leq\|Dw\|_{L^{2}(Y)}^{2}\) for any \(w\in H^{1}_{\mathrm{per}}(Y;\mathbb{R}^{n})\). Since \(w\mapsto\|Dw\|_{L^{2}(Y)}\) defines a norm on \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\), it follows that \(b\) defines a bounded coercive bilinear form on \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\). By the Lax-Milgram theorem, using that \(w\mapsto(\gamma,A:Dw)_{L^{2}(Y)}\) defines a bounded linear functional on \(H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\), we have that there exist unique \(p\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) and \(p_{h}\in P_{h}\) such that
\[b(w,p)=(\gamma,A:Dw)_{L^{2}(Y)}\quad\forall w\in H^{1}_{\mathrm{per},0}(Y; \mathbb{R}^{n}),\qquad b(w_{h},p_{h})=(\gamma,A:Dw_{h})_{L^{2}(Y)}\quad\forall w _{h}\in P_{h}.\]
Noting that for any \(\varphi\in H^{2}_{\mathrm{per}}(Y)\) we have that \(\nabla\varphi\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\), we see that
\[(1-\nabla\cdot p,-\tilde{A}:D^{2}\varphi)_{L^{2}(Y)}=b(\nabla\varphi,p)-( \gamma,A:D^{2}\varphi)_{L^{2}(Y)}=0\qquad\forall\varphi\in H^{2}_{\mathrm{per} }(Y).\]
Therefore, using that \(\nabla\cdot p\in L^{2}_{\mathrm{per},0}(Y)\), we have that \(\tilde{r}=1-\nabla\cdot p\). It only remains to prove (iii). Noting that \(p-p_{h}\in H^{1}_{\mathrm{per},0}(Y;\mathbb{R}^{n})\) and using (6.4), Galerkin orthogonality, and (6.5), we have that
\[C_{\delta}^{-1}\|D(p-p_{h})\|_{L^{2}(Y)}^{2} \leq b(p-p_{h},p-p_{h})\] \[=b(p-w_{h},p-p_{h})\leq C_{n,\lambda,\Lambda}\|D(p-w_{h})\|_{L^{2} (Y)}\|D(p-p_{h})\|_{L^{2}(Y)}\]
for any \(w_{h}\in P_{h}\). Since \(\|\nabla\cdot(p-p_{h})\|_{L^{2}(Y)}\leq\|D(p-p_{h})\|_{L^{2}(Y)}\), this yields the claimed result. Note that \(\tilde{r}-\tilde{r}_{h}=-\nabla\cdot(p-p_{h})\) since \(\tilde{r}_{h}=1-\nabla\cdot p_{h}\) and \(\tilde{r}=1-\nabla\cdot p\)
Proof of Theorem 4.1.: First, recall that \(r=c^{-1}\gamma\tilde{r}\) with \(c:=(\gamma,\tilde{r})_{L^{2}(Y)}\in[\frac{\lambda}{\Lambda^{2}},\frac{\Lambda}{\lambda^{2}}]\), where we have used the bounds on \(\gamma\) from Remark 2.2, \(\int_{Y}\tilde{r}=1\), and \(\tilde{r}\geq 0\) almost everywhere. Noting that \(|c-c_{h}|\leq\frac{\Lambda}{\lambda^{2}}\|\tilde{r}-\tilde{r}_{h}\|_{L^{2}(Y)}\), we see that \(c_{h}\to c\) and hence, \(r_{h}\) is well-defined for \(h>0\) sufficiently small. Further, for \(h>0\) sufficiently small, there holds
\[\|r-r_{h}\|_{L^{2}(Y)}=\|c^{-1}\gamma\tilde{r}-c_{h}^{-1}\gamma\tilde{r}_{h}\|_ {L^{2}(Y)}\leq c_{h}^{-1}\|\gamma(\tilde{r}-\tilde{r}_{h})\|_{L^{2}(Y)}+|c^{- 1}-c_{h}^{-1}|\,\|\gamma\tilde{r}\|_{L^{2}(Y)},\]
and using that \(\gamma\tilde{r}=cr\), we find
\[\|r-r_{h}\|_{L^{2}(Y)} \leq c_{h}^{-1}\left(\|\gamma(\tilde{r}-\tilde{r}_{h})\|_{L^{2}( Y)}+|c-c_{h}|\,\|r\|_{L^{2}(Y)}\right)\] \[\leq\frac{\Lambda}{\lambda^{2}}c_{h}^{-1}\left(\|\tilde{r}- \tilde{r}_{h}\|_{L^{2}(Y)}+\frac{\Lambda}{\lambda^{2}}\|\tilde{r}-\tilde{r}_{ h}\|_{L^{2}(Y)}\|r\|_{L^{2}(Y)}\right)\]
Noting that \(c_{h}^{-1}\leq\frac{\Lambda^{2}}{\lambda}+1\) for \(h>0\) sufficiently small completes the proof.
Proof of Lemma 4.3.: We have \(|\bar{A}-\bar{A}_{h}|=\left|\int_{Y}(r-r_{h})A\right|\leq\int_{Y}|r-r_{h}||A| \leq\sqrt{n}\Lambda\|r-r_{h}\|_{L^{1}(Y)}\), and the claim follows.
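For a concrete picture of the quantity estimated in Lemma 4.3, the following minimal sketch approximates \(\bar{A}_{h}=\int_{Y}r_{h}A\) by midpoint quadrature on the unit cell \(Y=(0,1)^{2}\); the fields \(A\) and \(r_{h}\) below are illustrative placeholders rather than the finite element quantities of Section 4.

```python
import numpy as np

# Midpoint-rule sketch of the approximated effective coefficient
# A_bar_h = \int_Y r_h(y) A(y) dy on Y = (0, 1)^2 (cf. Lemma 4.3).
m = 64
pts = np.linspace(0.0, 1.0, m, endpoint=False) + 0.5 / m   # cell midpoints
y1, y2 = np.meshgrid(pts, pts)

# illustrative symmetric, periodic diffusion matrix A(y) (placeholder data)
a11 = 2.0 + np.sin(2 * np.pi * y1) * np.sin(2 * np.pi * y2)
a12 = 0.1 * np.cos(2 * np.pi * y1)
a22 = 2.0 - 0.5 * np.sin(2 * np.pi * y2)
A = np.stack([np.stack([a11, a12]), np.stack([a12, a22])])  # shape (2, 2, m, m)

# illustrative nonnegative approximate invariant measure with unit mass
r_h = 1.0 + 0.2 * np.cos(2 * np.pi * y1)
r_h /= r_h.mean()                                           # enforce \int_Y r_h = 1

A_bar_h = (r_h * A).mean(axis=(-2, -1))                     # entrywise quadrature
print(A_bar_h)
```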
Proof of Lemma 4.4.: The proof of this result is analogous to the proof of Lemma 3.4 in [9] and hence omitted.
## Acknowledgments
The author gratefully acknowledges helpful conversations with Professor Yves Capdeboscq (Universite de Paris) and Professor Hung V. Tran (University of Wisconsin Madison) during the preparation of this work.
| We study the equation \(-A(\cdot/\varepsilon):D^{2}u_{\varepsilon}=f\), posed in a bounded convex domain \(\Omega\subset\mathbb{R}^{n}\) subject to a Dirichlet boundary condition, together with the numerical approximation of the corresponding homogenized problem. Here the diffusion matrix \(A\) is periodic, symmetric, and essentially bounded, and satisfies the Cordes condition when \(n>2\). In the first part
2308.16524 | Zonal density staircase formation in collisional drift-wave turbulence | Turbulence-driven quasi-stationary structures known as 'staircase' are
investigated using the collisional drift-wave model. Two-dimensional
simulations show that the ability of zonal density corrugations to suppress
turbulence is affected by the adiabaticity parameter (inversely proportional
to collision frequency). As the adiabaticity parameter increases, zonal density
becomes less efficient at suppressing turbulence, and zonal flows become
dominant in the near-adiabatic regime. The nonlinear transport crossphase
displays radial modulations associated with zonal density. | M. Leconte, T. Kobayashi | 2023-08-31T08:16:07 | http://arxiv.org/abs/2308.16524v3 | # Zonal density staircase formation in collisional drift-wave turbulence
###### Abstract
Turbulence-driven quasi-stationary structures known as 'staircase' are investigated using the collisional drift-wave model. Two-dimensional simulations show that the ability of zonal density corrugations to suppress turbulence is affected by the adiabaticity parameter (inversely proportional to collision frequency). As the adiabaticity parameter increases, zonal density becomes less efficient at suppressing turbulence, and zonal flows become dominant in the near-adiabatic regime. The nonlinear transport crossphase displays radial modulations associated with zonal density.
## 1 Introduction
The High-confinement regime (H-mode) is important for future fusion devices like ITER. It has been the focus of research for more than 30 years. See e.g. Refs [1, 2, 3] for a review. The presence of turbulence-driven flows,
i.e. zonal flows (ZF) have been shown to facilitate access to H-mode, by shearing apart turbulence eddies [4]. Contrary to models based on waves in random media, zonal flows are not random spatially, but instead form well-defined patterns, similar to those found in other non-equilibrium systems [5]. The most common radial pattern first observed in gyrokinetic simulations of ion-temperature gradient driven (ITG) turbulence has been dubbed '\(E\times B\) staircase' [6; 7; 8; 9], due to its quasi-periodic nature, for which the leading explanation is that zonal flows are responsible for the pattern, and that the density and temperature profile corrugations and hence transport modulation are a consequence of the zonal flow pattern directly suppressing the turbulence intensity [10; 11]. Similar patterns were observed in the KSTAR tokamak and reproduced by global \(\delta f\) gyrokinetic simulations of collisionless trapped-electron modes (CTEM) [12; 13; 14]. However, it is well-known that turbulent transport does not only depend on the turbulence intensity but also on the phase-angle, i.e. transport crossphase [15; 16; 17] between electric potential and the advected quantity driving the turbulence, e.g. density, ion temperature, electron temperature, etc...For particle transport - on which we focus here - the turbulent particle flux can be written in the form: \(\Gamma=\sum_{k}k_{y}\sqrt{|n_{k}|^{2}}\sqrt{|\phi_{k}|^{2}}\gamma_{\rm coh }^{2}\sin\theta_{k}\)[16]. Here, \(|n_{k}|^{2}\) and \(|\phi_{k}|^{2}\) denote the power spectrum of density and potential fluctuations, respectively, \(\gamma_{\rm coh}\) is the coherence, assumed here to be \(\gamma_{\rm coh}\simeq 1\) for simplicity, and \(\theta_{k}\) is the transport crossphase (crossphase spectrum), i.e. the phase-angle between density and potential fluctuations. In most research works, it is often assumed that the transport crossphase between density and potential is linear, leading to the so-called '\(i\delta\)' prescription. There are some exceptions, e.g. Refs [18; 19]. In gyrokinetic simulations, the nonlinear crossphase in wavenumber space appears to closely match its linear value, supporting the \(i\delta\) prescription for ITG and TEM turbulence [20]. Gyrokinetic simulations of collisionless trapped-electron mode [21] revealed that - in certain parameter regimes e.g. cold ions relative to electrons - zonal flows are ineffective at suppressing turbulence, and instead zonal density generation becomes the dominant saturation mechanism. In previous work, one of the author (M.L), showed that zonal flows can affect the transport crossphase, in the framework of a parametric instability analysis of a fluid model for dissipative trapped-electron mode [22]. Additionally, an equation was derived for the dynamics of zonal density amplitude. Ref. [23] derived an equation for zonal density staircase evolution and for the correlation between zonal density and zonal flows. In
[24], we presented numerical results showing that the transport crossphase might in fact be nonlinear. This nonlinearity manifests itself mostly via radial modulations of the crossphase, not predicted by linear theory. This radial modulation is responsible for the generation of zonal density corrugations. Such corrugations of the density profile were observed in Ref. [25] during limit cycle oscillations preceding the L-H transition in JFT-2M, using the heavy ion beam probe diagnostic (HIBP). We present here an extended study of zonal density staircase formation and crossphase modulations. The effect of the adiabaticity parameter - inversely proportional to collisionality - on the ability of zonal density corrugations to suppress turbulence is investigated. The article is organized as follows: In section 2, we present the extended wave-kinetic model for collisional drift-waves, including density profile corrugations (zonal density). In section 3, we identify the associated energy transfers. In section 4, we present numerical results of 2D drift-fluid simulations and compare with the analytical model of section 2. The results are discussed in section 5 and finally we present a conclusion.
## 2 Model
We analyse the two-field modified Hasegawa-Wakatani model, a basic representative model for edge turbulence in magnetized plasmas [26, 27, 28]:
\[\frac{\partial n}{\partial t}+\{\phi,n\}+\kappa\frac{\partial \phi}{\partial y} = -\alpha(\tilde{n}-\tilde{\phi}), \tag{1}\] \[\frac{\partial\nabla_{\perp}^{2}\phi}{\partial t}+\{\phi,\nabla_ {\perp}^{2}\phi\} = -\alpha(\tilde{n}-\tilde{\phi}), \tag{2}\]
where \(n\) is the electron density and \(\phi\) is the electric potential, \(\{f,g\}=\frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{ \partial f}{\partial y}\frac{\partial g}{\partial x}\) denote Poisson brackets. Here, \(\tilde{n}=n-\langle n\rangle\) and \(\tilde{\phi}=\phi-\langle\phi\rangle\) denote the non-zonal components of the fields, and \(\langle\ldots\rangle=(1/L_{y})\int\ldots dy\) is the zonal average. The quantity \(\kappa\) is the normalized density-gradient, and \(\alpha=k_{\parallel}^{2}v_{Te}^{2}/\nu_{ei}\) is the coupling parameter, with \(k_{\parallel}\sim 1/(qR)\) the parallel wavenumber, \(v_{Te}=\sqrt{T_{e}/m_{e}}\) the electron thermal velocity and \(\nu_{ei}\) the electron-ion collision frequency. Note that an important control parameter of the Hasegawa-Wakatani model is the _adiabaticity parameter_: the ratio \(\hat{\alpha}=\alpha/\kappa\). Time and space are normalized as: \(\omega_{c,i}t\to t\) and \(\rho_{s}\nabla_{\perp}\rightarrow\nabla_{\perp}\), with \(\rho_{s}=c_{s}/\omega_{c,i}\) the sound gyroradius, \(c_{s}=\sqrt{T_{e}/m_{i}}\) the sound speed, and \(\omega_{c,i}=eB/m_{i}\) the ion Larmor frequency.
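For concreteness, a minimal pseudo-spectral sketch of the right-hand sides of Eqs. (1)-(2) on a doubly periodic grid is given below, including the zonal/non-zonal split that defines the modified model. It is only illustrative: the simulations in Section 4 use the BOUT++ framework with an Arakawa bracket and added hyperdiffusion, and the grid layout and parameters here are placeholder assumptions.

```python
import numpy as np

def mhw_rhs(n, phi, kappa=0.5, alpha=1.0, Lx=51.2, Ly=51.2):
    """Right-hand sides of the modified Hasegawa-Wakatani system (Eqs. 1-2) on a
    doubly periodic grid; fields are indexed as f[y, x]. Hyperdiffusion is omitted."""
    ny, nx = n.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky)

    def ddx(f):
        return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))

    def ddy(f):
        return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))

    def bracket(f, g):  # Poisson bracket {f, g}
        return ddx(f) * ddy(g) - ddy(f) * ddx(g)

    # the resistive coupling acts only on the non-zonal (tilde) parts,
    # where the zonal average <.> is the y-average
    n_tilde = n - n.mean(axis=0, keepdims=True)
    phi_tilde = phi - phi.mean(axis=0, keepdims=True)
    coupling = -alpha * (n_tilde - phi_tilde)

    w = np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(phi)))  # vorticity
    dn_dt = -bracket(phi, n) - kappa * ddy(phi) + coupling
    dw_dt = -bracket(phi, w) + coupling
    return dn_dt, dw_dt
```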
We extend the wave-kinetic model of Sakaki _et al._[29], to include zonal profile corrugations, i.e. zonal density. After some algebra, one obtains the following reduced model:
\[\frac{\partial W_{k}}{\partial t}+\frac{\partial\omega_{k}}{ \partial k_{x}}\nabla_{x}W_{k}-k_{y}\nabla_{x}U\frac{\partial W_{k}}{\partial k _{x}} = 2\gamma_{L}W_{k}-2c_{k}W_{k}\nabla_{x}N-\Delta\omega W_{k}^{2}, \tag{3}\] \[\frac{\partial U}{\partial t} = -\nabla_{x}\Pi+\nu_{\perp}\nabla_{xx}U-\mu U,\] (4) \[\frac{\partial N}{\partial t} = -\nabla_{x}\Gamma+\nabla_{x}\Big{[}(D_{0}+D_{t}(W_{k}))\nabla_{x }N\Big{]}, \tag{5}\]
Details of the derivation are given in the Appendix.
Eq. (3) is the extended wave kinetic equation (EWKE) for the wave action density \(W_{k}\), which includes a nonlinear contribution to the growth-rate, due to the zonal density induced modulation of the transport crossphase, second term on the r.h.s. of Eq. (3). Eqs. (4) and (5) describe the dynamics of zonal flows \(U\) and zonal density corrugations \(N\), respectively.
The frequency \(\omega_{k}\) is the nonlinear frequency, including Doppler-shift due to zonal flows:
\[\omega_{k}=\frac{\omega_{*0}}{1+k_{\perp}^{2}}+k_{y}U, \tag{6}\]
where \(k_{\perp}^{2}=k_{x}^{2}+k_{y}^{2}\), and \(W_{k}\) denotes the wave-action density for drift-waves with adiabatic electrons, given by:
\[W_{k}=(1+k_{\perp}^{2})^{2}|\phi_{k}|^{2}, \tag{7}\]
where \(|\phi_{k}|^{2}\) is the turbulent power spectrum, and \(D_{t}(W_{k})\propto W_{k}\) is the turbulent diffusivity, while \(D_{0}\) includes residual turbulence and neoclassical contributions. Moreover, the nonlinear growth-rate due to non-adiabatic electrons is:
\[\gamma_{k}=\gamma_{L}-c_{k}\nabla_{x}N, \tag{8}\]
with \(\gamma_{L}=\theta_{k}^{0}\omega_{k}^{L}\) the linear growth-rate, \(\theta_{k}^{0}\) the linear crossphase between density \(n_{k}\) and potential \(\phi_{k}\), e.g. \(\theta_{k}^{0}\sim(\omega_{*0}-\omega_{k}^{L})/\alpha\) for collisional drift-waves (\(\alpha\gg 1\)), and \(c_{k}=c_{s}k_{y}\gamma_{L}/\omega_{*0}=c_{s}k_{y}\theta_{k}^{0}/(1+k_{\perp}^{2})\), with \(c_{s}\) the sound-speed. Note that for linear drift-waves, \(n_{k}^{L}=(1-i\theta_{k}^{0})\phi_{k}\), but nonlinearly \(n_{k}\simeq(1-i\theta_{k}^{0}-i\Delta\theta_{k}(x,t))\phi_{k}\), with \(\Delta\theta_{k}/\theta_{k}^{0}=-\nabla_{x}N/|\nabla_{x}n_{0}|\), where \(\nabla_{x}n_{0}<0\) is the equilibrium density gradient. Physically, this means that the transport crossphase is _nonlinear_ - due to the \(E\times B\) convective nonlinearity - and can be viewed as a radial modulation of the crossphase due to zonal profile
corrugations, e.g. zonal density. This physical mechanism is sketched [Fig 1], together with a sketch of zonal flows and zonal density [Fig 2]. The extended wave-kinetic model (3,4,5) was implemented numerically in Ref. [30]. However, the authors assumed a linear crossphase. Hence, radial modulations of the crossphase are neglected in Ref. [30].
We now describe the model. Here, the first term on the r.h.s. of Eq. (3) is the turbulence drive with linear growth-rate \(\gamma_{L}({\bf k})\), the second term on the r.h.s. is the nonlinear contribution to the growth-rate, proportional to the zonal density gradient \(\nabla_{x}N\), where \(c_{k}\) is the \(k\)-dependent proportionality coefficient. The first term on the r.h.s. of Eq. (4) is the Reynolds torque which involves the Reynolds stress \(\Pi=\langle v_{x}v_{y}\rangle\). The first term on the r.h.s. of Eq. (5) is the convective particle flux associated to the linear electron response \(\Gamma=\langle\tilde{v}_{x}\tilde{n}^{L}\rangle=\sum_{k}(ic_{s}k_{y}/2)[n_{k }^{L}\phi_{k}^{*}-(n_{k}^{L})^{*}\phi_{k}]\).
Figure 1: Radial modulation \(\Delta\theta_{k}\) of the transport crossphase, due to zonal profile corrugations.
Figure 2: Radial modes: sketch of zonal density (top) and zonal flows (bottom).
The Reynolds stress can be expressed in the form:
\[\Pi=-\sum_{k_{y}}\int\frac{k_{y}k_{x}W_{k}}{(1+k_{\perp}^{2})^{2}}dk_{x} \tag{9}\]
Moreover, the particle flux can be approximated as:
\[\Gamma\simeq c_{s}\sum_{k_{y}}\int\frac{k_{y}\theta_{k}^{0}W_{k}}{(1+k_{\perp}^ {2})^{2}}dk_{x} \tag{10}\]
with \(\theta_{k}^{0}=\gamma_{L}/\omega_{k}^{L}\) the linear crossphase, with \(\omega_{k}^{L}=\omega_{*}^{0}/(1+k_{\perp}^{2})\) the linear DW frequency, and the approximation \(\sin\theta_{k}^{0}\simeq\theta_{k}^{0}\) was used, since we assume \(|\theta_{k}^{0}|\ll 1\).
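A small quadrature sketch of Eqs. (9)-(10) may help fix ideas: given a wave-action spectrum sampled on a \((k_{y},k_{x})\) grid, the Reynolds stress and particle flux follow by integrating over \(k_{x}\) and summing over \(k_{y}\). The spectrum and the values of \(\omega_{*0}\), \(\alpha\) and \(c_{s}\) are placeholders, and the collisional-drift-wave form of \(\theta_{k}^{0}\) from Section 2 is assumed.

```python
import numpy as np

def fluxes_from_spectrum(W, kx, ky, omega_star0=0.5, alpha=1.0, c_s=1.0):
    """Reynolds stress Pi (Eq. 9) and particle flux Gamma (Eq. 10) from a
    wave-action spectrum W[j, i] sampled on the grid (ky[j], kx[i])."""
    KX, KY = np.meshgrid(kx, ky)
    kperp2 = KX**2 + KY**2
    omega_L = omega_star0 / (1.0 + kperp2)          # linear drift-wave frequency
    theta0 = (omega_star0 - omega_L) / alpha        # linear crossphase (alpha >> 1)
    Pi = -np.trapz(KY * KX * W / (1.0 + kperp2) ** 2, kx, axis=1).sum()
    Gamma = c_s * np.trapz(KY * theta0 * W / (1.0 + kperp2) ** 2, kx, axis=1).sum()
    return Pi, Gamma
```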
## 3 Energy transfer
Multiplying Eq. (3) by \((1+k_{\perp}^{2})^{-1}\) and integrating in \(k_{x}\), one obtains the evolution equation for turbulence intensity \(I=\int(1+k_{\perp}^{2})|\phi_{k}|^{2}dk_{x}\). Multiplying Eq. (4) by \(2U\) yields the evolution equation for zonal flow intensity \(U^{2}\), and multiplying Eq. (5) by \(2N\) yields the evolution equation for zonal density intensity \(N^{2}\). This yields the following system:
\[\frac{\partial I}{\partial t}+\frac{\partial}{\partial x}(\hat{ v}_{g}I) = 2\hat{\gamma}_{L}I+W_{turb}^{V}+W_{turb}^{n}-\Delta\hat{\omega}I^ {2}, \tag{11}\] \[\frac{\partial U^{2}}{\partial t} = W_{V}-\mu U^{2}+\nu_{\perp}U\nabla_{xx}U,\] (12) \[\frac{\partial N^{2}}{\partial t} = W_{n}+N\nabla_{x}\left[(D_{0}+D_{t}(I))\nabla_{x}N\right], \tag{13}\]
with the nonlinear transfer terms given by:
\[W_{turb}^{V} = -2\Pi\nabla_{x}U, \tag{14}\] \[W_{turb}^{n} = -2\hat{c}I\nabla_{x}N=-2\Gamma\nabla_{x}N,\] (15) \[W_{V} = -2U\nabla_{x}\Pi,\] (16) \[W_{n} = -2N\nabla_{x}\Gamma, \tag{17}\]
where \(\hat{c}\) is defined via:
\[\hat{c}I = \sum_{k_{y}}\int c_{s}k_{y}\theta_{k}^{0}W_{k}(1+k_{\perp}^{2})^{ -2}dk_{x}, \tag{18}\] \[= \Gamma,\]
Note that the energy is conserved in the turbulence - zonal density interaction, since \(\int(W_{turb}^{n}+W_{n})dx=0\). We stress that this arises independently of the well-known energy conservation in the turbulence - zonal flow interaction \(\int(W_{turb}^{V}+W_{V})dx=0\). This is remarkable as it opens the way to _transport decoupling_ in more sophisticated models, e.g. the possibility of different magnitudes of profile corrugations for different channels such as the particle and thermal transport channels.
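The conservation statement \(\int(W_{turb}^{n}+W_{n})dx=0\) can be checked directly: for periodic zonal profiles, \(W_{turb}^{n}+W_{n}=-2\partial_{x}(\Gamma N)\) is a total derivative. A short numerical illustration, with arbitrary smooth periodic profiles standing in for \(\Gamma(x)\) and \(N(x)\), is sketched below.

```python
import numpy as np

L = 51.2
x = np.linspace(0.0, L, 256, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=L / x.size)

def ddx(f):
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# arbitrary smooth periodic profiles standing in for Gamma(x) and N(x)
Gamma = 0.3 * np.sin(2 * np.pi * x / L) + 0.1 * np.cos(6 * np.pi * x / L)
N = 0.2 * np.cos(4 * np.pi * x / L)

W_turb_n = -2.0 * Gamma * ddx(N)    # Eq. (15)
W_n = -2.0 * N * ddx(Gamma)         # Eq. (17)
print((W_turb_n + W_n).mean() * L)  # integral over one period: ~ 0 to round-off
```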
## 4 Evidence of the proposed mechanism in numerical simulations of HW turbulence
### Numerical results
Since we want to test whether radial modulations of the transport crossphase are generated by turbulence, we perform fluid simulations of collisional drift-wave turbulence described by the Hasegawa-Wakatani model (1, 2) using the BOUT++ framework [35], employing PVODE with adaptive time stepping to advance in time. The model used [31] is 2D with a resolution of \(256^{2}\), integrated over a square of length \(L=51.2\). The coupling parameter is set to \(\alpha=1\) (unless stated otherwise), and the equilibrium density gradient is \(\kappa=0.5\). For Poisson brackets, the Arakawa scheme is used [36]. For numerical stability reasons, particle hyperdiffusion and viscous hyperdiffusion terms are added to the r.h.s. of Eqs. (1) and (2), respectively, with coefficients \(D_{n}=D_{\Omega}=1\times 10^{-4}\). Simulations are carried out until the turbulence saturates and a statistically stationary state is reached. Snapshots of the potential [Fig.3a] and density [Fig.3b] contours are shown, including zonal components. Contours of potential and density are elongated in the poloidal direction \(y\) due to zonal flows and zonal density, respectively.
In the saturated state, the nonlinear transport crossphase \(\theta_{k}=\arg(n_{k}^{*}\phi_{k})\), averaged over time, is shown v.s. poloidal wavenumber \(k_{y}\) and radial direction \(x\) [Fig.4a]. Here, \(\arg(z)\) denotes the argument of the complex \(z\). A modulation pattern is clearly observed in the radial direction \(x\) [Fig.4a]. We stress that this is the first time that such a radial modulation of transport crossphase has ever been observed in numerical simulations. The reason is that gyrokinetic simulations tend to focus on the poloidal wavenumber \(k_{y}\) dependence [37]. This nonlinear modulation of the crossphase can act as a stabilization of turbulence, even when zonal flows are artificially suppressed
[Fig.5a]. For comparison, the reference case with suppressed zonal flows and zonal density is also shown [Fig.6a]. The associated level of turbulence, i.e. the turbulence intensity profile \(\langle\tilde{\phi}^{2}\rangle\) is shown for the case with zonal flows and zonal density [Fig.4b], for the case with artificially-suppressed zonal flows [Fig.5b], and for the reference case of zonal flows and zonal density both artificially-suppressed [Fig.6b]. One observes that the turbulence level is the highest in the latter case \(\langle\tilde{\phi}^{2}\rangle\sim 1.5\), as expected. In the case with both zonal flows and zonal density [Fig.4b], the turbulence level is strongly suppressed \(\langle\tilde{\phi}^{2}\rangle\sim 2\times 10^{-2}\), consistent with the standard ZF-induced eddy-shearing paradigm. However, even with artificially-suppressed zonal flows [Fig.5b], the average turbulence level \(\langle\tilde{\phi}^{2}\rangle\sim 0.8\) is lower than for the reference case with artificially-suppressed zonal flows and zonal density [Fig.6b]. This shows that the turbulence is partly-suppressed by zonal density corrugations, qualitatively consistent with the extended wave-kinetic model (3,4,5).
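The diagnostics behind Figs. 4-6 can be written compactly. The sketch below computes, from single-time fields \(n(y,x)\) and \(\phi(y,x)\), the crossphase spectrum \(\theta_{k}=\arg(n_{k}^{*}\phi_{k})\) via a Fourier transform in \(y\) at each radial location, together with the turbulence intensity profile \(\langle\tilde{\phi}^{2}\rangle(x)\); in practice both quantities are then averaged over many snapshots in the saturated state.

```python
import numpy as np

def crossphase_and_intensity(n, phi):
    """Crossphase spectrum theta_k(ky, x) = arg(n_k^* phi_k) and turbulence
    intensity profile <phi_tilde^2>(x) from single-time fields n[y, x], phi[y, x]."""
    n_tilde = n - n.mean(axis=0, keepdims=True)      # remove zonal (y-averaged) part
    phi_tilde = phi - phi.mean(axis=0, keepdims=True)

    n_k = np.fft.rfft(n_tilde, axis=0)               # poloidal (y) Fourier transform
    phi_k = np.fft.rfft(phi_tilde, axis=0)
    theta_k = np.angle(np.conj(n_k) * phi_k)         # crossphase vs (ky, x)

    intensity = (phi_tilde ** 2).mean(axis=0)        # <phi_tilde^2>(x)
    return theta_k, intensity
```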
The profiles of zonal flows and zonal density, averaged over time, are shown [Fig.7a,b] for two values of the adiabaticity parameter \(\hat{\alpha}=2\) and \(\hat{\alpha}=10\). In the low-adiabaticity regime \(\hat{\alpha}=2\), it is apparent from Fig.7a that the zonal density profile evolves on a much-smaller scale than that of zonal flows. This is somewhat surprising and not intuitive, since in the literature, zonal flows and zonal density are described as two components of 'zonal modes' with the same wavenumber \({\bf q}=q_{x}\hat{x}\). Hence, if zonal modes were really describable in this way, we would naively expect that they would have approximately the same scale, i.e the same dominant wavenumber. The
Figure 3: Snapshots of a) potential and b) density in the saturated state (\(t=10000\)), including zonal components.
Figure 4: Evidence of radial modulation of the transport crossphase: a) time-averaged nonlinear crossphase spectrum \(\sin\theta_{k}\), v.s. poloidal wavenumber \(k_{y}\) and radial position, b) radial Fourier transform of the crossphase, and c) radial profile of the turbulence intensity \(\langle\tilde{\phi}^{2}\rangle\). The adiabaticity parameter is \(\hat{\alpha}=2\).
Figure 5: case with artificially-suppressed zonal flows: a) time-averaged nonlinear crossphase spectrum, v.s. wavenumber \(k_{y}\) and radial position, and b) radial profile of the turbulence intensity \(\langle\tilde{\phi}^{2}\rangle\). Other parameters are the same as in Fig.4.
Figure 6: case with artificially suppressed zonal flows and zonal density: a) time-averaged nonlinear crossphase spectrum, v.s. wavenumber \(k_{y}\) and radial position, and b) Radial profile of the turbulence intensity \(\langle\tilde{\phi}^{2}\rangle\). Other parameters are the same as in Fig.4.
Figure 8: a) Zonal flow spatio-temporal dynamics and b) time-averaged zonal flow profile, and c) zonal density spatio-temporal dynamics, d) time-averaged zonal density profile. The adiabaticity parameter is \(\hat{\alpha}=2\).
Figure 7: Time-averaged profiles of zonal flows (blue) and zonal density (red), for a) \(\hat{\alpha}=2\) and b) \(\hat{\alpha}=10\). Other parameters are the same as in Fig.4.
fact that they have widely different scale - and hence different dominant wave number - points to the intrinsically nonlinear nature of these fields. The different scale between zonal flows and zonal density in drift-wave turbulence was first observed in Ref. [33]. This difference of scale between zonal flows and zonal density can be interpreted - in the low adiabaticity regime - as a _decoupling_ between different transport channels, i.e. vorticity transport v.s. particle transport. For a more sophisticated model, this may have implications for the important phenomenon of _transport decoupling_ between particle transport v.s. heat transport as observed e.g. in the improved confinement mode (I-mode) with high energy confinement but low particle confinement [34], where the crossphase modulations may play a crucial role, but more work needs to be done to confirm this picture. At high adiabaticity \(\hat{\alpha}=10\), however, i.e. for nearly Boltzmann electrons, the radial scale of zonal density becomes comparable to that of zonal flows [Fig. 7b]. The spatio-temporal dynamics of zonal flows and zonal density is shown, for the case \(\hat{\alpha}=2\) [Fig.8]. The turbulence energy evolution [Fig.9 a,b], and the energy of zonal flows and zonal density [Fig.10 a-c] are also shown.
Figure 9: Time-series of the turbulence energy: a) with zonal flows present and b) with artificially-suppressed zonal flows. The parameters are the same as in Fig.4. Case b) is shown after the turbulence has reached the saturation regime.
Figure 10: Time-series of the energy of zonal flows (full-line) and zonal density (dash): a) with zonal flows present and \(\hat{\alpha}=2\), b) with artificially-suppressed zonal flows and \(\hat{\alpha}=2\), and c) with zonal flows present and \(\hat{\alpha}=10\). The parameters are the same as in Fig.4, except c) for which \(\hat{\alpha}=10\).
### Comparison with the extended wave-kinetic model & with JFT-2M experimental data
#### 4.2.1 Comparison with the extended wave-kinetic model
In this section, a qualitative comparison will be made between aspects of the extended wave-kinetic model (3,4,5) and the associated reduced 1D model (11,12,13) on one hand, and the numerical simulations of the full model on the other hand. First, the extended wave-kinetic model predicts that zonal flows and zonal density corrugations both play a role in transport suppression. Hence, neither should be dominant over the other. This is consistent with the time-averaged profiles of Fig.7, which show them to be roughly of the same magnitude. The reduced model also predicts the radial modulation \(\Delta\theta_{k}(x,t)\) of the transport crossphase, which is confirmed in Fig.4a. When zonal flows are artificially suppressed, the amplitude of the crossphase modulation seems to decrease [Fig.5a], and crossphase modulations vanish completely when both zonal flows and zonal density are artificially suppressed [Fig.6a]. This shows that this radial modulation is a nonlinear phenomenon and is not predicted by quasi-linear theory, which uses the linear '\(i\delta\)' prescription for the electron response. One may ask: what is the qualitative effect of zonal density vs. zonal flows in suppressing the turbulence? To investigate this, we compare the turbulence level, i.e. the time-averaged turbulence energy at saturation, for different cases. It is convenient to introduce a normalized indicator, the _zonal efficiency_ \(\Upsilon\) (%), defined as:
\[\Upsilon=\frac{\Delta\epsilon_{turb}}{\epsilon_{turb}^{0}}, \tag{19}\]
where \(\Delta\epsilon_{turb}=|\epsilon_{turb}-\epsilon_{turb}^{0}|\), \(\epsilon_{turb}\) denotes the time-averaged turbulence energy at saturation, and \(\epsilon_{turb}^{0}\) is its reference value when both zonal flows and zonal density are artificially-suppressed.
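A tiny worked example of Eq. (19), with placeholder saturation energies chosen only to illustrate the arithmetic:

```python
eps_ref = 1.0            # reference: zonal flows and zonal density both suppressed
eps_with_zonal = 0.009   # e.g. a run where zonal structures are retained (placeholder)
upsilon = abs(eps_with_zonal - eps_ref) / eps_ref
print(f"zonal efficiency = {100 * upsilon:.1f} %")   # -> 99.1 %
```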
The zonal efficiency (19) indicates how strongly different zonal structures are able to suppress turbulence. It is shown for different cases [Fig.11]. For \(\hat{\alpha}=2\), one observes that the case with zonal flows and zonal density has a zonal efficiency of \(\Upsilon\sim 99.1\%\) close to \(100\%\), corresponding to almost totally suppressed turbulence. The case with artificially-suppressed zonal density has a zonal efficiency of \(\Upsilon\sim 94.4\%\), thus lower than the case with both zonal flows and zonal density present, although not by a large margin. However, the most interesting case is the one with zonal density alone, i.e. with
artificially-suppressed zonal flows, with a zonal efficiency of \(\Upsilon\sim 61.2\%\), which is still large. This shows that zonal density corrugations may play an important role in turbulence suppression in some regimes. For \(\hat{\alpha}=4\), the zonal efficiency is qualitatively similar to the case \(\hat{\alpha}=2\), although there are small quantitative differences, as zonal density effects become weaker. This trend continues for \(\hat{\alpha}=10\), where zonal density effects become negligible: with zonal density alone, in this case, the zonal efficiency is \(\Upsilon\sim 20\%\) only.
#### 4.2.2 Qualitative comparison with JFT-2M experimental data
We also compare the theory with experimental data from previous JFT-2M experimental observations of limit-cycle oscillations (LCO) during the L-H transition [25]. Data from the heavy ion beam probe (HIBP) shows qualitative features resembling the zonal density corrugations of our model. It should be noted that these density corrugations are interpreted to be induced by the turbulence spreading, where the turbulence clump and density gradient perturbation simultaneously travel [25]. We leave detailed comparison for future work. Data from Ref. [25] shows a slow spatial modulation of the HIBP profile, a proxy for electron density. The sound Larmor radius is \(\rho_{s}\sim\rho_{i}\sim 1.2mm\) in this experiment, where \(T_{e}\sim T_{i}\) is assumed and \(\rho_{i}\) is the
ion Larmor radius. Ref. [25] estimated the radial wavenumber of the LCO as \(q_{r}\sim 25m^{-1}\) and that of the microturbulence as \(k_{r}\sim 10^{2}m^{-1}\). This gives \(q_{r}\rho_{s}\sim 0.03\) and \(k_{r}\rho_{s}\sim 0.12\), hence \(q_{r}\rho_{s}\ll k_{r}\rho_{s}\), consistent with a slow radial modulation.
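These normalizations amount to a one-line check (numbers taken from the estimates above):

```python
rho_s = 1.2e-3           # sound Larmor radius [m], from the JFT-2M estimate [25]
q_r, k_r = 25.0, 1.0e2   # LCO and microturbulence radial wavenumbers [1/m]
print(q_r * rho_s, k_r * rho_s)   # 0.03, 0.12 -> slow radial modulation of the turbulence
```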
## 5 Discussion
The wave-kinetic model Eqs. (3,4,5) is an extension of the well-known wave-kinetic equation [38, 39], to self-consistently include the physics of the transport crossphase. We can compare this model with that of Ref. [29]. The main difference is the presence of the nonlinear part of the growth-rate in our model, the second term on the r.h.s. of Eq. (3), which can be traced to the convective \(E\times B\) nonlinearity. This couples to the dynamics of zonal density corrugations, providing a new feedback loop which is absent in the standard wave-kinetic equation. In Ref. [29], the wave-kinetic model is solved numerically in the extended phase-space \((x,k_{x})\). It shows a complex interplay between turbulence and zonal flows that leads to nonlinear structures (patterns), associated with the 'trapping' of turbulence wave-packets in the troughs of zonal flows. This is beyond the scope of this article and left for future work. Instead, we provide evidence of the validity of the model by using fluid simulations in real space. Let us now discuss the zonal density generation mechanism, Eqs. (5) and (13). Ref. [21] showed that CTEM turbulence can saturate via nonlinear generation of zonal density. We may compare with the zonal density drive mechanism described by equation (8) in [21], as this should be model-independent. Our Eq. (5) differs from the one in [21], since we show that energy is conserved between turbulence and zonal density, whereas the analysis in [21] is valid only for the initial exponential growth of the modulational instability. The fluid model (1,2) could possibly be extended to CTEM, where the drive of zonal density structures seems to play a crucial role [40, 41].
There are limitations to our model. First, the model assumes cold ions, \(T_{i}\ll T_{e}\), and hence does not contain finite-ion Larmor radius (FLR) effects, although it includes ion inertia (\(\rho_{s}\)) effects. It is thus not directly applicable to the important ion-temperature-gradient driven mode (ITG). Second, electron temperature gradient effects (\(\eta_{e}\)) are neglected. This is beyond the scope of this article and left for future work, where we plan to investigate the possible decoupling between particle transport and thermal transport
(transport decoupling).
## 6 Conclusion
In this work, we derived the extended wave-kinetic equation (3), self-consistently coupled to the dynamics of zonal flows and zonal density corrugations. The latter may be a missing piece in the understanding of turbulence in fusion plasmas. The theory can be summarized as follows: Turbulent fluctuations self-organize to generate quasi-stationary radial modulations \(\Delta\theta_{k}(r,t)\) of the transport crossphase \(\theta_{k}\) between density fluctuations and potential fluctuations. This results in turbulent particle flux modulations \(\tilde{\Gamma}(r,t)\). The radial modulation of particle flux nonlinearly drives zonal corrugations of the density profile via a modulational instability. In turn, zonal density corrugations regulate the turbulence via nonlinear damping of the fluctuations. The main findings of this work are: i) The present theory takes into account the convective \(E\times B\) nonlinearity, and thus goes beyond the well-known '\(i\delta\)' quasi-linear approximation, ii) This nonlinear mechanism conserves energy between turbulence and zonal density. Since zonal density is a radial mode (\(m=0,n=0\)), with \(m\) and \(n\) the poloidal and toroidal mode numbers, it cannot drive transport and thus provides a benign reservoir of energy for the turbulence, and iii) In fluid simulations of collisional drift-wave turbulence, the radial modulation of the transport crossphase and associated staircase profile structure have been confirmed to partly stabilize the turbulence.
## Acknowledgments
M.L. would like to thank J.M. Kwon, Lei Qi, I. Dodin, Hongxuan Zhu, T. Stoltzfulz-Dueck and M.J. Pueschel for helpful discussions. M.L was supported by R&D Program through Korean Institute for Fusion Energy (KFE) funded by the Ministry of Science and ICT of the Republic of Korea (No. KFE-EN2341-9).
## Appendix: derivation of the reduced model
Here, we detail the derivation of the reduced model (3,4,5). Linearizing Eq. (1) yields:
\[n_{k}^{L}=\left[1-i\frac{\omega_{*0}-\omega_{k}^{L}}{\alpha}\right]\phi_{k} \tag{20}\]
which provides the linear density response:
\[n_{k}^{L}=(1-i\theta_{k}^{0})\phi_{k}, \tag{21}\]
with \(\theta_{k}^{0}=(\omega_{*0}-\omega_{k}^{L})/\alpha\) the linear transport crossphase. Moreover, the first-order modulation of Eq. (1) yields:
\[\Delta n_{k}=i(\omega_{*0}-\omega_{k}^{L})\frac{k_{y}\nabla_{x}N}{\omega_{*0} \alpha}\phi_{k} \tag{22}\]
which provides the nonlinear correction to the density response:
\[\Delta n_{k}=-i\Delta\theta_{k}\phi_{k}, \tag{23}\]
with \(\Delta\theta_{k}=-k_{y}\nabla_{x}N/\alpha\) the crossphase radial modulation. Hence, in the weak-turbulence approximation, the nonlinear density response is:
\[n_{k} \simeq n_{k}^{L}+\Delta n_{k} \tag{24}\] \[\simeq [1-i(\theta_{k}^{0}+\Delta\theta_{k})]\phi_{k} \tag{25}\]
Subtracting Eq. (2) from Eq. (1) yields the conservation of potential vorticity, i.e. gyrocenter ion density:
\[\frac{\partial}{\partial t}(n-\nabla_{\perp}^{2}\phi)+v_{*0}\frac{\partial \phi}{\partial y}+{\bf v}_{E}.\nabla(n-\nabla_{\perp}^{2}\phi)=0 \tag{26}\]
Using the weak-turbulence approximation [32], this can be written:
\[\frac{\partial}{\partial t}(n_{k}+k_{\perp}^{2}\phi_{k})+i\omega_{*0}\phi_{k} +{\bf V}_{zon}.\nabla(n_{k}+k_{\perp}^{2}\phi_{k})=0, \tag{27}\]
where \({\bf V}_{zon}=\hat{z}\times\nabla\phi_{zon}\) denotes zonal flows. After some algebra, this can be written in the form of a Schrodinger-like equation:
\[i\frac{\partial}{\partial t}(R_{k}+k_{\perp}^{2})\phi_{k}=\Big{[}\omega_{*0}+k_{y}U(R_{k}+k_{\perp}^{2})\Big{]}\phi_{k}, \tag{28}\]
with \(R_{k}=1-i(\theta_{k}^{0}+\Delta\theta_{k})\), and \(U=V_{zon}\).
Assuming \(|\partial_{t}\Delta\theta_{k}|\ll|\partial_{t}\phi_{k}|\), one obtains:
\[i\frac{\partial\phi_{k}}{\partial t}=\Big{[}\frac{\omega_{*0}}{1+k_{\perp}^{2} -i(\theta_{k}^{0}+\Delta\theta_{k})}+k_{y}U(x,t)\Big{]}\phi_{k}, \tag{29}\]
Using the approximation \(|\theta_{k}^{0}|,|\Delta\theta_{k}|\ll 1\), this reduces to:
\[i\frac{\partial\phi_{k}}{\partial t}=H_{H}\phi_{k}+iH_{A}\phi_{k}, \tag{30}\]
where \(H_{H}=\omega_{k}+k_{y}U(x,t)\) and \(H_{A}=\gamma_{k}^{0}+\Delta\gamma_{k}(x,t)\) denote the Hermitian and anti-Hermitian parts of the 'Hamiltonian', respectively. Here, \(\gamma_{k}^{0}=\omega_{*0}\theta_{k}^{0}\) the linear growth-rate associated to the linear crossphase \(\theta_{k}^{0}\), and \(\Delta\gamma_{k}=\omega_{*0}\Delta\theta_{k}(x,t)\) the nonlinear growth-rate associated to the nonlinear part \(\Delta\theta_{k}(x,t)\sim k_{y}N(x,t)\) of the crossphase. Here \(N(x,t)\) denotes the zonal density gradient.
Following Ref.[32], one obtains the following wave-kinetic equation:
\[\frac{\partial W_{k}}{\partial t}+\{H_{H},W_{k}\}=2H_{A}W_{k}, \tag{31}\]
where here \(\{\cdot,\cdot\}\) denotes the Poisson bracket in \((k_{x},x)\) extended phase space.
| turbulence-driven quasi-stationnary structures known as 'staircase'は、衝突による漂流波モデルを用いて調査されています。2次元シミュレーションでは、 zonal密度波の抑制効果は、対称性パラメータ(衝突頻度を反比例させる)に影響を受けます。対称性パラメータが増加すると、 zonal密度が turbulence を抑制する効果が低下し、 zonal 流が近似非対称性領域で支配的な役割を果たします。非線形輸送相関は、 zonal密度に関連する放射的な変動を示しています。 |
2302.14368 | Enhanced Controllability of Diffusion Models via Feature Disentanglement
and Realism-Enhanced Sampling Methods | As Diffusion Models have shown promising performance, a lot of efforts have
been made to improve the controllability of Diffusion Models. However, how to
train Diffusion Models to have the disentangled latent spaces and how to
naturally incorporate the disentangled conditions during the sampling process
have been underexplored. In this paper, we present a training framework for
feature disentanglement of Diffusion Models (FDiff). We further propose two
sampling methods that can boost the realism of our Diffusion Models and also
enhance the controllability. Concisely, we train Diffusion Models conditioned
on two latent features, a spatial content mask, and a flattened style
embedding. We rely on the inductive bias of the denoising process of Diffusion
Models to encode pose/layout information in the content feature and
semantic/style information in the style feature. Regarding the sampling
methods, we first generalize Composable Diffusion Models (GCDM) by breaking the
conditional independence assumption to allow for some dependence between
conditional inputs, which is shown to be effective in realistic generation in
our experiments. Second, we propose timestep-dependent weight scheduling for
content and style features to further improve the performance. We also observe
better controllability of our proposed methods compared to existing methods in
image manipulation and image translation. | Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale | 2023-02-28T07:43:00 | http://arxiv.org/abs/2302.14368v3 | # Towards Enhanced Controllability of Diffusion Models
###### Abstract
_Denoising Diffusion models have shown remarkable capabilities in generating realistic, high-quality and diverse images. However, the extent of controllability during generation is underexplored. Inspired by techniques based on GAN latent space for image manipulation, we train a diffusion model conditioned on two latent codes, a spatial content mask and a flattened style embedding. We rely on the inductive bias of the progressive denoising process of diffusion models to encode pose/layout information in the spatial structure mask and semantic/style information in the style code. We propose two generic sampling techniques for improving controllability. We extend composable diffusion models to allow for some dependence between conditional inputs, to improve the quality of generations while also providing control over the amount of guidance from each latent code and their joint distribution. We also propose timestep dependent weight scheduling for content and style latents to further improve the translations. We observe better controllability compared to existing methods and show that without explicit training objectives, diffusion models can be used for effective image manipulation and image translation._
## 1 Introduction
Diffusion Models [46, 18] (DM) have gained much attention due to their impressive performance in image generation [8, 41, 42] and likelihood estimation [38]. While many efforts have concentrated on improving image generation quality [38, 45, 53] and sampling speed [47, 28, 36], relatively less attention has focused on enhancing controllability of diffusion models.
Improving editability and controllability in various other forms of generative models (e.g., GANs [14, 15, 50], VAE [27, 2] and Flow-based Models [10, 11]) has been one of the most prominent research topics in the past few years. GANs such as StyleGAN-v2 [22] have been shown to inherently learn smooth and regular latent spaces [15, 50] that enable meaningful edits and manipulations on a real or generated image. The enhanced controls are useful for
many practical applications such as Image Synthesis [39], Domain Adaptation [20], Style Transfer [21, 32] and Interpretability [31] to name a few. Despite high quality and diverse image generations, it is less clear how to manipulate the latent space of diffusion models that is composed of a sequence of gradually denoised 2d samples.
An alternative to using the inherent latent space of GANs for manipulation is to learn multiple external disentangled latent spaces to condition the generation [39, 21, 32, 29]. A common theme across such methods is to learn a structure/content code to capture the underlying structure (e.g., facial shape and pose in face images) and a texture/style code to capture global semantic information (e.g. visual appearance, color, hair style etc.). Similar approaches have been tried in diffusion models in [30, 40], however these techniques do not learn multiple controllable latent spaces. Other inference time editing techniques such as [16, 24, 33, 49, 34] either require computationally expensive optimization (of the conditional embeddings and/or the model) for each sample during inference or do not provide fine-grained controllability. Composable Diffusion Models [34] (CDM) proposes a way to compose multiple conditional inputs but assumes the inputs are independent, which may not always be true (Section 3.3).
In this paper, we propose a novel framework as shown in Fig. 2 to effectively learn two latent spaces to enhance controllability in diffusion models. Inspired by [39, 29] we add a _Content Encoder_ that learns a spatial layout mask and a _Style Encoder_ that outputs a flattened semantic code to condition the diffusion model during training (Section 3.1). The content and style codes are injected differently into the UNet [43] to ensure they encode different semantic factors of an image.
Though decomposing content and style information from an image enables better controllability, enforcing independence between the codes may not always be ideal. For example, _face structure_ (e.g. square or round face) that is ideally encoded in the content code and _gender_ (e.g. male or female) an attribute encoded in the style code [39], may not be independent and treating them as such might lead to unnatural compositions (Fig. 3). Similarly, CDM [34] assumes conditioning inputs are independent and hence shows unnatural compositions for certain prompts like 'a flower' and 'a bird' (see Fig.6 in [34]). We extend the formulation in [34] and propose _Generalized Composable Diffusion Models_ (GCDM) to support compositions during inference when the conditional inputs are not necessarily independent (Section 3.3). This also provides the ability to control the amount of information from content, style and their joint distribution separately during sampling. We observe significantly better translations with GCDM and also show improved compositions in Stable Diffusion compared to CDM (Fig. 5).
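To make the composition concrete, a minimal sketch of how denoiser outputs could be combined is given below. It follows the usual classifier-free-guidance style of composition and simply adds a jointly conditioned term so that the content and style codes need not be treated as independent; the weights and the exact parameterization are illustrative assumptions, not the equations of Section 3.3.

```python
def gcdm_epsilon(eps_uncond, eps_content, eps_style, eps_joint,
                 w_c=1.0, w_s=1.0, w_joint=1.0):
    """Sketch of a generalized composition of noise predictions: CDM-style terms
    for the content and style codes, plus a term conditioned on both codes jointly,
    each with its own guidance weight. Inputs are same-shaped arrays/tensors of
    predicted noise from the conditional denoiser."""
    return (eps_uncond
            + w_c * (eps_content - eps_uncond)
            + w_s * (eps_style - eps_uncond)
            + w_joint * (eps_joint - eps_uncond))
```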
In addition, we leverage the inductive bias [1, 5, 6] of diffusion models that learns low frequency layout information in earlier steps and high frequency or imperceptible details in the later steps of the reverse diffusion process, to further improve results. We use a predefined controllable timestep dependent weight schedule to compose the content and style codes during generation. This simulates the mixture of denoising experts proposed in [1] by virtue of varying the conditional information (instead of the entire model) at different timesteps during inference. Some examples generated using the proposed model are shown in Fig. 1.
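A toy version of such a schedule is sketched below: during the reverse process, the content weight stays high at large (noisy) timesteps where the layout is formed, and hands over to the style weight at small timesteps. The sigmoid shape and the crossover point are assumptions for illustration only.

```python
import math

def content_style_weights(t, tau=300.0, width=50.0):
    """Illustrative timestep-dependent weights for composing the two codes:
    w_content -> 1 for t >> tau (early, high-noise reverse steps),
    w_style   -> 1 for t << tau (late, low-noise reverse steps)."""
    w_content = 1.0 / (1.0 + math.exp(-(t - tau) / width))
    return w_content, 1.0 - w_content
```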
Moreover, we also show that the learned latent spaces are manipulable. We apply PCA on the style and content latent spaces and identify meaningful attribute-specific manipulation directions similar to [15], as shown in Fig. 1 (c). We also observe that the proposed setup learns latent spaces that support smooth interpolations (Fig. 1 (b)).
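A minimal sketch of this kind of PCA-based editing (in the spirit of the approach of [15] referenced above) is shown below; the code shapes, number of directions, and editing strength are placeholder assumptions.

```python
import numpy as np

def pca_edit_directions(codes, n_dirs=10):
    """Principal directions of a batch of latent codes, codes: (num_samples, dim).
    The top components serve as candidate attribute-manipulation directions."""
    mu = codes.mean(axis=0)
    _, _, Vt = np.linalg.svd(codes - mu, full_matrices=False)
    return mu, Vt[:n_dirs]

def edit(code, direction, strength=2.0):
    """Move a single latent code along one principal direction."""
    return code + strength * direction
```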
To the best of our knowledge, there is no existing work that trains diffusion models with multiple latent spaces, generalizes composable diffusion models and leverages timestep scheduling for image translation and manipulation.
## 2 Preliminaries and Related Works
In this section, we describe the preliminaries that our approach builds on and related works in the literature.
### Diffusion Models
Figure 2: (Top) overview of our proposed framework. We first obtain \(z_{0}\) from the pretrained Autoencoder [12], which is the actual input for the LDM [42]. The external encoders \(E_{c}(\psi)\) and \(E_{s}(\phi)\) and the denoising UNet \(\epsilon(\theta)\) are trained together without any additional objectives. (Bottom) shows the details of injecting style and content information into the denoising UNet at the \(\ell\)-th layer as described in Section 3.1.

Diffusion Models [46] like DDPM [18] showed impressive image generation and likelihood estimation but had a computationally expensive sampling procedure. DDIM [47] reduced the sampling time by deriving a non-Markovian variant of DDPM. Similarly, Improved-DDPM [38] also improved sampling speed and proposed to learn the variance schedule that was fixed in previous works to enhance mode coverage. DPM-solver [35] and DPM-solver++ [36] proposed high-order ODE solvers for faster sampling. DDGAN [51] combined the best of GANs and diffusion models to retain the mode coverage and quality of diffusion models while making it faster like GANs. LDM [42] used a pretrained autoencoder [12] to learn a lower capacity latent space and trained a diffusion model on the learned latent space (in contrast to pixel space in previous works), reducing time and memory complexity significantly without loss in quality. More descriptions are provided in Section C in the supplementary.
### Controllability in Diffusion Models
**Guidance:**
Some recent works have explored modeling the conditional density \(p(x_{t}|c)\) for controllability. Dhariwal et al. [8] proposed to use a pretrained classifier, but this requires finetuning a classifier that estimates gradients from noisy images, which increases the complexity of the overall process [19]. Ho et al. [19] proposed to use an implicit classifier, while Composable Diffusion Models [34] (CDM) extend the classifier-free guidance approach to work with multiple conditions assuming conditional independence. Though guidance approaches help control the generation, they do not offer fine-grained controllability or support applications such as reference-based image translation.
**Conditional DMs:**
Conditional Diffusion Models have been explored in diverse applications, showing state-of-the-art performance in text-to-image generation (DALLE2 [41], Imagen [45], Parti [53]). These methods use pretrained CLIP or similar embeddings that support interpolation but not further editability. DiffAE [40] proposed to learn a semantic space with properties that make it suitable for image manipulation. However, a single latent space capturing all the information makes it difficult to isolate attributes to manipulate.
**Inference only Editing:**
Several works have proposed inference-time editing techniques on top of pretrained diffusion models. SDEdit [37] enables structure-preserving edits, while Prompt-to-prompt [16] modifies the attention maps from cross-attention layers to add, remove or reweigh the importance of an object in an image. DiffusionCLIP [25], Imagic [24] and Unitune [49] propose optimization-based techniques for text-based image editing. Textual Inversion [13] and Dream-Booth [44] finetune pretrained models using a few reference images to get personalized models. Though the above techniques are helpful for editing, most of these methods require computationally expensive optimization, modify the weights of the pretrained model for each sample, and/or do not support fine-grained controllability for reference-based image translation. The closest related work to ours is DiffuseIT [30], which enables reference- and text-guided image translation by leveraging Dino-VIT [3] to encode content and style. However, their approach requires costly optimization during inference and does not support controlling the final generation.
**Inductive Bias of Diffusion Models:**
Building on the inductive bias [5, 6] of Diffusion Models, eDiffi [1] proposed to train models specialized to a subset of the timesteps to improve generations drastically. MagicMix [33] interpolates noise maps while providing different embeddings at different timesteps. Though these approaches demonstrate the advantages of the inductive bias, it has not been used to provide more controllability for image manipulation.
### Controllability in GANs
MUNIT [21], DRIT [32] and SAE [39] propose frameworks for reference-based image translation by learning disentangled latent spaces. StarGAN v2 [7] uses domain labels to support image-to-image translation, whereas DAG [29] adds an extra content space on top of the style space of StyleGAN v2 [23] for disentanglement. Though these techniques achieve impressive results for translation, they suffer from the same limitations as GANs, such as limited mode coverage and difficult training. To overcome these limitations, we use similar techniques but build on top of diffusion models, which have been shown to have better mode coverage and higher-quality generations than GANs [9].
## 3 Proposed Method
Our framework is based on the LDM [42] architecture as it is faster to train and sample from, compared to pixel-based diffusion models. Let \(x\) be an input image and \(E_{LDM}\) and \(D_{LDM}\) be the pretrained and fixed encoder and decoder, respectively. The actual input space for our diffusion model is the low-dimensional latent space \(z=E_{LDM}(x)\). The output of the reverse diffusion process is the low-dimensional latent \(\hat{z}_{0}\), which is then passed through the pretrained decoder as \(\hat{x}_{0}=D_{LDM}(\hat{z}_{0})\) to get the final image \(\hat{x}_{0}\).
### Learning Content and Style Latent spaces
Inspired by DiffAE [40] and similar approaches in GANs [29], we introduce a content encoder \(E_{c}(\,\cdot\,;\psi)\) and a style encoder \(E_{s}(\,\cdot\,;\phi)\) in our framework as shown in Fig. 2. The objective for training is formulated as:
\[\min_{\theta,\psi,\phi}\mathbb{E}_{z_{0},\epsilon_{t}}\left[\|\epsilon_{t}- \epsilon(z_{t},t,E_{c}(z_{0};\psi),E_{s}(z_{0};\phi);\theta)\|_{2}^{2}\right],\]
where \(z_{t}\) is from the forward process, i.e., \(z_{t}=q(z_{t}|z_{0})\). To ensure that the encoders capture different semantic factors of an image, we design the shapes of \(z_{s}\) and \(z_{c}\) asymmetrically, as done in [39, 48, 21, 32, 29, 4]. The content encoder \(E_{c}(z_{0};\psi)\) outputs a single-channel spatial layout mask \(z_{c}\in\mathbb{R}^{1\times\frac{h}{k}\times\frac{w}{k}}\) (for a fixed spatial downsampling factor \(k\)), where \(w\) and \(h\) are the width and height of the \(z_{0}\) latent. In contrast, \(E_{s}(z_{0};\phi)\) outputs \(z_{s}\in\mathbb{R}^{512\times 1\times 1}\) after a global average pooling layer to capture global high-level semantics. At each layer of the denoising UNet \(\epsilon(\,\cdot\,;\theta)\), the style code \(z_{s}\) is applied as a channel-wise affine transformation along with timestep information (\(t_{1}\), \(t_{2}\), and \(t_{3}\)), while the content code \(z_{c}\) is applied in a spatial manner as shown below.
\[\underbrace{t_{1}(1+\varphi^{\ell}(z_{c}))}_{\text{spatial-wise}}\odot \underbrace{(1+\zeta^{\ell}(z_{s}))\cdot(t_{2}(h^{\ell}+t_{3}))}_{\text{ channel-wise}}, \tag{1}\]
where \(\varphi^{\ell}\) is a down- or upsampling operation at the \(\ell\)-th layer to make the dimensions of \(\varphi^{\ell}(z_{c})\) and \(h^{\ell}\) match, and \(\zeta^{\ell}\) is an MLP layer that adapts \(z_{s}\) specifically for the \(\ell\)-th layer. \(h^{\ell}\) denotes the group-normalized feature map at the \(\ell\)-th layer of the denoising network \(\epsilon(\,\cdot\,;\theta)\), and \(t_{1}\), \(t_{2}\) and \(t_{3}\) are timestep information from \(\text{MLP}(\text{enc}(t))\) following a sinusoidal embedding layer. Group Normalization is used, following the prior work [40].
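To make the layer-wise conditioning concrete, below is a minimal PyTorch sketch of how Eq. (1) could be applied inside one UNet layer. It is our own illustrative rendering, not the authors' code: the module name, the bilinear resampling standing in for \(\varphi^{\ell}\), and the group count of the normalization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentStyleModulation(nn.Module):
    """Sketch of the spatial (content) and channel-wise (style) conditioning of Eq. (1)."""

    def __init__(self, channels, style_dim=512, time_dim=256):
        super().__init__()
        self.zeta = nn.Linear(style_dim, channels)      # per-layer MLP zeta^l for the style code
        self.to_t = nn.Linear(time_dim, 3 * channels)   # produces t1, t2, t3 from the timestep embedding
        self.norm = nn.GroupNorm(8, channels)           # group normalization, as in DiffAE [40]

    def forward(self, h, z_c, z_s, t_emb):
        # h: (B, C, H, W) feature map, z_c: (B, 1, Hc, Wc) spatial content code,
        # z_s: (B, style_dim) style code, t_emb: (B, time_dim) timestep embedding
        t1, t2, t3 = self.to_t(t_emb).chunk(3, dim=1)
        t1, t2, t3 = [t[..., None, None] for t in (t1, t2, t3)]
        h = self.norm(h)
        phi_zc = F.interpolate(z_c, size=h.shape[-2:], mode="bilinear", align_corners=False)
        zeta_zs = self.zeta(z_s)[..., None, None]
        spatial = t1 * (1.0 + phi_zc)                    # spatial-wise term of Eq. (1)
        channel = (1.0 + zeta_zs) * (t2 * (h + t3))      # channel-wise term of Eq. (1)
        return spatial * channel
```

The output of such a layer would simply feed the remainder of the UNet that predicts \(\epsilon_{t}\) in the training objective above.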
### Timestep Scheduling for Conditioning
It has been observed in [5, 6, 1] that low-frequency information, i.e., coarse features such as pose and facial shape are learned in the earlier timesteps (e.g., \(0<\text{SNR(t)}<10^{-2}\)) while high-frequency information such as fine-grained features and imperceptible details are encoded in later timesteps (e.g., \(10^{0}<\text{SNR(t)}<10^{4}\)) in the reverse diffusion process. Here, SNR(t) stands for signal-to-noise ratio per timestep [26].
Inspired by this, we introduce a weight scheduler for \(z_{c}\) and \(z_{s}\) that determines how much the content and the style conditions are applied into the denoising networks. We use the following schedule:
\[w_{c}(t)=\frac{1}{1+\exp{(-a(t-b))}} \tag{2}\] \[w_{s}(t)=\frac{1}{1+\exp{(-a(-t+b))}}, \tag{3}\]
where \(a\) is a coefficient determining over how many timesteps content and style are jointly provided, while \(b\) indicates the timestep at which \(w_{s}(t)\geq w_{c}(t)\). We also tried a simple linear weighting schedule (decreasing for content and increasing for style with every timestep during the reverse diffusion process) and a constant schedule, but observed that the proposed schedule gave consistently better results (examples are provided in Section F.2 in the supplementary). We additionally evaluate using timestep scheduling during training; it is a promising future direction showing better decomposition between the factors controlled by content and style (Section F.1 in the supplementary).
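A minimal sketch of the schedules in Eqs. (2)-(3) is given below; the values of \(a\) and \(b\) are illustrative placeholders rather than the settings used in the paper.

```python
import numpy as np

def content_style_weights(t, a=0.02, b=500):
    """Sigmoid schedules of Eqs. (2)-(3): content dominates at large (noisy) t,
    style dominates at small t; note that w_c(t) + w_s(t) = 1 for every t."""
    w_c = 1.0 / (1.0 + np.exp(-a * (t - b)))
    w_s = 1.0 / (1.0 + np.exp(-a * (b - t)))
    return w_c, w_s

# e.g., over a 1000-step reverse process
t = np.arange(1000)
w_c, w_s = content_style_weights(t)
```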
### Generalized Composable Diffusion Models
Figure 3: Conceptual illustration of Composable DMs and our proposed sampling method. (a) shows the effects of leveraging the inductive bias during training and sampling. Leveraging the inductive bias during training may further disentangle the feature representation. On the other hand, the inductive bias can be used to balance the amounts of content and style during sampling. (b) compares CDM and the joint guidance. The result based on CDM can be outside of the manifold, while the joint guidance stays on the manifold. (c) shows the proposed GCDM. GCDM trades off between the independent guidance provided by CDM (stronger effects of the condition) and the joint guidance (more realistic). Corresponding experimental results can be found in Fig. 5, 6 (main paper) and Fig. 25 (supplementary).
As mentioned in Section 1, generalizing CDM by introducing the joint component can potentially improve the composition of multiple conditions and enhance controllability over the generation. Fig. 3 shows a conceptual illustration of the possible benefit of GCDM over CDM. Let \(z_{s}^{*}\) and \(z_{c}^{*}\) be the ground-truth style and content features, which are not observed in Fig. 3 (a). The approximated content and style features \(\hat{z}_{c}\) and \(\hat{z}_{s}\) can be better separated by leveraging the inductive bias during training. Using the inductive bias only during sampling would represent a scaling due to the variation in their magnitude across timesteps. Note that the approximated \(\hat{z}_{c}\) and \(\hat{z}_{s}\) are used as the axes in (b) and (c). Fig. 3 (b) shows an example where the content and style guidances from CDM generate unrealistic samples because the combined guidance falls outside the manifold. On the contrary, the joint guidance helps keep the generation within the manifold. (c) visualizes the proposed GCDM, which can be seen as a linear interpolation between CDM and the joint guidance. GCDM has the added advantage of enabling separate controls for style, content and realism. Moreover, CDM and the joint guidance are special cases of GCDM. Hence, we argue that it is helpful to derive a generalized composition method without constraining the style and content to be conditionally independent as done in [34]. We would like to sample images given multiple conditions (i.e., style and content in our case), which we formulate as sampling from \(\tilde{p}(x_{t}|c_{1},c_{2})\propto p(x_{t})[p(c_{1},c_{2}|x_{t})^{\lambda}(p(c_{1}|x_{t})^{\beta_{1}}p(c_{2}|x_{t})^{\beta_{2}})^{1-\lambda}]^{\alpha}\), where \(\alpha\geq 0\) controls the overall strength of conditioning, \(\lambda\in[0,1]\) controls the trade-off between the dependent and independent conditional information, and \(\beta_{1}\) and \(\beta_{2}\) control the weights for the style and content information. The guidance gradient in terms of the denoising network \(\epsilon\) (which may depend on zero, one or both conditions) is as follows:
\[\nabla_{x_{t}}\log\tilde{p}(x_{t}|c_{1},c_{2})= \tag{4}\] \[\underbrace{\epsilon(x_{t},t)}_{\nabla\log p(x_{t})}+\alpha\Big[\lambda\underbrace{(\epsilon(x_{t},t,c_{1},c_{2})-\epsilon(x_{t},t))}_{\nabla\log p(c_{1},c_{2}|x_{t})} \tag{5}\] \[+(1-\lambda)\underbrace{\sum_{i\in\{1,2\}}\beta_{i}\underbrace{(\epsilon(x_{t},t,c_{i})-\epsilon(x_{t},t))}_{\nabla\log p(c_{i}|x_{t})}}_{\nabla\log p(c_{1}|x_{t})p(c_{2}|x_{t})}\Big], \tag{6}\]
If \(\lambda=0\), this simplifies to CDM [34] and thus can be seen as a generalization of it. In the following experiments on image translation, \(\beta_{1}\) and \(\beta_{2}\) denote \(\beta_{s}\) and \(\beta_{c}\), respectively. The detailed derivation and the effect of various hyperparameters are in Sections B and D.1 in the supplementary.
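In practice, the sampler combines four noise predictions per step according to Eqs. (4)-(6). The sketch below is a hedged illustration: the signature of `eps_model`, assumed to accept any subset of the two conditions (dropping the rest), is hypothetical and not the actual interface of our implementation.

```python
import torch

def gcdm_noise(eps_model, x_t, t, c_content, c_style,
               alpha=1.0, lam=0.9, beta_c=0.0, beta_s=1.0):
    """Generalized Composable Diffusion guidance of Eqs. (4)-(6).
    lam=0 recovers CDM; lam=1 keeps only the joint guidance."""
    eps_uncond = eps_model(x_t, t)                                    # grad log p(x_t)
    eps_joint = eps_model(x_t, t, content=c_content, style=c_style)   # joint term
    eps_c = eps_model(x_t, t, content=c_content)                      # content-only term
    eps_s = eps_model(x_t, t, style=c_style)                          # style-only term
    independent = beta_c * (eps_c - eps_uncond) + beta_s * (eps_s - eps_uncond)
    return eps_uncond + alpha * (lam * (eps_joint - eps_uncond) + (1.0 - lam) * independent)
```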
Note that GCDM and timestep scheduling are generic sampling techniques for diffusion models that can also be applied to other tasks beyond image translation (Fig. 5).
## 4 Experiments
We comprehensively evaluate the proposed model on image-to-image translation and additionally show qualitative examples of GCDM and CDM on text-to-image composition with Stable Diffusion. Implementation details are provided in Section D. For sampling, unless otherwise mentioned, we use the _reverse DDIM_ [40] approach conditioned on the content image and its corresponding content and style codes to obtain \(x_{T}\), instead of sampling random noise. This helps with better identity preservation for faces. An analysis of the effects of _reverse DDIM_ is provided in Section D.2.
### Experimental Setup
#### Datasets
We train different models on the commonly used datasets such as AFHQ [7], FFHQ [22] and LSUN-church [52].
#### Baselines
**DiffuseIT:** The most similar work to ours based on diffusion models is DiffuseIT [30] that tackles the same problem formulation. We compare our results with DiffuseIT using their pretrained model and default parameters.
**DiffAE+SDEdit:** Since Diffusion Autoencoder [40] does not directly support image-to-image translation, we combine that with SDEdit [37]. The input image for the reverse process is \(x_{600}\) (chosen empirically) obtained as \(q(x_{600}|x_{c})\) by running the forward process on the content image. The semantic feature \(z_{sem}\) from the semantic encoder of DiffAE is used given the style image \(x_{s}\).
**DiffAE+MagicMix:** We also combine MagicMix [33] with DiffAE. Similar to DiffAE+SDEdit, this model takes \(x_{600}\) from \(x_{c}\) as input and \(z_{sem}\) from \(x_{s}\) as conditioning. Additionally, at each timestep, the approximated previous sample \(\hat{x}_{t-1}\) is combined with \(x_{t-1}\) from the content image \(x_{c}\), i.e., \(\hat{x}_{t-1}=v\hat{x}_{t-1}+(1-v)q(x_{t-1}|x_{c})\). For this experiment, \(v=0.5\) is used and the noise-mixing technique is applied between \(t=[600,300]\).
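For reference, a sketch of this noise-mixing step (our own rendering of the baseline; `q_sample(x0, t)`, assumed to draw \(x_{t}\sim q(x_{t}|x_{0})\) from the forward process, is a hypothetical helper):

```python
def magicmix_step(x_hat_prev, x_content, q_sample, t, v=0.5, t_hi=600, t_lo=300):
    """Mix the denoised estimate with a freshly noised content image for t in [t_lo, t_hi]."""
    if t_lo <= t <= t_hi:
        # q_sample is a stand-in for drawing x_{t-1} ~ q(x_{t-1} | x_content)
        return v * x_hat_prev + (1.0 - v) * q_sample(x_content, t - 1)
    return x_hat_prev
```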
**SAE:** Swapping Autoencoder [39], based on GANs [14], is also evaluated. Since the available pretrained model operates at a resolution of 512, we resize the generated results to 256 for a fair comparison.
#### Evaluation Metrics
**FID:** We use the commonly used Frechet inception distance (FID) [17] to ensure the generated samples are realistic. We follow the protocol proposed in [7] for reference based image translation. To obtain statistics from generated images, 2000 test samples are used as the content images and five randomly chosen images from the rest of the test set are used as style images for each content image to generate 10000 synthetic images.
**LPIPS:** Even though FID evaluates the realism of the generations, the model could use just content and ignore style (or vice versa) and still obtain a good FID. Following [7], we use the LPIPS score obtained by measuring the feature distances between pairs of synthetic images generated from the same content image but with different style images. **Higher LPIPS indicates more diverse results**. Ideally, the model trades off between LPIPS and FID, i.e., it incorporates enough style information from different style images for the same content image (increasing LPIPS) without going out of the real distribution (decreasing FID).
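A sketch of this diversity measurement, assuming the standard `lpips` package API and images given as tensors scaled to \([-1,1]\):

```python
import itertools
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net="alex")

def diversity_lpips(generations):
    """Mean pairwise LPIPS among images generated from the SAME content image
    but different style images (higher = more diverse). Each element of
    `generations` is a (1, 3, H, W) tensor in [-1, 1]."""
    dists = []
    with torch.no_grad():
        for a, b in itertools.combinations(generations, 2):
            dists.append(loss_fn(a, b).item())
    return sum(dists) / len(dists)
```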
### Comparison with Existing Works
In this section, we compare the reference-based image translation performance of the proposed model with baseline models on FFHQ dataset.
**Qualitative Results.**
Fig. 4 visually shows example generations from different techniques. We observe that DiffAE+SDEdit loses content information while DiffAE+MagicMix generates unnatural
images that naively combine the two images. This indicates that a single latent space, even with additional techniques such as SDEdit and MagicMix, is not suitable for reference-based image translation. The DiffuseIT and SAE models maintain more content information but do not transfer enough information from the style image and offer no control over the amount of information transferred from style.
An important benefit of the proposed method is better controllability. By manipulating \(\lambda\), we can control how much guidance is applied. In Fig. 4, decreasing \(\lambda\) increases the effect of style from the style image when \(\beta_{c}=0\) and \(\beta_{s}=1\), where \(\beta_{c}\) and \(\beta_{s}\) are the weights for each conditional guidance (Eq. 4-6). For example, the man on the second row has more wrinkles and beard as \(\lambda\) decreases. Visualizations on the behavior of each hyperparameter are provided in Fig. 10 in the supplementary.
**Quantitative Results.**
Table 1 shows a quantitative comparison in terms of FID and LPIPS on the FFHQ dataset. Our variants generate realistic images, as indicated by the lowest FID scores among all models, while also achieving higher diversity (LPIPS) than every baseline except DiffAE+SDEdit. However, DiffAE+SDEdit does not show meaningful translation of style onto the content image. DiffAE+MagicMix shows the worst performance because of its unrealistic generations. SAE and DiffuseIT show lower LPIPS scores than ours, indicating that they translate very little information from the style image onto the content image. We can also observe that increasing \(\lambda\) (when \(\beta_{c}=0\) and \(\beta_{s}=1\)) makes LPIPS worse while improving FID. In other words, the stronger the joint guidance, the more realistic but less diverse the generations. This verifies our assumption in Fig. 3 that the joint component has the effect of pushing the generations into the real manifold.
### Effect of GCDM and Timestep Scheduling
**CDM vs GCDM.**
The key benefit of leveraging GCDM is that the guidance by GCDM would help keep the sample on the real manifold and thereby generate more realistic images.
We compare SAE [39] (the best performing baseline) and ours on AFHQ dataset in Table 2. The joint guidance (\(\lambda=1\)) gets the lowest FID indicating that the generations are more realistic as it pulls the guided results to be within the real data manifold.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & DiffuseIT [30] & SAE [39] & DiffAE+SDEdit [40, 37] & DiffAE+MagicMix [33] & Ours(\(\lambda=0.9\)) & Ours(\(\lambda=0.6\)) & Ours(\(\lambda=0.3\)) \\ \hline FID & 29.99 & 25.06 & 26.63 & 84.55 & **11.99** & 13.40 & 15.45 \\ LPIPS & 0.47 & 0.39 & 0.64 & 0.41 & 0.34 & 0.42 & 0.49 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison between the proposed and baseline models using FID and LPIPS on FFHQ dataset.
Figure 4: Comparison of the proposed model with baselines for reference based image translation on FFHQ dataset. Our method generates more plausible, realistic combinations of the content and style images with better controllability. Other models either show poorer performance or lack sufficient controllability.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & SAE & CDM & GCDM & GCDM \\ & & & (\(\lambda=0.9\)) & (\(\lambda=1.0\)) \\ \hline FID & 9.29 & 10.57 & 9.75 & 8.58 \\ LPIPS & 0.45 & 0.59 & 0.59 & 0.57 \\ \hline \hline \end{tabular}
\end{table}
Table 2: FID and LPIPS comparisons between SAE and our model with CDM and GCDM on the AFHQ dataset.
We can also see that GCDM can be thought of as interpolating between CDM and the joint guidance term, since FID for GCDM (\(\lambda=0.9\)) is in between the joint and CDM. By comparing LPIPS and FID of the variants of GCDM, we can see that the outputs become less diverse as realism is increased. SAE shows worse performance than ours in terms of both diversity and realism. The qualitative comparisons can be found in Fig. 25 in the supplementary.
**Generalizability of GCDM.**
We also compare the performance of CDM and GCDM in composing text prompts for text to image generation using Stable Diffusion V2[42] in Fig. 5. The phrases before and after 'and' are used as the first and the second guidance terms in Eq. 6. The full sentence is used to represent the joint conditioning. As shown in Fig. 5, CDM tends to fail in composing multiple conditions if both conditions contain object information. For example, _the red bird_ and _the yellow flower_ are merged in two out of three generations using CDM. On the other hand, GCDM consistently shows better compositions in the generated images. This emphasizes that GCDM is a generalized formulation for composing multiple conditioning inputs providing more control to the user in terms of realism and diversity as illustrated in Fig. 3. Additional results and the used GCDM hyperparameters can be found in Fig. 28.
**Effect of Timestep Scheduling.**
To more carefully analyze the effect of timestep scheduling when combined with GCDM or CDM, we alter the schedule so that there is always at least a 0.1 weight on style or content. Specifically, we change the lower and upper bounds of the sigmoid in Eqs. (2) and (3) to 0.1 and 0.9, e.g., \(w^{\prime}_{c}(t)=0.8w_{c}(t)+0.1\). The results can be seen in Table 3 and Fig. 6. Without timestep scheduling, GCDM shows better performance in both FID (realism) and LPIPS (diversity). Combined with timestep scheduling, both CDM and GCDM show meaningful improvements in FID in exchange for reduced diversity. This is because timestep scheduling improves content identity preservation (e.g., pose and facial structure), causing less variation in structural information and consequently lower LPIPS/diversity. Additionally, with timestep scheduling, the GCDM variants show better FID or LPIPS than CDM depending on the strength of the guidance terms, offering varied control over the generations.
### Analysis and Discussion
In this section, we analyze the importance of each component of our framework using the AFHQ and LSUN-church datasets and aim to better understand the content and style latent spaces. Further analysis and results based on PCA, latent interpolation and K-Nearest Neighbors are provided in Sections E.1, E.2 and E.3, respectively, in the supplementary.
Figure 5: GCDM vs CDM for text-to-image generation with Stable Diffusion. We can observe that CDM generates unnatural images (e.g., blending two objects) that may be out of the real manifold while GCDM ensures realistic generations (e.g., combining two objects in a realistic way).
Figure 6: Effect of timestep scheduling in CDM and GCDM. Timestep scheduling improved the results of both CDM and GCDM and gives the best results when combined with GCDM.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{2}{c}{w/o schedule} & \multicolumn{3}{c}{w/ schedule} \\ \cline{2-6} & CDM & GCDM & CDM & GCDM (\(\beta_{c}=1\)) & GCDM (\(\beta_{s}=1\)) \\ \hline FID & 21.43 & **14.46** & 10.50 & 10.21* & 10.61 \\ LPIPS & 0.47 & **0.51** & 0.31 & 0.28 & 0.33* \\ \hline \hline \end{tabular}
\end{table}
Table 3: FID and LPIPS comparisons between CDM and GCDM with and without timestep scheduling on the FFHQ dataset. The best method without timestep scheduling is highlighted in bold, and the best with timestep scheduling is marked with a *.
Figure 7: Visualization of the effect of each guidance term (described in Eq. 4-6) on generation. \(x_{T}\) is randomly sampled.
**Visualization of Each Guidance Term.**
The proposed GCDM in Section 3.3 has guidance from three terms: the joint distribution of style and content, and the style and content codes separately. Fig. 7 compares the effects of these terms. Column 3 shows images generated using only guidance from the content image. It can be seen that the generated animals are not the same as in the content image but have the exact same structure and pose. Similarly, column 4 shows generations when only style guidance is used. Since content information is not used at all, the pose is random while the style, such as color and fur, corresponds to the style image. Column 5 shows the result of the joint guidance, whereas the last column shows generations using GCDM. It can be observed that GCDM with \(\beta_{s}=1.0\) carries more semantic information from the style than the joint guidance.
**Classifier-based comparisons.**
To further understand what kind of attributes are encoded in the style and content latent spaces, we use pretrained classifiers to predict the attributes of translated images and compare them with those of the original style and content images. We sample 2000 random images from the test set to use as \(x_{c}\) and another 2000 as \(x_{s}\) to form 2000 content-style pairs. Next, we acquire the translated output \(x_{o}\) and the corresponding pseudo labels \(y_{c}\), \(y_{s}\) and \(y_{o}\) by leveraging an off-the-shelf pretrained attribute classifier (EasyFace). In Table 4, we report the probability that the final generated image \(x_{o}\) takes an attribute from the content image, \(p(y_{c}^{att}=y_{o}^{att})\), and likewise for the style image.
Both ours and SAE are designed to make \(z_{s}\) encode global high-level semantics, e.g., Gender, Age, etc. Thus, methods would show ideal performance if \(y_{o}^{att}=y_{s}^{att}\neq y_{c}^{att}\). We see that most global attributes come from the content image for SAE indicating conservative translations from the style image (as seen in Fig. 4 and lower LPIPS in Table 1). In contrast, ours has a controllable way of deciding the strength of attributes from the style image through \(\lambda\). The lower the value of \(\lambda\), the more disentangled and consistent the attributes will be in the generations.
**Information Encoded in Each Latent Space.**
We analyze the role of the denoising network \(\epsilon_{\theta}\) and the encoders \(E_{c}\) and \(E_{s}\) by examining what information is encoded in the respective latent spaces. Fig. 8 and Fig. 9 show the role of \(\epsilon_{\theta}\) in the reverse process evaluated on the LSUN-church dataset. Fig. 8 shows the results of fixing the content while varying the style images (and vice versa). \(x_{T}\) is fixed as well to reduce the stochasticity; the remaining stochasticity comes from the white noise injected at each timestep during the reverse process. From the results, we can see that the structure information is maintained while the style information changes according to the style image (and vice versa), as intended.
Similarly, in Fig. 9 we forward the same image to the content and style encoders while the generation starts from different random noise \(x_{T}\). The images show that the denoising network plays a role in the stochasticity, since the outputs have consistent shape, color and texture information while minor details of the buildings or clouds change.
## 5 Conclusion
We propose a novel framework for enhancing controllability in image-conditioned diffusion models for reference-based image translation and image manipulation. Our content and style encoders, trained along with the diffusion model, do not require additional objectives or labels to learn to decompose style and content from images. The proposed generalized composable diffusion model extends CDM to a more general scenario.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Probability Att. is Equal (\%)} & \multicolumn{3}{c}{\(x_{c}\)} & \multicolumn{3}{c}{\(x_{s}\)} \\ \cline{2-7} & Gender & Age & Race & Gender & Age & Race \\ \hline SAE & 65.95 & 62.36 & 50.40 & 34.05 & 26.40 & 27.91 \\ Ours (\(\lambda=0.9\)) & 65.14 & 53.79 & 53.31 & 34.86 & 31.60 & 28.51 \\ Ours (\(\lambda=0.25\)) & 26.61 & 25.94 & 31.73 & 73.39 & 56.77 & 44.48 \\ \hline \end{tabular}
\end{table}
Table 4: Classifier-based comparisons in FFHQ.
Figure 8: Example generations on LSUN-church dataset showing that the content and style codes are robust to changes. \(x_{T}\) is randomly sampled.
Figure 9: Example showing the role of the denoising network during sampling when content and style codes are unchanged. \(x_{T}\) is randomly sampled.
It shows significantly better performance than CDM for translation as well as for composing text prompts. We also build on the inductive bias and show that timestep-dependent weight schedules for the conditioning inputs can help improve overall results and controllability. Additionally, the learned latent spaces are observed to have desirable properties, such as supporting PCA-based attribute manipulation and smooth interpolations. Quantitative and qualitative evaluations show the benefits of the proposed sampling techniques.
| Given the impressive performance demonstrated by diffusion models, many efforts have been made to improve their controllability. However, how to train diffusion models with disentangled latent spaces and to naturally incorporate the separated conditions into the sampling process has not yet been studied sufficiently. This paper proposes FDiff, a training framework aimed at disentanglement in diffusion models. It further proposes two sampling methods that can improve the realism of diffusion models and enhance their controllability. In summary, the diffusion model is trained conditioned on two latent features: a spatial content mask and a style embedding. This training method uses the denoising process of the diffusion model as an inductive bias, so that the content feature contains layout information and the style feature contains contextual/style information. The sampling method
2304.00093 | Dicke superradiance in ordered arrays of multilevel atoms | In inverted atomic ensembles, photon-mediated interactions give rise to Dicke
superradiance, a form of many-body decay that results in a rapid release of
energy as a photon burst. While originally studied in pointlike ensembles, this
phenomenon persists in extended ordered systems if the inter-particle distance
is below a certain bound. Here, we investigate Dicke superradiance in a
realistic experimental setting using ordered arrays of alkaline-earth(-like)
atoms, such as strontium and ytterbium. Such atoms offer exciting new
opportunities for light-matter interactions as their internal structure allows
for trapping at short interatomic distances compared to their long-wavelength
transitions, providing the potential for collectively enhanced dissipative
interactions. Despite their intricate electronic structure, we show that
two-dimensional arrays of these atomic species should exhibit many-body
superradiance for achievable lattice constants. Moreover, superradiance
effectively ``closes'' transitions, such that multilevel atoms become more
two-level like. This occurs because the avalanchelike decay funnels the
emission of most photons into the dominant transition, overcoming the
single-atom decay ratios dictated by their fine structure and Zeeman branching.
Our work represents an important step in harnessing alkaline-earth atoms as
quantum optical sources and as platforms to explore many-body dissipative
dynamics. | Stuart J. Masson, Jacob P. Covey, Sebastian Will, Ana Asenjo-Garcia | 2023-03-31T19:33:35 | http://arxiv.org/abs/2304.00093v2 | # Dicke superradiance in ordered arrays of multilevel atoms
###### Abstract
In fully-inverted atomic ensembles, photon-mediated interactions give rise to Dicke superradiance, a form of many-body decay that results in a rapid release of energy as a photon burst. While originally studied in point-like ensembles, this phenomenon persists in extended ordered systems if the inter-particle distance is below a certain bound. Here, we investigate Dicke superradiance in a realistic experimental setting using ordered arrays of alkaline earth(-like) atoms, such as strontium and ytterbium. Such atoms offer exciting new opportunities for light-matter interaction, as their internal structure allows for trapping at short interatomic distances compared to their strong long-wavelength transitions, providing the potential for strong collectively modified interactions. Despite their intricate electronic structure, we show that two-dimensional arrays of these atomic species should exhibit many-body superradiance for achievable lattice constants. Moreover, superradiance effectively "closes" transitions, such that multilevel atoms become more two-level like. This occurs because the avalanche-like decay funnels the emission of most photons into the dominant transition, overcoming the single-atom decay ratios dictated by their fine structure and Zeeman branching. Our work represents an important step in harnessing alkaline-earth atoms as quantum optical sources and as dissipative generators of entanglement.
## I Introduction
Atoms in a cavity emit into the same electromagnetic mode, leading to interactions between them, and a collective interaction between light and matter. Interactions are well understood within the paradigm of cavity quantum electrodynamics (QED), as the indistinguishability of the atoms enables their description as a large spin coupled to a single radiative channel. An emblematic example of many-body physics in cavity QED is Dicke superradiance [1; 2; 3; 4; 5], where fully-inverted atoms decay by radiating light in a short bright pulse with peak intensity that scales quadratically with atom number [see Fig. 1(a)]. Dicke superradiance has also been observed in Bose-Einstein condensates [6; 7; 8], where a macroscopically-occupied state couples to light. In these scenarios, superradiance is well understood because the permutational symmetry arising from indistinguishability restricts dynamics to a small subspace of the full Hilbert space.
Understanding collective light-matter interactions beyond the cavity QED regime is critical not only from a fundamental point of view, but also to realize applications in quantum non-linear optics, quantum simulation, and metrology. Potentially, one could translate concepts such as the superradiant laser [9; 10], driven-dissipative phase transitions [11; 12], and quantum-enhanced sensing [13; 14; 15] into a much larger class of systems. For instance, atomic arrays in the single-excitation regime have been proposed as promising platforms for generating novel light sources and optical components, with the recent realization of an atomically-thin mirror [16; 17] as an example. The many-body landscape offers a far greater toolbox, and could open up possibilities to create sources of light with unusual statistical properties [18; 19; 20; 21; 22; 23] or to generate entangled atomic states via dissipation [24; 25; 26; 27; 28; 29; 30; 31; 32].
In extended systems in free space, interactions between atoms depend on their relative positions. Theoretical studies of Dicke superradiance in this regime have been greatly limited, as the broken permutational symmetry increases the complexity of the problem, which in principle scales exponentially with atom number. However, experiments have confirmed that superradiant bursts can still occur. The first demonstrations occurred in thermal molecular and atomic vapors [33; 34; 35; 36; 37], but observations have since been made in several other systems [38; 39; 40; 41]. In contrast to other phenomena (such as subradiance), superradiance is attractive from an experimental point of view, as it is robust under many imperfections and does not require single photon detection.
Ordered atomic arrays [42; 43; 44; 45; 46; 47] have been recently suggested as a promising platform to study many-body decay [48; 49; 50; 27; 45; 46; 47]. In contrast to other setups that typically suffer from dephasing arising from thermal motion or coherent (i.e., Hamiltonian) dipole-dipole interactions, atomic arrays are supposed to experience less dephasing, as the role of Hamiltonian dipole-dipole interactions in the burst is significantly reduced due to the spatial order. In these systems, atoms can decay into many radiative channels. Nevertheless, it has been shown that signatures of superradiance should persist in very extended two-dimensional (2D) systems, of size much larger than the transition wavelength [48; 49; 50].
Here, we propose the use of alkaline earth(-like) atoms (AEAs) in atomic arrays to observe and control Dicke superradiance. These atoms have favorable transitions that enable their trapping at relatively small distances in comparison to the wavelength of the emitted photons.
While the atoms are intrinsically multilevel in nature, we demonstrate that the internal competition presented by the different transitions does not prohibit Dicke superradiance. Via a cumulant expansion, we approximate the dynamics and compute the superradiant bursts that would be emitted by arrays of lattice constants that can be achieved in state-of-the-art experimental setups. The emitted light is nontrivially dependent on the geometry of the array and detector location [as shown in Fig. 1(b)]. For example, as the interatomic distance is increased, the superradiant burst is lost but then reappears. We show that this dependence can be easily predicted by the use of conditional two-photon correlation functions. Finally, we show how to use Dicke superradiance to inhibit or enhance decay into a particular state, overcoming limits set by fine structure and Zeeman branching. This work suggests that AEAs offer significant advantages for exploring and harnessing superradiance in atom arrays.
The paper is structured as follows. In Section II, we introduce the full relevant structure of AEAs. In Section III, we consider toy models of multi-level atoms at a single spatial location. We show that Dicke superradiance occurs both for decay to multiple ground states and for cascaded decay (i.e., where the excited state decays to intermediate states before decaying to the final ground state). This allows us to simplify the full level structure of AEAs, keeping only relevant transitions. In Section IV, we introduce the methods necessary to treat these simplified AEAs in ordered arrays with finite separation. In Section V, we show that significant bursts can be achieved in reasonably sized arrays of AEAs, and that this decay can be tailored via the lattice constant.
## II Transitions of alkaline earth atoms
Here, we discuss the relevant atomic transitions of AEAs. These two-valence-electron species have different wavelength transitions that, in theory, allow for trapping and cooling on a short wavelength and for realizing quantum optics experiments on a much longer wavelength [58] [see Fig. 1(c)]. In particular, the \({}^{1}\)S\({}_{0}\) and metastable \({}^{3}\)P\({}_{\{0,2\}}\) states can be trapped at an optical wavelength. If the atoms are excited into a state in the \({}^{3}\)D\({}_{J}\) manifold, decay occurs at infrared wavelengths, relative to which the atoms have significantly subwavelength spacing. We consider the bosonic isotopes \({}^{88}\)Sr and \({}^{174}\)Yb, where there is no nuclear spin and thus no hyperfine splitting, for the sake of simplicity.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**transition** & **wavelength (nm)** & **decay rate (\(\times 10^{6}\) s\({}^{-1}\))** \\ \hline \({}^{3}\)P\({}_{1}\)\(\rightarrow\)\({}^{1}\)S\({}_{0}\) & 689 [54] & 0.47 [54] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{0}\) & 2600 [55] & 2.8 [56] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{1}\) & 2740 [55] & 1.8 [56] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 3070 [55] & 0.088 [56] \\ \hline \({}^{3}\)D\({}_{2}\)\(\rightarrow\)\({}^{3}\)P\({}_{1}\) & 2690 [55] & 3.3 [54] \\ \hline \({}^{3}\)D\({}_{2}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 3010 [55] & 0.79 [54] \\ \hline \({}^{3}\)D\({}_{3}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 2920 [55] & 5.9 [57] \\ \hline \end{tabular}
\end{table}
Table 2: Wavelengths and decay rates for relevant transitions in \({}^{88}\)Sr.
Figure 1: (a) Atoms at a point emit a superradiant burst, with a peak intensity that scales as the square of the number of atoms, in contrast to uncorrelated atoms, which emit an exponentially-decaying pulse. (b) Schematic of the proposed setup: A 2D array of \(N\) atoms with lattice constant \(d\) is held in the \(x-y\) plane with quantization axis set by a magnetic field along the \(z\)-axis. Light is measured in the far field, at a location described by spherical coordinates \(\{\,r,\theta,\varphi\,\}\), where \(r\gg\sqrt{N}d\). (c) Relevant level structure of bosonic AEAs. The atoms are optically trapped via strong transitions at short wavelengths (dashed line). The atoms are then prepared in a \({}^{3}\)D\({}_{J}\) state, where they decay to \({}^{3}\)P\({}_{J}\) states emitting light with a (relatively long) infrared wavelength, and then potentially decay further to the \({}^{1}\)S\({}_{0}\) state. Possible decay paths are indicated by solid lines. Due to the difference in wavelengths, decay dynamics from \({}^{3}\)D\({}_{J}\) will be dictated by many-body effects.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**transition** & **wavelength (nm)** & **decay rate (\(\times 10^{6}\) s\({}^{-1}\))** \\ \hline \({}^{3}\)P\({}_{1}\)\(\rightarrow\)\({}^{1}\)S\({}_{0}\) & 556 [52] & 1 [53] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{0}\) & 1389 [52] & 2 [53] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{1}\) & 1540 [52] & 1 [53] \\ \hline \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 2090 [52] & 0.03 [53] \\ \hline \({}^{3}\)D\({}_{2}\)\(\rightarrow\)\({}^{3}\)P\({}_{1}\) & 1480 [52] & 2 [53] \\ \hline \({}^{3}\)D\({}_{2}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 1980 [52] & 0.3 [53] \\ \hline \({}^{3}\)D\({}_{3}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) & 1800 [52] & 2 [53] \\ \hline \end{tabular}
\end{table}
Table 1: Wavelengths and decay rates for relevant transitions in \({}^{174}\)Yb.
Results can be extended to fermionic isotopes, where similar physics should be observable.
The internal structure of AEAs is well characterized due to their excellent performance as optical atomic clocks [59, 60, 61, 62, 63, 64, 65, 66, 67]. In recent years, AEA arrays have also attracted much attention as candidates for quantum computing [68, 69, 70, 71], with significant advancements with both strontium [72, 73, 74] and ytterbium [75, 76, 77, 78]. Current tweezer array implementations use Rydberg states to mediate interactions, and do not require subwavelength spacing. Nevertheless, quantum gas microscopes of \({}^{174}\)Yb have been demonstrated, with interatomic spacings of 266 nm [79, 80].
\({}^{174}\)Yb can be operated as an optical source at telecom wavelengths, as the \({}^{3}\)D\({}_{1}\)\(\rightarrow\)\({}^{3}\)P\({}_{\{0,1\}}\) and \({}^{3}\)D\({}_{2}\)\(\rightarrow\)\({}^{3}\)P\({}_{1}\) transitions have wavelengths of around \(1.4-1.5\)\(\mu\)m. Therefore, the light emitted on these transitions is compatible with low-loss fiber-optic cables and devices built with these atoms can be integrated into distributed photonic networks without need for quantum frequency conversion [81, 82, 83]. Alternatively, two-level systems can be found on the \({}^{3}\)D\({}_{3}\)\(\rightarrow\)\({}^{3}\)P\({}_{2}\) line. In addition, lasers and optical components are readily available for all these transitions. Full details of the transition wavelengths and decay rates for ytterbium are given in Table 1.
In \({}^{88}\)Sr the ratio between trapping and science wavelengths is even more beneficial to realize closely-packed arrays. In particular, atoms initialized in the \({}^{3}\)D\({}_{3}\) state decay at a wavelength of \(2.9\)\(\mu\)m. However, these transitions are in the mid-infrared, where sources, detectors and other components are less readily available. Full details of the transition wavelengths and decay rates for strontium are given in Table 2.
Longer wavelength transitions also exist in alkali atoms, including at telecom frequencies [84, 85, 86, 87]. However, the lack of metastable states means the needed initial state is more difficult to prepare. Furthermore, intermediate states have significantly larger linewidths, such that the simplifications we make to the level structure for AEAs are not necessarily valid for alkalis. Additionally, the relatively small fine and hyperfine splitting combined with large multiplicity yields a cluttered spectrum.
## III Multilevel atoms at a point
We first consider a toy model where atoms are all at the same spatial location and are initially in the excited state. This endows the system with enough symmetry that exact dynamics can be calculated for large atom number. It is well established that superradiance can still occur if there are multiple ground states [88, 30, 89]. Here, we show how the properties of the decay change with atom number, allowing us to simplify the level structure of the considered AEAs in Section V.
If all atoms are at a point or, equivalently, identically coupled to a cavity mode, they are indistinguishable. Their indistinguishability means that there is no Hamiltonian interaction and decay is diagonalized into the action of symmetric spin lowering operators of the form \(\hat{S}_{ge}=\sum_{j=1}^{N}\hat{\sigma}_{ge}^{j}\) where \(\hat{\sigma}_{ge}^{j}=\left|g\right\rangle_{j}\left\langle e\right|_{j}\) is the lowering operator between states \(\left|e\right\rangle\) and \(\left|g\right\rangle\) for atom \(j\). The complexity of this problem scales as \(\mathcal{O}(N^{m-1})\) for \(m\)-level atoms, making use of the permutational symmetry and conserved total atom number.
### Multiple ground states: \(\Lambda\)-systems
We now consider a \(\Lambda\)-system where the excited state can decay to two different ground states. The frequencies of these transitions are assumed to be far separated such that the channels are independent.
Figure 2: Superradiant decay from \(\Lambda\)-atoms at a point. Each atom decays at a total rate \(\Gamma_{0}=\Gamma_{0}^{eg}+\Gamma_{0}^{eh}\) split between two levels. (a) Superradiant bursts emitted by 40 \(\Lambda\)-atoms. Solid lines indicate emission on the brighter transition \(\left|e\right\rangle\rightarrow\left|g\right\rangle\), while dashed lines indicate emission on the less bright transition \(\left|e\right\rangle\rightarrow\left|h\right\rangle\). (b) Scaling of the peak emission on each transition. Solid lines are power-law best fits of data from \(N\geq 20\). Solid fit lines indicate a brighter transition, while dashed fit lines indicate a less bright transition. The scalings are \(\sim N^{2.01}\) and \(\sim N^{2.00}\) for the brighter transitions with \(\Gamma_{0}^{eg}=2\Gamma_{0}^{eh}\) and \(\Gamma_{0}^{eg}=1.5\Gamma_{0}^{eh}\) respectively, while the less bright transitions scale as \(N^{1.56}\) and \(N^{1.72}\). In the balanced case, the scaling is \(N^{1.92}\) for both pathways. (c) Fraction of photons emitted on the brighter transition. Solid lines are lines of best fit of data from \(N\geq 20\) of the form \(A\ln(N)+B\). For \(\Gamma_{0}^{eg}=2\Gamma_{0}^{eh}\), the fit is \(0.054\ln(N)+0.630\). For \(\Gamma_{0}^{eg}=1.5\Gamma_{0}^{eh}\), the fit is \(0.046\ln(N)+0.549\).
In the limit of atoms at a point, the dynamics follows the master equation
\[\dot{\rho}=\Gamma_{0}^{eg}\ell[\hat{S}_{ge}](\rho)+\Gamma_{0}^{eh}\ell[\hat{S}_{ he}](\rho), \tag{1}\]
where decay is diagonalized into collective lowering operators \(\hat{S}_{ge,he}=\sum_{j=1}^{N}\hat{\sigma}_{ge,he}^{j}\) and
\[\ell[\hat{A}](\rho)=\hat{A}\rho\hat{A}^{\dagger}-\frac{1}{2}\hat{A}^{\dagger} \hat{A}\rho-\frac{1}{2}\rho\hat{A}^{\dagger}\hat{A}. \tag{2}\]
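A brute-force sketch of integrating Eqs. (1)-(2) with QuTiP is given below for a handful of atoms (the full \(N=40\) curves in Fig. 2 instead exploit the permutational symmetry discussed above). The atom number and decay rates are illustrative, not the values used in the figure.

```python
import numpy as np
import qutip as qt

N = 4                             # number of Lambda-atoms at a point (kept small here)
Gamma_eg, Gamma_eh = 2.0, 1.0     # decay rates to |g> and |h>

e, g, h = (qt.basis(3, i) for i in range(3))

def collective(op):
    """Symmetric sum of a single-atom operator over the N atoms."""
    terms = []
    for j in range(N):
        factors = [qt.qeye(3)] * N
        factors[j] = op
        terms.append(qt.tensor(factors))
    return sum(terms)

S_ge = collective(g * e.dag())    # collective lowering operator on e -> g
S_he = collective(h * e.dag())    # collective lowering operator on e -> h

rho0 = qt.tensor([e * e.dag()] * N)          # fully inverted initial state
H = 0 * S_ge.dag() * S_ge                    # atoms at a point: no coherent shifts
c_ops = [np.sqrt(Gamma_eg) * S_ge, np.sqrt(Gamma_eh) * S_he]
e_ops = [Gamma_eg * S_ge.dag() * S_ge,       # photon emission rate on e -> g
         Gamma_eh * S_he.dag() * S_he]       # photon emission rate on e -> h

tlist = np.linspace(0.0, 3.0 / Gamma_eg, 300)
result = qt.mesolve(H, rho0, tlist, c_ops=c_ops, e_ops=e_ops)
# result.expect[0] and result.expect[1] give the two emission rates versus time
```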
Superradiant bursts can be emitted on multiple channels at the same time, as shown in Fig. 2. Both the height of the burst and its scaling with \(N\) depend on the relative strength of the decay channels. For channels of equal decay rate, \(\Gamma_{0}^{eg}=\Gamma_{0}^{eh}\), the best fit for the peaks' scaling is \(N^{1.92}\), instead of the \(N^{2}\) scaling for two-level systems at a point. For imbalanced channels, \(\Gamma_{0}^{eg}>\Gamma_{0}^{eh}\), the larger burst scales faster than for balanced channels. For the relative rates of decay of \(2:1\) and \(1.5:1\), the best-fit scalings of the peak intensity emitted on the stronger transition are \(N^{2.01}\) and \(N^{2.00}\) respectively. This implies that in such configurations, as long as there is some bias towards one transition, for large enough \(N\), the peak on that transition will always scale as the ideal two-level case with \(N^{2}\). In the case of balanced channels, neither channel gains any advantage, and so the scaling is reduced. Furthermore, the superradiant burst on the weaker transition has a peak that scales slower than the balanced case. For \(\Gamma_{0}^{eg}=2\Gamma_{0}^{eh}\), the scaling is \(N^{1.56}\) and for \(\Gamma_{0}^{eg}=1.5\Gamma_{0}^{eh}\) it is \(N^{1.72}\).
The percentage of photons emitted on the bright channel increases logarithmically with atom number, as shown in Fig. 2(c). Significant population accumulates in \(|g\rangle\) faster than in \(|h\rangle\), and so the collective enhancement of Dicke superradiance occurs earlier, "stealing" photons from the weaker transition. For large atom number, if the ratio between decay rates is strongly biased towards the brighter transition, the impact of the weaker transition is minimal. Such an imbalance in the number of photons emitted on each transition was reported in Ref. [30].
### Cascaded decay: Ladder-systems
We now consider a ladder system where the excited state, \(|e\rangle\), decays to an intermediate state \(|f\rangle\), that itself decays again to the ground state \(|g\rangle\). In the limit of all atoms at a point, dynamics follows the master equation
\[\dot{\rho}=\Gamma_{0}^{ef}\ell[\hat{S}_{ef}](\rho)+\Gamma_{0}^{fg}\ell[\hat{ S}_{fg}](\rho), \tag{3}\]
where decay is diagonalized into collective lowering operators \(\hat{S}_{ef,fg}=\sum_{j=1}^{N}\hat{\sigma}_{ef,fg}^{j}\).
A superradiant burst is emitted on both transitions consecutively, as shown in Fig. 3(b). This is because the excited state decay is extremely fast due to large population inversion, while the decay of the intermediate state is very small due to small inversion. By the time that the population in the intermediate state is large enough to drive fast collective decay, the superradiant burst from the first transition is mostly finished. In the regime \(N\Gamma_{0}^{ef}\gg\Gamma_{0}^{fg}\), the scaling of the first superradiant peak goes approximately as \(\sim N^{2}\) regardless of the relative ratio of decays and the two-level case is retrieved.
## IV Theoretical methods for ordered extended arrays
### Spin model for multilevel atoms
Here we introduce the theoretical framework to investigate an array of \(N\) multilevel atoms that interact with each other via free space, beyond the point approximation.
Figure 3: Superradiant decay from ladder-atoms at a point. Each atom decays initially at a rate \(\Gamma_{0}^{ef}\) to an intermediate state which itself decays at a rate \(\Gamma_{0}^{fg}\). (a) Superradiant bursts emitted by 40 ladder atoms. Solid (dashed) lines indicate emission on the initial (secondary) transition. (b) Scaling of the peak emission by \(N\) ladder atoms. Lines are power-law best fits of data from \(N\geq 20\). In all three cases the fit scales as approximately \(N^{2}\).
Without permutational symmetry, atoms interact both coherently and dissipatively. Under a Born-Markov approximation, the atomic density matrix evolves according to the master equation [90; 91]
\[\dot{\rho}=\sum_{a}-\frac{\mathrm{i}}{\hbar}[\mathcal{H}_{a},\rho]+\mathcal{L}_{a}(\rho), \tag{4}\]
where an excited state \(\ket{e}\) decays to a set of ground states \(\ket{g_{a}}\). Each Hamiltonian and Lindbladian read
\[\mathcal{H}_{a} =-\hbar\omega_{a}\sum_{j=1}^{N}\hat{\sigma}_{g_{a}g_{a}}^{j}+\sum_{j,l=1}^{N}J_{jl}^{a}\hat{\sigma}_{eg_{a}}^{j}\hat{\sigma}_{g_{a}e}^{l}, \tag{5}\] \[\mathcal{L}_{a}(\rho) =\sum_{j,l=1}^{N}\frac{\Gamma_{jl}^{a}}{2}\left(2\hat{\sigma}_{g_{a}e}^{j}\rho\hat{\sigma}_{eg_{a}}^{l}-\hat{\sigma}_{eg_{a}}^{j}\hat{\sigma}_{g_{a}e}^{l}\rho-\rho\hat{\sigma}_{eg_{a}}^{j}\hat{\sigma}_{g_{a}e}^{l}\right), \tag{6}\]
where \(\omega_{a}\) is the frequency of the transition from \(\ket{e}\rightarrow\ket{g_{a}}\), \(\hat{\sigma}_{g_{a}e}^{j}=\ket{g_{a}}_{j}\bra{e}_{j}\) is the lowering operator from state \(\ket{e}\rightarrow\ket{g_{a}}\) for the \(j\)th atom, and interactions between atoms \(j\) and \(l\) are characterized by
\[J_{jl}^{a}-\frac{\mathrm{i}\Gamma_{jl}^{a}}{2}=-\frac{\mu_{0}\omega_{a}^{2}}{ \hbar}\mathbf{\wp}_{a}^{*}\cdot\mathbf{G}_{0}(\mathbf{r}_{j},\mathbf{r}_{l}, \omega_{a})\cdot\mathbf{\wp}_{a}. \tag{7}\]
Here, \(\mathbf{\wp}_{a}\) is the normalized dipole matrix element of the transition, and \(\mathbf{G}_{0}(\mathbf{r}_{j},\mathbf{r}_{l},\omega_{a})\) is the electromagnetic field propagator between atoms at positions \(\mathbf{r}_{j}\) and \(\mathbf{r}_{l}\)[92].
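As an illustration, the dissipative part of Eq. (7) can be evaluated numerically using the standard free-space dyadic Green's function. The sketch below is our own minimal implementation, not code from this work; it operates in units of the single-atom decay rate \(\Gamma_{0}\) and takes a unit-norm dipole orientation.

```python
import numpy as np

def green_tensor(r_vec, k):
    """Free-space dyadic Green's tensor G0(r, omega) for k = omega / c."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    kr = k * r
    prefactor = np.exp(1j * kr) / (4.0 * np.pi * r)
    term_id = (1.0 + 1j / kr - 1.0 / kr**2) * np.eye(3)
    term_rr = (-1.0 - 3j / kr + 3.0 / kr**2) * np.outer(rhat, rhat)
    return prefactor * (term_id + term_rr)

def dissipative_matrix(positions, k, e_dip):
    """Gamma_jl / Gamma_0 from Eq. (7) for a (possibly complex) unit dipole e_dip."""
    n = len(positions)
    Gamma = np.eye(n)                                 # diagonal: Gamma_jj = Gamma_0
    for j in range(n):
        for l in range(j + 1, n):
            G = green_tensor(positions[j] - positions[l], k)
            val = (6.0 * np.pi / k) * np.imag(np.conj(e_dip) @ G @ e_dip)
            Gamma[j, l] = Gamma[l, j] = val
    return Gamma

# Example: 8x8 square array with lattice constant d = 0.1 * lambda, z-polarized dipoles
lam0 = 1.0
k0 = 2.0 * np.pi / lam0
d = 0.1 * lam0
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
positions = np.column_stack([xs.ravel() * d, ys.ravel() * d, np.zeros(64)])
Gamma = dissipative_matrix(positions, k0, np.array([0.0, 0.0, 1.0]))
```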
Throughout the manuscript, we consider that the transition frequencies are all sufficiently distinct such that photons associated with one transition cannot excite any others, and interactions of the form \(\hat{\sigma}_{\mathcal{E}_{a}}^{j}\hat{\sigma}_{g_{a}e}^{l}\) are heavily detuned and can be ignored. This condition is naturally met for transitions from an excited state to states with different angular momentum. For transitions to different Zeeman levels, we assume the presence of a magnetic field to break the degeneracy. This requires the Zeeman splitting to be much larger than the linewidth of the emitted light. The spectrum is maximally broadened for two-level atoms at the same spatial location, as this situation produces the shortest possible burst. In this case, one requires a magnetic field with \(B\gg N\Gamma_{0}/\mu_{B}\), where \(\Gamma_{0}\) is the bare decay rate of a single atom. This corresponds to magnetic fields on the order of \(\sim 100\)G for the atom numbers considered here. For 2D arrays, the power spectrum is expected to scale sub-linearly with atom number, thus requiring smaller Zeeman shifts.
### Conditions for many-body superradiance
#### Two-level systems
The emission of a superradiant burst can be predicted using the set of eigenvalues \(\{\,\Gamma_{\nu}\,\}\) of the dissipative interaction matrix \(\mathbf{\Gamma}\) with elements \(\Gamma_{jl}\) [48]. The minimal requirement for a superradiant burst is an initial positive slope in the emitted photon rate or, equivalently, that the emission of the first photon _on average_ enhances the emission rate of the second. In previous work, we showed that the necessary criterion for a superradiant burst to be emitted from an initially fully excited ensemble of \(N\) two-level atoms is [48]
\[\mathrm{Var.}\left(\frac{\{\,\Gamma_{\nu}\,\}}{\Gamma_{0}}\right)\equiv\frac{ 1}{N}\sum_{\nu=1}^{N}\left(\frac{\Gamma_{\nu}^{2}}{\Gamma_{0}^{2}}-1\right)>1. \tag{8}\]
This condition assumes that all emitted light is collected. If that is not the case, one instead has to consider the rate of emission into the optical modes that are detected. A superradiant burst meeting certain criteria requires that the emission of the first photon on average enhances the rate of photons that meet those criteria. Further details on these derived bounds are given in Appendix A.
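Given the matrix \(\mathbf{\Gamma}\) (for instance, as computed in the sketch above), the criterion of Eq. (8) amounts to a one-line eigenvalue check:

```python
import numpy as np

def predicts_burst(Gamma_over_Gamma0):
    """Eq. (8): a burst is expected if the variance of {Gamma_nu / Gamma_0} exceeds 1."""
    eigvals = np.linalg.eigvalsh(Gamma_over_Gamma0)   # Gamma is real and symmetric
    variance = np.mean(eigvals**2) - 1.0              # the eigenvalues average to 1 by construction
    return variance, variance > 1.0
```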
#### Decay to multiple ground states
If there are multiple ground states, a superradiant burst is emitted by the fully excited state during decay to \(\ket{g_{a}}\) if
\[\mathrm{Var.}\left(\frac{\left\{\,\Gamma_{\nu}^{a}\,\right\}}{\Gamma_{0}^{a} }\right)>\frac{\Gamma_{0}}{\Gamma_{0}^{a}}, \tag{9}\]
where \(\Gamma_{0}=\sum_{a}\Gamma_{0}^{a}\) is the total decay from the excited state. This is of the same form as Eq. (8), but the enhancement provided by the operators on the particular channel needs to additionally overcome competition between different "internal" channels. If all atoms are at a point, then the condition for a superradiant burst on a particular transition reduces to \(\Gamma_{0}^{a}/\Gamma_{0}>1/(N-1)\).
#### Directional decay
In experiments, light is typically only collected in a particular direction. Directional superradiance is defined as the rate of photon emission into a particular direction having a positive slope, and can persist to much larger interatomic separations than when the entire emitted field is considered [50]. As shown in Appendix A, directional superradiance occurs if
\[\sum_{j,l=1}^{N}\mathrm{e}^{\mathrm{i}k_{0}^{a}\mathbf{R}(\theta,\varphi)\cdot(\mathbf{r}_{l}-\mathbf{r}_{j})}\frac{\Gamma_{jl}^{a}}{N\Gamma_{0}^{a}}>1+\frac{\Gamma_{0}}{\Gamma_{0}^{a}}. \tag{10}\]
Here we map directional photon detection to atomic emission where \(\mathbf{R}(\theta,\varphi)\) is a unit vector in the direction of the detector and \(k_{0}^{a}=2\pi/\lambda_{0}^{a}\) the wavevector of the transition [93; 18]. We define the quantity
\[S=\frac{\sum\limits_{j,l=1}^{N}\mathrm{e}^{\mathrm{i}k_{0}^{a}\mathbf{R}(\theta,\varphi)\cdot(\mathbf{r}_{l}-\mathbf{r}_{j})}\Gamma_{jl}^{a}}{N\left(\Gamma_{ 0}^{a}+\Gamma_{0}\right)}, \tag{11}\]
such that if \(S<1\) the photon emission rate will decay monotonically in time, and if \(S>1\), the minimal conditions for superradiance are met.
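The directional parameter \(S\) of Eq. (11) can be evaluated in the same spirit. In the sketch below, which reuses the \(\Gamma\) matrix normalized to the decay rate \(\Gamma_{0}^{a}\) of the monitored transition, `branching` denotes \(\Gamma_{0}^{a}/\Gamma_{0}\) (equal to 1 for an effective two-level atom); this is again our own illustrative code.

```python
import numpy as np

def directional_S(Gamma_over_Gamma0a, positions, k, theta, phi, branching=1.0):
    """Eq. (11) for a detector along the unit vector R(theta, phi)."""
    R = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    phases = np.exp(1j * k * positions @ R)                        # e^{i k0 R . r_j}
    quad = np.real(np.conj(phases) @ Gamma_over_Gamma0a @ phases)  # sum_jl e^{i k0 R.(r_l - r_j)} Gamma_jl / Gamma_0^a
    n = len(positions)
    return quad / (n * (1.0 + 1.0 / branching))                    # S > 1 signals directional superradiance
```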
### Master equation evolution by cumulant expansion
It would be ideal to calculate the full dynamics to verify our predictions. However, the full Hilbert space scales exponentially with atom number. We approximate the full dynamics by means of a second-order cumulant expansion [94; 95; 96]. This method involves truncating the hierarchy of operator expectation values such that
\[\left\langle\hat{u}\hat{v}\hat{w}\right\rangle=\left\langle\hat{u}\hat{v} \right\rangle\left\langle\hat{w}\right\rangle+\left\langle\hat{v}\hat{w} \right\rangle\left\langle\hat{u}\right\rangle+\left\langle\hat{u}\hat{w} \right\rangle\left\langle\hat{v}\right\rangle-2\left\langle\hat{u}\right\rangle \left\langle\hat{v}\right\rangle\left\langle\hat{w}\right\rangle. \tag{12}\]
The complexity of this expansion scales as \(\mathcal{O}(N^{3})\) rather than exponentially. Further details are provided in Appendix B. The accuracy of this approximation is not well-characterized for 2D arrays. We benchmark the method in Appendix C, showing that generically the accuracy is better for larger lattice constants.
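For completeness, the closure of Eq. (12) is simply a rule for replacing any third-order moment by first- and second-order expectation values, as in this minimal sketch:

```python
def third_moment_closure(uv, vw, uw, u, v, w):
    """Second-order cumulant closure of Eq. (12): approximate <u v w> from pair and single moments."""
    return uv * w + vw * u + uw * v - 2.0 * u * v * w
```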
## V Superradiance in 2D arrays of alkaline-earth atoms
We now consider AEA arrays where atoms are initialized in either one of the \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\) or \({}^{3}\mathrm{D}_{3}\left|J=3,m_{J}=\left\{0,3\right\}\right\rangle\) states and allowed to decay [see Fig. 4]. Large inversion can be achieved with a short intense pulse of duration \(\tau\ll\left(N\Gamma_{0}\right)^{-1}\) to prevent collective effects [97].
Decay from \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\) can be simplified using information from Tables 1 and 2. First, decay to \({}^{3}\mathrm{P}_{2}\) has minimal impact on the dynamics due to the reduced linewidth. Second, the subsequent decay from \({}^{3}\mathrm{P}_{1}\rightarrow{}^{1}\mathrm{S}_{0}\) will not impact the initial burst as the decay is not fast enough, nor, due to the short wavelength, will it be strongly collectively enhanced. This leads to a four-level system, as shown in Fig. 4(a), with a bright linearly-polarized decay channel \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\)\(\rightarrow\)\({}^{3}\mathrm{P}_{0}\left|J=0,m_{J}=0\right\rangle\) and two dimmer circularly polarized transitions \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\)\(\rightarrow\)\({}^{3}\mathrm{P}_{1}\left|J=1,m_{J}=\pm 1\right\rangle\). Note that the Clebsch-Gordan coefficient is zero for the \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\)\(\rightarrow\)\({}^{3}\mathrm{P}_{1}\left|J=1,m_{J}=0\right\rangle\) pathway. For simplicity, we treat decay to \({}^{3}\mathrm{P}_{1}\left|J=1,m_{J}=\pm 1\right\rangle\) as split by large enough Zeeman shifts that photons on each transition are not seen by the other, but not by enough to significantly alter the wavelength of the transitions. Without such Zeeman shifts, photons of one circular polarization can drive transitions with the other, allowing the atoms to explore the full Zeeman structure and adding far greater complexity to the problem [98]. Similar structure of three decay channels would be obtained for initialization in \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=\pm 1\right\rangle\), but here the brightest transition is circularly-polarized which is generically less favorable than linearly-polarized transitions for superradiance [49].
We also consider atoms initialized in the \({}^{3}\mathrm{D}_{3}\left|J=3,m_{J}=0\right\rangle\) state. From here, there are also three decay channels, as shown in Fig. 4(b). As before, one is linearly polarized, that to \({}^{3}\mathrm{P}_{2}\left|J=2,m_{J}=0\right\rangle\), and the two decay channels to \({}^{3}\mathrm{P}_{2}\left|J=2,m_{J}=\pm 1\right\rangle\) are circularly polarized. As above, we assume that these channels are independent due to sufficiently large Zeeman shifts. Alternatively, atoms initialized in \({}^{3}\mathrm{D}_{3}\left|J=3,m_{J}=3\right\rangle\) are effective two-level systems with circularly polarized decay, as the only decay channel is to \({}^{3}\mathrm{P}_{2}\left|J=2,m_{J}=2\right\rangle\). To study this situation, we rotate the magnetic field such that the dipole moment of the two-level systems is oriented as \(\mathbf{\wp}=\sqrt{1/2}\left(\hat{y}+\mathrm{i}\hat{z}\right)\), so that the detector position is still perpendicular to the polarization axis, enhancing the signal. Other three (and two) decay channel systems could also be obtained by starting in different Zeeman levels in the \({}^{3}\mathrm{D}_{3}\) line.
We thus reduce the level structure of both atomic species to those shown in Fig. 4. Starting from states with \(m_{J}=0\), the master equation in Eq. (4) reduces to
\[\dot{\rho}=-\frac{\mathrm{i}}{\hbar}[\mathcal{H}_{f}+\mathcal{H}_{g}+\mathcal{ H}_{h},\rho]+\mathcal{L}_{f}(\rho)+\mathcal{L}_{g}(\rho)+\mathcal{L}_{h}(\rho), \tag{13}\]
where \(\left|e\right\rangle\) is the excited state and \(\left|f,g,h\right\rangle\) are the three ground states. For the two-level system we instead have
\[\dot{\rho}=-\frac{\mathrm{i}}{\hbar}[\mathcal{H}_{g},\rho]+\mathcal{L}_{g}( \rho). \tag{14}\]
### Many-body decay vs distance
We first investigate atoms initialized in the \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\) state. We consider the condition given in Eq. (10) for the specific case of a square array of \(12\times 12\) atoms. The detector is placed along the \(x\)-axis, which should see significant emission as it is perpendicular to the dipole moment. For \({}^{174}\)Yb atoms, this detector would measure a superradiant burst on the
Figure 4: Considered level structures. Atoms are initialized in the (a) \({}^{3}\mathrm{D}_{1}\left|J=1,m_{J}=0\right\rangle\) or (b) \({}^{3}\mathrm{D}_{3}\left|J=3,m_{J}=0,3\right\rangle\) state. In both \(m_{J}=0\) cases, the internal structure is simplified into a toy model with three decay channels: a dominant linear \(\pi\)-polarized channel and two circularly polarized channels. In (a), the \({}^{3}\mathrm{P}_{1}\left|J=1,m_{J}=0\right\rangle\) state is not considered as the transition is forbidden.
dominant transition for any interatomic separation satisfying \(d<0.6\,\mu\)m, as shown in Fig. 5(a). This distance would be challenging for tweezer array experiments, but is achievable in an optical lattice [16; 17].
Superradiance can also be observed at particular "islands" where the set of decay operators combines to realize a sudden revival in emission in a particular direction. For this detector position, this occurs in regions centred on \(d=0.7\,\mu\)m and \(d=1.4\,\mu\)m, corresponding to a half and full wavelength of the brightest transition respectively. These revivals are due to geometric resonances where a mode that emits in this direction suddenly increases in amplitude due to Umklapp scattering, and thus becomes significantly brighter [99; 100]. While these processes can also revive global superradiance [48; 49], the effect is much more pronounced for directional superradiance [50]. This type of superradiance is highly dependent on the detector position, so the predicted distances change as a function of detector angle. The approximated full dynamics (via second-order cumulant expansion) agrees with our predictions, as shown in Fig. 5(b).
Superradiance can also be observed in \({}^{88}\)Sr. The dominant decay channel from \({}^{3}\)D\({}_{1}\,|J=1,m_{J}=0\rangle\) has a smaller branching ratio than that in \({}^{174}\)Yb, but due to its much longer wavelength, the constraints on interatomic distance are less tight. Figure 5(c) shows that a superradiant burst is always observed for \(d<1\,\mu\)m. In addition, superradiance could be observed at the revivals with interatomic spacing \(1.3\,\mu\)m and \(2.6\,\mu\)m. Therefore, as in \({}^{174}\)Yb, as the interatomic spacing is increased, directional superradiance disappears and reappears. Strontium thus also offers a suitable platform for the direct observation of Dicke superradiance, despite the less favorable branching ratios to each state (see Tables 1 and 2).
### Collective "closing" of the atomic transition
Once a transition starts decaying superradiantly, it proceeds quickly, "stealing" photons from the other transition
Figure 5: Predictions of a burst versus lattice constant in \(12\times 12\) arrays of (a,b) \({}^{174}\)Yb and (c,d) \({}^{88}\)Sr. The atoms are prepared in the \({}^{3}\)D\({}_{1}\,|J=1,m_{J}=0\rangle\) state initially and allowed to decay, with the linear transition polarized perpendicular to the array. Light is detected along the \(x\)-axis. (a,c) The shaded areas indicate a burst on the \({}^{3}\)D\({}_{1}\,|J=1,m_{J}=0\rangle\)\(\rightarrow\)\({}^{3}\)P\({}_{0}\,|J=0,m_{J}=0\rangle\) transition (red line), as the quantity \(S\) in Eq. (11) is larger than 1 (black dashed line). The \({}^{3}\)P\({}_{1}\,|J=1,m_{J}=\pm 1\rangle\) transition (orange dashed line) is never superradiant. (b,d) Approximations of the full dynamics via a second-order cumulant expansion (on the \({}^{3}\)D\({}_{1}\rightarrow\)\({}^{3}\)P\({}_{0}\) transition) at three particular distances. As predicted by the superradiance condition, as the distance increases, the superradiant burst disappears and then reappears.
(as was discussed for the toy model described in Sec. III). The effect is stronger for smaller interatomic distances, because the superradiant burst is much faster in that regime. In Fig. 6, we plot the total photon share scattered on the dominant transition during superradiance. For \({}^{88}\)Sr, starting in the \({}^{3}\mathrm{D}_{3}\ket{J=3,m_{J}=0}\) state, a single atom scatters 60% of light on its brightest transition. By contrast, a \(12\times 12\) array overcomes the Clebsch-Gordan coefficient and scatters almost 70% on that transition. We see a similar improvement at telecom wavelengths for \({}^{174}\)Yb initialized in \({}^{3}\mathrm{D}_{1}\ket{J=1,m_{J}=0}\). As for the idealized case of atoms at a point, the light share increases logarithmically with atom number.
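The logarithmic trend is quantified by the fits quoted in the caption of Fig. 6(a). The short sketch below (our own illustration) simply evaluates those fitted coefficients; extrapolating beyond the simulated atom numbers is an assumption.

```python
import numpy as np

# Fits from Fig. 6(a), share = A*ln(N) + B, for square arrays with d = 0.2*lambda0;
# using them beyond the simulated range (fits obtained for N >= 25) is an assumption.
fits = {"174Yb, 3D1 -> 3P0": (0.026, 0.622),
        "88Sr,  3D3 -> 3P2": (0.030, 0.545)}

for label, (A, B) in fits.items():
    shares = {N: round(A * np.log(N) + B, 3) for N in (25, 144, 400)}
    print(label, shares)
# 174Yb: ~0.75 at N = 144; 88Sr: ~0.69 at N = 144, i.e. the "almost 70%" quoted above.
```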
The geometry of the array dictates the relative scattering on each channel and how much one can go beyond the ratio dictated by the single-atom Clebsch-Gordan coefficients. As the interatomic distance is increased, generally the transition is reopened, as shown in Fig. 6(c). However, revivals due to the geometric resonances can be seen by comparison to the variance of the set of decay rates [see Fig. 6(b)]. The revivals appear relatively minor but are capable of strongly impacting the decay dynamics. It should be noted that the condition for global superradiance is not met in the arrays of larger lattice constants, yet the closing of these weaker channels still occurs despite the fact that the avalanche is relatively weak.
The share could be further increased by intentionally seeding the transition with a small fraction of the atoms deterministically placed in the desired ground state, or an incomplete initial excitation from a particular ground state. Instead of relying on a quantum fluctuation to drive the start of the superradiant burst, these atoms would provide an artificial fluctuation. This would accelerate the superradiant burst on that seeded transition and generate a large atomic population in that state [101, 102, 30]. Nevertheless, how effectively this fluctuation will trigger the avalanche depends on its specific spatial profile and phase. We will thus not explore this avenue here.
### Scaling of the burst
The largest possible burst occurs for atoms initialized in the \({}^{3}\mathrm{D}_{3}\ket{J=3}\) manifold. The transition wavelength is \(\lambda_{0}=1.80\)\(\mu\)m for \({}^{174}\)Yb and \(\lambda_{0}=2.92\)\(\mu\)m in \({}^{88}\)Sr. To minimize the interatomic distance, both species are assumed to be trapped in an optical lattice with 244 nm lattice spacing, corresponding to a wavelength of 488 nm for which Yb and Sr are trapped in the relevant states and high power lasers are available. This yields an interatomic spacing of \(d=0.136\lambda_{0}\) for \({}^{174}\)Yb and \(d=0.084\lambda_{0}\) for \({}^{88}\)Sr. We consider two initial states: \(\ket{J=3,m_{J}=0}\) and \(\ket{J=3,m_{J}=3}\). In the former case, decay is to three states, with a dominant transition that is linearly polarized. In the latter case, the atoms become effective two-level systems, decaying by circular \(\sigma^{+}\)-polarized light. This closed two-level transition can be accessed in all AEAs, including fermionic isotopes.
The largest possible burst is emitted by the simplest system operating at the longest wavelength, as shown in Fig. 7. For the effective two-level system, the peak intensity is more than three times greater than the initial intensity emitted by an array of \(12\times 12\)\({}^{174}\)Yb atoms, and more than six times greater for \({}^{88}\)Sr. The peak scales as \(\sim N^{1.38}\) and \(\sim N^{1.47}\) for \({}^{174}\)Yb and \({}^{88}\)Sr respectively. The smaller peak emitted from the \(\ket{J=3,m_{J}=0}\) state is still significant, and scales as \(\sim N^{1.29}\) for \({}^{174}\)Yb and
Figure 6: Manipulation of branching ratios by collective emission. (a) Scaling of total share of light emitted on the brightest linearly-polarized transition (polarized perpendicular to the array) with atom number for square arrays of spacing \(d=0.2\lambda_{0}\), obtained by second-order cumulant expansion simulations. Simulations for \({}^{174}\)Yb (\({}^{88}\)Sr) are plotted in red (blue), with atoms initialized in the \({}^{3}\mathrm{D}_{1}\ket{J=1,m_{J}=0}\) (\({}^{3}\mathrm{D}_{3}\ket{J=3,m_{J}=0}\)) state. Horizontal dashed lines represent the branching ratio for independent atoms. Solid lines are best fits to data from \(N\geq 25\) of the form \(A\ln(N)+B\). For \({}^{174}\)Yb (\({}^{88}\)Sr), the fit is \(0.026\ln(N)+0.622\) (\(0.030\ln(N)+0.545\)). (b,c) \(12\times 12\) atoms are initialized in the \({}^{3}\mathrm{D}_{3}\ket{J=3,m_{J}=0}\) state. (b) The condition given by Eq. (9) in the form \(\mathrm{Var.}\left(\left\{\Gamma_{\nu}^{\mathrm{cd}}\right\}/\Gamma_{0}^{ \mathrm{cd}}\right)-\Gamma_{0}/\Gamma_{0}^{\mathrm{cd}}+1\); a global superradiant burst will be measured on the transition to \({}^{3}\mathrm{P}_{2}\ket{J=2,m_{J}=0}\) where the solid line is above the dashed line. (c) Total share of light emitted on the brightest linearly-polarized transition as the interatomic distance is changed.
\(\sim N^{1.37}\) for \({}^{88}\)Sr. A word of caution is needed though, as the validity of the second-order cumulant expansion is not well characterized for these large atom numbers at such small distances, and may overestimate the burst due to significant multipartite correlations [96].
## VI Conclusions
We have presented results on collective decay in realistic arrays of alkaline earth atoms. Building on previous work, we calculate conditional correlation functions to predict the nature of the collective decay. We predict highly non-trivial many-body decay through control of the interatomic spacing of the array and position of the detector. Focussing on the particular cases of \({}^{88}\)Sr and \({}^{174}\)Yb, we show that the observation of Dicke superradiance should be feasible in such systems. Furthermore, we show that by increasing the interatomic separation, Dicke superradiance is attenuated and lost, but is then revived at a larger distance. We show that this understanding can be used to manipulate how much population ends up in each possible ground state.
Experiments are critical to understand many-body decay, as full dynamics are only obtained via approximations. We have focused on strontium and ytterbium due to the recent progress in implementing atomic arrays with these species, and the favorable set of transitions to achieve subwavelength interatomic spacing. However, similar results should also be possible with other alkaline earth elements, which have the same structure, but where progress in cooling and trapping is less advanced [103; 104; 105; 106]. The relative spacing (and order) of levels is different in all these atoms. For example, in radium, there is a two-level linearly-polarized transition - as the \({}^{3}\)D\({}_{1}\) state can only decay to \({}^{3}\)P\({}_{0}\) - at a far-infrared wavelength of \(\sim 16\,\mu\)m [107]. Rare earth elements also have infrared transitions from the ground state, and can be similarly trapped in short wavelengths due to strong blue transitions [108; 109; 110; 111].
Our results may be relevant in the context of Rydberg quantum simulators [112; 113; 114]. Excited Rydberg states can decay via a fast short-wavelength transition (and therefore not collectively enhanced) or via much slower but very long-wavelength transitions. Our work implies that the amount of light scattered on these long Rydberg-Rydberg transitions could be significantly enhanced by collective decay [115; 116; 89]. Furthermore, understanding the collective enhancement of coupling between black body photons and the \({}^{3}\)P\({}_{0}\) state in atomic optical lattice clocks is key to achieving high precision in compact devices [117; 118; 119].
The control of the atoms is translated into control over the emitted light. For example, initial superposition states will emit superpositions of different pulses, with the potential for generation of macroscopic superposition states of light. In particular, the potential for \({}^{174}\)Yb arrays to produce non-classical light at telecom frequencies is tantalizing. While we have focused on the interatomic spacing of the array and the relative position of the detector to control the decay, there are additional tuning knobs that could be harnessed. The dynamics, and in particular the directionality, could be altered by changing the geometry of the array, either by modifying the lattice or the global shape. Manipulation of the initial state, adding coherent or incoherent drives, or manual addition of site-specific inhomogeneity [120]; all of this will impact the dynamics and steady state of the system.
Understanding the various decay processes - and freezing out coherent dynamics - opens up possibilities to harness them. For instance, the complex dissipative dynamics provides a method to access highly entangled
dark states that completely decouple from the environment. The deterministic production of these states, and their potential as resources for quantum computing and metrology [121; 122], remains an exciting open problem.
## VII Acknowledgements
We are grateful to Francis Robicheaux, Oriol Rubies-Bigorda, Silvia Cardenas-Lopez, Debayan Mitra and Hannes Bernien for useful discussions. A.A.-G. acknowledges support by the National Science Foundation through the CAREER Award (No. 2047380) and the QII-TAQS program (Award No. 1936359), the Air Force Office of Scientific Research through their Young Investigator Prize (grant No. 21RT0751), as well as by the A. P. Sloan Foundation. A.A.-G. also acknowledges the Flatiron Institute, where some of this work was performed. A.A.-G. and S.J.M. acknowledge additional support by the David and Lucile Packard Foundation. J.P.C. acknowledges support from the NSF PHY Division (Award No. 2112663). S.W. acknowledges support by the National Science Foundation through the QII-TAQS program (Award No. 1936359) and the Alfred P. Sloan Foundation. We acknowledge computing resources from Columbia University's Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.
## Appendix A Derivation of directional condition for superradiance on a particular channel
The dissipator of Eq. (6) can be expressed in terms of a set of collective lowering operators \(\{\,\hat{\mathscr{G}}_{\nu,a}\,\}\). These are generically superpositions of the form
\[\hat{\mathscr{G}}_{\nu,a}=\sum\limits_{j=1}^{N}\alpha_{\nu,a,j}\hat{\sigma}_ {g_{a}e}^{j}. \tag{15}\]
The coefficients \(\alpha_{\nu,a,j}\) are found as the eigenstates of the dissipative interaction matrix \(\mathbf{\Gamma}^{a}\) with elements \(\Gamma^{a}_{jl}\), with their rates, \(\{\,\Gamma^{a}_{\nu}\,\}\), given by the corresponding eigenvalues. Each Lindbladian term is thus recast as
\[\mathscr{L}_{a}(\rho)=\sum\limits_{\nu=1}^{N}\frac{\Gamma^{a}_{ \nu}}{2}\left(2\hat{\mathscr{G}}_{\nu,a}\rho\,\hat{\mathscr{G}}^{\dagger}_{ \nu,a}-\hat{\mathscr{G}}^{\dagger}_{\nu,a}\hat{\mathscr{G}}_{\nu,a}\rho-\rho \,\hat{\mathscr{G}}^{\dagger}_{\nu,a}\hat{\mathscr{G}}_{\nu,a}\right). \tag{16}\]
Dissipation can thus be understood as the emission of a photon into one of the \(N\) possible decay channels.
## Appendix A.1 Multiple ground states
The derivative of the intensity emitted on a specified transition \(|e\rangle\rightarrow|g_{a}\rangle\) is positive if
\[\frac{\sum\limits_{\mu=1}^{N}\sum\limits_{b}\sum\limits_{\nu=1}^{N}\langle\hat{\mathscr{G}}^{\dagger}_{\nu,b}\hat{\mathscr{G}}^{\dagger}_{\mu,a}\hat{\mathscr{G}}_{\mu,a}\hat{\mathscr{G}}_{\nu,b}\rangle}{\left(\sum\limits_{b}\sum\limits_{\nu=1}^{N}\langle\hat{\mathscr{G}}^{\dagger}_{\nu,b}\hat{\mathscr{G}}_{\nu,b}\rangle\right)\left(\sum\limits_{\mu=1}^{N}\langle\hat{\mathscr{G}}^{\dagger}_{\mu,a}\hat{\mathscr{G}}_{\mu,a}\rangle\right)}>1. \tag{17}\]
On a fully excited initial state, this expression reads
\[1+\frac{\sum\limits_{\nu=1}^{N}(\Gamma^{a}_{\nu})^{2}}{N^{2}\Gamma^{a}_{0} \Gamma_{0}}-\frac{1}{N}-\frac{\Gamma^{a}_{0}}{N\Gamma_{0}}>1, \tag{18}\]
which simplifies to
\[\text{Var.}\left(\frac{\Gamma^{a}_{\nu}}{\Gamma^{a}_{0}}\right)\equiv\frac{1} {N}\sum\limits_{\nu=1}^{N}\left[\left(\frac{\Gamma^{a}_{\nu}}{\Gamma^{a}_{0} }\right)^{2}-1\right]>\frac{\Gamma_{0}}{\Gamma^{a}_{0}} \tag{19}\]
where \(\Gamma_{0}=\sum_{a}\Gamma^{a}_{0}\) is the total excited state decay rate.
## Appendix A.2 Directional decay
Detection of a photon from a given transition in far-field in a direction governed by spherical angles \(\{\,\theta,\varphi\,\}\) can be mapped to the collective lowering operator [93; 18]
\[\hat{\mathscr{G}}_{a}(\theta,\varphi)=\sqrt{\frac{3\Gamma^{a}_{0}}{8\pi}\left[1-\left(\mathbf{\wp}_{a}\cdot\mathbf{R}(\theta,\varphi)\right)^{2}\right]\text{d}\Omega}\] \[\times\sum\limits_{j=1}^{N}\mathrm{e}^{-\mathrm{i}k^{a}_{0}\mathbf{R}(\theta,\varphi)\cdot\mathbf{r}_{j}}\hat{\sigma}_{g_{a}e}^{j}, \tag{20}\]
where \(\mathbf{R}(\theta,\varphi)\) is a unit vector in the direction of the detector, \(\text{d}\Omega\) is a solid angle increment and \(k^{a}_{0}\) is the wavevector of the transition. Using these, the derivative of the intensity emitted on a specified transition \(|e\rangle\rightarrow|g_{a}\rangle\) in a direction \(\{\theta,\varphi\}\) is positive if
\[\frac{\sum\limits_{b}\sum\limits_{\nu=1}^{N}\langle\hat{\mathscr{G}}^{\dagger}_ {\nu,b}\hat{\mathscr{G}}^{\dagger}_{a}(\theta,\varphi)\hat{\mathscr{G}}_{a}( \theta,\varphi)\hat{\mathscr{G}}_{\nu,b}\rangle}{\sum\limits_{b}\left(\sum \limits_{\nu=1}^{N}\langle\hat{\mathscr{G}}^{\dagger}_{\nu,b}\hat{\mathscr{G}} _{\nu,b}\rangle\right)\langle\hat{\mathscr{G}}^{\dagger}_{a}(\theta,\varphi) \hat{\mathscr{G}}_{a}(\theta,\varphi)\rangle}>1. \tag{21}\]
On a fully excited state, this expression reads,
\[1+\frac{\sum\limits_{j,l=1}^{N}\mathrm{e}^{\mathrm{i}k_{0}^{a}\mathbf{R}(\theta,\varphi)\cdot(\mathbf{r}_{l}-\mathbf{r}_{j})}\Gamma_{jl}^{a}}{N^{2}\Gamma_{0}}-\frac{1}{N}-\frac{\Gamma^{a}_{0}}{N\Gamma_{0}}>1 \tag{22}\]
where we have employed that
\[\sum\limits_{\nu=1}^{N}\Gamma^{a}_{\nu}\alpha^{*}_{\nu,a,j}\alpha_{\nu,a,l}= \Gamma^{a}_{jl}. \tag{23}\]
This simplifies to
\[\sum_{j,l=1}^{N}\mathrm{e}^{\mathrm{i}k_{0}^{a}\mathbf{R}(\theta,\varphi)\cdot(\mathbf{r}_{l}-\mathbf{r}_{j})}\frac{\Gamma_{jl}^{a}}{N\Gamma_{0}^{a}}>1+\frac{\Gamma_{0}}{\Gamma_{0}^{a}}. \tag{24}\]
## Appendix B Second-order cumulant expansion for four-level systems
We consider four-level atoms that can decay to three ground states \(\left|f,g,h\right\rangle\) from an excited state \(\left|e\right\rangle\). Photons associated to each transition \(\left|e\right\rangle\rightarrow\left|f,g,h\right\rangle\) are sufficiently distinct in frequency to not excite one another. To calculate the directional intensity on a channel \(\left|e\right\rangle\rightarrow\left|f\right\rangle\), we require the evolution of the expectation values of the set \(\{\hat{\sigma}_{ef}^{i}\hat{\sigma}_{fe}^{j}\}\), requiring at least a second-order cumulant expansion. Generically, the evolution of these expectation values depends on the expectation values of sets of three of the population operators, e.g. \(\{\hat{\sigma}_{ee}^{i}\}\) (the fourth can be related to the other three as the total single atom population is always unity), all six coherence operators, e.g., \(\{\hat{\sigma}_{ef}^{i}\}\), and all \(66\) two-operator products, noting that the complex expectation value of the coherence operators leads to extra operators such as \(\{\hat{\sigma}_{ef}^{i}\hat{\sigma}_{fe}^{j}\}\).
If the initial state has no coherence, such that initially
\[\{\left\langle\hat{\sigma}_{ab}^{i}\right\rangle\}=0\;\forall\;a,b\;\;\text{ and}\;\;\;\{\left\langle\hat{\sigma}_{ab}^{i}\hat{\sigma}_{cd}^{j}\right\rangle\}=0\;\forall\;a,b,c,d, \tag{25}\]
then the equations are greatly simplified. This condition is met by the fully excited state, or any state where all single atom states are the ground or excited state. In this case, the single atom coherences are never different from zero in the second-order cumulant expansion and the expectation values of all two-operator products of the form \(\{\hat{\sigma}_{ab}^{i}\hat{\sigma}_{ad}^{j}\}\) are always zero, and those of the form \(\hat{\sigma}_{ab}^{i}\hat{\sigma}_{da}^{j}\) are zero \(\forall\,b\neq d\). The two-operator products of the form \(\{\hat{\sigma}_{aa}^{i}\hat{\sigma}_{bb}^{j}\}\) and \(\{\hat{\sigma}_{ab}^{i}\hat{\sigma}_{ba}^{j}\}\) do become non-zero, but only those where one of \(a\) or \(b\) represents the excited state impact the evolution of the terms needed to calculate directional intensity. As such, there is a closed set of operators with expectation values defined as
\[a_{j} =\left\langle\hat{\sigma}_{ee}^{j}\right\rangle,\;b_{j}=\left\langle\hat{\sigma}_{ff}^{j}\right\rangle,\;c_{j}=\left\langle\hat{\sigma}_{gg}^{j}\right\rangle, \tag{26a}\] \[e_{jl} =\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{ee}^{l}\right\rangle,\;f_{jl}=\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{ff}^{l}\right\rangle,\;g_{jl}=\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{gg}^{l}\right\rangle,\] (26b) \[q_{jl} =\left\langle\hat{\sigma}_{ef}^{j}\hat{\sigma}_{fe}^{l}\right\rangle,\;p_{jl}=\left\langle\hat{\sigma}_{eg}^{j}\hat{\sigma}_{ge}^{l}\right\rangle,\;r_{jl}=\left\langle\hat{\sigma}_{eh}^{j}\hat{\sigma}_{he}^{l}\right\rangle. \tag{26c}\]
The expectation values evolve according to
\[\dot{a}_{j} =-\Gamma_{0}a_{j}+\mathrm{i}\sum_{m=1}^{N}\left[-A_{jm}q_{jm}+A_{jm }^{*}q_{mj}-B_{mj}p_{jm}+B_{mj}^{*}p_{mj}-C_{jm}r_{jm}+C_{mj}^{*}r_{mj}\right],\] (27a) \[\dot{b}_{j} =\Gamma_{0}^{ef}a_{j}+\mathrm{i}\sum_{m=1}^{N}\left[-A_{mj}^{*}q_ {mj}+A_{jm}q_{jm}\right],\] (27b) \[\dot{c}_{j} =\Gamma_{0}^{eg}a_{j}+\mathrm{i}\sum_{m=1}^{N}\left[-B_{mj}^{*}p_ {mj}+B_{jm}p_{jm}\right],\] (27c) \[\dot{e}_{jl} =-2\Gamma_{0}\varphi_{jl}+\mathrm{i}\sum_{l}\left[-A_{jm}\left\langle \hat{\sigma}_{ef}^{j}\hat{\sigma}_{ee}^{l}\hat{\sigma}_{fe}^{m}\right\rangle-A_ {jm}\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{ef}^{l}\hat{\sigma}_{fe}^{m }\right\rangle-B_{jm}\left\langle\hat{\sigma}_{eg}^{j}\hat{\sigma}_{ee}^{l} \hat{\sigma}_{ge}^{m}\right\rangle-B_{jm}\left\langle\hat{\sigma}_{ee}^{j}\hat {\sigma}_{eg}^{l}\hat{\sigma}_{ge}^{m}\right\rangle-C_{jm}\left\langle\hat{ \sigma}_{eh}^{j}\hat{\sigma}_{ee}^{l}\hat{\sigma}_{he}^{m}\right\rangle\] \[-C_{jm}\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{eh}^{l}\hat{ \sigma}_{he}^{m}\right\rangle+A_{mj}^{*}\left\langle\hat{\sigma}_{fe}^{j}\hat {\sigma}_{ee}^{l}\hat{\sigma}_{ef}^{m}\right\rangle+A_{ml}^{*}\left\langle\hat{ \sigma}_{ee}^{j}\hat{\sigma}_{fe}^{l}\hat{\sigma}_{ef}^{m}\right\rangle+B_{mj}^{* }\left\langle\hat{\sigma}_{ge}^{j}\hat{\sigma}_{ee}^{l}\hat{\sigma}_{eg}^{m} \right\rangle+B_{ml}^{*}\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{ge}^{l} \hat{\sigma}_{eg}^{m}\right\rangle+C_{mj}^{*}\left\langle\hat{\sigma}_{he}^{j} \hat{\sigma}_{ee}^{l}\hat{\sigma}_{ef}^{m}\right\rangle\] \[+C_{mj}^{*}\left(\hat{\sigma}_{ee}^{j}\hat{\sigma}_{he}^{l}\hat{ \sigma}_{ef}^{m}\right)\left],\] (27d) \[\dot{f}_{jl} =-\Gamma_{0}f_{jl}-\mathrm{i}A_{jl}q_{jl}+\mathrm{i}A_{ji}^{*}q_{ ji}+\Gamma_{0}^{A}\epsilon_{jl}+\mathrm{i}\sum_{l}\left[-A_{jm}\left\langle\hat{ \sigma}_{ef}^{j}\hat{\sigma}_{ff}^{l}\hat{\sigma}_{fe}^{m}\right\rangle-A_{ml}^{ *}\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{fe}^{l}\hat{\sigma}_{ef}^{m} \right\rangle-B_{jm}\left\langle\hat{\sigma}_{eg}^{j}\hat{\sigma}_{ff}^{l} \hat{\sigma}_{ge}^{m}\right\rangle\right]\] \[-C_{jm}\left\langle\hat{\sigma}_{eh}^{j}\hat{\sigma}_{ff}^{l}\hat{ \sigma}_{he}^{m}\right\rangle+A_{mj}^{*}\left\langle\hat{\sigma}_{fe}^{j}\hat{ \sigma}_{ff}^{l}\hat{\sigma}_{ef}^{m}\right\rangle+A_{jm}\left\langle\hat{\sigma }_{ee}^{j}\hat{\sigma}_{ef}^{l}\hat{\sigma}_{fe}^{m}\right\rangle+B_{mj}^{*} \left\langle\hat{\sigma}_{ge}^{j}\hat{\sigma}_{ff}^{l}\hat{\sigma}_{eg}^{m}\right\rangle +C_{mj}^{*}\left\langle\hat{\sigma}_{he}^{j}\hat{\sigma}_{ff}^{l}\hat{\sigma}_{ ef}^{m}\right\rangle\Big{]},\] (27e) \[\dot{g}_{jl} =-\Gamma_{0}g_{jl}-\mathrm{i}B_{jl}p_{jl}+\mathrm{i}B_{ji}^{*}p_ {ji}+\Gamma_{0}^{B}\epsilon_{jl}+\mathrm{i}\sum_{l}\left[-B_{jm}\left\langle \hat{\sigma}_{eg}^{j}\hat{\sigma}_{gg}^{l}\hat{\sigma}_{ge}^{m}\right\rangle-B_{ ml}^{*}\left\langle\hat{\sigma}_{ee}^{j}\hat{\sigma}_{ge}^{l}\hat{\sigma}_{eg}^{m} \right\rangle-A_{jm}\left\langle\hat{\sigma}_{ef}^{j}\hat{\sigma}_{gg}^{l} \hat{\sigma}_{fe}^{m}\right\rangle-A_{jm}\left\langle\hat{\sigma}_{ef}^{j}\hat{ \sigma}_{gg}^{l}\hat{\sigma}_{fe}^{m}\right\rangle\right.\] \[\left.-C_{jm}\left\langle\hat{\sigma}_{eh}^{j}\hat{\sigma}_{gg}^{l} \hat{\sigma}_{he}^{m}\right\rangle+B_{mj}^{*}\left\langle\hat{\sigma}_{ge}^{j} \hat{\sigma}_{gg}^{l}\hat{\sigma}_{eg}^{m}\right\rangle+B_{jm}\left\langle\hat{ \sigma}_{ee}^{j}\hat{\sigma}_{eg}^{l}\hat{\sigma}_{ge}^{m}\right\rangle+C_{mj}^{*} \left\langle\hat{\sigma}_{he}^{j}\hat{\sigma}_{gg}^{l}\hat{\sigma}_{
\[+B_{jm}\left\langle\hat{\sigma}^{j}_{eg}\hat{\sigma}^{L}_{ee}\hat{ \sigma}^{m}_{ge}\right\rangle\right], \tag{27h}\] \[\dot{r}_{jl} =-\Gamma_{0}r_{jl}-\mathrm{i}C_{ji}\left(a_{i}-e_{jl}-f_{jl}-g_{ jl}\right)+\mathrm{i}C^{*}_{ji}(a_{j}-e_{ji}-f_{ji}-g_{jl})+\Gamma^{C}_{ji}e_{jl}+ \mathrm{i}\sum_{l}\left[-C^{*}_{mj}\left\langle\hat{\sigma}^{j}_{ee}\hat{ \sigma}^{l}_{he}\hat{\sigma}^{m}_{ef}\right\rangle\right.\] \[\left.-C_{jm}\left\langle\hat{\sigma}^{j}_{eh}\hat{\sigma}^{l}_{ hh}\hat{\sigma}^{m}_{he}\right\rangle+C^{*}_{mj}\left\langle\hat{\sigma}^{j}_{ hh}\hat{\sigma}^{l}_{he}\hat{\sigma}^{m}_{ef}\right\rangle+C_{jm}\left\langle \hat{\sigma}^{j}_{eh}\hat{\sigma}^{l}_{ee}\hat{\sigma}^{m}_{he}\right\rangle \right], \tag{27i}\]
where we have defined that
\[A_{jl}=J_{jl}^{ef}-\mathrm{i}\frac{\Gamma^{ef}_{jl}}{2},B_{jl}=J_{jl}^{eg}- \mathrm{i}\frac{\Gamma^{eg}_{jl}}{2},C_{jl}=J_{jl}^{eh}-\mathrm{i}\frac{ \Gamma^{eh}_{jl}}{2}, \tag{28}\]
and three-operator product expectation values are approximated by the second-order cumulant expansion as
\[\left\langle\hat{u}\hat{v}\hat{w}\right\rangle=\left\langle\hat{u}\hat{v} \right\rangle\left\langle\hat{w}\right\rangle+\left\langle\hat{v}\hat{w} \right\rangle\left\langle\hat{u}\right\rangle+\left\langle\hat{u}\hat{w} \right\rangle\left\langle\hat{v}\right\rangle-2\left\langle\hat{u}\right\rangle \left\langle\hat{v}\right\rangle\left\langle\hat{w}\right\rangle. \tag{29}\]
## Appendix C Benchmarking second-order cumulant expansion
To benchmark the second-order cumulant expansion we compare the approximated dynamics to the exact dynamics for small system sizes, as shown in Fig. 8. Here, we consider two-level atoms, as calculating exact dynamics for four-level atoms is not computationally tractable even for 16 atoms. Exact dynamics are found as the ensemble average of quantum trajectories [123]. At short times, and for modest separations, the cumulant expansion is an excellent approximation of the dynamics. As the distance decreases, the error becomes more significant. For \(d=0.1\lambda_{0}\), the peak is overestimated by 12% for a \(4\times 4\) array, and by 9% for a \(3\times 3\) array. The relative error is much larger at later times as the cumulant expansion is unable to capture the subradiant tail [95]. This is also true for \(d=0.2\lambda_{0}\), where the burst is captured more accurately (overestimated by only 1% for a \(4\times 4\) array), but large relative errors occur in the tail.
| In inverted atomic ensembles, photon-mediated interactions manifest as a form of many-body decay known as Dicke superradiance, in which the atoms decay collectively and energy is rapidly released as a burst of light. Originally studied in point-like ensembles, this phenomenon persists in extended, ordered systems when the interparticle distance lies below a certain bound. Here, we investigate Dicke superradiance in realistic experimental settings using alkaline-earth(-like) atoms such as strontium and ytterbium. These atoms offer new opportunities for photon-matter interactions, as they can be trapped at short interatomic distances. Their internal structure allows interatomic distances that are small compared to their long-wavelength transitions, offering the potential for collective dissipative interactions. Despite their complex electronic structure, with achievable lattice constants |
2309.11604 | Distances to Recent Near-Earth Supernovae From Geological and Lunar 60Fe | Near-Earth supernova blasts which engulf the solar system have left traces of
their ejecta in the geological and lunar records. There is now a wealth of data
on live radioactive ${}^{60}$Fe pointing to a supernova at 3 Myr ago, as well
as the recent discovery of an event at 7 Myr ago. We use the available
measurements to evaluate the distances to these events. For the better analyzed
supernova at 3 Myr, samples include deep-sea sediments, ferromanganese crusts,
and lunar regolith; we explore the consistency among and across these
measurements, which depends sensitively on the uptake of iron in the samples as
well as possible anisotropies in the ${}^{60}$Fe fallout. There is also
significant uncertainty in the astronomical parameters needed for these
calculations. We take the opportunity to perform a parameter study on the
effects that the ejected ${}^{60}$Fe mass from a core-collapse supernova and
the fraction of dust that survives the remnant have on the resulting distance.
We find that with an ejected ${}^{60}$Fe mass of $3\times10^{-5} M_\odot$ and a
dust fraction of 10%, the distance range for the supernova 3 Myr ago is $D \sim
20 - 140$ pc, with the most likely range between $50 - 65$ pc. Using the same
astrophysical parameters, the distance for the supernova at 7 Myr ago is $D
\sim 110$ pc. We close with a brief discussion of geological and astronomical
measurements that can improve these results. | Adrienne F. Ertel, Brian D. Fields | 2023-09-20T19:39:21 | http://arxiv.org/abs/2309.11604v1 | # Distances to Recent Near-Earth Supernovae From Geological and Lunar \({}^{60}\)Fe
###### Abstract
Near-Earth supernova blasts which engulf the solar system have left traces of their ejecta in the geological and lunar records. There is now a wealth of data on live radioactive \({}^{60}\)Fe pointing to a supernova at 3 Myr ago, as well as the recent discovery of an event at 7 Myr ago. We use the available measurements to evaluate the distances to these events. For the better analyzed supernova at 3 Myr, samples include deep-sea sediments, ferromanganese crusts, and lunar regolith; we explore the consistency among and across these measurements, which depends sensitively on the uptake of iron in the samples as well as possible anisotropies in the \({}^{60}\)Fe fallout. There is also significant uncertainty in the astronomical parameters needed for these calculations. We take the opportunity to perform a parameter study on the effects that the ejected \({}^{60}\)Fe mass from a core-collapse supernova and the fraction of dust that survives the remnant have on the resulting distance. We find that with an ejected \({}^{60}\)Fe mass of \(3\times 10^{-5}\)\(M_{\odot}\) and a dust fraction of 10%, the distance range for the supernova 3 Myr ago is \(D\sim 20-140\) pc, with the most likely range between \(50-65\) pc. Using the same astrophysical parameters, the distance for the supernova at 7 Myr ago is \(D\sim 110\) pc. We close with a brief discussion of geological and astronomical measurements that can improve these results.
Supernovae (1668), Nucleosynthesis (1131); Nuclear abundances (1128); Mass spectrometry (2094), Astrophysical dust processes (99)
Adrienne F. Ertel
Brian D. Fields
## 1 Introduction
In the last few decades, two global \({}^{60}\)Fe (\(t_{1/2}=2.62\) Myr1) signals corresponding to near-Earth supernovae have been discovered in ferromanganese (FeMn) crusts and deep-sea sediments at 3 and 7 million years ago (Mya) (Knie et al., 1999, 2004; Fitoussi et al., 2008; Ludwig et al., 2016; Wallner et al., 2016, 2021). An excess of \({}^{60}\)Fe above the natural background has also been discovered in lunar regolith (Fimiani et al., 2016). The progenitors of these signals are most likely either core-collapse (CCSN) or electron-capture (ECSN) supernovae, as other producers of \({}^{60}\)Fe, such as thermonuclear supernovae and kilonovae, do not produce sufficient \({}^{60}\)Fe mass to be within a plausible distance of Earth (Fry et al., 2015). Although not entirely ruled out, super-asymptotic-giant-branch (SAGB) stars are not considered in this paper, as their slow winds last a relatively short duration and do not match the observed \({}^{60}\)Fe fallout timescale of \(\gtrsim 1\) Myr (Ertel et al., 2023). The two near-Earth supernovae conveniently fall into separate geologic epochs, and therefore we will refer to them as the Pliocene Supernova (SN Plio, 3 Mya) and the Miocene Supernova (SN Mio, 7 Mya). For recent reviews on near-Earth supernovae, see Korschinek & Faestermann (2023), Wallner (2023), Fields & Wallner (2023).
Footnote 1: Half-life measurement: Rugel et al. 2009; Wallner et al. 2015; Ostdiek et al. 2017
Fry et al. (2015) used the available data from Knie et al. (2004) and Fitoussi et al. (2008) to put bounds on the distance from Earth to SN Plio given the observed \({}^{60}\)Fe fluence. Using the supernova \({}^{60}\)Fe yields available at the time, they found a distance of \(D\sim 60-130\) pc for CCSN and ECSN. We seek to expand on those calculations, given the plethora of new \({}^{60}\)Fe data for SN Plio presented in Ludwig et al. (2016), Fimiani et al. (2016), Wallner et al. (2016), and Wallner et al. (2021). In addition to the Earth-based data, the distance to the supernovae depends on three astronomical parameters: the ejected \({}^{60}\)Fe mass from the progenitor, the time it takes for the dust to travel to
Earth, and the fraction of the dust which survives the journey. The time is thus ripe to investigate the impact those parameters have on the supernova distance.
The structure for the paper is as follows. In Section 2, we lay out the relevant variables and their theory and data sources. In Section 3, we examine the different \({}^{60}\)Fe samples and their possible constraints. In Section 4, we examine the three main astronomical parameters, their bounds, and the implications of their range on the supernova distance. We then systematically map out the uncertainties in those parameters in Section 5 and rule out model combinations. In Section 6, we discuss other methods for calculating the supernova distance.
## 2 Formalism
The nucleosynthesis products from a supernova -- including radioisotopes such as \({}^{60}\)Fe -- are ejected in the explosion and eventually spread throughout the remnant. The time-integrated flux, or fluence, thus allows us to connect the observed parameters of the \({}^{60}\)Fe signal on Earth with the astronomical parameters of the supernova remnant. In reality, the distribution of \({}^{60}\)Fe in the remnant will be anisotropic, and the time history of its flux on Earth will be complex. Because the supernova blast cannot compress the solar wind to 1 au without being within the kill (mass extinction limit) distance (Fields et al., 2008; Miller and Fields, 2022), the terrestrial signal can arise only from the ejecta arriving in the form of dust grains (Benitez et al., 2002; Athanassiadou and Fields, 2011; Fry et al., 2015, 2016). We have argued that supernova dust decouples from the blast plasma, and that its magnetically-dominated propagation and evolution naturally lead to the observed \(>1\) Myr timescale for \({}^{60}\)Fe deposition (Fry et al., 2020; Ertel et al., 2023). The Earth's motion relative to the blast will also affect the \({}^{60}\)Fe flux onto Earth (Chaikin et al., 2022).
The \({}^{60}\)Fe flux \(\Phi_{60}(t)\) accumulates in natural archives over time. This signal integrates to give the \({}^{60}\)Fe fluence \(\mathcal{F}=\int\Phi_{60}(t)\;dt\), which will be the central observable in our analysis. Our goal in this paper, as with earlier distance studies (Fields and Ellis, 1999; Fry et al., 2015), is not to capture all of this complexity, but to find a characteristic distance based on a simplified picture of a spherical blast engulfing a stationary Earth.
For a spherical supernova blast, we can generalize the relationship between the observed fluence of a radioisotope \(i\) and the supernova properties as
\[\mathcal{F}_{\mathrm{obs},i}=\frac{1}{4}\frac{M_{\mathrm{ej},i}}{4\pi D^{2}A_ {i}\,m_{u}}\,U_{i}f_{i}\exp\left[\frac{-(t_{\mathrm{arrive}}+t_{\mathrm{trav}} )}{\tau_{i}}\right], \tag{1}\]
where \(A_{i}\) is the mass number, \(m_{u}\) is the atomic mass unit, and \(\tau_{i}\) is the lifetime of the isotope. The leading factor of \(1/4\) is the ratio of Earth's cross sectional area to surface area. The two Earth-based parameters are the arrival time \(t_{\mathrm{arrive}}\), which is the time of the first non-zero signal point, and the uptake fraction \(U_{i}\), which quantifies the difference between the amount of the isotope that arrives at Earth and what is detected (see Section 3). The four astronomical parameters are the ejected mass of the isotope \(M_{\mathrm{ej},i}\), the fraction of the isotope that is in the form of dust \(f_{i}\), the distance to the supernova \(D\), and the travel time \(t_{\mathrm{trav}}\) between the supernova and Earth. Note that Eq. (1) assumes a uniform fallout of the \({}^{60}\)Fe onto Earth (but see Fry et al., 2016).
Equation 1 gives an inverse square law for the radioisotope fluence as a function of distance, similar to the inverse square relation for photon flux. Setting aside the travel time's dependence on the distance, we can then solve Eq. (1) as:
\[D=\frac{1}{2}\left(\frac{f_{60}\,M_{\mathrm{ej},60}}{4\pi A_{60}\,m_{u}} \right)^{\frac{1}{2}}\,\left(\frac{U_{60}}{\mathcal{F}_{\mathrm{obs}}}\right)^ {\frac{1}{2}}\,\exp\left[\frac{-(t_{\mathrm{arrive}}+t_{\mathrm{trav}})}{2\, \tau_{60}}\right]. \tag{2}\]
Equation (2) is the main equation of interest in this work; therefore we have substituted the generic isotope \(i\) for \({}^{60}\)Fe, as this is the isotope measured on Earth. In the interest of brevity, \(\mathcal{F}_{\mathrm{obs},60}\) will be referred to as \(\mathcal{F}_{\mathrm{obs}}\). Note that \(\mathcal{F}_{\mathrm{obs}}\) is the fluence of \({}^{60}\)Fe into the material (deep-sea sediment, FeMn crust, lunar regolith) and not the \({}^{60}\)Fe fluence at Earth's orbit or in the interstellar medium (ISM) -- these latter two are a geometric factor of 4 different due to surface area and include corrective values such as the uptake factor.
Equation (2) shows that distance scales as \(D\propto(f_{60}\,\mathcal{N}_{60}/\mathcal{F}_{\mathrm{obs}})^{1/2}\). We see the fluence scaling \(\mathcal{F}_{\mathrm{obs}}^{-1/2}\) and additional dimensionless factors counting the number \(\mathcal{N}_{60}=M_{\mathrm{ej},60}/A_{60}\,m_{u}\) of \({}^{60}\)Fe atoms and correcting for the dust fraction \(f_{60}\). Moreover, the analogy to photon flux is very close: this _radioactivity distance_ is formally identical to a luminosity distance, with (uptake corrected) \({}^{60}\)Fe fluence playing the role of photon flux, and the product \(f_{60}\,M_{\mathrm{ej},60}\) of dust fraction and yield playing the role of luminosity.
The error on the radioactivity distance depends on both data-driven and astrophysical values. Because an objective of this work is to examine the effects that different ejected masses, dust fractions, and travel times have on the
supernova distance, the errors associated with those values are not included in our calculations. Therefore the quoted distance error will only be the result of the data-driven values of observed fluence and uptake factor.2
Footnote 2: The arrival time also has an associated error, however it is negligible in this calculation and therefore not included.
All observed fluences have well-defined statistical errors. However, most of the uptake factors are quoted as approximations or simply assumed to be 100%. Given this lack of clarity and the large influence these errors have on the resulting distance, we will not include the uptake error explicitly in our calculations. Rather, we will illustrate the effect of this systematic error by displaying results for a wide range of uptakes corresponding to values quoted in the literature. _The errors quoted on all of our distance calculations therefore solely reflect the reported statistical error on the fluence._
We can calculate the error on the radioactivity distance that arises due to uncertainties in the fluence. This is simply
\[\sigma_{D}=\frac{1}{2}\,D\,\left(\frac{\sigma_{\mathcal{F}}}{\mathcal{F}_{ \mathrm{obs}}}\right), \tag{3}\]
where \(\sigma_{\mathcal{F}}\) is the error on the observed fluence.
It is important to note that the radioactivity distance scales as \(D\propto\sqrt{f_{60}\,M_{\mathrm{ej,60}}}\), so that the key astrophysics input or figure of merit is the product \(f_{60}\,M_{\mathrm{ej,60}}\) of \({}^{60}\)Fe yield and dust fraction. This represents physically the effective yield of \({}^{60}\)Fe in a form able to reach the Earth. The resulting radioactivity distance is therefore most affected by the allowed range of \(M_{\mathrm{ej,60}}\) and \(f_{60}\). To that end, this paper presents the quantity \(f_{60}\times M_{\mathrm{ej,60}}\) [\(M_{\odot}\)] as a means of approximating the maximum and minimum astronomical parameters that can be used to find a supernova's distance from Earth.
## 3 Data and Benchmark Results
The data used in this analysis are from the work of Knie et al. (2004, hereafter K04), Fitoussi et al. (2008, F08), Ludwig et al. (2016, L16), Wallner et al. (2016, W16), Fimiani et al. (2016, F16), and Wallner et al. (2021, W21). The \({}^{60}\)Fe signal has been found in a number of different materials on Earth, including deep-sea sediment cores and ferromanganese (FeMn) crusts, as well as in the lunar regolith.
FeMn crusts are slow-growing, iron- and manganese-rich layers which build up on exposed rock surfaces in the ocean at a rate of \(\sim\) few mm/Myr. Ferromanganese nodules have a similar growth rate and are found as individual objects on the sea floor. These crusts grow by extracting iron and manganese from the surrounding seawater, and thus they have an associated uptake factor, which accounts for how much of the available iron in the seawater they absorb. The uptake factor varies considerably with each crust, on the order of \(1-30\%\) (see Tab. 1), and must be calculated for each sample. In contrast, deep-sea sediments grow at a much faster rate of \(\sim\) few mm/kyr. Unlike FeMn crusts, they are assumed to have a 100% uptake factor as they sample what is deposited on the ocean floor.
Table 1 summarizes the observed fluences (\(\mathcal{F}_{\mathrm{obs}}\)), the \({}^{60}\)Fe arrival times (on Earth and the Moon), and the uptake percentage of \({}^{60}\)Fe into the material for all of the \({}^{60}\)Fe detections considered in this work.3 We see that the arrival times are for the most part quite consistent, even across crust and sediment measurements.
Footnote 3: A low-level \({}^{60}\)Fe infall over the last 30 kyr has been measured by Koll et al. (2019) and Wallner et al. (2020); however, we do not consider it as part of the same astrophysical delivery mechanism that created the \({}^{60}\)Fe peaks considered in this work and therefore this infall is not included.
Table 1 also provides an example of a distance to SN Plio. These results all use the quoted fluence and uptake, and assume \(f_{60}\times M_{\mathrm{ej,60}}=3\times 10^{-6}M_{\odot}\). In the next sections, we will address in detail the correlations between these results and the wide variety of distances they give. The range of distances is much larger than the quoted statistical errors, confirming that systematic errors -- most notably the uptake -- dominate the distance uncertainties.
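To make the benchmark concrete, the following is a minimal sketch (our own illustration, not the authors' code) of Eqs. (2) and (3) applied to the W16 sediment entry of Tab. 1. Setting the decay times to zero, i.e. treating the quoted fluence as already decay-corrected, is an assumption on our part.

```python
import numpy as np

M_SUN_G = 1.989e33            # solar mass [g]
M_U_G   = 1.6605e-24          # atomic mass unit [g]
PC_CM   = 3.086e18            # parsec [cm]
TAU_60  = 2.62 / np.log(2)    # 60Fe mean lifetime [Myr]

def radioactivity_distance(F_obs, U60, f60_Mej_Msun, t_arrive=0.0, t_trav=0.0):
    """Eq. (2); F_obs in atoms/cm^2, f60_Mej_Msun = f60 * M_ej,60 in solar masses."""
    N60 = f60_Mej_Msun * M_SUN_G / (60.0 * M_U_G)   # effective number of 60Fe atoms
    D_cm = 0.5 * np.sqrt(N60 / (4 * np.pi) * U60 / F_obs) \
           * np.exp(-(t_arrive + t_trav) / (2 * TAU_60))
    return D_cm / PC_CM

def distance_error(D, F_obs, sigma_F):
    """Eq. (3): fluence-driven error on the radioactivity distance."""
    return 0.5 * D * sigma_F / F_obs

# W16 sediment: F_obs = 35.4e6 atoms/cm^2, U60 = 1, f60 * M_ej,60 = 3e-6 Msun
D = radioactivity_distance(35.4e6, 1.0, 3e-6)
print(D, distance_error(D, 35.4e6, 2.6e6))   # ~59 pc and ~2 pc, cf. 58.9 +/- 2.2 pc in Tab. 1
```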
Figure 1 then plots the distance vs fluence for the published \({}^{60}\)Fe data relating to SN Plio. The distance is calculated as shown in Tab. 1 and the fluence refers to the fluence into the material on Earth. Error bars on the fluence are as quoted in the original papers; error bars on the distance trace the fluence error effects.4 The top plot of Fig. 1 shows all of the data, while the bottom plot neglects the outlier L16 sediment data (discussed in Subsection 3.3).
Footnote 4: As can be seen in Tab. 1, only the W16 crust 1 and W21 crust 3 have precise errors on the uptake (the sediment and lunar uptakes are assumed to be 100% and thus do not have an associated error). Without more precise values for the other FeMn crust uptakes and in the interest of consistency between datasets, we ignore all uptake errors here.
Figure 1 represents a consistency check among the \({}^{60}\)Fe measurements. The reported fluence and uptake are used to infer the interstellar fluence arriving at Earth, and this in turn leads to the distances plotted. As seen in eq. (2), all results scale with the adopted yield and dust section as \(D\propto(f_{60}M_{\mathrm{ej,60}})^{1/2}\). Because this factor is common to all points shown in the plot, the entire pattern can shift up or down systematically for different choices of this parameter combination. But crucially, whether the distances we infer are consistent or discrepant does not depend on these parameter choices.
We will review the agreement among data sets in detail below, but the main results are clear from a glance at Figure 1. We see that most results span 50 to 150 pc, which are shown in a zoom in the bottom panel. There is a group of data clustered together in distance, from around 40 to 70 pc, which shows a non-trivial consistency -- though we will see that most of the points are correlated. Note that the K04 crust results are shown for different uptake values, making it clear that this choice can lead to consistency (if \(U_{60}\sim 0.04\) for this crust) or discrepancy (if \(U_{60}\) takes a substantially different value). On the other hand, the top panel shows that the L16 results lead to distances that are far from the others. We discuss this in detail below.
Horizontal lines on Fig. 1 indicate key astrophysical distances. The lowest line at 10 pc is an estimate of the typical SN kill distance, inside of which substantial damage to the biosphere is expected (e.g., Gehrels et al. 2003; Brunton et al. 2023). No points lie below this range, consistent with the lack of widespread anomalous biological extinctions in the past 3 Myr. The other two lines show the position of nearby star clusters that have been proposed to host SN Plio: the \(\sim 50\) pc location of the Tucana-Horologium association, and the \(\sim 100\) pc distance to the Scorpius-Centaurus association. We see that the clustered data points are consistent with the Tuc-Hor distance, though a somewhat larger \(f_{60}\times M_{\rm ej,60}\) would favor Sco-Cen. Finally, we note that the maximum size of a supernova remnant can be estimated from the "fadeaway distance" (Draine, 2011) when the blast wave becomes a sound wave, which depends on the density of the ambient medium but is \(\sim 100-150\) pc. We see that all of the points are inside this distance, as would be expected for a SN origin of \({}^{60}\)Fe -- except for L16. Thus, aside from L16, the \({}^{60}\)Fe data is consistent with astrophysical expectations, which represents a non-trivial test, because astrophysical distances are not built into the \({}^{60}\)Fe measurements (in contrast to the \(\sim\) Myr timescale that is pre-ordained by the choice of \({}^{60}\)Fe).
We now examine the datasets and results in Fig. 1 in more detail. There are three possible uptake factors to use for the K04 data (see Subsection 3.1 and Tab. 1) and all three have been included as separate date points to demonstrate the effect of the uptake factor on the supernova distance. The F08 data have two points to represent the two options presented in Tab. 4 of Fitoussi et al. (2008); these are the same sediment sample fluence calculated two different ways, not independent measurements (see Subsection 3.2). The uptake factors for the W16 and W21 crusts and nodules are
\begin{table}
\begin{tabular}{l c|c c c|c} \hline \hline Paper & & \({\cal F}_{\rm obs}\) [\(10^{6}\,\rm atoms/cm^{2}\)] & \(t_{\rm arr}\) [Myr] & \(U_{60}\) & D [pc] \\ \hline Knie et al. (2004) crusta & K04 & \(1.5\pm 0.4\) & 2.61 & \(\sim 0.006\) & \(22\pm 3\) \\ Knie et al. (2004) crust & & \(1.5\pm 0.4\) & 2.61 & \(\sim 0.24^{\rm b}\) & \(140\pm 19\) \\ Knie et al. (2004) crust & & \(1.5\pm 0.4\) & 2.61 & \(\sim 0.04^{\rm c}\) & \(57\pm 8\) \\ \hline Fitoussi et al. (2008) sediment A & F08 & \(30.0\pm 14.5^{\rm d}\) & 2.87 & 1.0 & \(64\pm 15\) \\ Fitoussi et al. (2008) sediment B & & \(58.0\pm 39.0^{\rm d}\) & 3.08 & 1.0 & \(46\pm 15\) \\ \hline Ludwig et al. (2016) sediment & L16 & \(0.56\pm 0.18\) & \(3.02^{\rm e}\) & 1.0 & \(470\pm 75\) \\ \hline Wallner et al. (2016) sediment & W16 & \(35.4\pm 2.6\) & 3.18 & 1.0 & \(58.9\pm 2.2\) \\ Wallner et al. (2016) crust 1 & & \(5.9\pm 0.8\) & 4.35 & \(0.17\pm 0.3\) & \(59\pm 4\) \\ Wallner et al. (2016) crust 2 & & \(2.2\pm 0.2\) & 3.1 & \(\sim 0.07\) & \(62\pm 3\) \\ Wallner et al. (2016) nodules & & \(1.4\pm 0.5\) & 3.3 & \(0.02-0.04\) & \(51\pm 9\) \\ Wallner et al. (2021) crust 3 & W21 & \(6.10\pm 0.31\) & 4.2 & \(0.17\pm 0.3\) & \(58.3\pm 1.5\) \\ \hline Fimiani et al. (2016) lunar & F16 & \(10-60\) & 2.6 & 1.0 & \(45-110\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data-driven values for calculating the distance to SN Plio.
calculated based on the assumption that the W16 sediment collected 100% of the \({}^{60}\)Fe fluence, and so all of the W16 and W21 data are correlated and trace approximately the same distance (see Subsection 3.4). The K04 point with \(U_{60}\)= 0.04 (orange cross in Fig. 1) is similarly calculated and therefore also correlated with the W16 and W21 data.
Figure 1: _Distance to supernova vs fluence into material for SN Plio (3 Mya)._ The assumed \(f_{60}\times M_{\rm ej,60}\) is printed under the legend. Horizontal dotted lines show distances of 100, 50, and 10 pc (the kill distance). **Top:** All of the \({}^{60}\)Fe fluence data, including L16. K04 is shown with three possible uptake factors (see Subsection 3.1), demonstrating the effect of the uptake on the distance. F08 opt. A and B are two different binnings for the same sediment data, not two sediments (see Subsection 3.2). Note that all of the W16 and W21 data, as well as K04 (\(U_{60}\)=0.04, orange cross), are correlated to the W16 sediment data and therefore show approximately the same distance to the SN. **Bottom:** A zoom in on the distances without the L16 point. For paper citations and abbreviations, as well as calculation details, see Tab. 1.
The F16 lunar data are plotted as a dashed line showing the full possible range quoted in Fimiani et al. (2016). As described in Subsection 3.5, the time window for this fluence covers the last 8 Myr, during which there have been two near-Earth supernovae at 3 Mya and 7 Mya. However, the 3 Mya supernova contributes 90% of the observed fluence (as calculated in Subsection 3.5.1) and therefore the dashed line more or less traces the available distance and fluence range for SN Plio.
It is of note that the W16 sediment fluence and the two possible F08 fluences fall on the lunar fluence line, with the significantly more precise W16 sediment in exact agreement. The full implications of this alignment are analyzed in Subsection 3.5, but altogether it does lend credence to the idea that the deep-sea sediments are sampling 100% of the \({}^{60}\)Fe flux which falls on the Earth.
### Knie+ 2004 Data (K04)
The K04 FeMn crust was the first measurement of the \({}^{60}\)Fe signal from SN Plio that provided a time profile. Since the paper was published, the \({}^{60}\)Fe and \({}^{10}\)Be half-lives have been updated: the \({}^{60}\)Fe half-life has changed from 1.47 Myr (Kutschera et al., 1984) to 2.62 Myr (Rugel et al., 2009; Wallner et al., 2015; Ostdiek et al., 2017); while the \({}^{10}\)Be half-life has changed from 1.5 Myr to 1.36 Myr (Nishizumi et al., 2007) to 1.387 Myr (Korschinek et al., 2010). Wallner et al. (2016) updates K04's fluence for these half-life changes in their Tab. 3 and we use those numbers here. It should be noted that the same FeMn crust was measured by F08, which confirmed the \({}^{60}\)Fe signal results.
FeMn crusts do not absorb all of the available iron in the seawater they contact; thus, it is necessary to calculate an uptake efficiency factor, \(U_{60}\), for the crust. Unfortunately, this factor cannot be measured directly and must be inferred. K04 cites Bibron et al. (1974) as a means of calculating the uptake factor for the FeMn crust. By using the known \({}^{53}\)Mn extraterrestrial infall and comparing elemental ratios of Mn and Fe in seawater to the \({}^{53}\)Mn found in the FeMn crust, K04 was able to estimate the Fe uptake factor. As explained in F16 and confirmed by T. Faestermann and G. Korschinek (private communication), recent work with the \({}^{53}\)Mn infall corrects Bibron et al. (1974) by a factor of 40 smaller. The factor of 40 decrease in \({}^{53}\)Mn leads to a relative factor of 40 increase in the \({}^{60}\)Fe to match the \({}^{60}\)Fe/\({}^{53}\)Mn ratio detected in the crusts, and thus the \({}^{60}\)Fe uptake for the FeMn crust in K04 changes from 0.6% to 24%.
An alternative method to calculate the uptake factor for the crust is to use a known \({}^{60}\)Fe infall over the relevant period of time, such as the W16 sediment.5 By dividing the K04 incorporation by the W16 sediment incorporation in Tab. 3 of W16, we find an uptake factor of 4%, consistent with the FeMn crust and nodule uptakes in W16 and W21, which use the same method. This method correlates all of the distance measurements that are based off of the W16 sediment, leaving only the F08, L16, and F16 as independent measurements.
Footnote 5: This method was not available for the original calculation, as \({}^{60}\)Fe would not be measured in sediments until F08.
To demonstrate the importance of the uptake factor in the distance calculation, the three options for the K04 uptake (\(U_{60}\)= 0.6%, \(U_{60}\)= 4%, \(U_{60}\)= 24%) are shown in Tab. 1 and Fig. 1.
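Because the uptake factor simply rescales the fluence arriving at Earth (\(\mathcal{F}_{\rm arriving}=\mathcal{F}_{\rm obs}/U_{60}\)), its effect on the distance can be sketched without the full Eq. (2). The minimal Python snippet below assumes only that \(D\propto\sqrt{U_{60}}\) at fixed measured fluence and reports distances relative to the \(U_{60}=4\%\) case; it illustrates the scaling and is not a recomputation of the Tab. 1 values.

```python
import math

# Relative effect of the K04 uptake choices on the inferred distance,
# assuming D scales as sqrt(U60) at fixed measured fluence
# (higher uptake -> less 60Fe inferred to arrive at Earth -> more distant SN).
U_REF = 0.04                      # the W16-sediment-based uptake
for u in (0.006, 0.04, 0.24):     # the three options discussed above
    scale = math.sqrt(u / U_REF)
    print(f"U60 = {u:5.1%}: distance is {scale:4.2f}x the U60 = 4% value")
```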
### Fitoussi+ 2008 data (F08)
F08 measured \({}^{60}\)Fe in both FeMn crusts and in deep-sea sediments. They first repeated the \({}^{60}\)Fe analysis on the same crust as used by K04, confirming the \({}^{60}\)Fe peak; since that work recreates an existing measurement, we do not use those results in this paper. F08 also pioneered the first \({}^{60}\)Fe analysis on deep-sea sediment samples. They found no significant signal unless they binned their data using a running-means average of either 0.4 or 0.8 Myr, as shown in their Fig. 4.
The theory at the time was that the \({}^{60}\)Fe was in dust following the SN blast wave, which would take about 10 kyr to sweep over the solar system (Fry et al., 2015). The \({}^{60}\)Fe timescale they were looking for (to match the fluence seen in K04) was actually spread over \(\gtrsim\) 1 Myr (Ertel et al., 2023), as would be shown in later work such as L16 and W16 -- thus greatly diluting the signal they expected to find. Although the F08 sediment data cannot be used for a reliable time profile, we are able to include it in this work, as we are interested in measuring the fluence of their data and not the specific timing details.
Using the two running-means averages shown in Fig. 4 of F08, we calculate the area under the curve and thereby the fluence by fitting a triangle to the upper plot (A) and two back-to-back triangles to the lower plot (B). To find a fluence comparable to what is shown in other work, we: 1) updated the \({}^{10}\)Be half-life from 3.6 Myr to 3.87 Myr
(Korschinek et al., 2010) and changed the timescale accordingly; 2) subtracted the background of \(2.3\times 10^{-16}\) from the \({}^{60}\)Fe/Fe ratio; and 3) decay corrected the \({}^{60}\)Fe/Fe ratio using \(t_{1/2}=2.62\) Myr.
From there, we were able to calculate the fluence for F08 via
\[\mathcal{F}_{\rm obs}=\int\frac{{}^{60}\mathrm{Fe}}{\mathrm{Fe}}\,\rho\,\dot{h}\,c_{\rm Fe}\,dt, \tag{4}\]
where \(\rho=1.6\) g/cm\({}^{3}\) is the sediment density, \(\dot{h}=3000\) cm/Myr is the sedimentation rate, and \(c_{\rm Fe}=5.39\times 10^{19}\) atoms/g is the iron concentration in the sediment, corresponding to a mass fraction of 0.5 wt%. Errors were pulled from the \(1\sigma\) lines around the running-averages in Fig. 4a and 4b of F08. The results are shown in Tab. 1, labeled A and B to match the relevant plots in the original paper.
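For concreteness, Eq. (4) can be evaluated numerically. The minimal sketch below integrates a synthetic triangular \({}^{60}\)Fe/Fe excess using the sediment parameters quoted above, applying the background subtraction and decay correction described in the list. The peak excess ratio and the 2.0-3.6 Myr window are placeholder values chosen for illustration only (they are not the F08 measurements), and a single representative age is used for the decay correction.

```python
import numpy as np

# Sediment parameters quoted above (Subsection 3.2)
RHO, H_DOT, C_FE = 1.6, 3000.0, 5.39e19   # g/cm^3, cm/Myr, Fe atoms/g
BKG, T_HALF = 2.3e-16, 2.62               # 60Fe/Fe background, 60Fe half-life [Myr]

def fluence(t, ratio_measured, t_signal):
    """Eq. (4): integral of (60Fe/Fe) * rho * h_dot * c_Fe dt, after
    background subtraction and decay correction to t_signal [Myr ago]."""
    excess = np.clip(ratio_measured - BKG, 0.0, None)   # background subtraction
    excess = excess * 2.0 ** (t_signal / T_HALF)        # decay correction
    dt = t[1] - t[0]                                    # uniform grid spacing [Myr]
    return np.sum(excess * RHO * H_DOT * C_FE) * dt     # atoms/cm^2

# Hypothetical triangular profile (peak excess and window are placeholders)
t = np.linspace(2.0, 3.6, 400)
ratio = BKG + np.interp(t, [2.0, 2.8, 3.6], [0.0, 3.0e-17, 0.0])
print(f"F_obs ~ {fluence(t, ratio, 2.8):.1e} atoms/cm^2")
```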
### Ludwig+ 2016 data (L16)
The L16 sediment data is notable in that the group set out to answer a different research question than the other \({}^{60}\)Fe analyses: instead of measuring the total \({}^{60}\)Fe over a specific time range, their goal was to prove that the iron was not moving around in the sediment column due to chemical processes. To achieve this, they focused on analyzing microfossils in the sediment and discarded iron material \(\gtrsim 0.1\mu\)m, using the assumption that, since the \({}^{60}\)Fe is vaporized on impact with the atmosphere, there should not be any \({}^{60}\)Fe in larger grains. However, the resulting \(\mathcal{F}_{\rm obs}\) is notably 1 - 2 orders of magnitude smaller than the other deep-sea sediments measured in F08 and W16, as well as the range for the lunar fluence in F16. As can be seen in Tab. 1, the low \(\mathcal{F}_{\rm obs}\) in L16 puts SN Plio at an implausibly far distance from Earth, bringing into question whether dust from a supernova at that distance can even reach our solar system (Fry et al., 2015, 2020). While it is possible to manipulate the values for the dust fraction and ejected mass used in the distance calculation to bring the L16 sediment within a reasonable distance of Earth (i.e., to within at most 150-200 pc; see Subsections 4.1 and 4.2), this unfortunately pushes the rest of the observed \({}^{60}\)Fe fluences within the 10 pc "kill distance" and is therefore not realistic.
L16 compare their data to the sediment fluence found in W16 and attribute the difference to global atmospheric fallout variations. However, as noted in Fry et al. (2015) and Ertel et al. (2023), latitude variations would account for a factor of 5 difference at the most -- this is not enough to offset the differences in the observed fluences. Furthermore, the sediment samples from F08 and W16 see similar fluences despite significant location differences (off the coasts of Iceland and Australia, respectively). It should also be noted that, while latitude fallout variation may account for some range in \(\mathcal{F}_{\rm obs}\), the supernova should still be, at minimum, within a reasonable travel distance of Earth. Therefore, we must assume that some of the discarded sample from L16 contained \({}^{60}\)Fe.
The work of L16 conclusively demonstrated that the \({}^{60}\)Fe is not moving within the sediment column after being deposited. This is a major contribution to the field, considering that the observed \({}^{60}\)Fe data is spread over an order of magnitude longer timescale than what is conventional for a supernova shock-wave and there are considerable implications for this effect to be astronomical in origin rather than geophysical (Ertel et al., 2023). However, due to the fact that the \({}^{60}\)Fe fluence results in a \(>400\) pc supernova distance, we will not be using the numbers from L16 in this study.
### Wallner+ 2016 and 2021 data (W16 and W21)
W16 measured the 3 Mya \({}^{60}\)Fe signal in two FeMn crusts, two FeMn nodules, and four deep-sea sediments. They greatly increased the known evidence of the signal and were also able to find indications of a second \({}^{60}\)Fe signal 7 Mya. W21 followed up these measurements in a separate FeMn crust and were able to verify the 7 Mya SN signal as well as provide an excellent time profile of both SN.6
Footnote 6: W21 notes that the observed time profile in the crust is wider than anticipated for SN Plio (which is profiled in the W16 sediments), indicating that factors such as crust porosity could affect these results.
W16 and W21 calculated the uptake factors for their FeMn crusts and nodules by assuming that their sediment samples observed 100% of the flux of \({}^{60}\)Fe onto Earth.7 This connection between the different datasets means the resulting distance numbers are entirely correlated -- as seen in Tab. 1 and Fig. 1, the W16 and W21 data trace the same SN distance. When using the K04 uptake as 4% based off the W16 sediment (as in Fig. 5), the K04 data is similarly correlated.
Footnote 7: Koll et al. (2019) and Wallner et al. (2020) both study the current low-level \({}^{60}\)Fe infall, found in Antarctic snow and the top layers of the W16 sediments, respectively. Both groups find approximately the same current flux of \({}^{60}\)Fe — considering that these are very different sample types in different environments and global locations, this could be used as strong evidence that the 100% uptake assumption for deep-sea sediments is accurate, and that the W16 sediment data does accurately express the global \({}^{60}\)Fe fallout for SN Plio.
It should be noted that W16 and W21 quote a "deposition rate" and an "incorporation rate", respectively, instead of a fluence. These quantities correspond to the fluence into the material, which is what we use throughout this paper. To connect it to the fluence into the solar system that is quoted later in W16 and W21, factors such as uptake and global surface area must be accounted for and corrected out of the equation.
W21 measured the \({}^{60}\)Fe in an FeMn crust from 0 Mya to 10 Mya using sample slices of \(\sim 400\) kyr. In doing so, they created a detailed time profile showing two \({}^{60}\)Fe peaks in the last 10 Myr, which can be attributed to two supernovae. The peaks were measured in the same sample using the same analytical techniques -- thus if we compare their relative fluences, most of the geophysical complications and systematic errors drop out of the results. Only issues such as fluctuations in the growth rate over millions of years and other large shifts in absorption into the crust over long periods of time will influence the results.
### Fimiani+ 2016 Data (F16)
Unlike the Earth-based samples, the lunar samples analyzed by F16 are not affected by atmospheric, geologic, or biologic processes. They also present an analysis that is fully independent from anything measured on Earth and which can be used to verify the many different techniques and sample types involved in analyzing the \({}^{60}\)Fe signals.
When using the lunar data, we must work with two effects caused by the lack of atmosphere: cosmic ray nucleosynthesis and micrometeorite impacts. The solar and galactic cosmic rays create a natural background of \({}^{60}\)Fe in the lunar regolith and any \({}^{60}\)Fe signal related to SN Plio will be shown in an excess of \({}^{60}\)Fe above the standard background. In addition, the micrometeorite impacts create a "lunar gardening" effect that churns the top regolith and makes time resolution under 10 Myr ambiguous (Fimiani et al., 2016; Costello et al., 2018). In previous work, gardening was not an issue, as the excess \({}^{60}\)Fe in the \(\sim 8\) Myr sample was attributed to SN Plio; however, W21 has shown that there are actually two near-Earth supernovae in the last 10 Myr, at 3 Mya and 7 Mya. Therefore, the excess \({}^{60}\)Fe in the lunar regolith accounts for both supernovae, and in order to accurately calculate the distance to SN Plio using the lunar data, we first need to portion the excess \({}^{60}\)Fe signal between the two supernovae.
#### 3.5.1 Data-driven portioning with the W21 results
With the data from W21, we have an \({}^{60}\)Fe signal that goes back 8 Myr and shows two distinct supernova peaks. By taking the fluence ratio of these peaks, we are able to portion the lunar \({}^{60}\)Fe signal into two separate supernovae. W21 has the \({}^{60}\)Fe fluence for SN Plio (3.1 Mya) \(\mathcal{F}_{\rm obs}=6.10\times 10^{6}\,\rm atoms/cm^{2}\) and for SN Mio (7.0 Mya) \(\mathcal{F}_{\rm obs}=1.77\times 10^{6}\,\rm atoms/cm^{2}\), both of which are decay corrected.8 The F16 lunar data is also decay corrected, under the assumption that the excess \({}^{60}\)Fe signal seen in the 8 Myr sample was actually deposited 2.6 Mya (to match the K04 FeMn crust signal). We now know that two supernovae occurred within the last 8 Myr and therefore this decay correction needs to be fixed.
Footnote 8: W21 quotes an “incorporation rate” which is proportional to the fluence; however, we are only interested in the ratio between these two values and thus the difference falls out. Furthermore, since the two supernova peaks were measured in the same data slice from the same FeMn crust, the systematic and geo-related errors (such as uptake factor, various Earth processes that affect the signal, and any errors with absolute timing) cancel out.
To portion the lunar signal, the first step is to undo the decay correction on all three fluences using
\[\mathcal{F}_{0}=\mathcal{F}\ 2^{-t_{\rm arr}/t_{1/2}}\,, \tag{5}\]
where \(\mathcal{F}_{0}\) is the "dug up" fluence, \(\mathcal{F}\) is the decay corrected fluence, \(t_{\rm arr}\) is the time that was used to decay correct the fluence (which in this case is the expected arrival time of the \({}^{60}\)Fe signal), and \(t_{1/2}\) is the half-life of the isotope (2.62 Myr for \({}^{60}\)Fe). \(t_{\rm arr}\) = 2.6 Myr for the lunar signal F16 and \(t_{\rm arr}\) = 3.1 Mya and 7.0 Mya for the two W21 crust signals corresponding to SN Plio and SN Mio. From there, we can calculate the respective ratio of the two supernovae fluences to each other, with
\[\mathcal{P}_{\rm Plio}=\frac{1}{(\mathcal{F}_{\rm Mio}/\mathcal{F}_{\rm Plio})+1}\,, \tag{6}\]
\[\mathcal{P}_{\rm Mio}=1-\mathcal{P}_{\rm Plio}=\frac{\mathcal{F}_{\rm Mio}}{\mathcal{F}_{\rm Plio}}\,\mathcal{P}_{\rm Plio}\,, \tag{7}\]
where \(\mathcal{P}_{\rm Plio}\) and \(\mathcal{P}_{\rm Mio}\) are the percentages of the fluence from each supernova. We find that about 90% of the excess lunar \({}^{60}\)Fe signal should come from SN Plio (3.1 Mya), and 10% should come from SN Mio (7.0 Mya). We can then
portion the "dug up" lunar fluence range of \(1-6\times 10^{6}\) atoms/cm\({}^{2}\) and redo the decay correction, using \(t_{\rm arr}=3.1\) Myr for SN Plio and \(t_{\rm arr}=7.0\) Myr for SN Mio (Wallner et al., 2021). From there, we can calculate the distances to the two supernovae using Eq. (2). It should be noted that this calculation assumes the same \(M_{\rm ej,60}\), \(f_{60}\), and travel time for both SN; these values, possible ranges, and effects on the distance are examined in detail in Sections 4 and 5.
Using \(M_{\rm ej,60}=3.0\times 10^{-5}\)\(M_{\odot}\), \(f_{60}=0.1\), and \(t_{\rm trav}=0.1\) Myr, we find that SN Plio occurs between \(45-110\) pc from Earth and SN Mio occurs between \(80-200\) pc. The left plot in Fig. 2 shows these two ranges in gold and purple, respectively, along with the original lunar range quoted in F16. Note that the lunar fluence for SN Plio is about \(10\%\) less than the total lunar fluence for the last 8 Myr quoted in F16; the \(10\%\) loss results in a distance that is only a few pc different, meaning that a \(10\%\) difference in fluence does not have a large impact on the distance calculation. However, this difference does allow us to extract additional information about the distance to the second supernova.
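A minimal sketch of the portioning procedure of Eqs. (5)-(7) is given below, using the W21 crust fluences and arrival times quoted above and treating the \(1-6\times10^{6}\) atoms/cm\({}^{2}\) range as the "dug up" lunar fluence. It reproduces the roughly 90%/10% split; the final conversion to distance, which requires Eq. (2) and the adopted astrophysical parameters, is not repeated here.

```python
import numpy as np

T_HALF = 2.62  # 60Fe half-life [Myr]

def undo_decay(F, t_arr):
    """Eq. (5): 'dug up' fluence from a decay-corrected fluence."""
    return F * 2.0 ** (-t_arr / T_HALF)

def redo_decay(F0, t_arr):
    return F0 * 2.0 ** (t_arr / T_HALF)

# W21 crust fluences (decay corrected) and arrival times
F_plio, t_plio = 6.10e6, 3.1   # atoms/cm^2, Myr
F_mio,  t_mio  = 1.77e6, 7.0

# Fluence ratio at deposition and the two portions (Eqs. 6-7)
r = undo_decay(F_mio, t_mio) / undo_decay(F_plio, t_plio)
P_plio = 1.0 / (r + 1.0)
P_mio  = 1.0 - P_plio
print(f"SN Plio: {P_plio:.1%}, SN Mio: {P_mio:.1%}")      # ~90% / ~10%

# Portion the 'dug up' lunar range and redo the decay corrections
lunar = np.array([1e6, 6e6])                              # atoms/cm^2
lunar_plio = redo_decay(P_plio * lunar, t_plio)
lunar_mio  = redo_decay(P_mio  * lunar, t_mio)
print(f"SN Plio lunar range: {lunar_plio[0]:.1e} - {lunar_plio[1]:.1e} atoms/cm^2")
print(f"SN Mio  lunar range: {lunar_mio[0]:.1e} - {lunar_mio[1]:.1e} atoms/cm^2")
```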
The full range of the lunar fluence corresponds to a distance range for SN Plio from \(45-110\) pc. It is interesting to note that the W16 sediment data (assumed to have a \(100\%\) uptake factor) falls exactly on this band and the two possibilities for the F08 sediment data include the band within error. These are completely independent measurements and in fact occur on separate bodies in the solar system. An extension of this work is to use the W16 sediment fluence and calculated distance to pinpoint what the actual lunar fluence is for SN Plio, with the remainder of the excess \({}^{60}\)Fe then originating from SN Mio, which we do below.
Figure 2: _Lunar fluence portioning._**Left:** Lunar fluence portioned with the W21 fluence ratios (see Subsection 3.5.1), along with the F16 published full lunar range. SN Plio (3.1 Mya) is in gold and SN Mio (7 Mya) is in purple; note the overlap. SN Plio is \(90\%\) of the lunar \({}^{60}\)Fe and therefore traces nearly the same range as the originally published data (dashed line). **Right:** Lunar fluence portioned assuming the W16 sediment data is the full fluence for SN Plio (see Subsection 3.5.2). The SN Plio fluence is plotted with a black X, directly on top of the W16 sediment point. The purple line shows the possible remaining fluence and distance range for SN Mio, along with the original lunar data as a dashed line. Included on both plots for demonstration are the F08 and W16 sediment data.
#### 3.5.2 Data-driven portioning with W16 results
As noted in the previous section, the W16 sediment fluence falls exactly in the range of the lunar fluence. In this section, we make the assumption that the W16 sediment _is_ the fluence from SN Plio at 3 Mya; therefore, any remaining lunar \({}^{60}\)Fe fluence detected is from SN Mio at 7 Mya. Once again undoing the decay correction on the lunar fluence and the W16 sediment fluence with Eq. (5), we can subtract the "dug up" W16 fluence from the lunar fluence, redo the 7 Mya decay correction, and recalculate the possible distance range to SN Mio with Eq. (2). Using the same astrophysical parameters (\(M_{\rm ej,60}=3.0\times 10^{-5}M_{\odot}\), \(f_{60}=0.1\), and \(t_{\rm trav}=0.1\) Myr), we find that the SN Mio distance range with this method is \(40-160\) pc.
The right plot in Fig. 2 shows the results of this calculation. The fluence from SN Plio is denoted with a black X and is plotted directly over the W16 sediment fluence (as these are the same number). The shaded purple line represents the full possible range of fluence and distance for SN Mio, with the original F16 range plotted as a black dashed line.
## 4 Models
The \({}^{60}\)Fe signal found in the natural archives on Earth can be used to find the \(\mathcal{F}_{\rm obs}\), \(t_{\rm arr}\), and \(U_{60}\) parameters needed to calculate the supernova distance in Eq. (2). For the remaining three parameters of \(M_{\rm ej,60}\), \(f_{60}\), and \(t_{\rm trav}\), we turn to astrophysical models and observations to provide additional constraints, and we explore the allowed ranges that remain. A brief summary of the parameter ranges is listed in Tab. 2 and these ranges are discussed in detail in the following subsections.
### Ejected Mass
There are no available measurements of \({}^{60}\)Fe yields in individual supernovae. Thus we have two options. One is to rely on theoretical predictions. The other is to use observations of \({}^{60}\)Fe gamma-ray emission from the Galaxy to find an average \({}^{60}\)Fe yield. We consider each of these in turn.
#### 4.1.1 Supernova Calculations of \({}^{60}\)Fe Yields
Finding the ejected mass of \({}^{60}\)Fe from a supernova requires modeling explosive nucleosynthesis in the shell layers of the progenitor as it explodes. The \({}^{60}\)Fe is not made in the core of the CCSN but instead from neutron capture onto pre-existing iron in the shell layers (for a recent review, see Diehl et al., 2021). For this reason, we exclude explosion models with low metallicity or which do not track nucleosynthesis in the shell layers, such as the Curtis et al. (2019) "s model" and the Wanajo et al. (2018) "s models".
There are some additional constraints we can place on the available nucleosynthesis models. As discussed in Fry et al. (2015), the supernova must be a CCSN or ECSN in order to produce sufficient \({}^{60}\)Fe. It must also be close enough to Earth for its debris to reach the solar system. While we have already excluded models with low metallicity which will not make \({}^{60}\)Fe, the progenitor should already be at or near solar metallicity due to its proximity to Earth and the time of the explosion (\(\lesssim 10\) Mya).
Table 3 highlights the relevant simulation parameters for the selected models. We focus on four recent publications which model stars of solar metallicity and include \({}^{60}\)Fe production in their nucleosynthesis reactions (Sukhbold et al., 2016; Limongi and Chieffi, 2018; Wanajo et al., 2018; Curtis et al., 2019). Rauscher et al. (2002) is included to enable comparison with the data used in Fry et al. (2015). Figure 3 then plots the \({}^{60}\)Fe yields from the five different groups, with each individual model plotted separately. Lines connect the individual masses to give a better sense of the model's range; the single point from Wanajo et al. (2018) focuses on a specific mass ECSN model.
The focus of this paper is not to describe these supernova models in detail, but instead to find a range of the ejected mass of \({}^{60}\)Fe that can be used in Eq. (2). From Fig. 3, we see that the possible range extends from
\begin{table}
\begin{tabular}{l c|c} \hline \hline \multicolumn{2}{c|}{ Parameter} & Range \\ \hline Ejected \({}^{60}\)Fe mass, \(M_{\rm ej,60}\) & \([M_{\odot}]\) & \(3\times 10^{-6}-3\times 10^{-4}\) \\ Dust fraction, \(f_{60}\) & & \(1\%-100\%\) \\ Travel time, \(t_{\rm trav}\) & [Myr] & \(0.1-1.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Astronomical Parameter Ranges
\(3\times 10^{-6}\) to \(3\times 10^{-4}\ M_{\odot}\) and is covered fairly evenly by all groups. A recent further discussion of \({}^{60}\)Fe production in supernovae appears in Diehl et al. (2021).
Figure 4 shows the radioactivity distance implied by these yields. Here we adopt the \({}^{60}\)Fe fluence from W16 sediments, along with a dust fraction \(f_{60}\)= 0.1. We see that the wide range of yields in Fig. 3 leads to a substantial range in the radioactivity distance, even with the \({M_{\rm ej,60}}^{1/2}\) scaling. Encouragingly, we see that almost all models give distances between the limiting values shown in Fig. 4 (the 10 pc kill distance and the 160 pc fadeaway distance). This represents a nontrivial success of the nearby supernova scenario, because the calculation of \(D\) in Eq. (2) depends only on the fluence measurements and yields, with no astrophysical distances built in. Moreover,
Figure 3: _Supernova model yields for \({}^{60}\)Fe. Masses within models are connected with lines. Note that the possible ranges for \(M_{\rm ej,60}\) vary between about \(3\times 10^{-6}\) and \(3\times 10^{-4}\)\(M_{\odot}\). The three Limongi and Chieffi (2018) models drop off the plot and demonstrate direct collapse to a black hole. The relevant paper citations and details can be found in Tab. 3._
\begin{table}
\begin{tabular}{l c|c c c c} \hline \hline Authors & Model & Metallicity & Rotation & Mass range (\(M_{\odot}\)) & Type \\ \hline \hline Rauscher et al. (2002) & & solar & no & 12-25 & CCSN \\ \hline Sukhbold et al. (2016) & N60 & 1/3 solar & no & 12.25-120 & CCSN \\ Sukhbold et al. (2016) & W18 & 1/3 solar & yes & 12.25-120 & CCSN \\ \hline Limongi and Chieffi (2018) & v0 & solar & no & 13-120 & CCSN \\ Limongi and Chieffi (2018) & v150 & solar & yes & 13-120 & CCSN \\ Limongi and Chieffi (2018) & v300 & solar & yes & 13-120 & CCSN \\ \hline Wanajo et al. (2018)1 & e8.8 & solar & no & 8.8 & ECSN \\ \hline Curtis et al. (2019) & w & solar & no & 12-120 & CCSN \\ \hline \hline \end{tabular}
\end{table}
Table 3: SN \({}^{60}\)Fe Yield Models
the allowed distance range encompasses the Local Bubble, and star clusters proposed to be the sites of the supernovae, as discussed below in Section 6.
While Fig. 4 shows the full range of masses for which \({}^{60}\)Fe is presented in recent models, these stars are not all equally probable. The shape of the stellar initial mass function indicates that lower-mass CCSN progenitors should be more common; here we see that across several models these all provide reasonable distances. Indeed, we do not see clear systematic differences between the distances inferred with lower vs higher mass models, reflecting the lack of a clear trend in \({}^{60}\)Fe yields vs progenitor mass in Fig. 3. This suggests that it will be difficult or impossible to use \({}^{60}\)Fe alone to probe the mass of the progenitor; for this, multiple SN radioisotopes are needed.
We stress that the distances shown in Fig. 4 derive from the dust fraction choice \(f_{60}=0.1\), and scale as \(D\propto f_{60}^{1/2}\). Thus, significant systematic changes in \(D\) can result from different choices for this poorly-determined parameter. This point is discussed further in the following section.
With this caveat in mind, it is notable that Figure 4 also shows that a few \({}^{60}\)Fe calculations do not fall into the allowed range. Most notable are the Limongi & Chieffi (2018) models, which give \(D\approx 0\) for progenitor masses \(\geq 30M_{\odot}\). This arises because in these models there is a direct collapse to a black hole without an explosion and the accompanying ejection of nucleosynthesis products; thus the only \({}^{60}\)Fe that escapes is the small amount in the stellar wind. Clearly these models are excluded, but the lower-mass Limongi & Chieffi (2018) models give \({}^{60}\)Fe yields in good agreement with other calculations and thus give plausible radioactivity distances.
Finally, we note that Fig. 4 updates a similar calculation by Fry et al. (2015, their Fig. 3). Our results are broadly quite similar. This agreement is somewhat accidental, since those earlier results were based on K04 and F08 \({}^{60}\)Fe crust measurements prior to the more reliable sediment measurement of W16 (used in this work). Moreover, uptake and dust assumptions were different. On the other hand, while detailed stellar yields have changed, they continue to span a similar range.
Figure 4: Radioactivity distance for the core-collapse \({}^{60}\)Fe yields shown in Fig. 3. Results use the W16 sediment data and assume \(f_{60}=0.1\). Estimates of limiting distances are shown as dashed horizontal red lines: the kill distance at 10 pc is a lower limit, and the fadeaway distance at 160 pc is an upper limit (Draine, 2011). We see that for most models, the radioactivity distance lies between these limits, showing that a consistent picture is possible for this wide class of supernova models.
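The behaviour in Fig. 4 can be approximated without re-evaluating Eq. (2) by using the \(D\propto(f_{60}\,M_{\rm ej,60})^{1/2}\) scaling, anchored to the mid-range W16-sediment entry of Tab. 4 below (\(f_{60}\times M_{\rm ej,60}=3\times10^{-6}\,M_{\odot}\rightarrow\sim60\) pc). The sketch below scans the span of model yields at \(f_{60}=0.1\) and flags which land between the kill and fadeaway distances; it is an anchored approximation, not the calculation used for the figure.

```python
import numpy as np

# D ∝ sqrt(f60 * M_ej,60), anchored to the Tab. 4 mid-range entry
# (f60 * M_ej,60 = 3e-6 Msun -> ~59.7 pc, W16 sediment fluence).
D_REF, X_REF, F60 = 59.7, 3.0e-6, 0.1
KILL, FADE = 10.0, 160.0   # pc, the limiting distances shown in Fig. 4

for M_ej in (3e-6, 1e-5, 3e-5, 1e-4, 3e-4):   # span of model 60Fe yields [Msun]
    D = D_REF * np.sqrt(F60 * M_ej / X_REF)
    status = "inside" if KILL < D < FADE else "outside"
    print(f"M_ej,60 = {M_ej:.0e} Msun -> D ~ {D:5.0f} pc ({status} the allowed range)")
```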
#### 4.1.2 The Average \({}^{60}\)Fe Yield from Gamma-Ray Line Observations
Gamma-ray line astronomy provides an estimate of the average \({}^{60}\)Fe yield from supernovae. The radioactive decay of \({}^{60}\)Fe atoms leads to emission of gamma-ray and X-ray lines. Interstellar \({}^{60}\)Fe thus produces observable gamma-ray lines that probe its production over approximately one \({}^{60}\)Fe lifetime.
Measurements of diffuse Galactic gamma rays give a steady state \({}^{60}\)Fe mass of \(M_{60,\rm{SS}}=2.9^{+2.2}_{-0.78}\ M_{\odot}\)(Diehl et al., 2021). In steady state, \(M_{60,\rm{SS}}=\langle M_{\rm{ej,60}}\rangle\,\tau_{60}\,\mathcal{R}_{\rm{ CCSN}}\), where \(\mathcal{R}_{\rm{CCSN}}=1.79\pm 0.55\ {\rm{century}}^{-1}\) is the present-day Galactic core-collapse rate (Rozwadowska et al., 2021). We thus find the mean \({}^{60}\)Fe yield to be
\[\langle M_{\rm ej,60}\rangle = \frac{M_{60,\rm SS}}{\tau_{60}\,\mathcal{R}_{\rm SN}} = 4^{+4}_{-2}\times 10^{-5}\,M_{\odot}\ \left(\frac{M_{60,\rm SS}}{2.85\ M_{\odot}}\right)\left(\frac{1.79\ {\rm events/century}}{\mathcal{R}_{\rm SN}}\right)\,. \tag{8}\]
This result averages over all core-collapse supernovae, which are assumed to be the only important \({}^{60}\)Fe source. If another source such as AGB stars makes an important contribution to the Galactic \({}^{60}\)Fe inventory, then the mean SN yield would be lower.
Interestingly, the result in Eq. (8) is in the heart of the predictions shown in Fig. 3. This supports the idea that core-collapse supernovae indeed dominate \({}^{60}\)Fe production, and suggests that the theoretical predictions are in the right ballpark. Further, if this is a typical yield, then there are implications for the dust fraction, to which we now turn.
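For reference, the arithmetic behind Eq. (8) is a simple steady-state balance; the snippet below reproduces the central value from the quoted inputs.

```python
import numpy as np

# Inputs quoted in this subsection
M_SS   = 2.85                    # steady-state Galactic 60Fe mass [Msun] (Diehl et al. 2021)
R_CCSN = 1.79 / 100.0            # core-collapse rate [events/yr] (Rozwadowska et al. 2021)
TAU_60 = 2.62e6 / np.log(2.0)    # 60Fe mean lifetime [yr] from the 2.62 Myr half-life

mean_yield = M_SS / (TAU_60 * R_CCSN)   # Eq. (8)
print(f"<M_ej,60> ~ {mean_yield:.1e} Msun")   # ~4e-5 Msun
```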
### Dust Fraction
Our current understanding of the interactions between the SN shock front and the heliosphere requires the \({}^{60}\)Fe to be in the form of dust grains in order to reach Earth (Benitez et al., 2002; Athanassiadou and Fields, 2011; Fry et al., 2015, 2016). The heliosphere blocks the supernova shock front from pushing inward past 5 au for supernova distances \(>30\) pc (Miller and Fields, 2022), and therefore only dust grains \(\gtrsim 0.1\mu\)m can ballistically push through the barrier (Athanassiadou and Fields, 2011; Fry et al., 2020). The dust fraction parameter \(f_{60}\) describes the fraction of ejected \({}^{60}\)Fe mass which condenses into dust and therefore makes it to Earth -- this parameter encompasses the multiple boundaries and survivability filters that the dust must traverse. As laid out in Fry et al. (2015) Tab. 3, these include:
1. The amount of \({}^{60}\)Fe that initially forms into dust.
2. The amount of \({}^{60}\)Fe-bearing dust which survives the reverse shock, sputtering, collisions, and drag forces in the supernova remnant (SNR) to then encounter the heliosphere.
3. The amount of dust that makes it past the shock-shock collision at the heliosphere boundary.
4. The amount of dust which manages to traverse the solar system to 1 au and be collected on Earth (and the Moon).
Observational work on SN1987A has demonstrated that refractory elements form dust within years of the explosion and that nearly 100% of supernova-produced elemental iron condenses immediately into dust (Matsuura et al., 2011, 2017, 2019; Cigan et al., 2019; Dwek and Arendt, 2015). The composition of \({}^{60}\)Fe-bearing dust is of specific interest, especially considering that different compositions have significantly different survival rates due to grain sputtering (Fry et al., 2015; Silvia et al., 2010, 2012). Given that the \({}^{60}\)Fe is formed in the shell layers of the progenitor and not in the iron core with the bulk of the supernova-produced elemental iron (Diehl et al., 2021), it is quite possible that the \({}^{60}\)Fe dust is not in predominantly metallic iron grains -- this in turn can affect the dust's survival chances (Silvia et al., 2010).
The degree to which dust is produced and destroyed within supernova remnants is an area of ongoing research; a recent review by Micelotta et al. (2018) summarizes the current theory and observational work. CCSN are producers of dust, as shown by observations of grain emission in young remnants, e.g., in recent JWST observations (Shahbandeh et al., 2023). The portion of grains that survive and escape the remnant is more difficult to establish. Factors such as dust composition, size, clumpiness within the remnant, and the density of the ambient medium all impact the dust survival rate. Table 2 in Micelotta et al. (2018) lists the calculated dust survival fractions within the SNR for various models and simulations. These dust fractions vary wildly between models and ultimately range from 0% to 100% of
the supernova-produced dust surviving the forward and reverse shocks. More recent papers continue to find that a large range is possible (Slavin et al., 2020; Marassi et al., 2019).
There are many dynamics and effects within the solar system which can filter dust grain sizes and prevent grains from easily entering the inner solar system (Altobelli, 2005; Mann, 2010; Wallner et al., 2016; Altobelli et al., 2016; Strub et al., 2019); however, the \(\sim 100\) km/s speed at which the supernova-produced dust is traveling causes these effects to be negligible (Athanassiadou & Fields, 2011; Fry et al., 2015). Therefore, we will consider the dust fraction at Earth's orbit to be equal to the surviving dust fraction within the SNR. Note that this assumption is in contrast to the assumptions made in Tab. 2 in Fry et al. (2015), which assumes that only 10% of the metallic iron (Fe) dust and none of the troilite (FeS) dust cross the heliosphere boundary.9 In contrast, Wallner et al. (2016) follows a similar route to our approach in this paper, in assuming that the \({}^{60}\)Fe dust survives from the shock boundary to the Earth essentially unchanged.
Footnote 9: Upon closer examination of the sources cited (Linde & Gombosi, 2000; Silvia et al., 2010; Slavin et al., 2010), we find that they are focused on ISM grains which are traveling \(\sim 26\) km/s; our SNR grains are traveling at \(\sim\)100 km/s. Further research is needed to work out the details of the dust’s ability to penetrate the heliosphere depending on size and velocity — for the purpose of this paper, we follow the approach outlined in W16.
With the assumptions that 100% of the ejected mass of \({}^{60}\)Fe condenses into dust and that the dust survives the heliosphere boundary fully intact, the main source of dust loss is within the SNR itself. As discussed above, the fraction of dust that survives the SNR environment ranges wildly depending on model parameters such as size, composition, and ambient environment -- although at least some of the dust must survive this journey, as we find the \({}^{60}\)Fe signal on Earth. For the purposes of this work, we do not focus on the details of dust survival calculations but instead choose a general dust fraction of between \(1-100\%\).
It should be noted that W16 and W21 calculate the dust survival fraction using the fluence from the W16 sediment sample (which samples 100% of the \({}^{60}\)Fe flux onto Earth). They assume that the \({}^{60}\)Fe dust traverses the solar system essentially unchanged and therefore the \({}^{60}\)Fe fluence at Earth is the same as the interstellar fluence. W16 finds a dust survival fraction of \(0.4-9\%\), using the additional assumptions that the source of the 3 Mya signal occurred somewhere between 50 and 120 pc from Earth and had an ejected mass of \(M_{\rm ej,60}\sim 9\times 10^{-5}\ M_{\odot}\).10 We do not use these numbers directly in this work, as they are unavoidably correlated to the observed fluence. Thus we see that our adopted benchmark value \(f_{60}=10\%\) lies comfortably within the large allowed range, but clearly more work is needed to understand this quantity better. Indeed, as W16 and W21 show and we will discuss below, \({}^{60}\)Fe observations place novel limits on dust survival.
Footnote 10: This \(M_{\rm ej,60}\) value is found by assuming the supernova signal at 3 Mya is the debris from three separate supernovae, given the long \(>1\) Myr infall timescale (similar to what is proposed in Breitschwerdt et al., 2016). As discussed in Ertel et al. (2023), Fry et al. (2020), and Chaikin et al. (2022), more than one SN is not required to produce the observed signal and therefore this might overestimate the ejected mass. However, the value is still well within the range of possible values for \(M_{\rm ej,60}\) for a single supernova, as seen in Subsection 4.1.
### Travel Time
The travel time parameter (\(t_{\rm trav}\)) is defined as how long the supernova dust containing the \({}^{60}\)Fe will travel within the remnant before reaching Earth -- specifically, it is the dust's travel time before the start of the \({}^{60}\)Fe signal.11 As the remnant expands, it cools and slows, achieving a maximum size of around \(100-200\) pc (Fry et al., 2015; Micelotta et al., 2018). At most, this expansion should last around \(1-2\) Myr (see Fig. 1 in Micelotta et al., 2018). We therefore consider the range for \(t_{\rm trav}\) to be \(0.1-1.5\) Myr. Note that any value of \(t_{\rm trav}\) less than the half-life of \({}^{60}\)Fe (\(t_{1/2}=2.62\) Myr) has little to no impact on the resulting distance calculated.
Footnote 11: The dust containing the \({}^{60}\)Fe will continue to rain down for \(\gtrsim 1\) Myr after the initial deposition. We therefore define the travel time as tracing the SNR shock front and not the specific dust dynamics within the remnant that extend the signal.
It should be noted that at least SN Plio exploded in the Local Bubble environment and not the general ISM (Zucker et al., 2022, for further discussion, see Subsection 6.2). The low density does not change our estimate of \(t_{\rm trav}\): while the low density allows the remnant to expand farther, the blast also travels faster and therefore this extra distance is negligible.
There are also additional constraints due to the fact that we have observed an \({}^{60}\)Fe signal on Earth twice in the last 10 Myr. The dust containing \({}^{60}\)Fe arrived on Earth approximately 3 Mya for SN Plio and 7 Mya for SN Mio. Our ability to detect the \({}^{60}\)Fe signal from that point is dependent on the half-life of \({}^{60}\)Fe. For SN Plio, we have lost about one half-life of the isotope since deposition, and about three half-lives for SN Mio. It is therefore reasonable to believe that the dust is not traveling for millions of years before reaching Earth, as further loss of \({}^{60}\)Fe would imply either enormous \({}^{60}\)Fe supernova production or raise questions over our ability to detect the \({}^{60}\)Fe signal at all.
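To make this insensitivity concrete, the sketch below assumes that the travel time enters the distance only through radioactive decay in transit, so that the arriving fluence carries a factor \(2^{-t_{\rm trav}/t_{1/2}}\) and \(D\) is rescaled by \(2^{-t_{\rm trav}/(2t_{1/2})}\). This is an assumption about how \(t_{\rm trav}\) appears in Eq. (2), made for illustration only.

```python
# Decay of 60Fe during the travel time t_trav (a minimal sketch; assumes the
# travel-time dependence enters only through the in-transit decay factor).
T_HALF = 2.62  # Myr
for t_trav in (0.1, 1.0, 1.5, 5.0):
    surviving = 2.0 ** (-t_trav / T_HALF)
    d_factor  = surviving ** 0.5
    print(f"t_trav = {t_trav:3.1f} Myr: {surviving:5.1%} of the 60Fe survives, "
          f"D rescaled by {d_factor:4.2f}")
```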
We combine the ranges of the three astronomical parameters discussed in Section 4 with the observed \({}^{60}\)Fe signal in Section 3 to calculate the full possible range of distances to SN Plio (3 Mya). Figure 5 maps out these results, following the same plotting convention of distance to supernova vs fluence into the material used in Figs. 1 and 2. For each subplot, two of the parameters are held constant at a mid-range value while the third is allowed to vary for the full range considered in this paper.
To reduce visual confusion, we have chosen to only show the F08 option A and the K04 crust with \(U_{60}=0.04\) points; using the K04 crust with \(U_{60}=0.04\) means that the F08 and F16 distances are the only ones not correlated to the W16 sediment (see Subsection 3.1). As a reminder, the error bars on the fluence reflect the actual fluence errors -- the
Figure 5: _Effect of the three astrophysical parameters on the distance to SN Plio._ For the purposes of reducing visual confusion, we only show the K04 distance calculated with \(U_{60}=0.04\) and the F08 option A points. Note that this means the F08 and F16 distances are the only ones not correlated to the W16 sediment. **Top:** Dust fractions (\(f_{60}\)) of 1%, 10%, and 100%, holding the ejected mass constant and travel time constant. **Middle:** Ejected masses (\(M_{\rm ej,60}\)) of \(3.0\times 10^{-6}\) to \(3.0\times 10^{-4}\)\(M_{\odot}\) holding the dust fraction and travel time constant. **Bottom:** Travel times (\(t_{\rm trav}\)) of 0.1, 1.0, and 5.0 Myr, holding the dust fraction and ejected mass constant. The dark blue, magenta, and yellow points for each plot indicate the distances calculated with the low, middle, and high values for each range, respectively. Lines have been drawn at 10 pc and 100 pc.
error bars on the distance only trace the fluence errors and do not account for the error in \(U_{60}\) present on the FeMn crusts and nodules (see Section 3).
As can be seen in the bottom plot of Fig. 5, the travel time does not have a large effect on the distance range and is the least important parameter. The difference between 0.1 and 1.0 Myr is negligible, on the order of \(\sim 5\) pc, which is well within the uncertainties of the Earth-based parameters. It is only when the travel time is increased to a non-physical 5 Myr that any effect is observed, and that effect is minimal compared to the ranges seen in the \(M_{\rm ej,60}\) and \(f_{60}\) parameters.
Both the dust fraction and ejected mass range over two full orders of magnitude and have a significant influence on the distance. Untangling these parameters is challenging, and as discussed in Section 2, the product \(f_{60}\times M_{\rm ej,60}\) is more robust. Table 4 lists a range of possible \(f_{60}\times M_{\rm ej,60}\) values, covering the lowest and highest combinations, the middle of both ranges, as well as four distances of specific interest. All of the approximate distances in Tab. 4 are calculated using the W16 sediment fluence, as we believe this measurement best reflects the \({}^{60}\)Fe fluence from SN Plio (see Subsection 3.4 for further details).
Both the lowest and highest possible combinations of \(M_{\rm ej,60}\) and \(f_{60}\) put SN Plio at implausible distances: any distance closer than 20 pc should have left distinct biological damage tracers in the fossil record (Melott and Thomas, 2011; Fields et al., 2020), while distances farther than \(\sim 160\) pc prevent the dust from reaching Earth (Fry et al., 2015). The remaining combinations produce a large range of possible supernova distances. Using an \(f_{60}\times M_{\rm ej,60}\) range of \(5\times 10^{-7}-5\times 10^{-5}\)\(M_{\odot}\) yields distances between \(20-100\) pc; these values for \(f_{60}\times M_{\rm ej,60}\) can be found using any high-low or low-high combination of \(M_{\rm ej,60}\) and \(f_{60}\) as well as the middle range for both parameters.
In the interest of a complete analysis, we have expanded \(M_{\rm ej,60}\) and \(f_{60}\) to the full possible range of values -- this does not necessarily reflect the most likely range. While the available \({}^{60}\)Fe yield models cover the full \(3\times 10^{-6}-3\times 10^{-4}\)\(M_{\odot}\) for the mass range of interest (\(8-30\)\(M_{\odot}\)), as seen in Fig. 3, the dust fraction survival range is more likely to be between \(1-50\%\)(Wallner et al., 2016, 2021; Slavin et al., 2020).
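The Tab. 4 entries can be checked with the same \(D\propto(f_{60}\times M_{\rm ej,60})^{1/2}\) scaling. The minimal sketch below is anchored to the rounded mid-range row (\(3\times10^{-6}\,M_{\odot}\rightarrow59.7\) pc) and therefore agrees with the table only to within a few percent.

```python
import numpy as np

D_REF, X_REF = 59.7, 3.0e-6   # pc, Msun (mid-range row of Tab. 4, W16 sediment)

for x in (3.0e-8, 8.85e-8, 3.5e-7, 2.18e-6, 3.0e-6, 8.75e-6, 3.0e-4):
    D = D_REF * np.sqrt(x / X_REF)
    print(f"f60*M_ej,60 = {x:8.2e} Msun  ->  D ~ {D:6.1f} pc")
```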
### Distance to the 7 Mya supernova
The W21 dataset includes a distinct FeMn crust measurement for the 7 Mya SN. We can use this fluence to calculate the distance to SN Mio following the same procedure as described for SN Plio. The fluence for SN Mio is \(\mathcal{F}=1.77\pm 0.25\times 10^{6}\) atoms/cm\({}^{2}\), with an uptake factor of \(U_{60}=17\%\)(Wallner et al., 2021) -- the fluence is already decay corrected, so the arrival time is not needed. We assume the same average supernova properties as SN Plio, with \(M_{\rm ej,60}=3.0\times 10^{-5}\)\(M_{\odot}\) and \(f_{60}=0.1\), although these properties do not have to be the same for both supernovae. Using Eq. (2), we find that the distance to SN Mio is
\[D(\text{SN Mio})\simeq(108\pm 8\ \text{pc})\left(\frac{f_{60}}{0.1}\right)^{1/2}\left(\frac{M_{\rm ej,60}}{3.0\times 10^{-5}M_{\odot}}\right)^{1/2} \tag{9}\]
\begin{table}
\begin{tabular}{c|c|c} \hline \hline \(f_{60}\times M_{\rm ej,60}\) [\(M_{\odot}\)] & Approximate distance [pc] & Notes \\ \hline \(3.0\times 10^{-8}\) & \(5.87\pm 0.22\) & lowest combination \\ \(8.85\times 10^{-8}\) & 10 & 10 pc supernova \\ \(3.5\times 10^{-7}\) & 20 & 20 pc supernova \\ \(2.18\times 10^{-6}\) & 50 & 50 pc supernova \\ \(3.0\times 10^{-6}\) & \(59.7\pm 4.0\) & mid-range \\ \(8.75\times 10^{-6}\) & 100 & 100 pc supernova \\ \(3.0\times 10^{-4}\) & \(586\pm 22\) & highest combination \\ \hline \hline \end{tabular}
\end{table}
Table 4: Influence of \(f_{60}\times M_{\rm ej,60}\) on SN Plio distance
where the _errors reflect only the reported uncertainties in the fluence_. Calculating the distance using a ratio of the fluences found in W21 provides the same results, with \(D_{2}=D_{1}\sqrt{\mathcal{F}_{1}/\mathcal{F}_{2}}\sim 110\) pc. Similarly, we note that the W21 crust 3 fluence (for SN Mio) and the distance calculated in Eq. (9) fall exactly on the estimated range for SN Mio as shown in the lunar fluence (see Subsection 3.5.1) -- this is unsurprising, as all three of these calculations are based off of the W21 crust fluences and are therefore correlated.
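The fluence-ratio form of this estimate is compact enough to write out. The sketch below takes the SN Plio benchmark distance of \(\sim\)60 pc (W16 sediment fluence with \(f_{60}=0.1\) and \(M_{\rm ej,60}=3\times10^{-5}\,M_{\odot}\), the mid-range entry of Tab. 4) and rescales it by the W21 crust fluence ratio, in which the uptake and other geophysical factors cancel.

```python
import numpy as np

# D2 = D1 * sqrt(F1/F2), using the two W21 crust fluences.
F_plio, F_mio = 6.10e6, 1.77e6   # atoms/cm^2, decay corrected
D_plio = 59.7                    # pc, SN Plio benchmark distance (Tab. 4)
D_mio = D_plio * np.sqrt(F_plio / F_mio)
print(f"D(SN Mio) ~ {D_mio:.0f} pc")   # ~110 pc, consistent with Eq. (9)
```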
Figure 6 shows the distance to SN Mio compared to SN Plio. The distance calculated with the W21 crust 3 fluence for SN Mio is plotted along with the portioned lunar range corresponding to SN Mio. Two equivalent values are shown for SN Plio: the W21 crust 3 fluence for SN Plio and the portioned lunar range for SN Plio. To make the W21 crust 3 data directly comparable to the F16 lunar fluence, we have corrected for the 17% uptake factor in the crusts.12
Footnote 12: As a reminder, the uptake factor accounts for how much of the \({}^{60}\)Fe fluence is lost between arrival at Earth and absorption into the sampled material. The lunar regolith, similar to the deep-sea sediments, has a 100% uptake factor, as it is assumed no fluence is lost. The W21 crust has a 17% uptake factor (see Section 3). Thus in Fig. 6, we have accounted for the remaining 83% of the fluence in the crust measurements, to make them directly comparable to the lunar regolith measurements.
We note that, given the available measurements, the fluence for SN Mio is around a factor of two smaller than the fluence for SN Plio. These values have been decay-corrected and therefore this difference is real, excluding unknown geophysical parameters that could affect the signals. As discussed in Section 4, the influence this has on the calculated distance could be the result of a genuine difference in distances between the two supernovae or due to differences in the \(f_{60}\times M_{\rm ej,60}\) parameter; in reality, it is likely a combination of both scenarios.
Ideally, this calculation should be repeated with greater precision when there are more data available on SN Mio. With a detailed time profile, such as can be provided with sediment measurements, we might be able to investigate differences in the astronomical properties between the two supernovae; although as the above sections discuss there are significant ranges in such factors as the \({}^{60}\)Fe ejected mass and dust fraction, and differences in the supernovae distances could obscure these variations.
## 6 Discussion
Our results have an interplay with several areas of astrophysics, which we summarize here.
### External limits on distance:
Figure 6: _Distance to SN Mio._ Plotted are the portioned lunar fluences with gold (SN Plio) and purple (SN Mio) lines (see Subsection 3.5.1 and Fig. 2), the original lunar fluence with a black dashed line, the W21 crust 3 fluence for SN Plio in yellow, and the W21 crust 3 fluence for SN Mio in black. For ease of comparison, the W21 crust 3 measurements have been corrected for the uptake factor of 17%; they are therefore directly equivalent to the lunar measurements, which assume a 100% uptake. The distances to both SN Plio and SN Mio are calculated using the same \(f_{60}\times M_{\rm ej,60}\).
As discussed in Section 5, it is difficult to limit the possible distance range to SN Plio from the astrophysical parameters alone. The theoretical values for both the ejected mass and dust fraction of \({}^{60}\)Fe individually cover two orders of magnitude and, although we can rule out a few of the most extreme combinations, we are unable to use them to put strong limits on the supernova distance. We therefore turn to other methods that can be used to provide limits.
**Biological damage and extinctions:**
There has been a long history of speculating on the damage that supernova-driven cosmic rays could do to ozone in the atmosphere and implications for life on Earth (Shindewolf, 1954; Shklovskii and Sagan, 1966; Ruderman, 1974; Benitez et al., 2002; Melott and Thomas, 2011; Thomas et al., 2016). Gehrels et al. (2003) was the first to calculate a "kill distance" of 8 pc, inside of which supernovae could be responsible for mass-extinction events -- by depleting the ozone, UVB radiation from the Sun can cause significant damage to DNA. Later updates by Fry et al. (2015) extend the kill distance to 10 pc which we have adopted in this paper, although recent work suggests that the distance could be as far as 20 pc (Thomas and Yelland, 2023). As there is still life on Earth, we can rule out distances \(<10\) pc.
While more minor biological damage events and other climate-driven disruptions are still possible (Thomas, 2018; Melott et al., 2019; Melott and Thomas, 2019), we can also rule out any distance that would result in a mass-extinction, as there are no major mass-extinction events 3 Mya.13 This prevents the distance to SN Plio from being within 20 pc of Earth (Fields et al., 2020).
Footnote 13: There are also no such events 7 Mya, for SN Mio.
**Cosmic ray distance calculation:**
There is a local \({}^{60}\)Fe component measured in cosmic rays, which is associated with a nearby supernova around 2 Mya (Binns et al., 2016). Kachelriess et al. (2015, 2018) examines the proton, antiproton, and positron fluxes and anomalies in the cosmic ray spectrum; they argue for a local source in addition to the contribution from cosmic ray acceleration in supernova remnants throughout the Galaxy. Savchenko et al. (2015) shows that a feature in cosmic ray anisotropy at \(2-20\) TeV is due to a single, recent, local event. Using these data they calculate a distance, estimating the source to be roughly 200 pc from Earth. It should be noted that while the \({}^{60}\)Fe cosmic ray background is local and recent, these analyses relied on the detection of the \({}^{60}\)Fe signal for SN Plio -- we now know of two supernovae which occurred in the last 10 Myr and further analysis of the local cosmic ray background will need to take this into account.
**Nearby clusters:**
The massive stars that explode in CCSN or ECSN tend to form in clusters, most likely including the two near-Earth supernovae. We can use the locations of nearby stellar groups and associations as another method for constraining the supernova distances. The Tuc-Hor association is \(\sim 60\) pc from Earth (Mamajek, 2015) and considered to be the most likely to host SN Plio (Fry et al., 2015; Hyde and Pecaut, 2018).14 Hyde and Pecaut (2018) surveyed the local associations and groups within 100 pc and concluded that Tuc-Hor is the only association with an initial mass function (IMF) large enough to host a CCSN, thus making it the most likely candidate.
Footnote 14: As a reminder, SN Mio was only measured in detail in 2021 (Wallner et al., 2021) and is therefore not considered in these evaluations, although we expect it to have similar results.
Another consideration is the Sco-Cen OB association, favorable because it has the IMF to host multiple CCSN and is likely the source of the supernovae which formed the Local Bubble (Frisch et al., 2011; Fry et al., 2015; Breitschwerdt et al., 2016). Sco-Cen is located around 130 pc from Earth (Fuchs et al., 2006) and this puts it at the farther edge of the possible distance range. Neuhauser et al. (2020) even backtracked the positions of both a nearby pulsar and a runaway star, claiming that they once formed a common binary in proximity to Sco-Cen 1.78 Mya and could be the remnants of SN Plio -- unfortunately, the timing does not quite work out for SN Plio, as the initial \({}^{60}\)Fe signal starts at 3 Mya and the progenitor of the signal must precede the deposition time.
**Dust stopping time:**
Fry et al. (2020) simulates supernova-produced dust under the assumption that the dust is charged and will therefore be confined within the magnetized remnant. They find that their models can consistently propagate grains to 50 pc, but that greater distances are affected by magnetic fields and drag forces and unlikely to be reached. Although further research is needed, this work provides an interesting potential limit on the distances to SN Plio and SN Mio that is not dependent on the local interstellar low-density environment.
### Local Bubble implications
An interesting complexity arises when considering the solar system and the two near-Earth supernovae in conjunction with the large picture of the local Galactic neighborhood. The solar system is currently inside of the Local Bubble, a
region defined by low densities and high temperatures which is the result of numerous supernova explosions in the last 20 Myr (Frisch, 1981; Smith and Cox, 2001; Frisch et al., 2011; Breitschwerdt et al., 2016; Zucker et al., 2022). Although not inside the Local Bubble at the time of its formation, Earth crossed into the region around 5 Mya (Zucker et al., 2022). The timing places SN Plio (3 Mya) within the Local Bubble and therefore the supernova remnant's expansion and properties should be considered in the context of a very low density ambient medium. However, SN Mio (7 Mya) could have feasibly exploded outside the Local Bubble (thus expanding into a more general ISM medium) or, if it was inside, the \({}^{60}\)Fe-bearing dust grains would have had to cross the Local Bubble wall in order to reach the solar system. Examining these potential differences and their effects in detail is beyond the scope of this work; however, this interconnected picture is something to keep in mind in further studies.
## 7 Conclusions
The distance to the 3 Mya supernova was last calculated using only two \({}^{60}\)Fe signal measurements (Fry et al., 2015). With the large range of new data (Ludwig et al., 2016; Fimiani et al., 2016; Wallner et al., 2016, 2021), it is the perfect time to update this distance. We have also taken the opportunity to perform a parameter study on the astrophysical aspects of this problem and explore their effects on the supernova distance. The main points are listed below.
* We have evaluated the distance to SN Plio using all available \({}^{60}\)Fe fluence data. This allows us to examine the consistency among these measurements. Comparison among results hinges on the adopted uptake or incorporation efficiency, which varies among sites and groups; more study here would be useful. We find broad agreement among measurements, some of which are independent.
* Fixing \(M_{\rm ej,60}=3\times 10^{-5}\ M_{\odot}\) and \(f_{60}=10\%\), we find the distance to SN Plio (3 Mya) is \(D\sim 20-140\) pc. The distance to SN Mio (7 Mya) for the same astronomical parameters is \(D\sim 110\) pc -- further variation is expected in this distance once more data have been analyzed.
* While the range quoted above for SN Plio covers the full potential range of the data, more realistically the distance to SN Plio is between \(50-65\) pc. This accounts for the measurements by Wallner et al. (2016, 2021) and Fitoussi et al. (2008), falls inside the lunar range (Fimiani et al., 2016), and is the approximate distance to Tuc-Hor, the stellar association most likely to host the CCSN (Fry et al., 2015, 2020; Hyde and Pecaut, 2018).
* Wallner et al. (2021) measured both near-Earth supernova \({}^{60}\)Fe signals in the same FeMn crust sample; by using a ratio of the fluences from these measurements, we can portion the excess \({}^{60}\)Fe signal seen in the lunar regolith (Fimiani et al., 2016) into the contributions from the two supernovae. We find that about 90% of the lunar signal is from SN Plio and about 10% is from SN Mio.
* The sediment \({}^{60}\)Fe detections from Ludwig et al. (2016) are a valuable contribution to the field; unfortunately, a distance calculation reveals that the fluence quoted in their work produces an unrealistically far distance. We must therefore suggest that their assumption of a 100% uptake factor was incorrect or that some of the \({}^{60}\)Fe in their samples was discarded.
* The possible change in uptake factor for the Knie et al. (2004) data accounts for the entirety of the \(20-140\) pc range quoted for SN Plio -- efforts to constrain this uptake factor will be of great value to the field and help narrow down the Earth-based spread in the distance range.
* The astronomical parameters of ejected \({}^{60}\)Fe mass and dust fraction have significant influence on the supernova distance and their possible values cover a wide range. We can say that combinations of low \(M_{\rm ej,60}\) and low \(f_{60}\) produce unrealistically close distances, while combinations of high \(M_{\rm ej,60}\) and high \(f_{60}\) lead to unrealistically far distances. For a supernova at about 50 pc, the combined parameter \(f_{60}\times M_{\rm ej,60}\ \simeq 2\times 10^{-6}\,M_{\odot}\). Future observations that can shed new light on these parameters include CCSN dust measurements by JWST, and \({}^{60}\)Fe gamma-ray line measurements by the upcoming COSI mission (Tomsick and COSI Collaboration, 2022).
* The travel time parameter (how long the \({}^{60}\)Fe is traveling from production to deposition on Earth) has a negligible effect on the supernova distance.
The plethora of work that has gone into analyzing the \({}^{60}\)Fe signals over the last seven years has greatly increased our understanding of the two near-Earth supernovae; further efforts will help constrain the geophysical parameters in the distance calculations. We are especially interested in results from sediment samples, as they are easiest to relate to the fluence of \({}^{60}\)Fe. Furthermore, sediment samples of the 7 Mya SN will allow us to focus more closely on the astronomical differences between the two supernovae.
The field of supernova dust dynamics is an active area of research and applies to far more than what we have summarized in this paper. We look forward to advancements in the understanding of dust survival and destruction within remnants, as constraints on these numbers will help narrow our distance range. Investigations into \({}^{60}\)Fe production within supernovae and simulations of the explosion can also help tighten values for the \({}^{60}\)Fe ejected mass. Surveys, maps, and exploration of the Local Bubble allow further constraints on the distances to SN Plio and SN Mio. Observations of dust in supernovae, such as with JWST, can probe grain production and evolution. These independent measurements are invaluable, as they deal directly with the local neighborhood but are not correlated to the \({}^{60}\)Fe signals detected on Earth.
Finally, and regardless of any and all possible limiting effects, enough \({}^{60}\)Fe must travel from the supernovae to Earth to be detected by precision AMS measurements _at least twice over_ in the last 10 Myr. That such a signal has been observed twice in the (relatively) recent geologic past means that the process of getting the \({}^{60}\)Fe to Earth cannot be overly inhibiting, and indeed suggests that the interstellar spread of supernova radioisotope ejecta may be a robust process.
We are grateful to Shawn Bishop, Thomas Faestermann, Caroline Fitoussi, Jenny Feige, Gunther Korschinek, Peter Ludwig, and Toni Wallner for answering our questions about their data. It is a pleasure to acknowledge useful discussions with Jesse Miller, Carla Frohlich, Sanjana Curtis, Zhenghai Liu, and Phil Coady. The work of AFE and BDF was supported in part by the NSF under grant number AST-2108589, and benefitted from Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements).
Near-Earth supernova explosions envelop the Solar System and leave traces of their ejecta in terrestrial and lunar records. Current data on live radioactive \({}^{60}\)Fe point to a supernova about 3 Myr ago, and an event about 7 Myr ago has also been found. We use the available measurements to calculate the distances to these events. For the better-analyzed 3 Myr supernova, the samples include deep-sea sediments, ferromanganese crusts, and lunar regolith. Examining the consistency (or inconsistency) of these measurements is strongly affected by iron uptake and possible anisotropies of the \({}^{60}\)Fe fallout. The astronomical parameters required for these calculations also carry significant uncertainties; those that can affect the calculations include the ejected \({}^{60}\)Fe mass and the remnant |
2309.17275 | Utility-based Adaptive Teaching Strategies using Bayesian Theory of Mind | Good teachers always tailor their explanations to the learners. Cognitive
scientists model this process under the rationality principle: teachers try to
maximise the learner's utility while minimising teaching costs. To this end,
human teachers seem to build mental models of the learner's internal state, a
capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build
on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor
their teaching strategies to the learners. Our ToM-equipped teachers construct
models of learners' internal states from observations and leverage them to
select demonstrations that maximise the learners' rewards while minimising
teaching costs. Our experiments in simulated environments demonstrate that
learners taught this way are more efficient than those taught in a
learner-agnostic way. This effect gets stronger when the teacher's model of the
learner better aligns with the actual learner's state, either using a more
accurate prior or after accumulating observations of the learner's behaviour.
This work is a first step towards social machines that teach us and each other,
see https://teacher-with-tom.github.io. | Clémence Grislain, Hugo Caselles-Dupré, Olivier Sigaud, Mohamed Chetouani | 2023-09-29T14:27:53 | http://arxiv.org/abs/2309.17275v1 | # Utility-based Adaptive Teaching Strategies using Bayesian Theory of Mind
###### Abstract
Good teachers always tailor their explanations to the learners. Cognitive scientists model this process under the rationality principle: teachers try to maximise the learner's utility while minimising teaching costs. To this end, human teachers seem to build mental models of the learner's internal state, a capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor their teaching strategies to the learners. Our ToM-equipped teachers construct models of learners' internal states from observations and leverage them to select demonstrations that maximise the learners' rewards while minimising teaching costs. Our experiments in simulated environments demonstrate that learners taught this way are more efficient than those taught in a learner-agnostic way. This effect gets stronger when the teacher's model of the learner better aligns with the actual learner's state, either using a more accurate prior or after accumulating observations of the learner's behaviour. This work is a first step towards social machines that teach us and each other, see [https://teacher-with-tom.github.io](https://teacher-with-tom.github.io).
## 1 Introduction
When tasked with imparting an understanding of the solar system, a physics teacher tailors their explanation based on the audience. The approach taken for a 10-year-old astrophysics enthusiast differs significantly from that employed for an advanced master's student. In fact, the teacher provides an explanation that maximises the likelihood of the listener understanding the concept. This pedagogical sampling phenomenon has been explored in cognitive science notably in Gweon et al. (2018). This study involves children being asked to demonstrate the use of a toy to knowledgeable or ignorant children learners. It shows that the behaviour of the teacher-child depends on prior observations of the learner-child. Specifically, if the learner has previously interacted with a similar toy in the presence of the teacher, the teacher only exhibits partial functionality of the toy. Conversely, when no prior interaction is observed, the teacher demonstrates the complete use of the toy.
By definition, the aim of a teacher is to ensure the learner's understanding. An option for the teacher would be to demonstrate the full functionality of the toy each time, but this comes with a cost. Rather, the teacher strikes a balance between the learner's understanding, reflected in its subsequent behaviour, and the costs of teaching. Assuming the teacher is rational, we can thus consider that this trade-off is the teacher's _utility_(Goodman and Frank, 2016). Importantly, teachers who solely provide the missing information for the learner to achieve the task are also perceived as more trustworthy than over-informative ones (Gweon et al., 2018).
More generally, human teachers choose how to teach based on a prediction of how their guidance signal will be received, as outlined in the Inferential Social Learning (ISL) framework (Gweon, 2021). In this framework, humans acquire knowledge by making inferences from observing others' behaviour and leverage this knowledge to help others learn. More precisely, ISL is grounded on a set of cognitive mechanisms constituting the Theory of Mind (ToM), which refers to the human ability to understand and predict the actions of others by inferring their mental states, such as prior knowledge, goals, intentions, beliefs etc. (Baker and Saxe, 2011). ToM can be understood as the
inverse planning of an intuitive behavioural model predicting what others would do given their mental state (Baker et al., 2009). To be efficient, human pedagogical interventions such as selection of examples (Shafto et al., 2014) or demonstrations (Ho et al., 2021) require ToM. ISL is considered a key component to humans mutual understanding as well as a foundation of humans' powerful capacity to efficiently learn from others. Therefore, incorporating ISL mechanisms into AI systems is a promising way to make human-machine interactions more informative, productive, and beneficial to humans (Gweon et al., 2023; Sigaud et al., 2022).
In this paper, we introduce teacher agents equipped with a ToM model of the learner agent's internal state, including its goal, intention, belief, and sensory capacity. The goal of this work is to study whether learner-specific teachers who model the learner's internal state are more efficient than learner-agnostic ones. In particular, we explore the limitations of ToM models not being able to recover the learner actual internal state from its behaviour, either due to inaccurate priors or limited observation, in a context where providing guidance incurs a cost proportional to its informativeness.
To achieve this, as depicted in Figure 1, we define _ToM-teachers_ able to
1. update a _belief_ about the internal state (i.e. goal, intention, belief, sensory capacity) of an unknown learner through Bayesian inference based on observations of its behaviour in a simple environment, see Figure 1(A), and
2. leverage this belief to estimate the utility of different demonstrations in a more complex environment, similarly to human planning as described in Ho et al. (2022), in order to select the most effective one for the specific observed learner, see Figure 1(B).
To conduct our experiments, we present two environments: a toy environment reminiscent of Gweon's study mentioned above (Gweon et al., 2018), and a more challenging gridworld environment for goal-conditioned 2D navigation, see Figure 1. Depending on its sensory capacity, the learner might require the help of a teacher agent providing a demonstration showing the locations of the objects needed to complete the task. However, the teacher ignores the goal of the learner and its sensory capacity, but can infer them from a past trajectory of the learner in a simpler environment.
In this setup, the teacher must select the most useful demonstration providing enough information to help the learner reach its goal, but at a minimal teaching cost. The demonstration utility is optimal if it contains the necessary and sufficient amount of information for the learner to reach its goal. In this context, we show that the teacher must display accurate ISL abilities, inferring the learner's goal and sensory capacity from the past trajectory to effectively assist the learner. However, we find that this depends on the accuracy of the ToM-teacher's behavioural model of the learner as well as the amount of observation of its behaviour.

Figure 1: (A) The teacher observes a learner with a particular internal state behaving in a simple environment \(\mathcal{M}^{\text{obs}}\) and infers a ToM model of this learner. (B) In a more complex environment \(\mathcal{M}^{\text{demo}}\), the teacher uses this ToM model to predict the usefulness for the observed learner of each demonstration of a provided dataset \(\mathcal{D}\), out of which it selects the utility-optimal demonstration \(d^{*}\). The learner observes \(d^{*}\) and updates its knowledge about \(\mathcal{M}^{\text{demo}}\). (C) The learner behaves in \(\mathcal{M}^{\text{demo}}\) and receives a reward. The teacher is evaluated on the utility of \(d^{*}\), which is the learner’s reward minus the cost incurred by the teacher in delivering that demonstration.
## 2 Related work
In addition to cognitive science researches on human pedagogy (Shafto et al., 2014; Gweon, 2021; Ho et al., 2021), this work is related to the following interconnected research areas:
**Theory of Mind (ToM):** Observer agents capable of inferring the internal state, including the goal, of another agent have been developed based on Bayesian Inference (Ying et al., 2023; Reddy et al., 2019) and neural networks (Rabinowitz et al., 2018; Nguyen et al., 2022). However, these works do not explore how to leverage these models of ToM to assist the learner in achieving its goal, as humans do, as explained in Ho et al. (2022). Our teacher agent is capable of both modelling the learner's internal state, including its goal as well as sensory capacity, and leveraging this model to assist the learner through adapted demonstration selection.
**Machine teaching:** Machine Teaching is formalised as the problem of identifying the minimal teaching signal maximising the learner's reward (Zhu et al., 2018; Brown and Niekum, 2019). The teacher possesses knowledge of the learner's goal and aims to either generate the teaching data (Zhu, 2013) or to extract it from a dataset (Yang and Shafto, 2017), helping the learner agent achieve its goal. A teaching signal is considered optimally useful if it maximises utility, that is it enables the learner to achieve its goal while minimising the teaching cost (Zhu et al., 2018). In our framework as in Machine Teaching, the teacher must select the most helpful demonstration from a given set. However, in contrast to these previous works, our teacher assists various learners with different goals and sensory capacities, and thus different optimal demonstrations. Furthermore, when teaching, the teacher is unaware of the learner's goal and infers it from past interactions, hence the introduction of a ToM model of the learner. The demonstration selection strategy of our teacher is similar to the one used in cognitive science to model human's strategy as described in Ho et al. (2022): it uses the learner's ToM model to predict the outcomes of different possible demonstrations for the learner, in order to select the demonstration of optimal utility. While our work uses communication through demonstrations as sequences of actions, enhancing teaching by incorporating ToM model of the learner has already been investigated in the context of language-based teacher-learner communication in Zhao et al. (2023); Zhou et al. (2023).
**Bayesian Inference:** Bayesian Inference is a widely used mechanism for inferring the goals of other agents by computing posterior probabilities based on their actions and policies (Baker et al., 2009; Baker and Saxe, 2011; Zhi-Xuan et al., 2020; Ying et al., 2023). In our work, we employ it as a tool to infer the internal state of the learner, including its goal and sensory capacity. Additionally, similarly to Zhu (2013); Ho et al. (2022), we assume a Bayesian learner to ensure direct communication from the teacher to the learner as the demonstration selected by the teacher modifies the belief of the learner about the environment.
## 3 Methods
Our general framework is depicted in Figure 1. Below we describe the components in more details.
### 3.1 Environment
We introduce our environment as a Goal-Conditioned Partially Observable Markov Decision Problem (GC-POMDP), which is a combination of a Goal-Conditioned Markov Decision Problem (GC-MDP) and a Partially Observable Markov Decision Problem (POMDP). In GC-POMDPs, agents aim at achieving different goals with limited information on the current state of the environment. An instance \(\mathcal{M}^{j}\) of a GC-POMDP is defined by:
\(\bullet\) A set of states \(\mathcal{S}^{j}\), a set of possible actions \(\mathcal{A}^{j}\), a transition function \(\mathcal{T}^{j}:\mathcal{S}^{j}\times\mathcal{A}^{j}\rightarrow\mathcal{S}^{j}\),
\(\bullet\) A set of possible goals \(\mathcal{G}^{j}\),
\(\bullet\) A history-dependent goal-conditioned reward function \(R^{j}:\mathcal{H}^{j}\times\mathcal{G}^{j}\rightarrow\mathbb{R}\), where \(\mathcal{H}^{j}\) is the space of histories. We define a _history_ as a sequence of state-action pairs over time, which can be formulated as \(\mathcal{H}^{j}=\bigcup_{t}\mathcal{H}^{j}_{t}\) in which \(\mathcal{H}^{j}_{t}=\{(s_{0},a_{0},\ldots,s_{t-1},a_{t-1})\}=\big{(}\mathcal{S}^{j}\times\mathcal{A}^{j}\big{)}^{t}\).
We consider that all GC-POMDPs share their action and goal spaces denoted \(\mathcal{A}\) and \(\mathcal{G}\). In summary, a GC-POMDP is defined as \(\mathcal{M}^{j}=(\mathcal{S}^{j},\mathcal{A},\mathcal{T}^{j},\mathcal{G},R^{j})\).
In practice, our GC-POMDPs are different instances of similar gridworld environments constructed from the MiniGrid library (Chevalier-Boisvert et al., 2023). Another example with a toy environment is described in Appendix A.
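For concreteness, the tuple \(\mathcal{M}^{j}=(\mathcal{S}^{j},\mathcal{A},\mathcal{T}^{j},\mathcal{G},R^{j})\) can be summarised as a small data structure. The sketch below is purely illustrative (the actual environments are MiniGrid instances, and all type and field names are placeholders):

```python
# Minimal sketch of the GC-POMDP tuple M^j = (S^j, A, T^j, G, R^j); illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

State = Tuple[int, ...]                    # e.g. a flattened grid configuration
Action = int                               # shared action space A
Goal = str                                 # shared goal space G (e.g. a door colour)
History = Sequence[Tuple[State, Action]]   # element of H^j_t

@dataclass
class GCPOMDP:
    states: List[State]                            # S^j
    actions: List[Action]                          # A (shared across environments)
    transition: Callable[[State, Action], State]   # T^j
    goals: List[Goal]                              # G (shared across environments)
    reward: Callable[[History, Goal], float]       # R^j, history- and goal-dependent
```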
### 3.2 Learner
We consider a finite family of agents \(\mathcal{L}=\{L_{i},i\in I\}\) that we call _learners_. A learner \(L_{i}\) is defined by a goal \(g_{i}\in\mathcal{G}\) and an observation function \(v_{i}\), i.e. \(L_{i}=(g_{i},v_{i})\).
In an environment \(\mathcal{M}^{j}=(\mathcal{S}^{j},\mathcal{A},\mathcal{T}^{j},\mathcal{G},R^{j})\), the observation function is defined on the state space towards an observation space \(\Omega_{i}\), \(v_{i}:\mathcal{S}^{j}\rightarrow\Omega_{i}\). The set of observation functions is denoted \(\mathcal{V}\) and is assumed to be identical for all the considered GC-POMDPs. The aim of the learner is to maximise the reward functions \(R^{j}\), conditioned on the learner's goal \(g_{i}\). In practice, the learner must achieve its goal in minimum time to maximise its reward. We characterise the behaviour of a learner \(L_{i}\) on \(\mathcal{M}^{j}\) as a trajectory \(\tau_{i}=\{(s_{t},a^{i}_{t})\in\mathcal{S}^{j}\times\mathcal{A}\}_{t=0}^{T}\). For the same trajectory, two learners \(L_{i}\) and \(L_{i^{\prime}}\) with different observation functions \(v_{i}\neq v_{i^{\prime}}\) acquire different knowledge about the environment, and two learners with different goals \(g_{i}\neq g_{i^{\prime}}\) receive different rewards.
As shown in Kaelbling et al. (1998); Ross et al. (2007), a POMDP, and by extension a GC-POMDP, can be defined as a Bayes Adaptive Partially Observable Markov Decision Problem (BAPOMDP). In this formulation, the observation is augmented by a belief of the agent about uncertain aspects of the environment, such as the reward function, transition function, or state. In our context, from the learner's point of view, the uncertainty is limited to the state of the environment.
To model learner's \(L_{i}\) policy, we thus consider at every step \(t\) its _belief_\(b^{i,j}_{t}\), which is a probability distribution over a set of possible states \(\mathcal{S}^{j}_{B}\) of environment \(\mathcal{M}^{j}\). We assume that the support of the belief contains the real state space, \(\mathcal{S}^{j}\subset\mathcal{S}^{j}_{B}\) and note \(\mathcal{B}^{j}\) the continuous space of beliefs.
At every step \(t\), the environment being in a state \(s_{t}\in\mathcal{S}^{j}\) and the observation being \(o^{i}_{t}=v_{i}(s_{t})\), the belief of learner \(L_{i}\) about the state \(s\in\mathcal{S}^{j}_{B}\) of the environment is updated using Bayesian update:
\[\forall s\in\mathcal{S}^{j}_{B},\quad b^{i,j}_{t+1}(s)=\frac{b^{i,j}_{t}(s) \times\mathbb{P}(o^{i}_{t}|s)}{\int_{s^{\prime}\in\mathcal{S}^{j}_{B}}b^{i,j}_ {t}(s^{\prime})\times\mathbb{P}(o^{i}_{t}|s^{\prime})}. \tag{1}\]
Unless mentioned otherwise, we assume that the learner's initial belief \(b^{i,j}_{0}\) on the state of \(\mathcal{M}^{j}\) is uniform over the set of possible states \(\mathcal{S}^{j}_{B}\). In the experiments presented below, we additionally assume that all learners share a policy on the environment \(\mathcal{M}^{j}\) conditioned by a goal, an observation function and a belief:
\[\pi^{j}(.|g,v,b^{L}):\cup_{i}\Omega_{i}\times\mathcal{A}\rightarrow[0,1],\quad \text{with }(g,v,b^{L})\in\mathcal{G}\times\mathcal{V}\times\mathcal{B}^{j}. \tag{2}\]
To simulate a trajectory \(\tau^{i}\) of learner \(L_{i}\) on \(\mathcal{M}^{j}\), one only needs to know the tuple \((\pi^{j},g_{i},v_{i},b^{i,j}_{0})\). In practice, the learners use a single policy denoted \(\pi\) for all the considered GC-POMDPs.
Moreover, within MiniGrid environments, the observation functions \(v_{i}\) are defined by a square area of size \(v_{i}\times v_{i}\) cells, known as the _receptive field_ of learner \(L_{i}\). This receptive field defines the localised region in front of the learner, mimicking visual sensory capacities and a larger receptive field size helps the learner reach its goal faster.
We denote \(C^{i}_{t}\) the set of visible cells in observation \(o^{i}_{t}\) at time \(t\). The probability \(\mathbb{P}(o^{i}_{t}|s)\) in Equation 1 is then computed as \(\mathbb{P}(o^{i}_{t}|s)=\prod_{c\in C^{i}_{t}}\mathds{1}(o^{i}_{t}[c_{o}]=s[c])\), where \(c_{o}\) corresponds to the cell in the observation matching cell \(c\).
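Putting Equation 1 and this per-cell likelihood together, the learner's belief update can be sketched in a few lines. This is an illustrative sketch, not the implementation used in the experiments; states and observations are represented here as dictionaries mapping cell indices to their contents:

```python
import numpy as np

def likelihood(obs, state, visible_cells):
    """P(o | s): 1 if the observation agrees with the state on every visible cell, else 0."""
    return float(all(obs[c] == state[c] for c in visible_cells))

def update_belief(belief, candidate_states, obs, visible_cells):
    """One step of Equation 1: b_{t+1}(s) is proportional to b_t(s) * P(o_t | s)."""
    lik = np.array([likelihood(obs, s, visible_cells) for s in candidate_states])
    posterior = belief * lik
    z = posterior.sum()
    if z == 0:   # should not happen when the true state is in the belief support
        return belief
    return posterior / z

# Example: two candidate states over two cells; the learner currently sees cell 0 only.
states = [{0: "key", 1: "door"}, {0: "wall", 1: "door"}]
b0 = np.array([0.5, 0.5])
print(update_belief(b0, states, obs={0: "key"}, visible_cells=[0]))   # -> [1. 0.]
```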
### 3.3 Teacher
We introduce an agent called _teacher_ whose aim is to optimally help the learner maximise its reward on a GC-POMDP \(\mathcal{M}^{\text{demo}}=(\mathcal{S}^{\text{demo}},\mathcal{A},\mathcal{T}^{ \text{demo}},\mathcal{G},R^{\text{demo}})\) by providing a demonstration.
#### 3.3.1 Utility based demonstration selection strategy
We define a demonstration of length \(n\in\mathbb{N}\) on \(\mathcal{M}^{\text{demo}}\) as a sequence of actions \(d=(a_{0}^{\text{demo}},\dots,a_{n-1}^{\text{demo}})\in(\mathcal{A})^{n}\). We consider the demonstration to be provided as if the teacher were _teleoperating_ the learner as described in Silva and Costa (2019). Thus, at step \(t\) of the demonstration, learner \(L_{i}\) observes \(\bar{o}_{t+1}^{i}=v_{i}\left(\mathcal{T}^{\text{demo}}(s_{t},a_{t}^{\text{demo}})\right)\). The learner's belief about the new environment \(\mathcal{M}^{\text{demo}}\) is updated based on the observations \((\bar{o}_{1}^{i},\dots,\bar{o}_{n}^{i})\) resulting from the demonstration, as in Equation 1 and depicted in Figure 1(B).
This updated belief is then used as initial belief \(b_{0}^{i,\text{demo}}\) by the learner. In other words, the aim of the demonstration is to provide the learner with prior knowledge about the new environment. The environment is then reset to its initial state, and the learner behaves following a policy \(\pi^{\text{demo}}\) defined in Equation 2 starting with belief \(b_{0}^{i,\text{demo}}\). As shown in Figure 1(C), the execution of this policy produces a trajectory \(\tau^{\text{demo}}=\{(s_{t}^{\text{demo}},a_{t}^{\text{demo}})\}_{t=0}^{T}\) where \(T\in\mathbb{N}\) and the learner receives a reward \(R^{\text{demo}}(\tau^{\text{demo}},g_{i})\) denoted \(R^{\text{demo}}(L_{i}|d)\), which represents the reward of learner \(L_{i}\) on environment \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\).
We assume that the teacher knows the environment \(\mathcal{M}^{\text{demo}}\) and has access to a set of potential demonstrations \(\mathcal{D}\) to be shown on \(\mathcal{M}^{\text{demo}}\), as well as a teaching cost function \(c_{\alpha}:\mathcal{D}\rightarrow\mathbb{R}\) parameterised by \(\alpha\in\mathbb{R}_{+}\). For a given parameter \(\alpha\), the cost of a demonstration \(d\in\mathcal{D}\), denoted \(c_{\alpha}(d)\), represents the cost for the teacher of showing demonstration \(d\) to a learner. In our context, this function increases with the length of the demonstration.
We introduce on the environment \(\mathcal{M}^{\text{demo}}\) the _utility_ of a demonstration \(d\) for a learner \(L_{i}\) as the reward of the learner after having observed the demonstration \(d\) on \(\mathcal{M}^{\text{demo}}\) minus the cost for the teacher of showing this demonstration: \(u_{\alpha}(d,L_{i})=R^{\text{demo}}(L_{i}|d)-c_{\alpha}(d)\). The aim of the teacher is to select the demonstration \(d_{i}^{*}\) that maximises the utility for the learner \(L_{i}\):
\[d_{i}^{*}=\arg\max_{d\in\mathcal{D}}\underbrace{u_{\alpha}(d,L_{i})}_{R^{ \text{demo}}(L_{i}|d)-c_{\alpha}(d)}. \tag{3}\]
However, the teacher knows neither the learner's goal \(g_{i}\) nor its observation function \(v_{i}\). Instead, it can only access a past trajectory \(\tau^{\text{obs}}\) of the same learner \(L_{i}\), but in a different environment \(\mathcal{M}^{\text{obs}}=(\mathcal{S}^{\text{obs}},\mathcal{A},\mathcal{T}^{\text{obs}},\mathcal{G},R^{\text{obs}})\), see Figure 1(A). Therefore, in order to approximate Equation 3, the teacher should estimate the utility of each demonstration \(d\) in \(\mathcal{D}\) for this learner, see Figure 1(B). As the teacher knows the teaching cost function, this is equivalent to estimating the learner's reward.
#### 3.3.2 Bayesian ToM-teacher
To estimate the utility of a demonstration \(d\) for an unknown learner \(L\), we introduce a teacher equipped with a Theory of Mind (ToM) model that we refer to as _ToM-teacher_. In our case, the ToM model is used to predict the learner's behaviour on \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\), leading to the estimation of the demonstration's utility.
We present a ToM-teacher using Bayesian inference, called _Bayesian ToM-teacher_. We assume that the teacher has knowledge of the learner's uniform initial belief and has access to a behavioural model of the learner - that is an approximation of its policy \(\hat{\pi}\) - along with sets of possible goals \(\mathcal{G}_{B}\) and observation functions \(\mathcal{V}_{B}\). These spaces are assumed discrete.
In practice, the latter set represents a range of possible sizes of receptive fields. We assume that both sets contain the real sets of goals and observation functions (\(\mathcal{G}\subset\mathcal{G}_{B}\) and \(\mathcal{V}\subset\mathcal{V}_{B}\)). In this context, from the teacher's perspective, the uncertainty concerns only the goals and observation functions of the learners. Therefore, a teacher considers learner \(L_{i}\) as the tuple \((\hat{\pi},g_{i},v_{i})\).
From a past trajectory \(\tau^{\text{obs}}=\{(s_{k},a_{k}^{\text{obs}})\}_{k=0}^{K-1}\) of an unknown learner \(L\) on the first environment \(\mathcal{M}^{\text{obs}}\), the Bayesian ToM-teacher computes a probability distribution over the joint space \(\mathcal{G}_{B}\times\mathcal{V}_{B}\)
that is its belief \(b^{T}\) about the goal and observation function of the learner. At step \(k\in[0,K-1]\) of the observed trajectory \(\tau^{\text{obs}}\), for every pair \((g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B}\), it derives from Equation 1 the belief that a learner would have with observation function \(v\) after producing the trajectory \(\tau^{\text{obs}}[0:k-1]\), denoted \(b^{v}_{k}\). It then updates its own belief about the learner goal and observation function based on the Bayesian update rule:
\[\forall(g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B},\quad b^{T}_{k+1}(g,v)= \frac{b^{T}_{k}(g,v)\times\hat{\pi}\left(v(s_{k-1}),a^{\text{obs}}_{k}|g,b^{v} _{k}\right)}{\sum_{g^{\prime}\times v^{\prime}\in\mathcal{G}_{B}\times \mathcal{V}_{B}}b^{T}_{k}(g^{\prime},v^{\prime})\times\hat{\pi}\left(v^{\prime }(s_{k-1}),a^{\text{obs}}_{k}|g^{\prime},b^{v}_{k}\right)}. \tag{4}\]
The quantity \(b^{T}_{k}(g,v)\) represents the probability of the learner having a goal \(g\) and an observation function \(v\), given that it produced trajectory \(\tau^{\text{obs}}[0:k-1]\), under the assumption that, to generate \(\tau^{\text{obs}}[0:k-1]\), the learner follows policy \(\hat{\pi}\).
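The update in Equation 4 is a Bayesian filter over the discrete grid of goal/observation-function hypotheses. A minimal sketch follows (names are placeholders; `policy_prob` stands for the behavioural model \(\hat{\pi}\) and `b_under_v` for the replayed learner beliefs \(b^{v}_{k}\)):

```python
def update_teacher_belief(bT, hypotheses, s_prev, a_obs, b_under_v, policy_prob):
    """
    One step of Equation 4 over a discrete hypothesis grid G_B x V_B.
    bT:          dict {(g, v): prob}, the teacher's current belief
    hypotheses:  iterable of (g, v) pairs; v is an observation function (callable)
    s_prev:      previous environment state s_{k-1}
    a_obs:       observed learner action a_k^obs
    b_under_v:   dict {v: b_k^v}, the learner belief replayed under each v (Eq. 1)
    policy_prob: callable (obs, action, g, b) -> probability under pi_hat
    """
    new = {}
    for (g, v) in hypotheses:
        new[(g, v)] = bT[(g, v)] * policy_prob(v(s_prev), a_obs, g, b_under_v[v])
    z = sum(new.values())
    return {h: p / z for h, p in new.items()} if z > 0 else dict(bT)
```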
After having observed the entire trajectory, the teacher estimates the utility of a demonstration \(d\in\mathcal{D}\) on a second environment \(\mathcal{M}^{\text{demo}}\) for the observed learner by computing the expected value:
\[\hat{u}_{\alpha}(d)=\sum_{(g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B}}\hat{u} _{\alpha}\left(d,L=(g,v)\right)\times b^{T}_{K}(g,v), \tag{5}\]
where \(\hat{u}_{\alpha}(d,L)\) is the estimated utility of demonstration \(d\) for learner \(L=(\hat{\pi},g,v)\). To compute this quantity, the teacher computes the initial belief \(b^{v,\text{demo}}_{0}\) of the learner \(L=(g,v)\) on the environment \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\), based on Equation 1. From the tuple \((\hat{\pi},g,v,b^{v,\text{demo}}_{0})\), the teacher simulates a trajectory \(\hat{\tau}^{\text{demo}}\) and computes the associated estimated reward \(\hat{R}^{\text{demo}}(L|d)=R^{\text{demo}}(\hat{\tau}^{\text{demo}},g)\) leading to the estimated utility \(\hat{u}_{\alpha}(d,L)=\hat{R}^{\text{demo}}(L|d)-c_{\alpha}(d)\). The expected utility can be expressed as the expected reward of the observed learner after following demonstration \(d\) minus the cost of the demonstration:
\[\hat{u}_{\alpha}(d)=\underbrace{\left(\sum_{(g,v)\in\mathcal{G}_{B}\times \mathcal{V}_{B}}\hat{R}^{\text{demo}}(L=(g,v)|d)\times b^{T}_{K}(g,v)\right)}_{ \text{Expected reward}}-c_{\alpha}(d). \tag{6}\]
The teacher selects the utility-optimal demonstration \(d^{*}\), approximating Equation 3 with \(d^{*}=\arg\max_{d\in\mathcal{D}}\hat{u}_{\alpha}(d)\).
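Equations 5 and 6 then amount to averaging simulated rewards under this belief and taking an argmax over the demonstration set. A compact sketch, where the roll-out `simulate_reward` of the ToM model is assumed to be given:

```python
def select_demonstration(demos, bT, simulate_reward, cost):
    """
    Equations 5-6: d* = argmax_d  sum_{(g,v)} bT(g,v) * R_hat((g,v) | d)  -  c_alpha(d).
    simulate_reward: callable (d, g, v) -> reward estimated by rolling out the ToM
                     model (pi_hat, g, v) after it observes d; assumed given here.
    cost:            callable d -> teaching cost c_alpha(d).
    """
    def expected_utility(d):
        expected_reward = sum(p * simulate_reward(d, g, v) for (g, v), p in bT.items())
        return expected_reward - cost(d)
    return max(demos, key=expected_utility)
```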
We define two ToM-teachers which differ in their prior model of the learner's policy \(\hat{\pi}\):
\(\bullet\) The _aligned ToM-teacher_ possesses exact knowledge of the learner's policy, \(\hat{\pi}=\pi\).
\(\bullet\) The _rational ToM-teacher (with parameter \(\lambda\))_ only assumes that the learner is rational, meaning it tries to reach the goal in minimum time, but its approximate policy \(\hat{\pi}\neq\pi\) is based on a Boltzmann policy that considers the expected distance between the learner and the goal after taking different actions. The temperature parameter \(\lambda\) of the Boltzmann policy represents the assumed degree of rationality of the learner in terms of how much the learner favours actions towards its goal, see Appendix B.3 for more details.
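As an illustration of such an approximate behavioural model, the sketch below implements a Boltzmann policy over the expected distance to the goal after each action; it is a simplified stand-in, not the exact scoring of Appendix B.3:

```python
import numpy as np

def boltzmann_action_probs(expected_dist_after, actions, lam=0.01):
    """
    Simplified stand-in for the rational ToM-teacher's assumed learner policy:
    actions that reduce the expected distance to the goal get higher probability.
    A small temperature lam corresponds to a near-greedy, highly rational learner.
    """
    scores = np.array([-expected_dist_after(a) for a in actions], dtype=float)
    logits = scores / lam
    logits -= logits.max()            # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: three actions leaving the learner 2, 3 and 5 cells away from its goal.
dists = {0: 2.0, 1: 3.0, 2: 5.0}
print(boltzmann_action_probs(lambda a: dists[a], [0, 1, 2], lam=0.5))
```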
## 4 Experiments
**Environments:** The observation environment \(\mathcal{M}^{\text{obs}}\) is an \(11\times 11\) MiniGrid gridworld (Chevalier-Boisvert et al., 2023) and is enclosed by walls along its borders. The environment contains four door-key pairs with colours in the set \(\mathcal{G}=\{green,blue,purple,yellow\}\). To open a door, an agent has to possess the key of the same colour. The demonstration environment \(\mathcal{M}^{\text{demo}}\) contains the same objects as the observation environment but over \(33\times 33\) cells. It is composed of nine rooms of \(11\times 11\) cells, separated by walls. In both environments, a trajectory stops either when the learner opens its goal door or when the maximum number of actions has elapsed.
**Learner:** The learner's goal is to open a door as fast as possible. To model this, we use the default goal-conditioned trajectory reward function of the MiniGrid environments:
\(R(\tau,g)=1-\frac{\text{length}(\tau)}{\text{max\_steps}}\) if the door of colour \(g\in\mathcal{G}\) is open at the end of trajectory \(\tau\), and \(R(\tau,g)=0\) otherwise. In \(\mathcal{M}^{\text{obs}}\), we set \(\text{max\_steps}=11^{2}=121\), and in \(\mathcal{M}^{\text{demo}}\), we use \(\text{max\_steps}=\frac{33^{2}}{2}\approx 544\).
The learner possesses either a view with dimensions \(v\times v\) cells with \(v\in\{3,5\}\) or full observability (\(v=full\_obs\)) of the environment.
We define the learner's policy as a decision tree depicted in Appendix B.1. We assume that the learner attempts to reach the corresponding key before trying to open the door, acts greedily when it knows the location of the object to reach, and actively explores otherwise. The greedy policy follows the shortest path to the object, computed by the \(A^{*}\) algorithm (Hart et al., 1968) within the parts of the environment that have already been discovered. The active exploration policy selects actions best reducing the uncertainty on the environment state.
**Teachers:** As defined above in Section 3.3, we consider two teachers equipped with a ToM model of the learner, an _aligned ToM-teacher_ and a _rational ToM-teacher_ with a parameter \(\lambda\). We compare the utilities of their demonstrations to those of five baseline teachers: one providing an upper bound, and four learner-agnostic teachers which do not leverage the past observations of the learner in their strategies for demonstration selection:
The _omniscient teacher_ knows the actual goal, observation function and policy of the learner and provides the utility-optimal demonstration. It sets an upper bound on the teachers' utilities.
The _reward-optimal non-adaptive teacher_ selects the demonstration in \(\mathcal{D}\) maximising the mean reward over all the possible learners without considering the teaching cost. In practice, this teacher provides the demonstration showing all the objects (keys and doors) of the environment.
The _utility-optimal non-adaptive teacher_ selects the demonstration in \(\mathcal{D}\) maximising the mean utility over all possible learners.
The _uniform modelling teacher_ uniformly samples a learner in \(\mathcal{L}\): it uniformly samples a goal \(g\) and a receptive field size \(v\) for the observed learner and provides the demonstration maximising the utility for \(L=(g,v)\).
The _uniform sampling teacher_ selects a demonstration uniformly among the set \(\mathcal{D}\) of available demonstrations. This teacher does not have any model of the learner.
**Demonstration set:** The demonstration set \(\mathcal{D}\) contains shortest demonstrations for each goal-observation function pairs \((g,v)\in\mathcal{G}\times\mathcal{V}\) showing the learner's key and door goal at a distance of at least \(v\). In addition, we generate demonstrations showing \(N\in[3,8]\) random objects (key or door) of the environment, see Appendix B.2 for details. We use a linear teaching cost with parameter \(\alpha=0.6\) normalised by the size \(l_{max}\) of the longest demonstration of \(\mathcal{D}\). For a demonstration of length \(l_{d}\), the teaching cost is \(c_{\alpha}(l_{d})=\alpha\times\frac{l_{d}}{l_{max}}\). In practice, the longest demonstration is the one showing all objects, \(N=8\).
**Metrics:** A teacher is evaluated based on the measured utility of the demonstration it has selected for the observed learner \(L\), given by \(u_{\alpha}(d^{*},L)=R^{\text{demo}}(L|d^{*})-c_{\alpha}(d^{*})\).
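The teaching cost and this metric are simple enough to state directly in code (a small sketch with \(\alpha=0.6\) as the default, matching the experiments):

```python
def teaching_cost(demo_len, longest_len, alpha=0.6):
    """Linear cost c_alpha(d) = alpha * l_d / l_max used in the experiments."""
    return alpha * demo_len / longest_len

def measured_utility(reward_after_demo, demo_len, longest_len, alpha=0.6):
    """Evaluation metric u_alpha(d*, L) = R_demo(L | d*) - c_alpha(d*)."""
    return reward_after_demo - teaching_cost(demo_len, longest_len, alpha)
```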
**Experiments**: We conducted \(100\) experiments for each pair \((g,v)\in\mathcal{G}\times\mathcal{V}\). The mean utilities of the demonstrations selected by the teachers for learners with a fixed receptive field size \(v\) are displayed in Figure 2 and detailed in Appendix C Table 1. They are computed over \(400\) trials with a \(95\%\) confidence interval and we perform Student T-tests to assess significant difference between the mean utilities of two teachers. In each trial, both the observation and demonstration environments are randomly generated, and all teachers are evaluated within the same environment pair (\(\mathcal{M}^{\text{obs}},\mathcal{M}^{\text{demo}}\)) - all teachers select a demonstration from the same demonstration set \(\mathcal{D}\), and the ToM-teachers observe the same trajectory of the learner on \(\mathcal{M}^{\text{obs}}\).
## 5 Results
We provide results when the learners are observed under two conditions: for a full episode or for only their \(10\) first actions, leading to more uncertain inference about their goals and sensory capacities.
### 5.1 Observing a full trajectory of the learner
Figure 2 illustrates the mean utility of the demonstrations selected by each teacher, for learners with varying receptive field sizes acting in \(\mathcal{M}^{\text{obs}}\) during a full episode.
Across all the considered learners with varying receptive field sizes, the demonstrations chosen by the ToM-teachers outperform those of learner-agnostic baseline teachers. As the task difficulty increases for the learner (i.e., when its receptive field size decreases), the learner requires both more informative and more specific demonstrations to achieve its goal. Consequently, having an accurate model of the learner becomes necessary to ensure the selection of helpful demonstrations.
The mean utility of aligned ToM-teachers is not significantly different from that of the omniscient demonstrations (p-values \(>0.3\))1 for learners with receptive field of sizes \(3\) and \(5\). In contrast, uniform teachers select demonstrations with close-to-null mean utility for learners with a receptive field size of \(3\) and demonstrations that are four times less useful than those of the ToM-teachers for learners with receptive field size of \(5\). The utility-optimal and reward-optimal non-adaptive teachers perform at most half as well as the ToM-teachers for these learners, see Appendix C Table 1.
Footnote 1: A t-test with null hypothesis \(H_{0}\): there is no significant difference between the utilities of both teachers.
On the contrary, as the task becomes easier for the learners (with wider sensory capacities), the mean utilities of the demonstrations selected by learner-agnostic teachers get closer to those of the ToM and omniscient teachers' demonstrations, as the need for selecting a specific demonstration based on an accurate model of the learner decreases. In fact, with full observability, any demonstration from the demonstration set suffices for the learner to reach the goal.
With a teaching cost of \(\alpha=0.6\) it is worth noting that the utility-optimal non-adaptive teacher tends to select less informative demonstrations (with low teaching cost) leading to higher mean utility for learners with full observability and lower mean utility for learners with a limited view. Selecting the demonstration maximising the mean reward over the learners proves to be too expensive and consistently results in poor utility. We further discuss the teaching cost parameter in Appendix F.
The precision of the ToM-teacher's behavioural model of the learner (i.e. its policy) directly impacts the utility of the selected demonstrations. The aligned ToM-teacher selects more beneficial demonstrations on average than the rational ToM-teacher which relies on an approximation of the learner's policy, for learners with receptive field of sizes \(3\) and \(5\) (p-values \(<0.01\)) and their utilities are not significantly different for learner with full observability (p-value \(>0.15\)), see Appendix C Table 1.
A high degree of accuracy of the ToM-teacher's model of the learner's behavioural policy enhances belief updates of Equation 4, resulting in more accurate modelling of the learner's internal state. To illustrate this, we derive in Appendix D explicit inferences regarding the learner's goal and receptive field size from ToM-teachers beliefs featuring varying degrees of accuracy.
Figure 2: Mean utilities and 95% confidence interval of ToM-teachers (rational teacher with parameter \(\lambda=0.01\)) and baseline teachers for learners with varying receptive field sizes of \([3,5,full\_obs]\) observed on \(\mathcal{M}^{\text{obs}}\) during a full episode.
### 5.2 Limited observation of the learner
Now, instead of having access to the entire trajectory \(\tau^{\text{obs}}\) of the learner in \(\mathcal{M}^{\text{obs}}\), the teacher only has access to its first \(10\) actions, that is the partial trajectory \(\tau^{\text{obs}}[:10]\).
As expected, with limited information about the learner, both ToM-teachers select demonstrations achieving mean utilities that are further away from the utility of the omniscient teacher's demonstrations. Nonetheless, the aligned ToM-teacher still outperforms the learner-agnostic teachers on average for all the considered learners, as depicted in Figure 3.
However, relying solely on the hypothesis that the learner is highly rational is not enough to accurately model its internal state when only limited observations of its behaviour are available. In fact, the utility of the demonstration selected by the rational ToM-teacher with low temperature parameter \(\lambda=0.01\) decreases approximately by \(100\%\), \(75\%\) and \(25\%\) for learners with receptive field sizes of 3, \(5\) and full observability, see Appendix C Table 2. As detailed in Appendix E, with the approximate learner's policy, the rational ToM-teacher misinterprets the learner's behaviour. This leads to incorrect conclusions about the learner's internal state and, consequently, inaccurate demonstration selection. As a result, the performance of the rational teacher is not significantly different from that of the uniform modelling teacher for learners with limited view (p-values \(>0.15\)) but significantly lower for learners with full observability (p-value \(<0.01\)).
Furthermore, in this limited information context, providing the demonstration maximising the mean utility on all the learners proves to be more useful that relying on an imprecise behavioural model of the learner. For all considered learners, the utility-optimal non-adaptive teacher significantly outperforms the rational ToM-teacher (p-values \(<0.01\)), see Appendix C Table 2.
## 6 Conclusion and future works
In this work, we have studied the integration of ISL mechanisms for teaching learners with different goals, beliefs or sensory capacities. We integrated a Theory of Mind model using Bayesian inference into a teacher agent to infer the learner's internal state and adapt its teaching strategy. We demonstrated that leveraging this ToM model, combined with a behavioural model of the learner, is more efficient than adopting learner-agnostic teaching strategies. We also explored the limitations of ToM models with limited observation of the learner and approximate behavioural models. In summary, we have shown that machine ISL can enhance knowledge transmission between AI systems, and we are convinced that it represents a pathway toward richer and more trustworthy knowledge exchange between AI systems and humans (Gweon et al., 2023; Sigaud et al., 2022).
There are many exciting directions for future work, particularly towards more tractable models of ToM mechanisms in higher-dimensional environments, for example, using variational methods (Zintgraf et al., 2020) or ensembling to approximate Bayesian inference. Another direction for future research is to employ reinforcement learning to train the teacher to generate the appropriate demonstration as done in Caselles-Dupré et al. (2022), rather than selecting demonstrations from a provided set. Finally, the prior information introduced in the teacher's Bayesian ToM model of the learners, particularly through belief supports, could be reduced by employing deep neural network-based ToM models as in Rabinowitz et al. (2018).

Figure 3: Mean utilities and 95% confidence interval of teachers as in Figure 2 observed on \(\mathcal{M}^{\text{obs}}\) during the \(10\) first steps of an episode (\(\tau^{\text{obs}}[:10]\)).
## Acknowledgements
We thank Cedric Colas for useful discussions and feedback. This work has received funding from the European Commission's Horizon Europe Frameworks Program under grant agreements \(N^{o}\) 101070381 (PILLAR-robots) and \(N^{o}\) 101070596 (euRobin), European Union's Horizon 2020 ICT-48 research and innovation actions under grant agreement No 952026 (HumanE-AI-Net). This work was performed using HPC resources from GENCI-IDRIS (Grant 2022-[A0131013011]).
Good teachers always tailor their explanations to the learner. Cognitive scientists model this process under the rationality principle: teachers aim to maximise the learner's utility while minimising teaching costs. To this end, human teachers build mental models of the learner's internal state, a capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor their teaching strategies to the learner. These ToM-equipped teachers construct models of the learner's internal state from observations and use them to select demonstrations that maximise the learner's reward while minimising teaching costs. Our experiments in simulated environments show that learners taught this way are more efficient than learners taught in a learner-agnostic way. This effect |
2309.08355 | Semi-supervised Sound Event Detection with Local and Global Consistency
Regularization | Learning meaningful frame-wise features on a partially labeled dataset is
crucial to semi-supervised sound event detection. Prior works either maintain
consistency on frame-level predictions or seek feature-level similarity among
neighboring frames, which cannot exploit the potential of unlabeled data. In
this work, we design a Local and Global Consistency (LGC) regularization scheme
to enhance the model on both label- and feature-level. The audio CutMix is
introduced to change the contextual information of clips. Then, the local
consistency is adopted to encourage the model to leverage local features for
frame-level predictions, and the global consistency is applied to force
features to align with global prototypes through a specially designed
contrastive loss. Experiments on the DESED dataset indicate the superiority of
LGC, surpassing its respective competitors largely with the same settings as
the baseline system. Besides, combining LGC with existing methods can obtain
further improvements. The code will be released soon. | Yiming Li, Xiangdong Wang, Hong Liu, Rui Tao, Long Yan, Kazushige Ouchi | 2023-09-15T12:29:48 | http://arxiv.org/abs/2309.08355v1 | # Semi-supervised sound event detection with local and global consistency regularization
###### Abstract
Learning meaningful frame-wise features on a partially labeled dataset is crucial to semi-supervised sound event detection. Prior works either maintain consistency on frame-level predictions or seek feature-level similarity among neighboring frames, which cannot exploit the potential of unlabeled data. In this work, we design a Local and Global Consistency (LGC) regularization scheme to enhance the model on both label- and feature-level. The audio CutMix is introduced to change the contextual information of clips. Then, the local consistency is adopted to encourage the model to leverage local features for frame-level predictions, and the global consistency is applied to force features to align with global prototypes through a specially designed contrastive loss. Experiments on the DESED dataset indicate the superiority of LGC, surpassing its respective competitors largely with the same settings as the baseline system. Besides, combining LGC with existing methods can obtain further improvements. The code will be released soon.
Yiming Li\({}^{1,2}\), Xiangdong Wang\({}^{1,\star}\), Hong Liu\({}^{1}\), Rui Tao\({}^{3}\), Long Yan\({}^{3}\), Kazushige Ouchi\({}^{3}\)\({}^{1}\) Beijing Key Laboratory of Mobile Computing and Pervasive Device,
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.
\({}^{2}\) University of Chinese Academy of Sciences, Beijing, China.
\({}^{3}\) Toshiba China R&D Center, Beijing, China. sound event detection, consistency regularization, audio CutMix, prototypical contrastive learning
## 1 Introduction
Sound event detection (SED) aims at identifying the category of sound events as well as their temporal boundaries, which can assist in understanding acoustic scenes and perceiving physical environments [1]. Thanks to the advances in Deep Learning (DL) techniques, the capability of SED systems has been greatly improved [2, 3]. While applying DL to SED tasks, the frame-level features are first extracted by an encoder and then transformed into probability vectors by a classifier. As a result, the quality of frame-wise features matters a lot to the performance of SED systems. However, learning discriminative and robust features is a tricky problem as the SED dataset is usually partially labeled to avoid laborious annotating.
To leverage the unlabeled SED dataset for feature learning, semi-supervised learning becomes a promising remedy, which usually adopts the teacher-student framework with the former holding an exponential moving average (EMA) of the latter. Hence, the teacher model is supposed to generate more reliable pseudo labels to guide the student model in learning informative features on unlabeled data, which can be viewed as a form of label-level consistency. Among methods applying label-level consistency, MeanTeacher [4] utilizes a soft consistency loss to regularize the predictions of the teacher and the student model, while CMT [5] improves it by filtering out low-confident pseudo labels. Furthermore, ICT [6] incorporates mixup [7] to encourage the predictions for interpolated frames to be consistent with the interpolation of predictions for original frames. With the emergence of audio augmentations [8, 9], many methods resort to smoothness assumptions [10] which require similar predictions for the same data under different perturbations. They usually warp the spectrograms and maintain consistency between predictions of original inputs and perturbed inputs to ensure the robustness of learned features. For instance, SCT [11] retains consistency with shift-related operations, including time and frequency shift, while RCT [12] seeks a feasible combination of random augmentations, such as pitch shift and time mask, and applies a self-consistency loss. Although methods like RCT can achieve supreme performance, the training cost is rather noticeable.
In addition to label-level regularization, TCL [13] and MPR [14] encourage the neighboring frame-wise representations to be similar if the two share the same strong labels while being different if there is an event boundary, which can be considered as a kind of feature-level consistency. Although such methods can be combined with label-level consistency and achieve additional performance gains, they solely regularize the neighboring embeddings and are only applicable to strongly labeled settings since ground truth is requested to compute the related loss, which precludes their applications.
In this paper, we propose to explore both label- and feature-level consistency to regularize the feature learning process in a collaborative way. Specifically, audio CutMix is adopted to modify the boundaries or contextual information of sound events. Based on the CutMixed inputs, label-level _local consistency_ is exploited to learn robust frame features with limited contexts, and feature-level _global consistency_ is devised to reduce the intra-class variance while increasing the inter-class variance of frame features in a global view. We refer to our method as Local and Global Consistency (**LGC**) regularization, and an overview is given in Figure 1:
* **Local Consistency** As shown in Figure 1, we adapt CutMix [15] to cut complementary segments from two input spectrograms before jointing them accordingly and force the predictions of mixed inputs to be consistent with the mixed pseudo labels by the local consistency loss \(\mathcal{L}_{\text{CLC}}\). This consistency term helps detach temporal dependency in mixed inputs so that the SED model can focus on local semantics, thereby learning robust patterns under varying contexts. Compared to SCT [11] and RCT [12], our method does not rely on audio warping to reach label-level consistency, making it more efficient and scalable to complement other methods.
* **Global Consistency** We adopt multiple prototypes, which can be viewed as the global representatives of class-wise frame features across the whole dataset, to model the feature space. Under the supervision of a global consistency loss \(\mathcal{L}_{\text{PGC}}\) shown in Figure 1, frame features from the student model are pulled towards the corresponding class prototypes and pushed away from prototypes of other classes. As a result, per-class frame features can be clustered compactly in the feature space, making it easier for the classifier to learn a decision boundary at the low-density region [16], which can also help promote the label-level consistency. Different from TCL [13] and MPR [14], our approach does not require frames to be neighbors or manually labeled, indicating that each frame can be regularized on feature-level.
Extensive experiments suggest that LGC achieves superior results on the DESED validation dataset. While combining it with audio augmentations or other consistency regularization methods, the performance can also be significantly improved.
## 2 Methodology
### 2.1 Preliminary
Currently, a typical semi-supervised SED system usually consists of a feature encoder \(f^{(l)}(\cdot)\) and a predictor \(g^{(l)}(\cdot)\), where \(l\in\{\text{S},\text{T}\}\) represents the student and teacher models respectively. We refer to the input as \(\mathbf{X}\in\mathbb{R}^{T\times F}\), where \(T\) and \(F\) denote the time and frequency axis dimensions of the spectrogram. Given \(\mathbf{X}\), \(f^{(l)}\) extracts the embedded features by \(f^{(l)}(\mathbf{X}):\mathbf{X}\rightarrow\mathbf{Q}^{(l)}\in\mathbb{R}^{T^{ \prime}\times D}\), after which \(g^{(l)}\) transforms \(\mathbf{Q}^{(l)}\) to the frame-level predictions by \(g^{(l)}\circ f^{(l)}(\mathbf{X}):\mathbf{Q}^{(l)}\rightarrow\mathbf{P}^{(l)}\in \mathbb{R}^{T^{\prime}\times M}\), where \(T^{\prime}\) represents the final number of frames (Note that \(T^{\prime}\)\(<\)\(T\) as the desired time resolution for SED is lower than that of original clips) and \(D,M\) denote the number of feature dimensions and sound classes. In SED systems, \(f^{(l)}\) can be a CRNN [17] or Transformer [18] and \(g^{(l)}\) is a dense layer with sigmoid activation. The loss function is defined as \(\mathcal{L}=\mathcal{L}_{\text{Sup}}+r(step)\mathcal{L}_{\text{MT}}\), where \(\mathcal{L}_{\text{Sup}}\) is the BCE loss for labeled data and \(\mathcal{L}_{\text{MT}}\) is the L2 consistency loss between student and teacher predictions whose weight is adjusted along training steps by \(r(step)\). In this work, we reserve the above framework while adding two loss terms to explore the label-level local consistency and the feature-level global consistency.
### 2.2 Audio CutMix
CutMix is a useful augmentation strategy in computer vision. It randomly crops a patch from one image and then pastes it to another to synthesize the augmented sample. Inspired by that, we employ CutMix to the audio domain but only crop spectrograms along the time axis to preserve the information in frequency domains as shown in Figure 1. At first, we retain a copy of the data batch and randomly shuffle it. As a result, each spectrogram \(\mathbf{X}_{i}\) can be matched to another one \(\mathbf{X}_{\sigma(i)}\) in the copied batch at index \(i\). The corresponding frame-level predictions from the teacher models are \(\mathbf{P}_{i}^{(\text{T})}\) and \(\mathbf{P}_{\sigma(i)}^{(\text{T})}\). We define the CutMix operation CM as:
\[\text{CM}(\mathbf{X}_{i},\mathbf{X}_{\sigma(i)})=\mathbf{m}\odot\mathbf{X}_{i}+( \mathbf{1}-\mathbf{m})\odot\mathbf{X}_{\sigma(i)} \tag{1}\]
\[\text{CM}(\mathbf{P}_{i}^{(\text{T})},\mathbf{P}_{\sigma(i)}^{(\text{T})})= \mathbf{m}^{\prime}\odot\mathbf{P}_{i}^{(\text{T})}+(\mathbf{1}-\mathbf{m}^{\prime}) \odot\mathbf{P}_{\sigma(i)}^{(\text{T})} \tag{2}\]
where \(\mathbf{m}\in\{0,1\}^{T}\) is a binary mask of length \(T\), \(\mathbf{m}^{\prime}\) is the pooling version of \(\mathbf{m}\) with length \(T^{\prime}\) and \(\odot\) is the element-wise product. We initialize \(\mathbf{m}\) as a zero vector and randomly replace a consecutive part with 1. By CutMix, we change the boundaries or contextual information of a sound event instead of warping its features.
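A minimal numpy sketch of Equations (1)-(2) follows; the way the mask is pooled to length \(T^{\prime}\) (max-pooling over non-overlapping windows) is one simple choice and is not prescribed by the method description:

```python
import numpy as np

def audio_cutmix(x_i, x_j, p_i, p_j, seg_lo, seg_hi, pool_factor):
    """
    Equations (1)-(2): m is 1 on a consecutive time segment (frames kept from clip i)
    and 0 elsewhere (frames taken from clip j); spectrograms are cut only along time.
    Teacher predictions are mixed with the pooled mask m' of length T' = T // pool_factor
    (max-pooling over non-overlapping windows; T must be divisible by pool_factor here).
    x_i, x_j: (T, F) spectrograms.   p_i, p_j: (T', M) teacher frame predictions.
    """
    T = x_i.shape[0]
    m = np.zeros((T, 1))
    m[seg_lo:seg_hi] = 1.0
    x_mix = m * x_i + (1.0 - m) * x_j                      # Eq. (1)

    m_prime = m.reshape(-1, pool_factor, 1).max(axis=1)    # (T', 1) pooled mask
    p_mix = m_prime * p_i + (1.0 - m_prime) * p_j          # Eq. (2)
    return x_mix, p_mix, m_prime
```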
### 2.3 Label-level Local Consistency
To encourage the model to alleviate excessive temporal dependency and make it less vulnerable to varying contexts while making predictions, we introduce the label-level consistency loss \(\mathcal{L}_{\text{CLC}}\) based on the audio CutMix operation. Specifically, it forces the student's predictions of CutMixed spectrograms to be consistent with the Cut-Mixed teacher's predictions of original samples, which is written as:
\[\mathcal{L}_{\text{CLC}}=\sum_{i}\|g^{(\text{S})}\circ f^{(\text{S})}\left( \text{CM}\left(\mathbf{X}_{i},\mathbf{X}_{\sigma(i)}\right)\right)-\text{CM} \left(\mathbf{P}_{i}^{(\text{T})},\mathbf{P}_{\sigma(i)}^{(\text{T})}\right)\| _{2}^{2} \tag{3}\]
The above objective is challenging for the SED model because it is expected to react accurately to the confusing event boundaries introduced by CutMix, even with limited contextual information, which may enhance its localization sensitivity. Similarly, \(r(step)\) is used to control the weight of \(\mathcal{L}_{\text{CLC}}\), and the overall loss can be written as \(\mathcal{L}_{1}=\mathcal{L}_{\text{Sup}}+r(step)(\mathcal{L}_{\text{MT}}+ \mathcal{L}_{\text{CLC}})\).
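In code, the CutMix local consistency term of Equation (3) and the resulting objective \(\mathcal{L}_{1}\) look roughly as follows (an illustrative sketch; the reduction over frames and classes is taken as a mean here, which is an implementation detail):

```python
import numpy as np

def clc_loss(student_pred_mixed, p_i_teacher, p_j_teacher, m_prime):
    """Eq. (3): L2 consistency between student predictions on the CutMixed clip
    and the CutMix of the teacher's predictions on the original clips.
    All prediction arrays are (T', M); m_prime is the pooled binary mask (T', 1)."""
    target = m_prime * p_i_teacher + (1.0 - m_prime) * p_j_teacher
    return float(np.mean((student_pred_mixed - target) ** 2))

def total_loss(l_sup, l_mt, l_clc, ramp_weight):
    """L_1 = L_Sup + r(step) * (L_MT + L_CLC)."""
    return l_sup + ramp_weight * (l_mt + l_clc)
```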
### 2.4 Feature-level Global Consistency
Prototypical network [19] has been widely adopted in few-shot learning. It uses the global average representation of a class as the prototype and assigns labels based on the distance between sample embeddings and class prototypes. The intuition is that samples belonging to the same class are similar in feature space. Built on this idea, we propose the concept of global consistency, which regularizes the frame feature of SED models to be close to its class prototype and far away from prototypes of other classes. To account for the intra-class diversity, we utilize _Multiple Prototypes_ (_MP_) to represent a class. Besides, we add a projector \(h^{(l)}\) to project \(\mathbf{Q}^{(l)}\) to a low dimension \(D^{\prime}\), and only the projected features are required to maintain global consistency to prevent the original features losing semantic information. In the following sections, we detail the process of estimating multiple prototypes for a given class and reaching global consistency with prototypical contrastive learning.
#### 2.4.1 Prototype Estimation
The success of prototype-based consistency relies on a set of informative prototypes derived from frame features. Thus, it is vital to design strategies to initialize and maintain the pool of prototypes.
Figure 1: Illustration of proposed LGC method, where the pipeline of traditional MeanTeacher method is omitted for simplicity.

**Offline Prototype Initialization** We first train the SED model using \(\mathcal{L}_{1}\) for several epochs to ensure it can extract meaningful features and grasp basic detection ability. Then, we suspend the training process before feeding the training set into the teacher model. As a result, for each frame \(k\) in audio clips from the training set, its probability vector \(\mathbf{p}_{k}^{(\text{T})}\in\mathbb{R}^{M}\) and projected feature vector \(\mathbf{v}_{k}^{(\text{T})}\in\mathbb{R}^{D^{\prime}}\) can be obtained through a forward pass. For each class \(i\), a set \(Q_{i}\) is created to store high-quality projected features \(\mathbf{v}_{k}^{(\text{T})}\) whose corresponding \(\mathbf{p}_{k,i}^{(\text{T})}\) is larger than a threshold \(\tau_{+}\). We refer to such features as high-quality features as they can be utilized to generate highly confident predictions. After collecting all the projected features of class \(i\), the K-Means algorithm is applied to implement intra-class clustering on \(Q_{i}\), resulting in \(C\) cluster centroids \(\mathbf{c}_{i,j},\,j=1,\cdots,C\), which can be viewed as initial prototypes for class \(i\).
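A sketch of this offline initialization is given below (illustrative only; the threshold \(\tau_{+}=0.5\) is a placeholder rather than a reported value):

```python
import numpy as np
from sklearn.cluster import KMeans

def init_prototypes(teacher_probs, teacher_feats, num_classes, C, tau_plus=0.5):
    """
    Offline initialization: for each class i, collect projected frame features whose
    teacher probability exceeds tau_plus (the set Q_i), then run K-Means with C
    clusters to obtain the initial prototypes c_{i,1..C}.
    teacher_probs: (N, M) frame probabilities; teacher_feats: (N, D') projected features.
    Assumes every class has at least C high-quality frames.
    Returns an (M, C, D') array of L2-normalized prototypes.
    """
    protos = np.zeros((num_classes, C, teacher_feats.shape[1]))
    for i in range(num_classes):
        Q_i = teacher_feats[teacher_probs[:, i] > tau_plus]
        centers = KMeans(n_clusters=C, n_init=10).fit(Q_i).cluster_centers_
        protos[i] = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return protos
```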
**Online Prototype Iteration** In the following training progress, prototypes are dynamically updated from the teacher's projected features extracted from both labeled and unlabeled frames to better capture the status of the SED model. Specifically, at each training step, \(Q_{i}\) is emptied and used to re-collect high-quality features from the teacher model. To guarantee the stability of prototypes, for the \(j\)-th prototype of class \(i\), we update \(\mathbf{c}_{i,j}\) in a moving average manner by
\[\mathbf{c}_{i,j}\leftarrow\text{Normalize}(\beta\mathbf{c}_{i,j}+(1-\beta)\mathbf{\hat{c}}_{i,j}) \tag{4}\]
where \(\beta\) is a momentum coefficient, \(\text{Normalize}(\cdot)\) denotes the L2 normalization function, and \(\mathbf{\hat{c}}_{i,j}\) is the feature centroid of the frames in \(Q_{i}\) for which \(\mathbf{c}_{i,j}\) is the most similar prototype among all prototypes of class \(i\); its mathematical formulation is:
\[\mathbf{\hat{c}}_{i,j}=\frac{\sum\limits_{k=1}^{\text{len}(Q_{i})}\mathbf{v}_{k}^{( \text{T})}\mathbb{I}\left[\operatorname*{arg\,max}_{n=1,\ldots,C}\langle\mathbf{c} _{i,n},\mathbf{v}_{k}^{(\text{T})}\rangle=j\right]}{\sum\limits_{k=1}^{\text{len} (Q_{i})}\mathbb{I}\left[\operatorname*{arg\,max}_{n=1,\ldots,C}\langle\mathbf{c} _{i,n},\mathbf{v}_{k}^{(\text{T})}\rangle=j\right]} \tag{5}\]
where \(\mathbf{v}_{k}^{(\text{T})}\) is the \(k\)-th element of \(Q_{i}\), \(\text{len}(Q_{i})\) is the size of \(Q_{i}\), \(\langle\cdot,\cdot\rangle\) denotes the cosine similarity function and \(\mathbb{I}[\cdot]\) denotes the indicator function which evaluates to 1 if \(\cdot\) is true and 0 otherwise.
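The per-class online update of Eqs. (4)-(5) can be sketched as follows (NumPy, cosine similarity computed on L2-normalized features; variable names are illustrative):

```python
import numpy as np

def update_prototypes(protos_i, Q_i, beta=0.99):
    """protos_i: (C, D') L2-normalized prototypes of class i; Q_i: (n, D') re-collected teacher features."""
    if len(Q_i) == 0:
        return protos_i
    Q_norm = Q_i / np.linalg.norm(Q_i, axis=1, keepdims=True)
    assign = (Q_norm @ protos_i.T).argmax(axis=1)        # most similar prototype per frame
    for j in range(protos_i.shape[0]):
        members = Q_i[assign == j]
        if len(members) == 0:
            continue
        c_hat = members.mean(axis=0)                     # feature centroid \hat{c}_{i,j} of Eq. (5)
        c_new = beta * protos_i[j] + (1.0 - beta) * c_hat
        protos_i[j] = c_new / np.linalg.norm(c_new)      # Eq. (4) with L2 normalization
    return protos_i
```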
#### 2.4.2 Selective Prototypical Contrastive Learning
Inspired by progress in self-supervised audio representation learning [20, 21], which applies unsupervised contrastive learning to force the clip embedding to be close to its augmented view while being dissimilar to embeddings of other clips, we design a class-aware prototypical contrastive learning scheme. It aims to learn a global feature space where frames from the same class are close to the class prototypes while frames from different classes are well separated.
Similar to Section 2.3, a batch of CutMixed inputs are fed into the student model to obtain the projected feature \(\mathbf{v}_{k}^{(\text{S})}\) of each frame \(k\) as shown in Figure 1. We then generate the frame-to-class similarity score \(s_{k,i}=\max\limits_{j=1,\cdots,C}\langle\mathbf{v}_{k}^{(\text{S})},\mathbf{c}_{i,j}\rangle\) between the frame feature \(\mathbf{v}_{k}^{(\text{S})}\) and prototypes of class \(i\), which uses the maximal similarity between a frame and prototypes of a class as the similarity between a frame and a class. Then, we optimize the following loss to reach our goal:
\[\mathcal{L}_{\text{PQC}}=-\sum\limits_{k}\sum\limits_{i=1}^{M}\mathbb{I}( \mathbf{p}_{k,i}^{(\text{T})}>\tau_{+})\log\frac{e^{\frac{\mathbf{s}_{k,i}}{\gamma}} }{\sum\nolimits_{m=1}^{M}e^{\frac{\mathbf{s}_{k,m}}{\gamma}}} \tag{6}\]
where \(\gamma\) is a scalar temperature parameter, and \(\tau_{+}\) and \(\mathbf{p}_{k,i}^{(\text{T})}\) have the same meanings as in Section 2.4.1. The above loss follows a formulation similar to the InfoNCE loss [22], but introduces an indicator function to fit the semi-supervised setting. If the teacher probability \(\mathbf{p}_{k,i}^{(\text{T})}\) is larger than \(\tau_{+}\), optimizing \(\mathcal{L}_{\text{PQC}}\) maximizes \(s_{k,i}\) while minimizing \(s_{k,m}\) (\(m=1,\cdots,M\) and \(m\neq i\)), thereby pulling \(\mathbf{v}_{k}^{(\text{S})}\) towards the prototype of class \(i\) that is most similar to it while pushing \(\mathbf{v}_{k}^{(\text{S})}\) away from prototypes of other classes in the feature space. Otherwise, the teacher model does not make a confident prediction, and the loss term evaluates to 0 so that the frame feature is not pushed towards an improper prototype. Note that \(\mathbf{v}_{k}^{(\text{S})}\) can be pulled towards more than one class prototype as long as the indicator function is satisfied, making the loss applicable to multi-label settings. Furthermore, the student's inputs are also CutMixed, which makes the above process more challenging since contextual reliance in audio clips is removed by CutMix. As a result, the SED model learns to use temporal cues more properly when extracting features.
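A PyTorch-style sketch of Eq. (6), assuming the prototypes are stacked into an (M, C, D') tensor and the student features come from the selected candidate frames (names are illustrative):

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(v_s, teacher_probs, prototypes, tau_plus=0.9, gamma=0.1):
    """v_s: (N, D') projected student features; teacher_probs: (N, M); prototypes: (M, C, D') L2-normalized."""
    v_s = F.normalize(v_s, dim=-1)
    # frame-to-class similarity s_{k,i}: max over the C prototypes of each class -> (N, M)
    sims = torch.einsum('nd,mcd->nmc', v_s, prototypes).amax(dim=-1)
    log_prob = F.log_softmax(sims / gamma, dim=-1)
    mask = (teacher_probs > tau_plus).float()            # indicator I(p^(T)_{k,i} > tau_+)
    return -(mask * log_prob).sum()
```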
However, we find that not every frame needs to be involved in the contrastive process. We then devise the _Selective Anchor Sampling (SAS)_ strategies to choose candidate frames:
* We only select frames from weakly labeled and unlabeled clips since the feature learning process of strongly labeled frames can be supervised by the ground truth well enough.
* We only select frames where the student is likely to make wrong predictions due to the inferiority of frame features. Specifically, given a frame \(k\), if the teacher model predicts it as class \(i\) with a high confidence \(\mathbf{p}_{k,i}^{(\text{T})}>\tau_{+}\) but the student does not (\(\mathbf{p}_{k,i}^{(\text{S})}<\tau_{-}\) and \(\tau_{-}\ll\tau_{+}\)), then it will be involved.
By applying SAS, only about 2% of frames are required to compute \(\mathcal{L}_{\text{PQC}}\), resulting in significant improvements in training efficiency and model performance, as potential overcorrections of features are alleviated. The overall loss considering prototype-based global consistency is \(\mathcal{L}_{2}=\mathcal{L}_{1}+\alpha\mathcal{L}_{\text{PQC}}\), where \(\alpha\) is a trade-off parameter.
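The SAS rule can be expressed as a boolean mask over frame/class pairs; the following sketch assumes frame-level probability tensors and a per-frame flag marking strongly labeled clips (names are illustrative):

```python
import torch

def sas_mask(teacher_probs, student_probs, from_strong_clip, tau_plus=0.9, tau_minus=0.5):
    """teacher_probs, student_probs: (N, M); from_strong_clip: (N,) bool."""
    confident_teacher = teacher_probs > tau_plus          # teacher is highly confident
    uncertain_student = student_probs < tau_minus         # student likely makes a wrong prediction
    weak_or_unlabeled = ~from_strong_clip.unsqueeze(-1)   # exclude strongly labeled frames
    return confident_teacher & uncertain_student & weak_or_unlabeled
```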
By incorporating \(\mathcal{L}_{\text{CLC}}\) and \(\mathcal{L}_{\text{PQC}}\) simultaneously, LGC exploits both label-level local consistency and feature-level global consistency, and we argue that the two consistency terms can benefit from each other. On the one hand, while pursuing global consistency, inputs of the student model are CutMixed, encouraging the encoder to refine feature extraction since much irrelevant noise is induced by CutMix. On the other hand, the class-wise frame features can be more compact and discriminative with global consistency regularization, enhancing the robustness of the classifier to recognize an event with limited contextual information to reach local consistency.
## 3 Experiments
Experiments are conducted on the DESED dataset [23] (the official dataset of DCASE 2022 Task 4), which contains 1578 weakly labeled clips, 10000 strongly labeled synthetic clips, and 14412 unlabeled clips. Each 10-second audio clip is resampled to 16 kHz and transformed into 626 frames using the short-term Fourier transform with a window length of 2048 samples and a hop length of 256 samples, and 128 log mel-band magnitudes are then extracted as input features. We implement all methods on the same CRNN network as the official baseline of DCASE 2022 Task 4. Evaluation of each method is performed with Event-Based macro F1 (EB-F1) [24] and Polyphonic Sound Detection Scores (PSDSs) [25] on the validation set composed of 1168 clips. For the proposed LGC, we empirically set \(C=3\), \(\beta=0.99\), \(\tau_{+}=0.9\), \(\tau_{-}=0.5\) and \(\gamma=\alpha=0.1\). The model is trained with \(\mathcal{L}_{1}\) for the first 100 epochs and then with \(\mathcal{L}_{2}\) for the last 100 epochs. As for other training settings, we follow those of the official baseline.
### Comparison with Other Methods
To verify the effectiveness of LGC, we first compare it with existing works that, like LGC, do not utilize audio warping. Among them, TCL and MPR pursue feature-level consistency, while Baseline, CMT, and ICT are trained with label-level consistency regularization. The related results are shown in Table 1. It can be observed that LGC exceeds the baseline significantly, by 6.5% on EB-F1 and by 0.045 and 0.034 on the PSDSs. Moreover, it surpasses its counterparts by a large margin in terms of the time-sensitive metrics, namely PSDS\({}_{1}\) and EB-F1, indicating that LGC can boost the event localization capacity.
To exploit the potential of LGC and seek a fair comparison with methods that leverage audio augmentations to maintain consistency, we extend LGC in two ways: (1) we impose FilterAug [9] on LGC without introducing any additional consistency; (2) we integrate LGC into SCT and RCT. The comparison results are reported in Table 2. As seen, simply adding augmentations to LGC (LGC + Aug) leads to remarkable improvements over the vanilla LGC, implying that LGC is also robust to audio perturbations. Moreover, LGC + Aug compares favorably with prior works. It outperforms SCT by approximately 0.04 on both PSDSs and achieves a higher PSDS\({}_{1}\) and a comparable PSDS\({}_{2}\) with much less training consumption than RCT. When combining SCT or RCT with LGC, further improvements can be obtained with little additional training cost, demonstrating its effectiveness when working with existing consistency regularization techniques. Finally, we extend the proposed LGC to FDY-CRNN [26], a powerful backbone adopted by many recent works, without additional training augmentations. The notable improvements in Table 3 suggest its scalability.
### Ablation Studies on Proposed Techniques
We evaluate the performance of LGC trained without a specific component, and the results can be found in Table 4. As illustrated in Table 4, both consistency regularization methods contribute to the performance gain; without them, an absolute drop of 3.5% on EB-F1 and 0.02 on PSDS\({}_{1}\) can be witnessed. We argue that without LC, the model cannot learn robust frame representations, while without GC, no global information is available for feature learning. In addition, SAS and MP are also essential to LGC as they assist the student encoder in modeling class-specific features and aligning with prototypes. We further visualize the frame-wise latent representations of a well-trained model for each class and calculate the intra-class variance \(S_{w}\) and inter-class variance \(S_{b}\); a larger \(tr(S_{b})/tr(S_{w})\) indicates a more well-structured feature space learned by the model. As shown in Figure 2, our method LGC enables better intra-class compactness and inter-class dispersion of the feature space compared to the baseline model. Figure 3 gives two examples in which LGC makes accurate detections, but the baseline does not. As marked in the figure, the frame-wise features for LGC are more consistent within each sound event while varying dramatically at the boundaries, making it easier for the classifier to discriminate different events. | Learning meaningful frame-wise features from partially labeled datasets is essential for semi-supervised sound event detection. Previous work maintained frame-level predictions or sought feature-level similarity between neighboring frames, but did not fully exploit the potential of unlabeled data. In this work, we design local and global consistency (LGC) as a regularization scheme to improve the model at both the label level and the feature level. Audio CutMix is introduced to change the contextual information of clips. Local consistency is then adopted to encourage the model to leverage local features for frame-level predictions, and global consistency is applied to force features to align with global prototypes through a specially designed contrastive loss. Experiments on the DESED dataset reveal the superiority of LGC, even when used under the same settings as the baseline system |
2301.13631 | TopoBERT: Plug and Play Toponym Recognition Module Harnessing Fine-tuned
BERT | Extracting precise geographical information from textual contents is crucial
in a plethora of applications. For example, during hazardous events, a robust
and unbiased toponym extraction framework can provide an avenue to tie the
location concerned to the topic discussed by news media posts and pinpoint
humanitarian help requests or damage reports from social media. Early studies
have leveraged rule-based, gazetteer-based, deep learning, and hybrid
approaches to address this problem. However, the performance of existing tools
is deficient in supporting operations like emergency rescue, which relies on
fine-grained, accurate geographic information. The emerging pretrained language
models can better capture the underlying characteristics of text information,
including place names, offering a promising pathway to optimize toponym
recognition to underpin practical applications. In this paper, TopoBERT, a
toponym recognition module based on a one dimensional Convolutional Neural
Network (CNN1D) and Bidirectional Encoder Representation from Transformers
(BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train,
Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover
the best training strategy, and train the model. Another two datasets
(CoNLL2003-Test and Harvey2017) are used to evaluate the performance. Three
distinguished classifiers, linear, multi-layer perceptron, and CNN1D, are
benchmarked to determine the optimal model architecture. TopoBERT achieves
state-of-the-art performance (f1-score=0.865) compared to the other five
baseline models and can be applied to diverse toponym recognition tasks without
additional training. | Bing Zhou, Lei Zou, Yingjie Hu, Yi Qiang, Daniel Goldberg | 2023-01-31T13:44:34 | http://arxiv.org/abs/2301.13631v2 | # TopoBERT: Plug and Play Toponym Recognition Module Harnessing Fine-tuned BERT*
###### Abstract
Extracting precise geographical information from textual contents is crucial in a plethora of applications. For example, during hazardous events, a robust and unbiased toponym extraction framework can provide an avenue to tie the location concerned to the topic discussed by news media posts and pinpoint humanitarian help requests or damage reports from social media. Early studies have leveraged rule-based, gazetteer-based, deep learning, and hybrid approaches to address this problem. However, the performance of existing tools is deficient in supporting operations like emergency rescue, which relies on fine-grained, accurate geographic information. The emerging pretrained language models can better capture the underlying characteristics of text information, including place names, offering a promising pathway to optimize toponym recognition to underpin practical applications. In this paper, TopoBERT, a toponym recognition module based on a one-dimensional Convolutional Neural Network (CNN1D) and Bidirectional Encoder Representation from Transformers (BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train, Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover the best training strategy, and train the model. Another two datasets (CoNLL2003-Test and Harvey2017) are used to evaluate the performance. Three distinguished classifiers, linear, multi-layer perceptron, and CNN1D, are benchmarked to determine the optimal model architecture. TopoBERT achieves state-of-the-art performance (f1-score=0.865) compared to the other five baseline models and can be applied to diverse toponym recognition tasks without additional training.
Natural Language Processing; Geoparser; Convolutional Neural Network; Toponym Recognition; BERT
## 1 Introduction
Since the emergence of social sensing, scholars have been endeavoring to sense the pulse of society with the help of satellite images, sensor networks from IoT and various forms of textual information from the Internet. Extra attention has been paid to mining knowledge from social media because people nowadays are consciously or unconsciously sharing their views towards ongoing events online, which propels social media to become one of the few agents that reflects the real-time societal awareness, reactions and impacts of particular events. This trait is a rare feature seldom shared by other forms of data sources.
In the light of this feature, Avvenuti et al. presented an early earthquake detecting and warning system using Twitter data, which offers prompt detection of events [1]. Several case studies processed social media data with geocoding and sentiment analysis tools to analyze the spatial patterns of changing public awareness and emotions toward hurricanes in different phases of the disaster management cycle [2, 3]. Huang et al. scrutinized the human mobility patterns during the COVID-19 pandemic at multiple scales based on geotagged Twitter data [4]. Zhou et al. proposed VictimFinder which is capable of harvesting social media help requests during hurricanes [5].
Geographical information is one of the key elements of knowledge generation, and the aforementioned studies and other similar spatial analysis and modeling are highly dependent on the location information of social media data. However, social media users have started to pay more attention to privacy, which results in a significant drop in the number of geotagged tweets. Simultaneously, Twitter published policies forbidding users to attach precise longitudes and latitudes to tweets. Moreover, the geographical information attached to social media posts is not necessarily equivalent to the place names described in the textual content of the post. Thus, extracting location information from the textual content of social media data has inevitably become an issue that needs to be addressed. This motivates geoparsing, a two-step approach which includes toponym recognition (identifying place names from texts) and toponym resolution (transforming location names to geographical coordinates). This paper focuses on the first component of geoparsing.
Existing studies on toponym recognition can be categorized into four parties based on the character of the solutions, namely rule-based, gazetteer-based, statistical learning-based, and hybrid approaches. In general, statistical learning and hybrid methods that incorporate deep learning techniques render better performance than methods that solely rely on rules or gazetteers [6, 7, 8, 9]. Based on Bidirectional Long Short-Term Memory (BiLSTM), Wang et al. introduced NeuroTPR to extract place names [6]. Qi et al. extended CoreNLP and brought about an open-sourced named entity recognition python toolkit called Stanza, which is able to detect place names and support multiple languages [7]. SAVITR is a
system that combines both NLP techniques and gazetteers for real-time location extraction [8]. Hu et al. addressed the incompleteness of gazetteers and fused gazetteers, rules, and deep learning to render a reliable place name extractor, GazPNE [9].
However, those studies suffer from several limitations. First, some models do not focus only on place names, so their prediction of location name extraction might be disturbed. Second, recurrent neural network based deep learning models might suffer from information vanishing problems when the input sequence gets larger and network deeper. Third, complicated deep neural networks frequently require large, annotated datasets and are time-consuming to train to achieve promising results.
To address the aforementioned latent flaws, this paper proposes TopoBERT, a toponym recognition module based on a one-dimensional Convolutional Neural Network (CNN) and Bidirectional Encoder Representation from Transformers (BERT). It contributes in the following directions. First, several classifiers were tested and one feasible model and classifier combination based on the evaluation result of a standard dataset is determined. Second, TopoBERT was tested by an unseen dataset together with some other existing tools to verify its generalizability. Third, the tool is ready-to-use and the dataset we generated in this study can be used by other scholars to train, test, and compare different toponym recognition models and tools.
The remainder of this paper is structured as follows. The datasets involved in fine-tuning and testing the framework, a concise introduction of the holistic design of the framework, the implementation of the framework, and the parameters used in fine-tuning the framework are detailed in section 2. The results of the experiments conducted are documented in section 3. Section 4 illustrates the potential limitations of this work and lists several future research directions. Section 5 epitomizes the findings of this paper and presents the implications of this study.
## 2 Methodology
### Datasets
In total, four different datasets were utilized to train the module and evaluate its performance. CoNLL2003 is a shared task that concerns named entity recognition and has been widely applied to training deep learning models [10]. The data contains five label types: persons (PER), organizations (ORG), locations (LOC), miscellaneous names (MISC), and other words that do not belong to the aforementioned four named entity groups (O). The prefixes "B-" and "I-" are used to tag the beginning of a named entity and words that fall inside a named entity [10]. The dataset is originally divided into training, validation, and test data, denoted as CoNLL2003-Train, CoNLL2003-Validation, and CoNLL2003-Test. Training data is used to train a deep learning model, validation data is used to tune the hyperparameters of the model, and test data is used to evaluate the performance of the trained model. The data distribution of each label type in the three datasets is depicted in Figures 1(a), 1(b), and 1(c), respectively. The dataset is later modified to suit the purpose of this study by labeling all named entities as "O" except for the location entities.
Around 4.1% of the tags are location entities in these datasets.
WNUT2017 is a relatively smaller dataset collected from Twitter and manually annotated, the objective of which is to tackle the issues caused by novel, emerging, singleton named entities in noisy text [11]. It aims to offer support to sustainable named entity recognition systems. This dataset contains seven different groups: person, location, corporation, product, creative work, group and none of the above. Considering the main focus of this paper and different tags used to label the dataset, this dataset is preprocessed to retain only the location entities tag and to unify the tag symbols used based on CoNLL2003 (location entities are tagged with "B-LOC" or "I-LOC" while the rest are tagged with "O"). The distribution of data under each label type in the modified dataset is shown in Figure 2(a). The total number of location names in this dataset is 1140.
Wiki3000 is an automatically generated dataset from Wikipedia articles by a data producing workflow proposed by Wang et al. [6]. The proposed auto-annotation approach utilizes the first paragraph of Wikipedia articles which usually encompass various entities presented with hyperlinks. These hyperlinks are later checked if they are associated with a geographical location. If so, the hyperlinked word will be labeled as a toponym. Then the Wikipedia article is divided into multiple short sentences within 280 characters with additional strategies such as random flipping to mimic the general patterns of Twitter posts [6]. The distribution of data under each label type is shown in Figure 2(b).
Harvey2017 is a dataset originally collected from the North Texas University repository ([https://digital.library.unt.edu/ark:/67531](https://digital.library.unt.edu/ark:/67531) /metadc993940/), which contains 7,041,866 tweets collected based on hashtag query. It was pruned, randomly subsampled and manually annotated by Wang et al. to form a new dataset with 1000
Figure 1: Data Distribution of CoNLL2003 Dataset
Figure 2: Data Distribution of WNUT2017, Wiki3000 and Harvey2017 Dataset
tweets aiming to evaluate NeuroTPR [6]. This dataset is adopted by this paper to test the performance of TopoBERT. The distribution of data under each label type is shown in Figure 2(c).
### Framework Design and Implementation
As mentioned in section 1, there is an acute conflict between robust spatial analysis on social media or news media and the diminishing availability of geolocated textual context. Additionally, the location mentioned in the textual content of the tweets might differ from the geotags attached. A reliable and ready-to-use geoparser can be the mediator of such conflicts. Therefore, we present a general location extractor that can be used upon social media and news media. The workflow is shown in Figure 3.
The existing geotags of the data will be retained, and the textual contents will go through a rule-based data preprocessing module before they are fed to a zip code extractor and place name extractor. Once the place names are pulled out, a geocoding service will be applied to transform the place names into precise coordinates. The place name extractor is marked with an orange dashed rectangle in Figure 3 and serves as the crucial backbone of the entire workflow.
Identifying location names from input sentences is a token classification task (Figure 4), which involves two parts: a language model and a classifier. It behaves similarly to how human beings analyze whether given words are place names or not. First, the language model attempts to understand the language by transforming the tokenized input data into a higher-dimensional space which captures the meaning of words in a given sentence; then the classifier makes predictions based on the transformed vectors and determines whether the input word belongs to a location entity.
The heart of the proposed toponym recognition module, TopoBERT, is the Bidirectional Encoder Representation from Transformers (BERT). It is structured by stacking the encoder components of the Transformer architecture and is designed to be pretrained in an unsupervised manner. BERT takes advantage of the Attention [25] mechanism, which resolves the information vanishing issue that often upsets recurrent neural networks such as Long Short-Term Memory [26] and Gated Recurrent Neural Network [27] when the input sequence gets longer. Moreover, distinguished from many other bidirectional language models, such as ELMo designed by Peters et al. [28], in which the contextual representation of every word is the concatenation or summation of the forward and backward representations, BERT reads the entire sequence of words at once and is trained using a Masked Language Model (MLM) approach and a Next Sentence Prediction (NSP) approach, which genuinely implement the bidirectional concept. These two features combined facilitate better language understanding and give BERT the top position across a number of NLP tasks under the General Language Understanding Evaluation (GLUE) benchmark [12].
Off-the-shelf pretrained BERT model weights can be separated into several categories based on the size of the model, whether upper and lower cases are taken into consideration, the targeted language, and unique training strategies ([https://huggingface.co/transformers/v3.3.1/pretrained_models.ht](https://huggingface.co/transformers/v3.3.1/pretrained_models.ht) ml). Since place names are highly case sensitive and only the English language is involved in this study, 'bert-base-cased' and 'bert-large-cased' are selected as the candidate pretrained models
Figure 4: Demonstration of token classification workflow.
Figure 3: Holistic Design of Location Extraction Framework for Textual Contents
to be evaluated. The 'bert-base-cased' model comprises 12 layers, each hidden layer has 768 nodes, with 12 self-attention heads and a total of 110 million parameters. The 'bert-large-cased' model consists of 24 layers, each hidden layer has 1024 nodes, with 16 self-attention heads and 340 million parameters. The parameters are pretrained with English text from BooksCorpus (800 million words) and English Wikipedia (2,500 million words). By stacking a classifier on top of BERT, the combination can be fine-tuned to accomplish this downstream task. A recent study showed that model performance can be enhanced by applying classifiers more complex than a simple linear classifier or Conditional Random Field (Zhou et al., 2022). Therefore, three classifiers were examined in this study, namely a linear classifier, a multi-layer perceptron (MLP, Figure 5), and a one-dimensional CNN (CNN1D, Figure 6). The simple linear classifier connects the output of the language model to the final prediction results with the softmax activation function. The MLP applied in this study contains three fully connected layers and links the language model output with a layer whose input size is equivalent to the output vector size. The number of hidden layer nodes is 256 and the output layer size equals the number of distinct labels in the training dataset. CNN models are competent in detecting underlying features (Zhou et al., 2022) and one-dimensional CNNs have been successfully applied to process natural language (Xu et al., 2019; Chen et al., 2020). Since location names might share some common characteristics, the idea of CNN1D is adopted. The vector output of the language model can be considered as a one-dimensional signal, and a CNN1D with kernel size 3 is applied. The output channel of the convolution is 16. It is followed by a max pooling layer of size 2, which further generalizes the features and reduces model complexity. All channels of the max pooling layer output are concatenated into a single vector and fed to a fully connected MLP with a hidden layer size of 128.
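To make the wiring concrete, the following is a minimal PyTorch/Transformers sketch of the CNN1D variant described above (a per-token one-dimensional convolution over the BERT hidden vector); details not stated in the text, such as the number of labels, are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class TopoBERTCNN1D(nn.Module):
    """Sketch of the BERT + CNN1D token classifier described above (not the authors' code)."""
    def __init__(self, model_name="bert-large-cased", num_labels=3,   # labels: O, B-LOC, I-LOC (assumption)
                 conv_channels=16, kernel_size=3, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        h = self.bert.config.hidden_size                      # 1024 for bert-large-cased
        self.conv = nn.Conv1d(1, conv_channels, kernel_size)  # token vector treated as a 1-D signal
        self.pool = nn.MaxPool1d(2)
        flat = conv_channels * ((h - kernel_size + 1) // 2)
        self.mlp = nn.Sequential(nn.Linear(flat, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask):
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state  # (B, T, H)
        B, T, H = x.shape
        x = x.reshape(B * T, 1, H)                 # each token vector becomes a 1-channel signal
        x = self.pool(torch.relu(self.conv(x)))    # (B*T, C, (H-k+1)//2)
        logits = self.mlp(x.flatten(1))            # (B*T, num_labels)
        return logits.view(B, T, -1)
```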
All model combinations were implemented using Python language and pertinent packages. The dataset splitting took advantage of the ScikitLearn library and the BERT models were implemented based on the huggingface Transformer library ([https://huggingface.co/transformers/](https://huggingface.co/transformers/)). The model finetuning pipeline was built using PyTorch functions.
### Training and Evaluation
TopoBERT is envisioned to be a ready-to-use module that renders optimal performance in toponym recognition. Models with different architectures were trained and evaluated with the six datasets specified in Section 2.1 to determine the best model architecture and training strategy. The training process utilized CoNLL2003-Train as the training dataset by default, which was compared to a larger dataset fusing CoNLL2003, Wiki3000, and WNUT2017. The original dataset is labelled at word level and cannot be input to BERT directly due to BERT's word-piece encoding; otherwise, it would lead to large numbers of out-of-vocabulary words. To tackle this issue, we first split the input data at word level and applied the BERT word-piece tokenizer to each word. The same label was assigned to each word-piece of a single word. The labeled word-pieces were then merged to form the new input data that can be processed by BERT. This experiment aimed at measuring the performance fluctuations caused by training data size and heterogeneity. CoNLL2003-Validation was used during the training process to tune several fundamental hyperparameters such as training epochs and learning rate. CoNLL2003-Test and Harvey2017 datasets were used to evaluate the model performance. The Harvey2017 dataset was also used to benchmark TopoBERT with five prevailing toponym recognition models, namely Stanford NLP (Xu et al., 2019), spaCy ([https://spacy.io/](https://spacy.io/)), Bidirectional LSTM-CRF (Xu et al., 2019), DM_NLP (Xu et al., 2019), and NeuroTPR (Xu et al., 2019).
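The word-piece label propagation described above might look as follows (a sketch; the fallback to the unknown token for empty tokenizations is an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")

def tokenize_and_align(words, labels):
    """Split each word into word-pieces and repeat its word-level label for every piece."""
    pieces, piece_labels = [], []
    for word, label in zip(words, labels):
        wp = tokenizer.tokenize(word) or [tokenizer.unk_token]
        pieces.extend(wp)
        piece_labels.extend([label] * len(wp))
    return pieces, piece_labels

# e.g. tokenize_and_align(["Hurricane", "Harvey", "hit", "Houston"],
#                         ["O", "O", "O", "B-LOC"])
```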
The parameters of the classifier component of the module were initialized with random non-zero numbers and the BERT
Figure 5: TopoBERT Architecture with Multi-layer Perceptron as Classifier
Figure 6: TopoBERT Architecture with One-Dimensional Convolutional Neural Network as Classifier
component was initialized with pre-trained parameters. The entire module was trained with the fine-tuning approach [12], and the parameters were updated using mini-batch gradient descent with early stopping. The maximum length of the input sequence was limited to 128 in this paper. The maximum number of training epochs was set to 50. As recommended by the original BERT paper, the initial learning rate and the training batch size were set to 2e-5 and 32, respectively [12]. The most commonly used loss function for multi-class classification, cross-entropy loss, was employed. AdamW was selected as the optimizer, which adjusts the learning rate dynamically to accelerate parameter convergence and implements weight decay to lower the chance of overfitting. Warm-up steps, i.e., using a very low learning rate for the first several weight-updating iterations, were also introduced during training to reduce the risk of drastically deviating the model upon sudden exposure to unseen data.
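A compact sketch of this optimization setup (AdamW, linear warm-up, cross-entropy, batch size 32, learning rate 2e-5); the warm-up step count and the steps per epoch are placeholders rather than values from the paper:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = TopoBERTCNN1D()                       # the CNN1D sketch defined above
steps_per_epoch, num_epochs = 1000, 50        # steps_per_epoch is a placeholder
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500,          # warm-up step count: assumption
    num_training_steps=num_epochs * steps_per_epoch)
criterion = torch.nn.CrossEntropyLoss()
```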
Three commonly used evaluation metrics, precision, recall, and F1-score (Equation 1-3), were applied to gauge the performance and bias of the models. Precision calculates the percentage of correctly identified location names (noted as True Positives, TP) among all the location names predicted by the model, which combines both TP and False Positives (FP). Recall measures the percentage of correctly identified ones amongst all ground truth, which is the combination of TP and False Negatives (FN). F1-score is the harmonic mean of precision and recall, providing a comprehensive metric to evaluate model performance.
\[Precision=\frac{TP}{TP+FP}\] (Equation 1)
\[Recall=\frac{TP}{TP+FN}\] (Equation 2)
\[F1\text{-}score=2\times\frac{Precision\times Recall}{Precision+Recall}\] (Equation 3)
The outputs of BERT models are at word-piece level; word-pieces are concatenated using the special prefix '\(\#\#\)' and word-level labels are assigned based on the starting word-piece of each word. The evaluation metrics are based on per-token scores. Additionally, the location name entity consists of two types of labels (B-LOC and I-LOC). In order to gauge the comprehensive performance of the model on toponym recognition, the evaluation metrics were calculated using a micro-average approach, which computes a global average of precision, recall, and F1-score by counting the total number of TP, FP and FN under each class, namely "B-LOC" and "I-LOC".
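Micro-averaged per-token scores over the two location labels can be computed as in this sketch:

```python
def micro_prf(gold, pred, positive=("B-LOC", "I-LOC")):
    """Micro-averaged per-token precision/recall/F1 over the location labels."""
    tp = sum(g == p and g in positive for g, p in zip(gold, pred))
    fp = sum(p in positive and g != p for g, p in zip(gold, pred))
    fn = sum(g in positive and g != p for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```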
## 3 Results and Analysis
The first step of the experiment aimed at determining the optimal pretrained parameters for the BERT model. We hypothesize that larger models outperform smaller models. To verify this hypothesis, the performance of the models initialized with 'bert-base-cased' and 'bert-large-cased', each with a linear classifier stacked on top, was tested. The results are displayed in Table 1.
These two models were trained with CoNLL2003-Train and evaluated with CoNLL2003-Test. Compared to 'bert-base-cased', the precision of the prediction increased from 0.900 to 0.934 by using 'bert-large-cased' while the recall almost remained static. The F1-scores showed that 'bert-large-cased' rendered better results which is in conformity with the original BERT paper [12] and validated our initial hypothesis. Therefore, 'bert-large-cased' was harnessed in all the follow-up experiments.
The second step of the experiments aimed to measure the influence of the training data and determine the optimal classifier. The model performances were evaluated using two different datasets, CoNLL2003-Test and Harvey2017. We hypothesize that (a) the model with CNN1D classifier yield better results and (b) models trained with larger datasets perform better in placename recognition. Table 2 and Table 3 list the evaluation metrics of all the tests.
| Training Data | Classifier | Precision | Recall | F1-score |
|---|---|---|---|---|
| CoNLL2003 | Linear | 0.895 | 0.804 | 0.847 |
| CoNLL2003 | MLP | 0.885 | 0.811 | 0.846 |
| CoNLL2003 | CNN1D | **0.898** | **0.835** | **0.865** |
| Combined | Linear | 0.872 | 0.589 | 0.703 |
| Combined | MLP | 0.932 | 0.541 | 0.685 |
| Combined | CNN1D | **0.941** | **0.668** | **0.781** |

The "CoNLL2003" under the Training Data column means the CoNLL2003-Train dataset and the "Combined" represents the dataset merging CoNLL2003-Test, Wiki3000 and WNUT2017.
Table 3: Evaluation results with Harvey2017 dataset for testing on training data variation and classifier types.
| BERT Model | Classifier | Precision | Recall | F1-score |
|---|---|---|---|---|
| bert-base-cased | Linear | 0.900 | **0.904** | 0.902 |
| bert-large-cased | Linear | **0.934** | 0.901 | **0.917** |
Table 1: Evaluation results for testing on different pretrained parameters.
In Table 2, when models were trained with CoNLL2003-Train, the one with a simple linear classifier produced the best precision (0.934), and the one with CNN1D produced the best recall (0.920) and F1-score (0.921). MLP performed the worst among the three classifiers. When models were trained with a combined dataset, the model with CNN1D outperformed the rest in all three metrics with precision equal to 0.942, recall of 0.916, and F1-score of 0.929. The one with a linear classifier produced the worst results with an F1-score of 0.866. In Table 3, when models were trained with CoNLL2003-Train, the one with the CNN1D classifier outperformed the rest with precision equal to 0.898, recall of 0.835, and F1-score of 0.865. When models were trained with a combined dataset, the model with CNN1D successfully defended its trophy by rendering precision of 0.941, recall of 0.668, and F1-score of 0.781. The models with MLP worked slightly worse than the ones with linear classifiers.
The above elucidation certifies the hypothesis that models with CNN1D generate the optimal performance. It also shows that more complicated classifiers like multi-layer perceptron do not necessarily render better results.
However, when viewing Tables 2 and 3 together and comparing the results from training with different datasets, the metrics indicate that the models trained with the combined dataset generally performed worse than the ones trained with only CoNLL2003-Train. This phenomenon contradicts the hypothesis that models trained with larger datasets perform better. After scrutinizing the datasets used for training, we noticed some inconsistencies in their labeling criteria. Some examples are listed in Table 4, and the unexpected phenomenon can be explained by the heterogeneity of the datasets.
Twitter developer API. The locations of those tweets without geotags are retrieved by running TopoBERT and google geocoding service. The module also enjoys the potential of being used for location name detection for news media to pinpoint the discussed topics [14; 15] and help to identify fake news [16].
This paper concentrates mainly on designing a novel architecture of a reliable and versatile module for toponym recognition. However, the performance enhancement can continue by addressing the following issues.
First, the models are trained and evaluated based on well prepared datasets. This can be regarded as a best-case scenario compared to real life situations. Place name usage can be highly ambiguous and random, especially within social media platforms. Typos are extremely common which might cause out-of-vocabulary words in language models. Place name abbreviations such as "Boulevard" and "blvd", "Drive" and "Dr.", "Street" and "St." and so forth are frequently utilized interchangeably. People might unconsciously ignore the correct upper-case and lower-case usage, such as "college station" and "College Station", "mexico" and "MEXICO". Meticulous data preprocessing methods can be incorporated to tackle this problem in order to achieve better overall performance.
Second, several rule-based approaches can be leveraged to further boost the performance. Inspired by the success of hybrid models [9], sets of grammar rules based on the composition of nouns, determiners, adjectives, conjunctions, numbers and possessive endings can be designed [17]. Additionally, commonly used gazetteers such as OpenStreetMap and GeoNames can be used as extra named-entity matching criteria, which will increase the True Positives of the model. Regional criteria can be appended to the model while identifying place names by using country names, state names, county names, or bounding boxes as input variables of the model. This will allow the model to add constraints during processing. The top-N words from word embedding models [9, 35] which are not place names can be applied to filter words during data preprocessing. This will to some extent reduce the False Positives of the prediction.
Third, due to the data-hungry nature of deep learning, data availability and quality are topics inevitably discussed when large complicated deep learning models are involved. It is common knowledge in the deep learning world that larger datasets lead to better generalizability and performance. However, this statement fails to hold true in this paper because the larger datasets are derived from several distinct smaller datasets labeled under their own unique regimes. Therefore, there is an urgent need to define criteria and build unified datasets for toponym recognition model training, evaluating and benchmarking. The dataset can be manually modified based on existing datasets and augmented using rule-based methods, gazetteers or Generative Adversarial Networks [18; 19; 20].
Fourth, fine-tuned language models can be few-shot or zero-shot learners, which means that the models can be applied directly to certain downstream tasks with very little or even no further training [21; 22; 23]. This is because advanced language models can better capture the meaning of the text. This claim is also underpinned by the result of this paper which leverages BERT to boost the module capability. Therefore, incorporating gigantic models such as GPT-3 [24] might lead to another round of performance enhancement.
## 5 Conclusion
To further enhance the performance of toponym recognition by better understanding natural language, TopoBERT, which incorporates the pretrained language model BERT, is introduced. Experiments on the pretrained parameters, training dataset combinations, and model architecture reveal the following findings. First, toponym recognition performance is sensitive to the architecture of the pre-trained language model and the classifier. The models initialized with a larger-structured BERT model ('bert-large-cased') show an advantage over the models initialized with a basic BERT model ('bert-base-cased'). More complicated classifiers like MLP do not necessarily win over simple linear classifiers. Second, increasing training data size produces worse results, especially for recall, due to data heterogeneity. The model trained with a single dataset, CoNLL2003-Train, and topped with a CNN1D classifier renders the optimal results on both the CoNLL2003-Test and Harvey2017 datasets. Finally, the developed TopoBERT module outperforms existing models in recognizing place names in texts. The final TopoBERT with the optimal model architecture and training strategy produces reliable toponym predictions and achieves an F1-score of 0.865 on the Harvey2017 dataset, surpassing other prevailing models or tools by at least 18%.
In a nutshell, the findings of this paper contribute to determining the optimal model structure for toponym recognition tasks and call for a large standardized dataset labeled under a unified regime to support model training and benchmarking. A plug-and-play module is implemented and open-sourced to support pertinent applications and similar research.
## Acknowledgments
The research is supported by a project funded by the U.S. National Science Foundation: Reducing the Human Impacts of Flash Floods
Figure 7: Toponym recognition applied to locate Twitter posts during disasters.
- Development of Microdata and Causal Model to Inform Mitigation and Preparedness (Award No. 1931301).
|
Extracting precise geographical information is essential in a wide variety of applications. For example, when a hazardous event occurs, a robust and unbiased toponym extraction framework can provide a way to tie the location concerned to the topics discussed in news media posts and to pinpoint humanitarian help requests or damage reports from social media. Early studies have tackled this problem with rule-based, gazetteer-based, deep learning, and hybrid approaches. However, the performance of existing tools falls short for operations such as emergency rescue, which rely on fine-grained, accurate geographic information. Emerging pretrained language models can capture the underlying characteristics of text information, including place names, offering a promising pathway to optimize toponym recognition to underpin practical applications. In this paper, TopoBERT, a one-dimensional convolutional neural |
2309.07066 | CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion
Prediction | Human motion prediction is important for mobile service robots and
intelligent vehicles to operate safely and smoothly around people. The more
accurate predictions are, particularly over extended periods of time, the
better a system can, e.g., assess collision risks and plan ahead. In this
paper, we propose to exploit maps of dynamics (MoDs, a class of general
representations of place-dependent spatial motion patterns, learned from prior
observations) for long-term human motion prediction (LHMP). We present a new
MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data
efficient, explainable, and insensitive to errors from an upstream tracking
system. Our approach uses CLiFF-map, a specific MoD trained with human motion
data recorded in the same environment. We bias a constant velocity prediction
with samples from the CLiFF-map to generate multi-modal trajectory predictions.
In two public datasets we show that this algorithm outperforms the state of the
art for predictions over very extended periods of time, achieving 45% more
accurate prediction performance at 50s compared to the baseline. | Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal, Martin Magnusson | 2023-09-13T16:26:48 | http://arxiv.org/abs/2309.07066v1 | # CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction
###### Abstract
Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate predictions are, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit _maps of dynamics_ (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CliFF-LHMP, which is data efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses CliFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant velocity prediction with samples from the CLiFF-map to generate multi-modal trajectory predictions. In two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate prediction performance at 50s compared to the baseline.
## I Introduction
Accounting for long-term human motion prediction (LHMP) is an important task for autonomous robots and vehicles to operate safely in populated environments [1]. Accurate prediction of future trajectories of surrounding people over longer periods of time is a key skill to improve motion planning, tracking, automated driving, human-robot interaction, and surveillance. Long-term predictions are useful to associate observed tracklets in sparse camera networks, or inform the robot of the long-term environment dynamics on the path to its goal [2, 3], for instance when following a group of people. Very long-term predictions are useful for global motion planning to produce socially-aware unobtrusive trajectories, and for coordinating connected multi-robot systems with sparse perception fields.
Human motion is complex and may be influenced by several hard-to-model factors, including social rules and norms, personal preferences, and subtle cues in the environment that are not represented in geometric maps. Accordingly, accurate motion prediction is very challenging [1]. Prediction on the very long-term scale (i.e., over \(20\,\mathrm{s}\) into the future) is particularly hard as complex, large-scale environments influence human motion in a way that cannot be summarized and contained in the current state of the moving person or the observed interactions but rather have to be modelled explicitly [4].
In this paper, we examine and address the novel task of very long-term human motion prediction [5], aiming to predict human trajectories for up to \(50\,\mathrm{s}\) into the future. Prior works have addressed human motion prediction using physics-, planning- and pattern-based approaches [1]. The majority of existing approaches, however, focuses on relatively short prediction horizons (up to \(10\,\mathrm{s}\)) [6] and the popular ETH-UCY benchmark uses \(4.8\,\mathrm{s}\)[1, 7, 8, 9].
To predict very long-term human motion, we exploit _maps of dynamics_ (MoDs) that encode human dynamics as a feature of the environment. There are several MoD approaches for mapping velocities [10, 11, 12, 13, 14]. In this work, we use Circular Linear Flow Field map (CLiFF-map) [12], which captures multimodal statistical information about human flow patterns in a continuous probabilistic representation over velocities. The motion patterns represented in a CLiFF-map implicitly avoid collisions with static obstacles and follow the topological structure of the environment, e.g., capturing the dynamic flow through a hall into a corridor (see Fig. 1). In this paper we present a novel, MoD-informed prediction approach (CLiFF-LHMP)1 that predicts stochastic trajectories by sampling from a CLiFF-map to guide a velocity filtering model [6]. Examples of prediction results are shown in Fig. 1.
Footnote 1: The approach is available at [https://github.com/test-bai-cpu/CLiFF-LHMP](https://github.com/test-bai-cpu/CLiFF-LHMP)
In qualitative and quantitative experiments we demonstrate our CLiFF-LHMP approach is 45% more accurate than the baseline at \(50\,\mathrm{s}\), with average displacement error (ADE)
Fig. 1: Long-term (\(50\,\mathrm{s}\)) motion prediction result obtained with CLiFF-LHMP for one person in the ATC dataset. **Red** line: ground truth trajectory. Green line: observed trajectory. **Blue** lines: predicted trajectories. The CLiFF-map is shown with colored arrows.
below \(5\,\mathrm{m}\) up to \(50\,\mathrm{s}\). In contrast to prior art in long-term environment-aware motion prediction [4], our method does not make any assumptions on the optimality of human motion and instead generalizes the features of human-space interactions from the learned MoD. Furthermore, our method does not require a list of goals in the environment as input, in contrast to prior planning-based prediction methods. Finally, our method can flexibly estimate the variable time end-points of human motion, predicting both short- and long-term trajectories, in contrast to the prior art which always predicts up to a fixed prediction horizon.
The paper is structured as follows: we review related work in Sec. II, describe the proposed approach in Sec. III, present our evaluation in Sec. IV, discuss the results in Sec. V and conclude in Sec. VI.
## II Related Work
Human motion prediction has been studied extensively in recent years. With different prediction horizons, the human motion prediction problem can be divided into short-term (\(1\)-\(2\,\mathrm{s}\)), long-term (up to \(20\,\mathrm{s}\)) [1], and very long-term (which we define as over \(20\,\mathrm{s}\)). Several approaches address long-term motion prediction, e.g., full-body motion [5] or in the context of vehicle routing and GPS positioning [15, 16], but, to the best of our knowledge, very long-term prediction of dense navigation trajectories has not been addressed before.
One approach to predict long-term human motion is to account for various semantic attributes of the static environment. For instance, prior knowledge of potential goals in the environment can be used in planning-based methods. Ziebart et al. [17] and Karasev et al. [18] propose planning MDP-based approaches for long-term goal-directed global motion prediction. Rudenko et al. [4] extends this line of work by accounting for local social interactions, which is shown to outperform prior art in the long-term map-aware perspective.
Another popular approach to make long-term predictions is using clustering to represent observed long-term motion patterns, e.g., using expectation-maximization [19]. Chen et al. [20] use constrained gravitational clustering for dynamically grouping the observed trajectories, learning also how motion patterns change over time. Bera et al. [21] learn global and local motion patterns using Bayesian inference in real-time. One shortcoming of clustering-based methods is that they depend on complete trajectories as input. In many cases, e.g. in cluttered environments or from a first-person perspective [22], it is difficult to observe long trajectories, or cluster shorter tracklets and incomplete trajectories in a meaningful way.
Clustering-based methods directly model the distribution over full trajectories and are non-sequential. By contrast, transition-based approaches [23, 24, 25, 26, 27] describe human motion with causally conditional models and generate sequential predictions from learned local motion patterns.
Further, there are physics-based approaches that build a kinematic model without considering other forces that govern the motion. The constant velocity model (CVM) is a simple yet potent approach to predict human motion. Scholler et al. [28] have shown CVM to outperform several state-of-the-art neural predictors at the \(4.8\,\mathrm{s}\) prediction horizon. On the other hand, CVM is not reliable for long-term prediction as it ignores all environment information.
Finally, many neural network approaches for motion prediction have been presented in recent years, based on LSTMs [29], GANs [30], CNNs [31], CVAEs [32] and transformers [33]. Most of these approaches focus on learning to predict stochastic interactions between diverse moving agents in the short-term perspective in scenarios where the effect of the environment topology and semantics is minimal. Our approach, on the other hand, targets specifically the long-term perspective, where the environment effects become critical for making accurate predictions.
Our approach to motion prediction leverages maps of dynamics (MoDs), which encode motion as a feature of the environment by building spatio-temporal models of the patterns followed by dynamic objects (such as humans) in the environment [14, 12]. There are several approaches for building maps of dynamics from observed motion. Some MoDs represent human dynamics in occupancy grid maps [24]. Another type of MoDs clusters human trajectories as mentioned above [19]. Chen et al. [34] present an approach that uses a dictionary learning algorithm to develop a part-based trajectory representation.
The above mentioned MoDs encode the direction but not the speed of motion. MoDs can also be based on mapping sparse velocity observations into flow models, which has the distinct advantage that the MoD can be built from incomplete or spatially sparse data. An example of this class of MoDs is the probabilistic Circular-Linear Flow Field map (CLiFF-map) [12] that we use in this paper. CLiFF-map uses a Gaussian mixture model (GMM) to describe multimodal flow patterns at each location. In this paper, we use sampled directions from the CLiFF-map to predict stochastic long-term human motion.
A method similar to ours is presented in Barata et al. [35]. It constructs a vector field that represents the most common direction at each point and predicts human trajectories by inferring the most probable sequence through this vector field. By contrast, our approach uses a probabilistic vector field that represents speed and direction jointly in a multimodal distribution. Further, the evaluation in Barata et al. [35] assumes a fixed prediction horizon of \(4.8\,\mathrm{s}\), whereas we show our approach to estimate human motion more accurately than the state of the art for up to \(50\,\mathrm{s}\).
## III Method
In this section, we first describe the CLiFF-map representation for site-specific motion patterns (Sec. III-A) and then present the CLiFF-LHMP approach for single-agent long-term motion prediction exploiting the information accumulated in a CLiFF-map (Sec. III-B).
### _Circular-Linear Flow Field Map (CLiFF-map)_
To predict human trajectories we exploit the information about local flow patterns represented in a CLiFF-map as a
multimodal, continuous distribution over velocities. CLiFF-map [12] is a probabilistic framework for mapping velocity observations (independently of their underlying physical processes), i.e., essentially a generalization of a vector field into a Gaussian mixture field. Each location in the map is associated with a Gaussian mixture model (GMM). A CLiFF-map represents motion patterns based on local observations and estimates the likelihood of motion at a given query location.
CLiFF-maps represent speed and direction jointly as velocity \(\mathbf{V}=[\theta,\rho]^{T}\) using direction \(\theta\) and speed \(\rho\), where \(\rho\in\mathbb{R}^{+}\), \(\theta\in[0,2\pi)\). As the direction \(\theta\) is a circular variable and the speed is linear, a mixture of _semi-wrapped_ normal distributions (SWNDs) is used in CLiFF-map. At a given location, the semi-wrapped probability density function (PDF) over velocities can be visualized as a function on a cylinder. Direction values \(\theta\) are wrapped on the unit circle and the speed \(\rho\) runs along the length of the cylinder. An SWND \(\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}^{SW}\) is formally defined as \(\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}^{SW}(\mathbf{V})=\sum_{k\in\mathbb{Z}}\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}([\theta,\rho]^{T}+2\pi[k,0]^{T})\), where \(\mathbf{\Sigma},\mathbf{\mu}\) denote the covariance matrix and mean value of the directional velocity \((\theta,\rho)^{T}\), and \(k\) is a winding number. Although \(k\in\mathbb{Z}\), the PDF can be approximated adequately by taking \(k\in\{-1,0,1\}\) for practical purposes [36]. To preserve the multi-modal characteristic of the flow, a semi-wrapped Gaussian mixture model (SWGMM) is used, which is a PDF represented as a weighted sum of \(J\) SWNDs: \(p(\mathbf{V}|\mathbf{\xi})=\sum_{j=1}^{J}\pi_{j}\mathcal{N}_{\mathbf{\Sigma}_{j},\mathbf{\mu}_{j}}^{SW}(\mathbf{V})\), where \(\mathbf{\xi}=\{\xi_{j}=(\mathbf{\mu}_{j},\mathbf{\Sigma}_{j},\pi_{j})|j\in\mathbb{Z}^{+}\}\) denotes a finite set of components of the SWGMM, and \(\pi_{j}\) denotes the mixing factor and satisfies \(0\leq\pi_{j}\leq 1\).
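For illustration, evaluating the SWGMM density at a query velocity can be sketched as follows (SciPy-based; the winding numbers are truncated to \{-1, 0, 1\} as in the text, and the data layout of the components is an assumption):

```python
import numpy as np
from scipy.stats import multivariate_normal

def swnd_pdf(theta, rho, mu, cov, winding=(-1, 0, 1)):
    """Semi-wrapped normal density over velocity V = [theta, rho]^T.
    The direction is wrapped on the unit circle by summing over winding numbers k."""
    return sum(multivariate_normal.pdf([theta + 2 * np.pi * k, rho], mean=mu, cov=cov)
               for k in winding)

def swgmm_pdf(theta, rho, components):
    """SWGMM density; components is a list of (pi_j, mu_j, Sigma_j) tuples."""
    return sum(pi * swnd_pdf(theta, rho, mu, cov) for pi, mu, cov in components)
```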
### _Human Motion Prediction Using CLiFF-map_
We frame the task of predicting a person's future trajectory as inferring a sequence of future states. The algorithm is presented in Alg. 1. With the input of an observation history of \(O_{p}\) past states of a person and a CLiFF-map \(\Xi\), the algorithm predicts \(T_{p}\) future states. The length of the observation history is \(O_{s}\in\mathbb{R}^{+}\)\(\mathrm{s}\), equivalent to \(O_{p}>0\) observation time steps. With the current time-step denoted as the integer \(t_{0}\geq 0\), the sequence of observed states is \(\mathcal{H}=\langle s_{t_{0}-1},...,s_{t_{0}-O_{p}}\rangle\), where \(s_{t}\) is the state of a person at time-step \(t\). A state is represented by 2D Cartesian coordinates \((x,y)\), speed \(\rho\) and direction \(\theta\): \(s=(x,y,\rho,\theta)\).
```
Input:\(\mathcal{H}\), \(x_{t_{0}}\), \(y_{t_{0}},\Xi\) Output:\(\mathcal{T}\)
1\(\mathcal{T}=\{\}\)
2\(\rho_{\mathrm{obs}},\theta_{\mathrm{obs}}\leftarrow\) getObservedVelocity(\(\mathcal{H}\))
3\(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\mathrm{obs}},\theta_{\mathrm{obs}})\)
4for\(t=t_{0}+1\), \(t_{0}+T_{p}\)do
5\(x_{t},y_{t}\leftarrow\) getNewPosition(\(s_{t-1}\))
6\(\theta_{s}\leftarrow\) sampleDirectionFromCLiFFmap(\(x_{t},y_{t},\Xi\))
7(\(\rho_{t}\), \(\theta_{t}\)) \(\leftarrow\) predictVelocity(\(\theta_{s}\), \(\rho_{t-1}\), \(\theta_{t-1}\))
8\(s_{t}\leftarrow(x_{t},y_{t},\rho_{t},\theta_{t})\)
9\(\mathcal{T}\leftarrow\mathcal{T}\cup s_{t}\)
10 return\(\mathcal{T}\)
```
**Algorithm 1** CLiFF-LHMP
From the observed sequence \(\mathcal{H}\), we derive the observed speed \(\rho_{\mathrm{obs}}\) and direction \(\theta_{\mathrm{obs}}\) at time-step \(t_{0}\) (line 2 of Alg. 1). Then the current state becomes \(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\mathrm{obs}},\theta_{\mathrm{obs}})\) (line 3 of Alg. 1). The values of \(\rho_{\mathrm{obs}}\) and \(\theta_{\mathrm{obs}}\) are calculated as a weighted sum of the finite differences in the observed states, as in the recent ATLAS benchmark [6]. With the same parameters as in [6], the sequence of observed velocities is weighted with a zero-mean Gaussian kernel with \(\sigma=1.5\) to put more weight on more recent observations, such that \(\rho_{\mathrm{obs}}=\sum_{t=1}^{O_{p}}v_{t_{0}-t}g(t)\) and \(\theta_{\mathrm{obs}}=\sum_{t=1}^{O_{p}}\theta_{t_{0}-t}g(t)\), where \(g(t)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{t}{\sigma}\right)^{2}}\).
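A small sketch of this weighting is given below; it assumes that per-step speeds and headings have already been obtained by finite differencing the observed positions, and it follows the kernel defined above.

```python
import numpy as np

def observed_velocity(speeds, headings, sigma=1.5):
    """Gaussian-weighted estimate of the current speed and heading.

    speeds[i] and headings[i] are the finite-difference velocity
    components i+1 steps in the past (most recent first).
    """
    t = np.arange(1, len(speeds) + 1)
    g = np.exp(-0.5 * (t / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.dot(speeds, g)), float(np.dot(headings, g))
```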
Given the current state \(s_{t_{0}}\), we estimate a sequence of future states. Similar to past states, future states are predicted within a time horizon \(T_{s}\in\mathbb{R}^{+}\)\(\mathrm{s}\). \(T_{s}\) is equivalent to \(T_{p}>0\) prediction time steps, assuming a constant time interval \(\Delta t\) between two predictions. Thus, the prediction horizon is \(T_{s}=T_{p}\Delta t\). The predicted sequence is then denoted as \(\mathcal{T}=\langle s_{t_{0}+1},s_{t_{0}+2},...,s_{t_{0}+T_{p}}\rangle\).
To estimate \(\mathcal{T}\), for each prediction time step, we sample a direction from the CLiFF-map at the current position (\(x_{t}\), \(y_{t}\)) to bias the prediction with the learned motion patterns represented by the CLiFF-map. The main steps for each iteration are shown in lines 5-9 of Alg. 1.
For each iteration, we first compute the predicted position \((x_{t},y_{t})\) at time step \(t\) from the state at the previous time step
Fig. 2: Steps of sampling a direction \(\theta_{s}\) from the CLiFF-map. **(a)** CLiFF-map built from the ATC data. The location to sample from is marked with an orange arrow. **(b)** Selection of SWGMMs in the CLiFF-map: The red circle contains all SWGMMs within \(r_{s}\) distance to the sampling location. From these SWGMMs, the SWGMM with the highest motion ratio is selected (marked with a blue circle). **(c)** The SWGMM distribution in the selected location wrapped on a unit cylinder. The speed is represented by the position along the \(\rho\) axis and the direction is \(\theta\). The probability is represented by the distance from the surface of the cylinder. A velocity vector (marked with a red arrow) is sampled from this SWGMM. **(d)** The direction value \(\theta_{s}\) of the sampled velocity is taken as the sampled direction and marked with an orange circle.
(line 5 of Alg. 1):
\[\begin{split} x_{t}&=x_{t-1}+\rho_{t-1}\cos\theta_{t-1} \Delta t,\\ y_{t}&=y_{t-1}+\rho_{t-1}\sin\theta_{t-1}\Delta t, \end{split} \tag{1}\]
Afterwards, we estimate the new speed and direction using a constant velocity prediction biased by the CLiFF-map. The bias affects only the estimated direction of motion; the speed is assumed to remain unchanged.
To estimate direction at time \(t\), we sample a direction from the CLiFF-map at location \((x_{t},y_{t})\) in the function sampleDirectionFromCLiFFmap() (line 6 of Alg. 1). Alg. 2 outlines its implementation. The inputs of Alg. 2 are: the sample location \((x,y)\) and the CLiFF-map \(\Xi\) of the environment. The sampling process is illustrated in Fig. 2. To sample a direction at location \((x,y)\), from \(\Xi\), we first get the SWGMMs \(\Xi_{\rm near}\) whose distances to \((x,y)\) are less than the sampling radius \(r_{s}\) (line 1 of Alg. 2). In a CLiFF-map, each SWGMM is associated with a motion ratio. To sample from the location with the highest intensity of human motions, in line 2, from \(\Xi_{\rm near}\), we select the SWGMM \(\xi\) with highest motion ratio. In line 3 of Alg. 2, from \(\xi\), an SWND is sampled from the selected SWGMM, based on the mixing factor \(\pi\). A velocity is drawn randomly from the sampled SWND. Finally, the direction of the sampled velocity is returned and used for motion prediction.
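A compact Python sketch of this sampling step is shown below. The CLiFF-map is represented here as a plain list of per-location records whose fields (position, motion ratio, mixing weights, component means and covariances) are assumptions made for illustration, not the actual data structure of the CLiFF-map implementation.

```python
import numpy as np

def sample_direction(x, y, cliff_map, r_s=1.0, rng=None):
    """Sample a direction theta_s from the CLiFF-map near (x, y).

    cliff_map: list of dicts with hypothetical fields
      'pos' (2,), 'motion_ratio', 'weights', 'means', 'covs'.
    Returns None if no SWGMM lies within the sampling radius r_s.
    """
    rng = rng or np.random.default_rng()
    near = [g for g in cliff_map
            if np.hypot(g['pos'][0] - x, g['pos'][1] - y) <= r_s]
    if not near:
        return None                       # prediction stops here (Alg. 1)
    gmm = max(near, key=lambda g: g['motion_ratio'])
    j = rng.choice(len(gmm['weights']), p=gmm['weights'])
    theta, _rho = rng.multivariate_normal(gmm['means'][j], gmm['covs'][j])
    return theta % (2.0 * np.pi)          # wrap the direction to [0, 2*pi)
```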
With the direction sampled from the CLiFF-map, we predict the velocity (\(\rho_{t}\), \(\theta_{t}\)) in line 7 of Alg. 1 assuming that a person tends to continue walking with the same speed as in the last time step, \(\rho_{t}=\rho_{t-1}\), and bias the direction of motion with the sampled direction \(\theta_{s}\) as:
\[\theta_{t}=\theta_{t-1}+(\theta_{s}-\theta_{t-1})\cdot K(\theta_{s}-\theta_{t- 1}), \tag{2}\]
where \(K(\cdot)\) is a kernel function that defines the degree of impact of the CLiFF-map. We use a Gaussian kernel with a parameter \(\beta\) that represents the kernel width:
\[K(x)=e^{-\beta\left\|x\right\|^{2}}. \tag{3}\]
An example of velocity prediction results is shown in Fig. 3. With kernel \(K\), we scale the CLiFF-map term by the difference between the direction sampled from the CLiFF-map and the current direction according to the CVM. The sampled direction is trusted less if it deviates more from the current direction. A larger value of \(\beta\) makes the proposed method behave more like a CVM, and with a smaller value of \(\beta\), the prediction will follow the CLiFF-map more closely.
In the end of each iteration, we add \(s_{t}\) to the predicted trajectory \(\mathcal{T}\) (line 9 of Alg. 1) and update \(t\) for the next iteration. After iterating for \(T_{p}\) times, the output is a sequence \(\mathcal{T}\) of future states that represents the predicted trajectory.
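Putting Eqs. (1)-(3) together, one prediction iteration of Alg. 1 can be sketched as follows, using the hypothetical sample_direction helper from above; angle wrapping of the direction difference is omitted for brevity.

```python
import numpy as np

def predict_step(state, cliff_map, beta=1.0, dt=1.0):
    """One CLiFF-LHMP iteration: advance the state and bias its direction."""
    x, y, rho, theta = state
    # Eq. (1): constant-velocity position update.
    x_t = x + rho * np.cos(theta) * dt
    y_t = y + rho * np.sin(theta) * dt
    theta_s = sample_direction(x_t, y_t, cliff_map)
    if theta_s is None:                  # no map data nearby: stop predicting
        return None
    # Eqs. (2)-(3): bias the heading towards the sampled direction.
    diff = theta_s - theta
    kernel = np.exp(-beta * diff ** 2)
    theta_t = theta + diff * kernel
    return (x_t, y_t, rho, theta_t)      # speed kept constant
```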
## IV Experiments
This section describes the experimental setup for qualitative and quantitative evaluation of our CLiFF-LHMP approach. Accurate map-aware long-term motion prediction is typically addressed with Markov Decision Process (MDP) based methods [17, 18, 37, 38, 4]. Among them, we chose the recent IS-MDP approach [4] as the baseline for CLiFF-LHMP. We also compare our method with the constant velocity predictor [28, 6].
We evaluate the predictive performance using the following two real-world datasets:
1. **THOR**[39]: This dataset captures human motion in a room with static obstacles. It includes two settings: with one obstacle (denoted as THOR1, see the top row in Fig. 9) and with three obstacles (denoted as THOR3, see the bottom row in Fig. 9). The size of the room for data collection is 8.4\(\times\)18.8 \(\,\mathrm{m}\).
2. **ATC**[40]: This dataset contains trajectories recorded in a shopping mall in Japan. The dataset covers a large indoor environment with total area of around \(900\,\mathrm{m}^{2}\). The map of the environment is shown in Fig. 1.
THOR1 and THOR3 both include four rounds of collected data. We use the first round to build the CLiFF-map and use the remaining three rounds for evaluation. After filtering out short trajectories (shorter than the observation horizon \(O_{s}\)) for evaluation, there are in total 247 trajectories in the THOR1 dataset and 327 trajectories in the THOR3 dataset. This gives us the train-to-test ratio of about 1 to 3 in both THOR1 and THOR3.
The ATC dataset consists of 92 days in total. For building the CLiFF-map, we used the data from the first day (Oct. 24th, 2012). From the remaining 91 days, again after filtering
Fig. 3: Example predictions that visualize the adaptive influence of the CLiFF-map and the constant velocity model on the prediction, based on the sampled direction. **Green** dots show the observed past states \(\mathcal{H}\), **red** dots show the ground truth future states and **blue** dots show the predicted states \(\mathcal{T}\). In each predicted state, the **orange** arrow shows the sampled direction from the CLiFF-map \(\theta_{s}\) and the **green** arrow shows the direction from the last time step \(\theta_{t-1}\). **Blue** arrows between predicted states show the direction of the predicted trajectory. In locations like (**a**) where the sampled CLiFF-map direction greatly opposes the CVM prediction, the CVM prediction is trusted more. In locations like (**b**) where the sampled CLiFF-map direction is close to the CVM prediction, the CVM prediction is biased more towards the CLiFF-map direction.
out trajectories shorter than the observation horizon \(O_{s}\), we use 1 803 303 trajectories that have continuous motion.
We downsampled both datasets to \(2.5\,\mathrm{Hz}\). For observation, we take \(3.2\,\mathrm{s}\) (the first 8 positions) of the trajectory and use the remaining (up to \(50\,\mathrm{s}\) or 125 positions) as the prediction ground truth. In the parameter analysis, we also evaluate the effect of setting the observation horizon to different values.
Given the area covered by the ATC dataset (\(\sim\)\(900\,\mathrm{m}^{2}\)) and the THOR dataset (\(\sim\)\(150\,\mathrm{m}^{2}\)), the size and number of obstacles in the THOR dataset, and the trajectory lengths available in the datasets, we selected the parameters shown in Table I for our quantitative and qualitative experiments. Because the size of obstacles in the THOR setting is less than \(1\,\mathrm{m}\), we set the grid resolution to \(0.5\,\mathrm{m}\) when building the CLiFF-map from the THOR dataset, in contrast to \(1\,\mathrm{m}\) in the ATC dataset. Also, we set the prediction time step \(\Delta t\) to \(0.4\,\mathrm{s}\) for the cluttered THOR dataset, in contrast to \(1\,\mathrm{s}\) for the ATC dataset. In the parameter analysis we evaluate the impact of selecting \(\Delta t\) on prediction accuracy.
Sampling radius \(r_{s}\) and kernel parameter \(\beta\) are the main parameters of CLiFF-LHMP. The value of \(r_{s}\) is set to a multiple of the CLiFF-map grid resolution. For biasing the current direction with the sampled one, we use the default value of \(\beta=1\) for both datasets. The impact of both parameters is evaluated in the experiments. Using the ATC dataset, we specifically evaluate the influence of the three parameters (see Fig. 6): observation horizon \(O_{s}\in[1.2,3.2]\) s, sampling radius \(r_{s}\in[1,3]\)\(\,\mathrm{m}\), and kernel parameter \(\beta\in[0.5,10]\). We also evaluated the influence of the prediction time step \(\Delta t\in[0.4,1.0]\) s using the THOR dataset (see Fig. 7).
For the evaluation of the predictive performance we used the following metrics: _Average_ and _Final Displacement Errors_ (ADE and FDE) and _Top-k ADE/FDE_. ADE describes the error between points on the predicted trajectories and the ground truth at the same time step. FDE describes the error at the last prediction time step. _Top-k ADE/FDE_ compute the displacements between the ground truth position and the closest of the \(k\) predicted trajectories. For each ground truth trajectory we predict \(k\) = 20 trajectories.
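For reference, a minimal NumPy sketch of these displacement metrics is given below; predicted and ground-truth trajectories are assumed to be arrays of shape (T, 2) aligned by time step.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement error between aligned trajectories."""
    d = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=1)
    return d.mean(), d[-1]

def top_k_ade_fde(preds, gt):
    """Best ADE/FDE over k sampled predictions for one ground truth."""
    scores = [ade_fde(p, gt) for p in preds]
    return min(s[0] for s in scores), min(s[1] for s in scores)
```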
We stop prediction according to Alg. 1 when no dynamics data (i.e. SWGMMs) is available within the radius \(r_{s}\) from the sampled location (line 6). If one predicted trajectory stops before \(T_{s}\), it will only be included in the ADE/FDE evaluation up to the last available predicted point. When predicting for each ground truth trajectory, the prediction horizon \(T_{s}\) is either equal to its length or \(50\,\mathrm{s}\) for longer trajectories.
## V Results
In this section, we present the results obtained in ATC and THOR with our approach compared to two baselines. The performance evaluation is conducted using both quantitative and qualitative analysis, and we further investigate the approach's performance through a parameter analysis.
### _Quantitative Results_
Figs. 4 and 5 show the quantitative results obtained in the ATC and THOR datasets. We compare our CLiFF-LHMP approach with IS-MDP [4] and CVM. In the short-term perspective all approaches perform on par. The mean ADE is marginally lower for CVM compared to the other predictors below \(6\,\mathrm{s}\) in ATC, below \(10\,\mathrm{s}\) in THOR1, and below \(4\,\mathrm{s}\) in THOR3. In THOR3 there are more obstacles that people need to avoid, while THOR1 and ATC include more open spaces. In open spaces without obstacles, a constant velocity prediction is often a very good short-term predictor [6]. For our approach, which accounts for possible deviations from straight trajectories, the ADE for short-term predictions is slightly higher. For prediction horizons less than \(10\,\mathrm{s}\), IS-MDP performs better than CLiFF-LHMP. However, the IS-MDP method requires additional input (goal points and the obstacle map) and its performance strongly depends on both. In contrast, our approach makes predictions without explicit knowledge about goals and implicitly accounts for the obstacle layout, as well as the specific ways people navigate in the environment.
In long-term predictions above \(10\,\mathrm{s}\), both CLiFF-LHMP and IS-MDP outperform the CVM method. Our approach is substantially better than IS-MDP when the prediction horizon is above \(20\,\mathrm{s}\) since it implicitly exploits location-specific motion patterns, thus overcoming a known limitation of MDP-based methods [4]. Table II summarises the performance results of our method against the baseline approaches at the maximum prediction horizon. Our CLiFF-LHMP approach accurately predicts human motion up to \(50\,\mathrm{s}\) with a mean ADE of \(5\,\mathrm{m}\). At \(50\,\mathrm{s}\) in the ATC dataset, our method achieves a 45% ADE and 55% FDE improvement in performance compared to IS-MDP. At \(12\,\mathrm{s}\) in THOR1 and THOR3, our method achieves an improvement of 6.3% and 13.3% ADE (25.7%, 27.8% FDE) over IS-MDP, respectively.
Figs. 4 and 5 also show that the standard deviation of ADE and FDE is generally lower for CLiFF-LHMP predictions, compared to CVM and IS-MDP. This indicates that our approach makes more consistent predictions, both in the short- and long-term perspective.
Smaller values of \(\beta\) make the predictor trust the CLiFF-map more, which can lead to jumps between distinct motion patterns. Setting \(\beta\) to a high value such as 10 slightly improves the performance in short-term predictions; however, as with the CVM model, the CLiFF-LHMP predictor with high values of \(\beta\) is prone to fail at delivering long-term predictions. The reason is that we stop predicting when the CLiFF-map is no longer available close to the predicted location. So, if more trust is put on the CVM component, many ground truth trajectories cannot be predicted successfully for long prediction times. When the planning horizon is set to \(50\,\mathrm{s}\), 84% of ground truth trajectories can be predicted successfully with \(\beta=1\), while with \(\beta=10\), the ratio drops to 52.3%. Also, when the prediction is dominated by the CVM component, the top k-ADE/FDE scores are worse due to a reduced diversity of the predictions.
In the experiments with different values of the sampling radius \(r_{s}\) (see Fig. 6, right), we observed a stable prediction performance. Therefore, it is reasonable to set \(r_{s}=1\) in order to reduce the computation cost.
In our experiments with the prediction time step \(\Delta t\), we observe robust performance with slight improvement when making higher frequency predictions (\(\Delta t=0.4\,\mathrm{s}\) vs. \(1.0\,\mathrm{s}\), see Fig. 7). Smaller \(\Delta t\) is recommended in cluttered environments, such as in the THOR dataset. Making iterative predictions with a smaller time step naturally comes at the expense of computational cost increasing linearly for CLiFF-LHMP. Selecting a larger prediction time step \(\Delta t=1.0\,\mathrm{s}\) drops the performance in THOR by only approx. 5% at the maximum prediction horizon, as compared to \(\Delta t=0.4\,\mathrm{s}\).
### _Qualitative Results_
Figures 8 and 9 show qualitative results with example predictions. Our approach correctly captures the motion patterns in each scenario, utilizing the environment information during the prediction. Figure 9 shows that the predicted trajectories avoid the obstacles, even though an obstacle map is not used for predictions. Furthermore, using maps of dynamics built from the observations of human motion makes it possible to predict motion through regions which appear as obstacles in an occupancy map, for example across stairs and through narrow passages (see Fig. 8). Similarly, using the MoD input keeps predictions in more intensively used areas of the environment, avoiding semantically-insignificant and empty regions, e.g., corners of the room (see Fig. 9).
## VI Conclusions
In this paper we present the idea of using _Maps of Dynamics_ (MoDs) for long-term human motion prediction. By using MoDs, motion prediction can utilize previously observed spatial motion patterns that encode important information about how people typically move in a given environment. We present the CLiFF-LHMP approach to predict long-term motion using a CLiFF-map - a probabilistic representation of a velocity field built from isolated and possibly sparse flow information (i.e. complete trajectories are not required as input). In our approach, we sample directional information from a CLiFF-map to bias a constant velocity prediction.
We evaluate CLiFF-LHMP with two publicly available real-world datasets, comparing it to several baseline approaches. The results demonstrate that our approach can predict human motion in complex environments over very long time horizons. Our approach performs on-par with the state of the art for shorter periods (\(10\,\mathrm{s}\)) and significantly outperforms it in terms of ADE and FDE for longer periods of up to \(50\,\mathrm{s}\). We also showed that our method makes more consistent predictions and is not strongly sensitive to the observation horizon. By exploiting the learned motion patterns encoded in the CLiFF MoD, our method can implicitly infer common goal points and correctly predict trajectories that follow the complex topology of the environment, e.g., navigating around corners or obstacles, or passing through narrow passages such as doors.
Future work will include experimenting with other types of MoDs and motion prediction methods, sampling speed in addition to direction from the MoD, extending CLiFF-LHMP to multi-agent prediction, extending the evaluation to
Fig. 4: ADE/FDE (mean \(\pm\) one std. dev.) in the ATC dataset with prediction horizon 1-\(50\,\mathrm{s}\).
Fig. 5: ADE/FDE (mean \(\pm\) one std. dev.) in the THOR1 **(top)** and THOR3 **(bottom)** dataset with prediction horizon 0.4-\(12\,\mathrm{s}\).
outdoor datasets, as well as estimating confidence values for the predicted trajectories.
| Human motion prediction is important for the safe and smooth operation of mobile service robots and intelligent vehicles. More accurate predictions, particularly over long time horizons, improve system performance, for example by enabling collision risk assessment and planning ahead. In this paper, we apply Maps of Dynamics (MoDs), a general representation of spatial motion patterns, to long-term human motion prediction (LHMP). We propose CLiFF-LHMP, an MoD-based human motion prediction method that is data-efficient, explainable, and not affected by errors from an upstream tracking system. The method uses a specific MoD, the CLiFF-map, trained on human motion data recorded in the same environment. Based on samples drawn from the CLiFF-map, a constant velocity prediction is biased to generate multimodal trajectory predictions. Experiments with publicly available datasets |
2309.13013 | Performance Analysis of UNet and Variants for Medical Image Segmentation | Medical imaging plays a crucial role in modern healthcare by providing
non-invasive visualisation of internal structures and abnormalities, enabling
early disease detection, accurate diagnosis, and treatment planning. This study
aims to explore the application of deep learning models, particularly focusing
on the UNet architecture and its variants, in medical image segmentation. We
seek to evaluate the performance of these models across various challenging
medical image segmentation tasks, addressing issues such as image
normalization, resizing, architecture choices, loss function design, and
hyperparameter tuning. The findings reveal that the standard UNet, when
extended with a deep network layer, is a proficient medical image segmentation
model, while the Res-UNet and Attention Res-UNet architectures demonstrate
smoother convergence and superior performance, particularly when handling fine
image details. The study also addresses the challenge of high class imbalance
through careful preprocessing and loss function definitions. We anticipate that
the results of this study will provide useful insights for researchers seeking
to apply these models to new medical imaging problems and offer guidance and
best practices for their implementation. | Walid Ehab, Yongmin Li | 2023-09-22T17:20:40 | http://arxiv.org/abs/2309.13013v1 | # Performance Analysis of UNet and Variants for Medical Image Segmentation
###### Abstract
Medical imaging plays a crucial role in modern healthcare by providing non-invasive visualisation of internal structures and abnormalities, enabling early disease detection, accurate diagnosis, and treatment planning. This study aims to explore the application of deep learning models, particularly focusing on the UNet architecture and its variants, in medical image segmentation. We seek to evaluate the performance of these models across various challenging medical image segmentation tasks, addressing issues such as image normalization, resizing, architecture choices, loss function design, and hyperparameter tuning. The findings reveal that the standard UNet, when extended with a deep network layer, is a proficient medical image segmentation model, while the Res-UNet and Attention Res-UNet architectures demonstrate smoother convergence and superior performance, particularly when handling fine image details. The study also addresses the challenge of high class imbalance through careful preprocessing and loss function definitions. We anticipate that the results of this study will provide useful insights for researchers seeking to apply these models to new medical imaging problems and offer guidance and best practices for their implementation.
keywords: medical imaging, image segmentation, deep learning, performance evaluation, UNet, Res-UNet, Attention Res-UNet
## 1 Introduction
Medical image segmentation is a critical aspect of medical image analysis and computer-aided diagnosis, involving the partitioning of images into meaningful regions for the identification of structures such as organs, tumors, and vessels. Deep learning, with its ability to automatically extract complex features from vast medical image datasets, presents a promising solution to enhance segmentation accuracy. However, challenges persist due to the diversity of medical domains, necessitating tailored approaches and evaluation metrics.
This research's primary goal is to comprehensively study state-of-the-art deep learning methods, focusing on UNet [53] and its variants, Res-UNet [29] and Attention Res-UNet [44], renowned for their effectiveness in complex medical image segmentation tasks.
The main objectives of this work are to apply the UNet model and its variants to a number of representative medical image segmentation problems, adapt different image pre-processing and model training techniques, identify appropriate performance metrics, and evaluate the performance of these models. Hopefully, the findings of this study will offer useful guidance to researchers when applying these models to new medical imaging problems.
The remainder of this paper is organised as follows. The problems of medical imaging and previous studies on segmentation, particularly medical image segmentation, are reviewed in Section 2. The details of UNet, its variants, and evaluation methods are discussed in Section 3. The applications of the above models to three problems of medical image segmentation, including brain tumor segmentation, polyp segmentation, and heart segmentation, are presented in Sections 4, 5, and 6, respectively. Finally, the findings and future work are presented in Section 7.
## 2 Background
Medical imaging has been widely employed by healthcare professionals for the evaluation of various anatomical structures. Medical image segmentation is the process of assigning labels to individual pixels within an image, thereby converting raw images into meaningful spatial data [41]. Currently, clinicians still largely perform this segmentation manually, a time-consuming process prone to both intra- and inter-observer variations [27]. The adoption of automatic segmentation methods holds significant promise, as it can enhance reproducibility and streamline clinical workflows. This is particularly relevant in the face of growing healthcare demands and a shortage of healthcare providers [42]. The advance of new technologies has made it possible for automatic organ segmentation [9], tumor segmentation [40], vessel segmentation [25], lesion detection and segmentation [52, 60], cardiac segmentation [7], brain segmentation [31, 28], and bone segmentation [26, 43], to name a few, in clinical practice.
Medical image segmentation is inherently influenced by the imaging modality employed. Computed Tomography (CT) imaging presents challenges related to similar tissue intensities, three-dimensional data, and radiation exposure control [30]. Magnetic Resonance Imaging (MRI) introduces complexities in multi-contrast imaging, noise, and artifacts, as well as lengthy acquisition
times [67, 50]. Ultrasound imaging, although operator-dependent and prone to speckle noise, offers real-time imaging without ionizing radiation. Understanding the distinct characteristics and challenges of each modality is crucial for selecting appropriate segmentation techniques and optimizing the accuracy of medical image analysis [51, 66, 19]. Positron Emission Tomography (PET) imaging, commonly used for functional studies and cancer detection, faces resolution-noise trade-offs and requires advanced algorithms for accurate segmentation, distinguishing physiological from pathological regions [37]. X-ray imaging faces challenges due to the inherent two-dimensional projection of three-dimensional structures [1], making accurate segmentation difficult due to overlapping structures and low contrast [8].
Historically, image segmentation can be performed by using low-level image processing methods. For examples, thresholding is a straightforward technique that involves selecting a threshold value and classifying pixels as foreground or background based on their intensity values [59]. Region-based segmentation methods focus on grouping pixels based on their spatial and intensity similarities [24]. The Watershed transform, introduced by Beucher and Serge[5], is a region-based segmentation technique that has found applications in contour detection and image segmentation.
Statistical methods have also been developed for image segmentation. K-means clustering is a widely recognized method for partitioning an image into K clusters based on pixel intensity values [33]. Active contours, often referred to as "snakes," were introduced by Kass, Witkin, and Terzopoulos [39]. Probabilistic modelling for medical image segmentation was presented in [34, 35, 36] where the Expectation-Maximisation process is adopted to model each segment as a mixture of Gaussians. The graph cut method utilises graph theory to partition an image into distinct regions based on pixel similarities and differences [10, 11, 54, 55, 56, 57, 58]. The Markov Random Field (MRF) was adopted in [20, 21, 22, 23] for lesion segmentation in dermoscopy images in combination with particle swarm optimisation, and for optic disc segmentation [55] and choroidal layer segmentation [65]. The level-set method, based on partial differential equations (PDEs), progressively evaluates the differences among neighbouring pixels to find object boundaries and evolves contours to delineate regions of interest [12, 13, 14, 15, 16, 17, 61, 62, 63, 64, 65].
Over the past decade, Deep Learning (DL) techniques have stood as the cutting-edge approach for medical image segmentation. Convolutional Neural Networks (CNNs) are inherently suited to volumetric medical image segmentation tasks. They can be customized by adjusting network depth and width to balance computational efficiency and segmentation accuracy. Ensembling multiple 3D CNNs with diverse architectures has been effective in improving robustness and generalization to different medical imaging modalities [18]. Fully Convolutional Networks (FCNs) have been successfully adapted to
medical image segmentation tasks by fine-tuning pre-trained models or designing architectures tailored to specific challenges. In scenarios where anatomical structures exhibit varying shapes and appearances, FCNs can be modified to include multi-scale and skip connections to capture both local and global information[40].
The UNet [53] represents the most widely embraced variation among DL networks, featuring a U-shaped architecture with skip connections that enables the accurate delineation of objects in images [40]. SegNet, an encoder-decoder architecture, offers adaptability to various medical imaging modalities. Its encoder can be customized to incorporate domain-specific features, such as texture and intensity variations present in medical image[2]. Additionally, the decoder can be modified to handle the specific shape and structure of objects within the medical images, ensuring precise segmentation[38].
ResUNet [29] extends the UNet architecture by introducing residual connections, which enable the network to train effectively, even with a large number of layers, thereby improving its ability to capture complex features in medical images. The integration of residual blocks in ResUNet facilitates the training of deeper networks and enhances segmentation accuracy, making it a valuable choice for tasks demanding the precise delineation of anatomical structures in medical image analysis. Attention ResUNet [44] builds on the ResUNet framework by incorporating attention mechanisms, allowing the network to selectively focus on informative regions in the input image while suppressing noise and irrelevant features. By introducing self-attention or spatial attention modules, Attention ResUNet enhances its segmentation capabilities, particularly in scenarios in which fine details and subtle variations in medical images are critical for accurate segmentation and diagnosis.
Recently, the nnUNet automatic segmentation framework, with its self-configuration mechanism taking into consideration both computer-hardware capabilities and dataset-specific properties, has demonstrated segmentation performance that matches or closely approaches the state-of-the-art, as indicated in a study [32]. Extended models of nnUNet have been reported in [45, 46, 47, 48] for various medical imaging applications.
The exploration of traditional image segmentation methods has revealed their strengths and limitations in simpler tasks but exposed vulnerabilities in complex medical imaging. General segmentation techniques adapted for medical applications, such as the Watershed transform and active contours, have shown promise in specific areas but come with their own limitations. The various domains of medical image segmentation, each with its unique challenges, highlight the complexity of this field. These challenges range from organ shape variability to tumor heterogeneity and vessel intricacies. In light of these challenges, the importance of UNet and its variants becomes evident. These deep
learning approaches offer the potential to overcome the limitations of traditional methods, promising more accurate and adaptable segmentation solutions for complex medical images. Exploring UNet and its variants signifies a journey into harnessing the power of deep learning to address the intricacies of medical image segmentation. This endeavor seeks not only to understand the foundations of UNet but also to explore its potential in overcoming the limitations of traditional methods. Ultimately, this exploration aims to advance medical image analysis, leading to improved healthcare quality and patient outcomes in this critical field.
## 3 Methods
An overview of the deep learning models, including UNet, Res-UNet, and Attention Res-UNet, is provided in this section with details of the network architectures, filters of individual layers, connections between layers, specific functional mechanisms such as attention, activation functions, and normalisation.
### UNet
UNet is a convolutional neural network (CNN) architecture that was originally designed for biomedical image segmentation but has found applications in a wide range of image analysis tasks. Introduced by Ronneberger et al. in 2015 [53], UNet's architecture is characterized by its unique encoder-decoder structure and skip connections. Figure 1a shows the general UNet architecture adopted for this project.
UNet's architecture consists of two main components: the contracting path (encoder) and the expansive path (decoder). This design enables UNet to capture both global and local features of the input image, making it highly effective for segmentation tasks.
**Contracting path (Encoder):** The contracting path is responsible for feature extraction. The UNet model built in this project has four encoding layers. Each encoding layer consists of one convolution block of two convolution layers, each followed by a batch normalisation layer and a ReLU activation layer, as shown in Fig. 1b. The output of the convolution block is then passed through a down-sampling layer with max-pooling to reduce the spatial dimensions of the feature maps. The contracting path is crucial for building a rich feature representation. After the four encoding layers, the output passes through a bottleneck layer and then the upsampling layers (decoders).
**Expansive path (Decoder):** The expansive path aims to recover the original resolution of the image. The UNet model has four decoding layers. It comprises up-sampling and transposed convolutional layers. Importantly, skip connections connect the encoder and decoder at multiple levels. These skip connections allow the decoder to access feature maps from the contracting path, preserving spatial information and fine details.
**Skip connections:** Skip connections are a key innovation in UNet's architecture. They address the challenge of information loss during up-sampling. By providing shortcut connections between corresponding layers in the encoder and decoder, skip connections enable the model to combine low-level and high-level features effectively. This ensures that fine details are retained during the segmentation process.
**Kernel size and number of filters:** Throughout the structure, a kernel size of 3 is maintained for the convolution layers, as this filter size is common in image segmentation tasks. Smaller filter sizes capture local features, while larger filter sizes capture more global features. The number of filters in the first layer is set to 64. This is a common practice to start with a moderate number of filters and gradually increase the number of filters in deeper layers. It allows the network to learn hierarchical features.
**Final Fully Connected Convolutional layer:** The output passes through
Figure 1: UNet. (a) Network architecture. (b) Details of the convolution block.
a final fully connected convolution layer after the four decoding layers. The kernel size of the last layer depends on the number of classes (labels) present in the mask and is therefore tailored to the needs of each task. The output of the convolutional layer passes through an activation function to produce the final output; the activation function used also depends on the number of labels in the output. The final kernel size and activation layer are specified for each task in the following sections.
UNet's design makes it particularly effective for tasks where precise localization and detailed segmentation are required, such as medical image segmentation.
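As an illustration, a Keras-style sketch of the convolution block and one encoder level described above is given below. It is a simplified reconstruction of the described design (framework choice included), not the exact code used in this study.

```python
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by batch norm and ReLU (Fig. 1b)."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    return x

def encoder_level(x, filters):
    """One contracting-path level: conv block plus 2x2 max-pooling."""
    skip = conv_block(x, filters)             # kept for the skip connection
    down = layers.MaxPooling2D(pool_size=2)(skip)
    return skip, down

inputs = layers.Input(shape=(256, 256, 3))
skip1, x = encoder_level(inputs, 64)          # first level starts with 64 filters
```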
### Res-UNet
Res-UNet is an extension of UNet that incorporates residual connections. Residual connections were introduced in the context of residual networks (ResNets) [29] to address the vanishing gradient problem in deep networks. Res-UNet combines the strengths of UNet with the benefits of residual connections. The convolution block of UNet is replaced here with a residual block, which introduces an addition layer between the input of each block and the output of its last 3x3 convolutional layer.
Figure 2: Res-UNet. (a) Network architecture; (b) Details of the residual convolution block.
**Residual Connections:** Res-UNet incorporates residual connections between layers. These connections allow gradients to flow more easily during training, enabling the training of deeper networks without suffering from vanishing gradients.
**Enhanced Information Flow:** The use of residual connections enhances the flow of information through the network, enabling it to capture long-range dependencies and complex structures in medical images.
The Res-UNet model adopted in this project has four encoding and four decoding layers. The overall architecture of the Res-UNet model and the residual convolutional block are shown in Figs. 2a and 2b. Res-UNet is known for its ability to handle deeper networks, which can be advantageous for capturing intricate details in medical images.
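Following the same conventions as the UNet sketch above, the residual convolution block could be written as follows; the 1x1 convolution on the shortcut (to match channel counts) is an illustrative choice rather than a detail confirmed by the study.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    """Conv block with an additive shortcut from the block input (Fig. 2b)."""
    shortcut = layers.Conv2D(filters, 1, padding='same')(x)  # match channels
    y = layers.Conv2D(filters, 3, padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])            # residual connection
    return layers.Activation('relu')(y)
```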
### Attention Res-UNet
The Attention Res-UNet model [44] builds on the Res-UNet architecture but introduces attention mechanisms. This is achieved through a gating signal, which brings the output of the lower layer to the same dimension as the current layer, and an attention block, which combines information from two sources, the input feature map (x) and the gating signal (gating), to compute attention weights that determine how much focus should be given to different spatial regions of the input feature map. Attention mechanisms enable the network to focus on salient regions of the input, improving its ability to differentiate between important and less important features. The key steps
Figure 3: Proposed Attention Res-UNet Architecture
taken to implement the two blocks are explained below:
**Gating Signal:** The gating signal is a subnetwork or a set of operations employed to modulate the flow of information in an attention mechanism. In this specific implementation, the gating signal is generated as follows:
* **Convolutional Layer:** A convolutional layer is used to transform the input features into a format compatible with the requirements of the attention mechanism. It adjusts the feature dimensionality if necessary.
* **Batch Normalization (Optional):** An optional batch normalization layer is applied to ensure that the output of the convolutional layer is well-scaled and centered, thereby aiding in stabilizing training.
* **ReLU Activation:** The ReLU activation function introduces non-linearity to the gating signal, helping capture complex patterns and relationships in the data.
**Attention Block:** The attention block is a critical part of attention mechanisms employed in neural networks. Its primary purpose is to combine information from two sources: the input feature map (\(x\)) and the gating signal (_gating_). Here's a breakdown of its functionality:
* **Spatial Transformation (Theta_x):** The input feature map (\(x\)) undergoes spatial transformation using convolutional operations. This transformation ensures that the feature map aligns with the dimensions of the gating signal.
* **Gating Signal Transformation (Phi_g):** Similarly, the gating signal is subjected to transformation via convolutional operations to ensure appropriate spatial dimensions.
* **Combining Information:** The transformed gating signal (_Phi_g_) and the spatially transformed input feature map (_Theta_x_) are combined to capture relationships between different parts of the input.
* **Activation (ReLU):** The ReLU activation function is applied to the combined information, introducing non-linearity and enabling the capture of complex relationships.
* **Psi and Sigmoid Activation:** The combined information is further processed to produce attention weights (_Psi_) using convolutional layers and a sigmoid activation. The sigmoid activation ensures that the attention weights are within the range of 0 to 1, indicating the degree of attention assigned to each spatial location.
* **Upsampling Psi:** The attention weights are upsampled to match the spatial dimensions of the original input feature map, ensuring alignment with the input.
* **Multiplication (Attention Operation):** The attention weights are multiplied element-wise with the original input feature map (\(x\)). This operation
effectively directs attention to specific spatial locations in the feature map based on the computed attention weights.
* **Result and Batch Normalization:** The final result is obtained by applying additional convolutional layers and optional batch normalization, ensuring that the output is appropriately processed.
The gating signal prepares a modulating signal that influences the attention mechanism in the attention block. The attention block computes attention weights to focus on relevant spatial regions of the input feature map, which is particularly useful in tasks requiring fine-grained detail capture, such as image segmentation or object detection. The attention mechanism aids the network in prioritizing and weighting different spatial locations in the feature map, ultimately enhancing performance.
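A condensed Keras-style sketch of the gating signal and attention block described above follows. The intermediate channel count, the stride-2 transformation of x (which assumes the gating signal has half the spatial resolution of x), and the final 1x1 convolution are illustrative assumptions, not the exact implementation.

```python
from tensorflow.keras import layers

def gating_signal(x, filters):
    """1x1 convolution, batch normalization and ReLU producing the gating signal."""
    g = layers.Conv2D(filters, 1, padding='same')(x)
    g = layers.BatchNormalization()(g)
    return layers.Activation('relu')(g)

def attention_block(x, gating, inter_channels):
    """Compute attention weights from (x, gating) and reweight x with them."""
    theta_x = layers.Conv2D(inter_channels, 2, strides=2, padding='same')(x)
    phi_g = layers.Conv2D(inter_channels, 1, padding='same')(gating)
    f = layers.Activation('relu')(layers.Add()([theta_x, phi_g]))
    psi = layers.Conv2D(1, 1, padding='same')(f)
    psi = layers.Activation('sigmoid')(psi)        # attention weights in [0, 1]
    psi = layers.UpSampling2D(size=2)(psi)         # back to x's resolution
    y = layers.Multiply()([x, psi])                # apply attention to x
    y = layers.Conv2D(int(x.shape[-1]), 1, padding='same')(y)
    return layers.BatchNormalization()(y)
```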
### Evaluation Methods
The following metrics were adopted to evaluate the performance of the models:
**Execution time:** Execution time is recorded for the training of each model. This is done to understand how long a model takes to converge. It is implemented using the datetime library in Python.
**Validation Loss over Epochs**: The change in validation loss over the training period gives an indication of model convergence. Convergence graphs show how efficiently a model trains and converges, the lowest loss achieved on the validation data, and the fluctuations in loss, which reflect a model's stability. These graphs provide an initial basis for comparing the different models.
**The Dice Similarity Coefficient:** Also known as the Sorensen-Dice coefficient, the Dice coefficient is a metric used to quantify the similarity or overlap between two sets or groups. In the context of image segmentation and binary classification tasks, the Dice coefficient is commonly employed to evaluate the similarity between two binary masks or regions of interest (ROIs).
Formally, the Dice Similarity Coefficient (DSC) is defined as:
\[DSC=\frac{2\times|A\cap B|}{|A|+|B|} \tag{1}\]
where:
\(A\) is the first set or binary mask (e.g., the predicted segmentation mask);
\(B\) is the second set or binary mask (e.g., the ground truth or reference mask);
\(|\cdot|\) denotes the cardinality of a set, i.e., the number of elements in the set;
\(\cap\) denotes the intersection operation.
The Dice coefficient produces a value between 0 and 1, where:
* \(DSC=0\) indicates no overlap or dissimilarity between the two sets. It means that there is no commonality between the predicted and reference masks.
* \(DSC=1\) indicates perfect overlap or similarity between the two sets. It means that the predicted mask perfectly matches the reference mask.
In the context of image segmentation, the Dice coefficient is a valuable metric because it measures the agreement between the segmented region and the ground truth. It quantifies how well the segmentation result matches the true region of interest. Higher DSC values indicate better segmentation performance.
**Intersection over Union (IoU) or Jaccard Index:** IoU measures the overlap between the predicted segmentation mask (\(A\)) and the ground truth mask (\(B\)). It is calculated as the intersection of the two masks divided by their union. A higher IoU indicates better segmentation accuracy.
\[IoU=\frac{|A\cap B|}{|A\cup B|}\]
where:
\(A\) is the predicted mask;
\(B\) is the ground truth mask.
In this formula, \(|A\cap B|\) denotes the cardinality of the intersection of sets \(A\) and \(B\), and \(|A\cup B|\) represents the cardinality of their union. IoU quantifies the extent to which the predicted mask and the ground truth mask overlap, providing a valuable measure of segmentation accuracy. The implementation of the Jaccard index in Python is given below.
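As the original listing is not reproduced here, the following is a minimal Keras-backend sketch of the Jaccard index, with the companion Dice coefficient of Eq. (1) included for completeness; the smoothing constant is an illustrative choice.

```python
from tensorflow.keras import backend as K

def jaccard_index(y_true, y_pred, smooth=1e-6):
    """Intersection over Union between binary masks (flattened)."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    union = K.sum(y_true_f) + K.sum(y_pred_f) - intersection
    return (intersection + smooth) / (union + smooth)

def dice_coef(y_true, y_pred, smooth=1e-6):
    """Dice similarity coefficient, Eq. (1)."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
```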
**Confusion Matrix:** A confusion matrix provides a detailed breakdown of true positive, true negative, false positive, and false negative predictions. It is useful for understanding the model's performance on different classes or categories within the segmentation task. This is implemented using the confusion_matrix function from Python's scikit-learn.
**Precision:** Precision assesses the accuracy of an algorithm in correctly identifying relevant pixels or regions. It's the ratio of true positive pixels (correctly segmented) to all pixels identified as positive by the algorithm. High precision indicates that when the algorithm marks a pixel or region as part of the target object, it's usually correct. In medical image segmentation, high precision means that when the algorithm identifies an area as a specific organ or structure, it's likely to be accurate, reducing false positives.
**Recall:** Recall, also called sensitivity or true positive rate, gauges an algorithm's capacity to accurately identify all relevant pixels or regions in an image. It's the ratio of true positive pixels to the total pixels constituting the actual target object or region in the ground truth. A high recall value signifies that the algorithm excels at locating and encompassing most of the genuine target object or region. In medical image segmentation, a high recall means the algorithm effectively identifies and includes most relevant anatomical structures, reducing the likelihood of false negatives.
## 4 Brain Tumor Segmentation
The task of brain tumor segmentation involves identifying and delineating the boundaries of brain tumors in medical images, specifically in brain MRI scans. The goal of this segmentation task is to automatically outline the shape and extent of lower-grade gliomas (LGG) within the brain images.
### Pre-processing
The dataset used in this study was obtained from Kaggle [6] and was originally sourced from The Cancer Genome Atlas Low Grade Glioma Collection (TCGA-LGG)[49]. It includes brain MR images that are accompanied by manually created FLAIR abnormality segmentation masks. The dataset contains MRI FLAIR image data for 110 patients. Each MRI image is an RGB image with three channels, and each mask is a 2D black and white image.
The dataset originally contained 1200 patient images and masks, with 420 masks indicating the presence of tumors. To focus the model on tumor segmentation, images without tumor annotations were removed. The dataset was then split into training, testing, and validation sets using an 8:1:1 ratio. To handle this data efficiently, a Data Generator was employed, a crucial tool in deep learning, particularly for large datasets that do not fit in memory. Data Generators process data in smaller batches during training, effectively managing computational resources and ensuring real-time preprocessing during model training. The pre-processing steps each image goes through are listed below, followed by a short code sketch of the pipeline:
1. **Image Resizing**: Images in the dataset were resized to a standard 256 by 256 pixel dimension to ensure compatibility with neural network architectures. This choice balances between preserving important details, which smaller sizes might lose, and avoiding unnecessary noise, which larger sizes could introduce.
2. **Standardization**: Image and mask data were standardized by adjusting their pixel values to have a mean of 0 and a standard deviation of 1. This uniform scaling simplifies data for deep learning models, promoting convergence and training stability.
3. **Normalisation of mask images**: The mask images, initially with binary values (0 for background, 1 for the mask), had their values become floating-point during resizing. To prepare them for model training, their dimensions were expanded by one to (256x256x1), followed by a thresholding operation. Pixel values greater than 0 were set to 1 (indicating a tumor), while values equal to or less than 0 were set to 0 (representing the background), maintaining binary suitability for training.
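Below is a minimal sketch of these three steps. The use of OpenCV for resizing is an illustrative assumption rather than the exact pipeline of the study, and for simplicity only the image is standardized here (the study standardizes image and mask before re-binarizing the mask).

```python
import numpy as np
import cv2

def preprocess_pair(image, mask, size=(256, 256)):
    """Resize, standardize the image, and re-binarize the mask."""
    image = cv2.resize(image, size).astype(np.float32)
    mask = cv2.resize(mask, size).astype(np.float32)
    # Standardization: zero mean, unit standard deviation.
    image = (image - image.mean()) / (image.std() + 1e-8)
    # Resizing turns mask values into floats; threshold back to {0, 1}.
    mask = np.expand_dims(mask, axis=-1)           # shape (256, 256, 1)
    mask = (mask > 0).astype(np.float32)
    return image, mask
```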
### Model Training
#### 4.2.1 Loss Function
**Binary Focal Loss** is a specialized loss function used in binary classification tasks, particularly when dealing with imbalanced datasets or cases where certain classes are of greater interest than others. It is designed to address the problem of class imbalance and focuses on improving the learning of the minority class. Formally, the Binary Focal Loss (BFL) is defined as follows:
\[BFL=-(1-p_{t})^{\gamma}\cdot\log(p_{t}) \tag{2}\]
where:
* \(p_{t}\) represents the predicted probability of the true class label;
* \(\gamma\) is a tunable hyperparameter known as the focusing parameter;
* \(\log(\cdot)\) is the natural logarithm.
The Binary Focal Loss has the following key characteristics:
* It introduces the focusing parameter \(\gamma\) to control the degree of importance assigned to different examples. Higher \(\gamma\) values emphasize training on hard, misclassified examples, while lower values make the loss less sensitive to those examples.
* When \(\gamma=0\), the Binary Focal Loss reduces to the standard binary cross-entropy loss.
* The term \((1-p_{t})^{\gamma}\) is a modulating factor that reduces the loss for well-classified examples (\(p_{t}\) close to 1) and increases the loss for misclassified examples (\(p_{t}\) close to 0).
* BFL helps the model focus more on the minority class, which is especially useful in imbalanced datasets where the majority class dominates.
* It encourages the model to learn better representations for challenging examples, potentially improving overall classification performance.
* The loss is applied independently to each example in a batch of data during training.
The Binary Focal Loss is a valuable tool in addressing class imbalance and improving the training of models for imbalanced binary classification tasks. By introducing the focusing parameter, it allows practitioners to fine-tune the loss function according to the specific characteristics of their dataset and the importance of different classes.
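A Keras-style sketch of this loss for binary masks is shown below; the default \(\gamma=2\) and the optional class-balancing factor \(\alpha\) (which does not appear in Eq. (2)) are common choices rather than values reported in this study.

```python
from tensorflow.keras import backend as K

def binary_focal_loss(gamma=2.0, alpha=0.25):
    """Binary focal loss, Eq. (2), with an optional balancing factor alpha."""
    def loss(y_true, y_pred):
        eps = K.epsilon()
        y_pred = K.clip(y_pred, eps, 1.0 - eps)
        # p_t: predicted probability assigned to the true class.
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        return K.mean(-alpha_t * K.pow(1.0 - p_t, gamma) * K.log(p_t))
    return loss
```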
#### 4.2.2 Model Design Choices
**UNet:** The UNet model has an input shape of (256, 256, 3) for the RGB images and an output layer of shape (256, 256, 1) for the mask output. The final output layer consists of 1x1 convolutional layers followed by batch normalization and sigmoid activation. These layers produce the segmentation mask, where each pixel is classified as either part of the object or background. Sigmoid activation is used for binary segmentation. The model has a total of 31,402,501 parameters with 31,390,723 being trainable.
**Res-UNet:** The Res-UNet model takes RGB images with an input shape of (256, 256, 3) and produces a mask output with an output layer of shape (256, 256, 1). The last layer of the model comprises 1x1 convolutional layers, which are followed by batch normalization and a sigmoid activation function.
**Attention Res-UNet:** Attention Res-UNet follows the same input and output configuration as the previous models due to the same input and output image and mask specifications.
#### 4.2.3 Callbacks
Three callbacks are assigned to the models:
1. EarlyStopping Callback (EarlyStopping): The EarlyStopping callback monitors validation loss during training. If there's no improvement (decrease) for 20 epochs, training stops early to prevent overfitting and save time,
ensuring the model doesn't learn noise or deviate from the optimal solution.
2. ReduceLROnPlateau Callback: The ReduceLROnPlateau callback is used to optimize the model's training by lowering the learning rate when the validation loss reaches a plateau or stops improving, aiding the model in fine-tuning and avoiding local minima. The callback monitors 'val_loss' in 'min' mode and provides informative updates (verbose=1). If there is no improvement in validation loss for 10 consecutive epochs (patience=10), it reduces the learning rate by a factor of 0.2 (to 20% of its previous value), with a 'min_delta' parameter of 0.0001 to ensure that only meaningful improvements count.
3. Checkpointer: A checkpointer is specified which saves the weights of the trained model only when the validation loss improves.
#### 4.2.4 Model Compilation and Fitting
The models are compiled using the Adam optimizer with an initial learning rate of 1e-5. Multiple initial learning rates were tested, with higher rates causing divergence and lower rates slowing down training. Two compilations are done for each model, one using the Dice coefficient as the loss function and the other using the Binary Focal loss. Training is initially run for 100 epochs, using the training data for optimisation and the validation data for monitoring.
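A condensed sketch of this training setup is given below; the model object, data generators and checkpoint path are placeholders, the loss and metric functions refer to the sketches in the earlier sections, and a Dice-based loss would simply replace the focal loss in the compile call.

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor='val_loss', patience=20, verbose=1),
    ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.2,
                      patience=10, min_delta=0.0001, verbose=1),
    ModelCheckpoint('best_weights.h5', monitor='val_loss',
                    save_best_only=True, save_weights_only=True),
]

model.compile(optimizer=Adam(learning_rate=1e-5),
              loss=binary_focal_loss(),           # or a Dice-based loss
              metrics=[dice_coef, jaccard_index])

history = model.fit(train_generator, validation_data=val_generator,
                    epochs=100, callbacks=callbacks)
```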
### Results
The model training times and epochs run are listed in Table 1. The differences in execution times among the models indicate varying computational resource requirements during training. Notably, Attention Res-UNet emerges as the model with the longest training duration. This extended duration could be attributed to the model's complexity that necessitated additional time for convergence.
Regarding training behavior, the number of epochs completed by each model offers insights into their respective convergence behaviors. The UNet model
\begin{table}
\begin{tabular}{|c|c|c|} \hline Models & Execution Time & Epoch \\ \hline UNet & 34 min 20 sec & 69 \\ \hline Res-UNet & 42 min 1 sec & 89 \\ \hline Attention Res-UNet & 1 hr 1 min & 100 \\ \hline \end{tabular}
\end{table}
Table 1: Execution time and epochs of trained models
exhibits a comparatively lower number of epochs, implying a relatively swift convergence. This is indicative of a particularly efficient training process. Conversely, Res-UNet and Attention Res-UNet underwent more extensive training, implying potentially more intricate model architectures or the need for extended training periods to achieve convergence.
Furthermore, it's worth noting that some models concluded training prematurely due to a lack of improvement in validation loss, as evidenced by their lower epoch counts. This highlights the consideration of early stopping strategies, a common technique used to curtail training and prevent overfitting. This observation raises the need for discussions on optimizing model performance and making thoughtful decisions about resource allocation during training.
The sub-figures in Figure 4 depict the evolution of the Binary Focal Loss over epochs for training and validation data for three different models. These results offer insights into how these models perform during the training process.
1. **Initial Validation Loss:** Initial validation losses vary among the models. UNet starts with a high initial loss (around 15), indicating initial difficulty in accurate predictions. In contrast, Res-UNet begins with a lower loss (around 8), while Attention Res-UNet starts with an even lower loss (approximately 2), suggesting that the latter two models make relatively better predictions from the start.
2. **Early Epoch Performance:** All three models exhibit a rapid decrease in validation loss within the first ten epochs. This implies that they quickly learn to capture relevant patterns in the data and improve their predictions during this early training phase.
3. **Stability in Training:** During training, all models maintain generally low validation losses, with some fluctuations. UNet exhibits significant fluctuations towards training's end, suggesting sensitivity to data variations. In contrast, Res-UNet shows minor early fluctuations but stabilizes. Attention Res-UNet also experiences initial fluctuations, but they are much smaller than in the other models.
4. **Comparison of Model Performance:** UNet quickly reduces validation loss at the start but has higher fluctuations later. Res-UNet starts with a moderate loss, has some early fluctuations, and stabilizes. Attention Res-UNet consistently performs well from the beginning with minimal fluctuations.
Figure 4: Change in Binary focal loss for each model
Overall, these results highlight trade-offs between rapid initial learning and stability in model performance. UNet learns quickly but exhibits greater instability, while Res-UNet and Attention Res-UNet provide more consistent and reliable predictions. Table 2 provides performance metrics for UNet, Res-UNet, and Attention Res-UNet, when applied to test data.
#### 1. Focal Loss:
All the models achieve low focal loss, with Res-UNet and Attention Res-UNet outperforming UNet. Attention Res-UNet achieves the lowest Focal Loss, highlighting its proficiency in addressing class imbalance. This means that the variants perform better at focusing on hard-to-classify pixels, which, in this case, belong to the tumor class.
#### 2. Accuracy:
Res-UNet and Attention Res-UNet exhibit impressive accuracies, approximately 99.6%, surpassing UNet, which achieves 98.7%. Both Res-UNet and Attention Res-UNet excel in pixel-level classification accuracy.
#### 3. Precision and Recall:
Res-UNet demonstrates superior precision, indicating accurate positive pixel classification with minimal false positives. UNet and Attention Res-UNet exhibit slightly lower precision values. Conversely, Attention Res-UNet achieves the highest recall, suggesting its effectiveness in capturing a larger proportion of true positives.
#### 4. Dice Coefficient:
Res-UNet achieves the highest Dice coefficient at approximately 0.931, signifying accurate spatial predictions. UNet and Attention Res-UNet yield slightly lower Dice coefficients but maintain strong performance.
#### 5. Intersection over Union (IoU):
Res-UNet achieves the highest IoU of approximately 0.870, indicating superior spatial overlap. UNet and Attention Res-UNet record slightly lower IoU values, though they continue to deliver commendable results in this aspect.
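For reference, the sketch below shows one common way the Dice coefficient and IoU reported above can be computed from binary masks; the exact implementation used in this work may differ (e.g., in smoothing constants or thresholding).

```python
# Dice and IoU for binary masks; predictions are assumed to be thresholded at 0.5.
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def iou(y_true, y_pred, eps=1e-7):
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)
```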
In summary, Res-UNet and Attention Res-UNet consistently outperform UNet across multiple performance metrics, underscoring their superior performance in image segmentation on the test data.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Focal Loss & Accuracy & Precision & Recall & Dice & IoU \\ \hline UNet & 0.0169 & 0.987 & 0.852 & 0.623 & 0.72 & 0.563 \\ Res-UNet & 0.0062 & **0.996** & **0.923** & 0.939 & **0.931** & **0.870** \\ Attention Res-UNet & **0.0055** & **0.996** & 0.902 & **0.946** & 0.923 & 0.858 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance Metrics for UNet, Res-UNet, and Attention Res-UNet on test data
Res-UNet excels in precision, Dice coefficient, and IoU, while Attention Res-UNet achieves the highest recall.
### Discussions
Figure 5 shows four examples with the input image and ground-truth mask, followed by the predictions from the three models. The four examples were chosen because they represent the different types of results observed across the whole test set.
UNet exhibits sensitivity to tumor features and shows promise in identifying likely tumor locations but tends to misclassify tumor pixels as background, leading to false negatives. It also mistakenly classifies background pixels as tumors, causing false positives and impacting precision. Res-UNet and Attention Res-UNet, on the other hand, deliver highly accurate predictions, capturing fine details and maintaining a balance between sensitivity and specificity. While they occasionally overestimate tumor presence, these misclassifications are minor.
UNet and its variants perform adequately in most cases but struggle when tumors are very small or have complex boundaries. They also face challenges with class imbalance, resulting in misclassification and poor recall. Res-UNet and Attention Res-UNet mitigate these limitations, successfully locating tumors in challenging conditions and reducing misclassifications significantly. Attention Res-UNet excels in handling class imbalance.
Figure 5: Segmentation results by the three models for four different examples, from left to right are the input images, ground-truth, segmentation results by UNet, Res-UNet and Attention Res-UNet.
Despite variations in performance, all models achieve high accuracy scores. However, accuracy is not a reliable metric here: because tumors are small relative to the background, accuracy can remain high even when tumor pixels are misclassified, making it a poor basis for assessing model performance.
## 5 Polyp Segmentation
Polyp segmentation refers to the process of identifying and delineating the boundaries of polyps in medical images, particularly in endoscopy and colonoscopy. The goal of this segmentation task is to automatically outline the shape and extent of polyps in colonoscopy frames.
### Pre-processing
The CVC-ClinicDB dataset [3] is utilized for the segmentation task, featuring frames extracted from colonoscopy videos showcasing polyps. It includes corresponding ground truth masks outlining polyp regions. The dataset consists of two main types of images: original colonoscopy frames accessible at 'original/frame_number.tiff' and corresponding polyp masks at 'ground truth/frame_number.tiff'.
A Pandas DataFrame is employed to manage image and mask paths. The DataFrame is used to split the data into training, testing, and validation sets in an 8:1:1 ratio. A Dataset generator processes images and masks in the training and validation data one by one, using a 'tf_parse()' function to read, resize, and preprocess them for compatibility with the program's requirements. The pre-processing steps are listed below:
1. **Reading the Image:** The function first reads the image from the file path \(x\) using OpenCV (cv2.imread). This reads the image as it is in its original form.
2. **Resizing the Image:** After reading, the image is resized to a fixed size of \(256\times 256\) pixels using OpenCV's cv2.resize function. This resizing ensures that all images have the same dimensions, which is typically necessary for training deep learning models.
3. **Normalizing the Image:** The pixel values of the resized image are scaled to a range between 0 and 1 by dividing all pixel values by 255.0. Normalizing the pixel values helps the deep learning model learn more effectively (a minimal sketch of these steps is shown after this list).
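A minimal sketch of this pipeline is given below. The exact `tf_parse()` used in this work may differ in details; `train_x` and `train_y` stand for the lists of image and mask paths obtained from the DataFrame split, and the batch size is an assumption.

```python
# Read, resize, and normalize images and masks, then build a tf.data pipeline.
import cv2
import numpy as np
import tensorflow as tf

IMG_SIZE = 256

def read_image(path):
    image = cv2.imread(path, cv2.IMREAD_COLOR)        # step 1: read the original frame
    image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))   # step 2: resize to 256x256
    return (image / 255.0).astype(np.float32)         # step 3: normalize to [0, 1]

def read_mask(path):
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.resize(mask, (IMG_SIZE, IMG_SIZE))
    mask = (mask / 255.0).astype(np.float32)
    return np.expand_dims(mask, axis=-1)               # shape (256, 256, 1)

def tf_parse(x, y):
    def _parse(x, y):
        return read_image(x.decode()), read_mask(y.decode())
    image, mask = tf.numpy_function(_parse, [x, y], [tf.float32, tf.float32])
    image.set_shape([IMG_SIZE, IMG_SIZE, 3])
    mask.set_shape([IMG_SIZE, IMG_SIZE, 1])
    return image, mask

# train_x / train_y: lists of image and mask file paths from the DataFrame split (assumed).
train_ds = tf.data.Dataset.from_tensor_slices((train_x, train_y)).map(tf_parse).batch(8)
```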
### Model Training
#### 5.2.1 Loss Function
Binary Focal Loss is used as the loss function for the models. The masks in this problem are binary (polyp and background), so the setup follows the same design as the brain tumor problem.
#### 5.2.2 Model Design Choices
The model design choices for **UNet**, **Res-UNet**, and **Attention Res-UNet** for polyp segmentation are the same as for the models used for brain tumor segmentation. The two problems, although important to the medical community in their own ways, share the same configuration in that they both involve creating binary segmentation masks from RGB images. Hence, the input shape for the images in both problems is (256, 256, 3) while the output shape is (256, 256, 1), and no change in the model architectures is required.
#### 5.2.3 Callbacks
The callbacks used for this problem are Early Stopping, Reduce Learning Rate, and Checkpointer, as described for the previous problem.
#### 5.2.4 Model Compiling and Fitting
Models are compiled with the Adam optimizer with an initial learning rate of 1e-5 and fitted for 100 epochs.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Models & Execution Time & Epoch \\ \hline UNet & 51 min 14 sec & 73 \\ \hline Res-UNet & 45 min 12 sec & 63 \\ \hline Attention Res-UNet & 56 min 45 sec & 62 \\ \hline \end{tabular}
\end{table}
Table 3: Execution time and epochs of trained models for Polyp Segmentation
### Results
Model training times and epochs run are listed in Table 3.
Attention Res-UNet had the longest training duration, taking 56 minutes and 45 seconds, likely due to its more complex architecture. UNet trained for 51 minutes and 14 seconds over 73 epochs, while Res-UNet had the shortest duration at 45 minutes and 12 seconds and converged after 63 epochs; Attention Res-UNet converged after 62 epochs. All three models ended training early due to a lack of improvement in validation loss, highlighting the role of early stopping in preventing overfitting and the need to balance model performance against the computational resources spent on training.
Figure 6 depicts the evolution of the Binary Focal Loss over epochs on the validation data for the three models. These results offer insights into how the models perform during the training process.
#### 5.3.1 UNet Model:
The UNet model initiates training with a high validation loss, approximately 1.4, primarily because its initial weights are far from optimal. However, within the initial ten epochs, it experiences a rapid decrease in validation loss, a common pattern during the early training stages of many neural networks. This reduction reflects the model's improvement in fitting the training data as it adjusts its weights through techniques like backpropagation and stochastic gradient descent (SGD). Following this initial phase, the UNet model maintains a relatively low and stable loss for the remaining epochs, albeit with some minor fluctuations. These fluctuations are likely attributable to inherent data noise and the stochastic nature of the optimization process.
Figure 6: Convergence for trained models on Polyp Segmentation
#### 5.3.2 Res-UNet and Attention Res-UNet Models:
Both Res-UNet and Attention Res-UNet models begin training with a low initial validation loss, roughly 0.2, suggesting that their initialization places them closer to a reasonable starting point. In the initial 15 epochs, both models experience fluctuations in loss, common during early training phases as they adapt to the data and fine-tune their weights, possibly indicating sensitivity to the initial configuration or data noise. As training progresses, both models achieve stable loss values, signifying they have reached a consistent and relatively optimal solution compared to UNet within this timeframe. Eventually, all models reach a minimum validation loss of approximately 0.1, demonstrating similar performance levels in minimizing loss on the validation data, despite differences in convergence speed and early fluctuations.
In summary, these results indicate that UNet initiates training with a higher loss but converges swiftly. In contrast, Res-UNet and Attention Res-UNet begin with lower losses but may show more early training fluctuations. Nevertheless, all models ultimately achieve a similar minimum loss, showcasing their capability to capture crucial data features and make accurate predictions.
Table 4 provides performance metrics for UNet, Res-UNet, and Attention Res-UNet on the test data.
#### 1. Focal Loss:
All the models achieve low Focal Loss values, with Res-UNet achieving the lowest, highlighting its proficiency in addressing class imbalance. This means it performs best at focusing on hard-to-classify pixels, which, in this case, belong to the polyp class.
#### 2. Accuracy:
Res-UNet and Attention Res-UNet exhibit accuracies of approximately 97.1% and 96.9%, respectively, surpassing UNet, which achieves 96.8%. Both variants show a slight edge in pixel-level classification accuracy.
#### 3. Precision and Recall:
Res-UNet demonstrates superior precision, indicating accurate positive pixel classification with minimal false positives. UNet and Attention Res-UNet exhibit slightly lower precision values. Conversely, Attention Res-UNet achieves the highest recall, suggesting its effectiveness in capturing a larger proportion of true positives.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Focal Loss & Accuracy & Precision & Recall & Dice & IoU \\ \hline UNet & 0.0387 & 0.968 & 0.913 & 0.733 & 0.813 & 0.686 \\ Res-UNet & **0.0369** & **0.971** & **0.925** & 0.766 & **0.838** & **0.721** \\ Attention Res-UNet & 0.0394 & 0.969 & 0.881 & **0.788** & 0.832 & 0.712 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance Metrics for UNet, Res-UNet, and Attention Res-UNet on test data
#### 4. Dice Coefficient:
Res-UNet achieves the highest Dice coefficient at approximately 0.838, signifying accurate spatial predictions. UNet and Attention Res-UNet yield slightly lower Dice coefficients but maintain strong performance.
#### 5. Intersection over Union (IoU):
Res-UNet achieves the highest IoU at approximately 0.721, indicating superior spatial overlap. UNet and Attention Res-UNet record slightly lower IoU values, though they continue to deliver commendable results in this aspect.
In summary, Res-UNet and Attention Res-UNet consistently outperform UNet across multiple performance metrics, underscoring their superior performance in image segmentation on the test data. Res-UNet excels in precision, Dice coefficient, and IoU, while Attention Res-UNet achieves the highest recall.
Figure 7 shows four examples with the input image and ground-truth mask, followed by the predictions from the three models.
#### 5.3.3 Discussion
Polyp segmentation presents challenges due to the irregular and random sizing of polyps, which limits generalization and is exacerbated by the limited amount of data. All trained models show above-average segmentation results. UNet, as the base model, trains and converges quickly, especially benefiting from the less imbalanced nature of polyp scans compared to brain MRI masks.
Figure 7: Segmentation results by the three models for four different examples, from left to right are the input images, ground-truth, segmentation results by UNet, Res-UNet and Attention Res-UNet.
However, UNet exhibits lower performance on the target class: it is sensitive to polyp features but struggles with class imbalance. It occasionally misclassifies polyp pixels as background (false negatives) and background pixels as polyps (false positives), affecting both sensitivity and precision. Its comparatively low recall underscores these challenges in accurate polyp detection.
In contrast, Res-UNet and Attention Res-UNet perform consistently, mirroring their performance in brain tumor segmentation. They excel in capturing intricate edge boundaries and maintain accuracy with small ground-truth masks. There are rare instances of slight overestimation of polyp presence, but these misclassifications are minor and have minimal impact. Attention Res-UNet is better at capturing true positives than the other models, as reflected by its high recall score.
## 6 Heart Segmentation
The third task involves the multi-label segmentation of cardiac structures in medical images, specifically targeting the Left Ventricle (LV), Right Ventricle (RV), and Myocardium. Accurate segmentation of the LV is essential for assessing its size and function, while RV segmentation aids in diagnosing cardiac conditions. Furthermore, precise Myocardium segmentation provides insights into its thickness and function, offering indicators of heart health and potential issues.
### Data Pre-processing
The "Automatic Cardiac Diagnosis Challenge" (ACDC)[4] dataset is used for this segmentation task. The dataset encompasses data from 150 CMRI recordings which are stored in a 4D "nifti" format, preserving the original image resolution and primarily containing whole short-axis slices of the heart specifying the diastolic and systolic phases of the cardiac cycle. The MRI images are in grayscale, while the mask images employ a 0 to 3 scale, with 0 representing the background, 1 corresponding to the RV cavity, 2 representing the myocardium, and 3 corresponding to the LV cavity.
The preprocessing steps involved creating a dataframe to record image and mask volumes, reading them using the 'nibabel' library, and iterating through slices in the third dimension of both the image and mask volumes. Each slice
was cropped using a custom 'crop' function; since most volumes have a minimum in-plane dimension below 150, a final size of (128, 128) was chosen to avoid introducing noise or unreliable information.
Mask images, with pixel values ranging from 0 to 3 (representing the labels and background), were converted to **one-hot encoding** by increasing the channel dimensionality to 4, a crucial step for multi-label loss functions and more accurate predictions. For instance, a pixel value of 0 became (1, 0, 0, 0), and 3 became (0, 0, 0, 1).
MRI pixel values, with a maximum of 3049, were **normalized** to a range of 0 to 1, making them compatible with neural networks. These preprocessing steps were essential for preparing the data for model training.
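The sketch below illustrates these steps for a single volume. The file names and the centre-crop implementation are assumptions (the custom `crop` used in this work may differ), it assumes both in-plane dimensions are at least 128, and the constant 3049 is the maximum pixel value reported above.

```python
# Slice-wise cropping, one-hot encoding of masks, and intensity normalization.
import nibabel as nib
import numpy as np

NUM_CLASSES = 4   # background, RV cavity, myocardium, LV cavity
CROP = 128

def center_crop(slice_2d, size=CROP):
    # Assumes both in-plane dimensions are at least `size`.
    h, w = slice_2d.shape
    top, left = (h - size) // 2, (w - size) // 2
    return slice_2d[top:top + size, left:left + size]

image_vol = nib.load("patient_image.nii.gz").get_fdata()   # hypothetical file names
mask_vol = nib.load("patient_mask.nii.gz").get_fdata()

images, masks = [], []
for k in range(image_vol.shape[2]):                         # iterate over short-axis slices
    img = center_crop(image_vol[:, :, k]) / 3049.0          # normalize intensities to [0, 1]
    msk = center_crop(mask_vol[:, :, k]).astype(np.int32)
    msk = np.eye(NUM_CLASSES)[msk]                           # one-hot: 0 -> (1,0,0,0), 3 -> (0,0,0,1)
    images.append(img[..., np.newaxis])                      # (128, 128, 1)
    masks.append(msk)                                        # (128, 128, 4)
```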
### Model Training
#### 6.2.1 Loss Function
Categorical Focal Loss is used as the loss function for the multi-label segmentation task.
**Categorical Focal Cross-Entropy** combines the concepts of categorical cross-entropy and focal loss to create a loss function suitable for multi-class segmentation tasks with class imbalance. It introduces the focal loss component into the standard categorical cross-entropy. This helps the model focus on harder-to-classify pixels while handling imbalanced datasets.
\[CFC(y,p)=-\sum_{i=1}^{N}\alpha_{i}\cdot(1-p_{i})^{\gamma}\cdot y_{i}\cdot\log(p_{i}) \tag{3}\]
In summary, Categorical Focal Cross-Entropy is a loss function that blends the properties of categorical cross-entropy and focal loss to improve the training of models on imbalanced multi-class segmentation tasks. It helps the model pay more attention to minority classes and focus on pixels that are difficult to classify. The CategoricalFocalCrossentropy loss function from the keras.losses library is used.
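For reference, a direct implementation of Eq. (3) is sketched below with a single scalar \(\alpha\); in the experiments the equivalent built-in class from keras.losses is used, and the \(\alpha\) and \(\gamma\) values shown are common defaults rather than necessarily those used here.

```python
# Categorical focal cross-entropy as in Eq. (3), with a scalar alpha weight.
import tensorflow as tf

def categorical_focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    # y_true: one-hot masks (..., num_classes); y_pred: softmax probabilities.
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    cross_entropy = -y_true * tf.math.log(y_pred)       # y_i * log(p_i) term
    weight = alpha * tf.pow(1.0 - y_pred, gamma)         # alpha * (1 - p_i)^gamma term
    return tf.reduce_sum(weight * cross_entropy, axis=-1)
```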
#### 6.2.2 Model Design Choices
**Input and output shapes**: The multi-label segmentation task and the grayscale MRI images require an output mask shape of (128, 128, 4) and an input shape of (128, 128, 1). This in turn reduces the total number of parameters in each model compared to the previous problems.
**Activation Function**: The softmax function is used as the activation in the output layer of all the models, as it is suited to producing per-pixel class probabilities for multi-label prediction.
**UNet** Number of parameters: 31,401,556
**Res-UNet** Number of parameters: 33,157,140
**Attention Res-UNet** Number of parameters: 39,089,304
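To make the input/output configuration concrete, the fragment below shows the segmentation head used for this task; `decoder_features` is only a toy stand-in for the last decoder block of the UNet, Res-UNet, or Attention Res-UNet defined earlier, so the snippet illustrates shapes rather than the full architectures.

```python
# Multi-label head: four-channel softmax output on (128, 128, 1) grayscale inputs.
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

inputs = Input(shape=(128, 128, 1))
decoder_features = Conv2D(64, 3, padding="same", activation="relu")(inputs)  # toy decoder stand-in
outputs = Conv2D(4, kernel_size=1, activation="softmax")(decoder_features)   # (128, 128, 4)
model = Model(inputs, outputs)

# For the binary tasks (tumor, polyp) the head was instead a single-channel sigmoid:
# outputs = Conv2D(1, kernel_size=1, activation="sigmoid")(decoder_features)  # (256, 256, 1)
```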
Figure 8: UNet Architecture
Figure 9: Res-UNet Architecture
#### 6.2.3 Model Compiling and Fitting
All the models were compiled with the Adam optimizer at an initial learning rate of 1e-5 and fitted for up to 100 epochs with the Early Stopping, Reduce Learning Rate, and Checkpointer callbacks.
### Results
Model training times and epochs run are listed in Table 5.
UNet had the shortest training duration, taking 19 minutes, but it required 86 training epochs to reach convergence. In contrast, Res-UNet had a longer training duration, lasting 25 minutes and 20 seconds, and it completed 98 training epochs before converging. Attention Res-UNet, with the longest training duration at 27 minutes and 48 seconds, reached convergence after 83 training epochs.
These results illustrate the trade-offs between training time and the number of epochs required for these models.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Models & Execution Time & Epoch \\ \hline UNet & 19 min & 86 \\ \hline Res-UNet & 25 min 20 sec & 98 \\ \hline Attention Res-UNet & 27 min 48 sec & 83 \\ \hline \end{tabular}
\end{table}
Table 5: Execution time and epochs of trained models for Multi-label Heart Segmentation
Figure 10: Attention Res-UNet Architecture
UNet trained relatively quickly but still needed 86 epochs, Res-UNet took more time and ran for 98 epochs, while Attention Res-UNet had the longest per-run training time but converged after the fewest epochs (83).
Figure 11 shows the change in Categorical Focal Cross-Entropy over epochs on the validation data for the three models. All models converge similarly, starting with a high initial loss that rapidly decreases within the first 10 epochs. Afterward, they exhibit noticeable fluctuations in loss, with Res-UNet showing fewer fluctuations compared to the others. Overall, their convergence patterns are similar.
Table 6 provides precision and recall values for each class predicted by the three models, and Table 7 lists the corresponding Dice and IoU scores.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Precision} & \multicolumn{4}{c|}{Recall} \\ \hline Models & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 \\ \hline UNet & 0.99 & 0.91 & 0.89 & **0.96** & 0.99 & 0.91 & **0.89** & 0.94 \\ \hline Res-UNet & 0.99 & **0.92** & 0.89 & 0.95 & 0.99 & **0.92** & **0.89** & 0.94 \\ \hline Attention Res-UNet & 0.99 & 0.91 & 0.89 & 0.94 & 0.99 & 0.91 & 0.88 & **0.95** \\ \hline \end{tabular}
\end{table}
Table 6: Precision and Recall score for each class by three models
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Dice} & \multicolumn{4}{c|}{IoU} \\ \hline Models & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 \\ \hline UNet & 0.993 & 0.906 & **0.893** & **0.951** & **0.987** & 0.829 & **0.807** & **0.907** \\ \hline Res-UNet & 0.993 & **0.92** & 0.888 & 0.944 & **0.987** & **0.852** & 0.799 & 0.895 \\ \hline Attention Res-UNet & 0.993 & 0.908 & 0.884 & 0.945 & 0.985 & 0.831 & 0.792 & 0.895 \\ \hline \end{tabular}
\end{table}
Table 7: Dice and IoU score for each class by three models
Figure 11: Convergence for trained models on Heart Segmentation
#### 6.3.1 Class-wise performance evaluation
* **Class 0 (Background):** All models achieve high precision and recall (0.99) for the background class, indicating that they minimize false positives and capture nearly all background pixels. The Dice coefficient (0.993) is identical across models, while UNet and Res-UNet share the marginally highest IoU (0.987).
* **Class 1 (RV Cavity):** Res-UNet achieves the highest precision for this class, indicating its accuracy in positive predictions. It also has the highest recall, meaning it captures most of the RV cavity pixels. Res-UNet likewise achieves the highest Dice coefficient and IoU, indicating accurate spatial predictions.
* **Class 2 (Myocardium):** Precision for the myocardium is essentially identical across the three models, while UNet and Res-UNet share the highest recall, capturing the most myocardium pixels. UNet achieves the highest Dice coefficient and IoU for the myocardium.
* **Class 3 (LV Cavity):** UNet achieves the highest precision for the LV cavity, indicating its proficiency in minimizing false positives, while Attention Res-UNet has the highest recall, suggesting it captures most of the LV cavity pixels. UNet also achieves the highest Dice coefficient and IoU for the LV cavity.
#### 6.3.2 Accuracy and Loss
* UNet achieves the highest accuracy of 98.41%, indicating its proficiency in overall pixel-level classification. UNet also has the lowest loss at 1.00%, suggesting that it minimizes the difference between predicted and ground truth masks effectively.
* Res-UNet achieves a similar accuracy of 98.41% but has a slightly higher loss at 1.09%.
* Attention Res-UNet has an accuracy of 98.28% and the highest loss at 1.44%.
In summary, the results show that each model excels in different aspects:
* UNet demonstrates high accuracy, low loss, and strong performance in capturing background, myocardium, and LV cavity classes.
* Res-UNet achieves high precision and recall for the RV cavity and the highest Dice coefficient for class 1 (RV Cavity).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Models & Accuracy & Loss \\ \hline UNet & **98.41\%** & **1.00\%** \\ \hline Res-UNet & **98.41\%** & 1.09\% \\ \hline Attention Res-UNet & 98.28\% & 1.44\% \\ \hline \end{tabular}
\end{table}
Table 8: Accuracy and Loss score by the three models
* Attention Res-UNet achieves the highest recall for the LV cavity (class 3).
#### 6.3.3 Discussion
Figure 12 shows four examples with the input image and ground-truth mask, followed by the predictions from the three models.
The trained models (UNet, Res-UNet, and Attention Res-UNet) exhibit acceptable results in producing masks similar to the ground truth for the multi-class image segmentation task involving the Myocardium (Class 2), LV cavity (Class 3), and RV cavity (Class 1).
All three models generally perform well in producing accurate image segmentation masks, particularly for the Myocardium (Class 2) and LV cavity (Class 3) due to their abundance of training examples and distinctive features. However, they struggle with the underrepresented RV cavity class (Class 1), resulting in frequent misclassifications, likely due to the limited training data for this class.
Despite these challenges, UNet outperforms the other models in capturing Class 1 pixels. This may be attributed to UNet's lower overall focal loss, indicating better handling of class imbalance and an enhanced focus on the RV cavity class.
Figure 12: Segmentation results by the three models for four different examples, from left to right are the input images, ground-truth, segmentation results by UNet, Res-UNet and Attention Res-UNet.
In summary, the models encounter typical challenges associated with imbalanced class distributions in multi-class image segmentation. They excel with well-represented classes but face difficulties with underrepresented ones. UNet shows promise in handling this imbalance, but further improvements, such as addressing class balance and using data augmentation, could enhance performance across all classes.
## 7 Conclusions
We have evaluated the performance of UNet, Res-UNet, and Attention Res-UNet on three problems: brain tumor, polyp, and multi-label heart segmentation. All models achieved acceptable segmentation results when compared to the ground truth provided by the datasets. Differences became visible as the target masks grew more complex in nature. The key findings of the study are summarised as follows:
1. UNet often misclassified target classes as background when the overall target region was relatively small, such as brain tumors or small polyp segments. UNet also struggled with target segmentation when mask edge boundaries were intricate in nature. This points to UNet's limitations with the vanishing gradient problem and its inability to focus on hard-to-classify pixels.
2. Res-UNet and Attention Res-UNet proved to be more suitable for handling complex and irregular structures, as both models were able to capture the complex boundaries in most cases. This is indicative of the residual connections introduced in the two models, which mitigate the vanishing gradient problem.
3. Attention Res-UNet was more effective at tackling class imbalance, as it consistently achieved high recall values across all tasks. The model also predicted more refined masks in most cases compared to Res-UNet. Multi-label heart segmentation reinforced these observations: because the mask images were less imbalanced than in the other tasks, the standard UNet model performed comparatively better, and Res-UNet and Attention Res-UNet performed similarly since major class under-representation was largely absent. Nevertheless, one of the three classes was often misclassified due to its scarcity in most of the images in the dataset. This indicates that datasets need to represent all classes adequately for these robust models to perform at their full potential.
The implications of this work extend beyond the immediate research domain. It sets a modern benchmark for segmentation techniques in the medical field,
offering future researchers valuable insights into the critical factors to consider when applying UNet, its variants, and other deep learning methodologies to medical image analysis. To extend this study, future work could focus on applying the aforementioned models to three-dimensional medical images, as many medical datasets are inherently three-dimensional. Additionally, involving medical specialists to evaluate the segmentation outputs could provide more refined and clinically relevant assessments. The effect of additional loss functions on these models could also be explored, adding to the reliability of the study and of these models. Similar studies of further UNet extensions and their suitability would also help to establish concrete guidelines.
| 医療画像診断は、内部構造や異常の非侵襲的な視覚化を提供することで、現代医療における重要な役割を果たします。早期疾患検出、正確な診断、治療計画の策定など、様々な利点をもたらします。この研究では、特にUNetアーキテクチャとその変種を用いた深層学習モデルの適用性を調査することを目的としています。さまざまな複雑な医療画像分割タスクにおいてこれらのモデルのパフォーマンスを評価し、画像の正規化、サイズ調整、アーキテクチャ選択、損失関数の設計、ハイパーパラメータチューニングなどの課題に対処します。研究の結果は、標準的なUNetを深層ネットワーク層で拡張することで、医療画像分割モデルとしての能力を証明した一方で、Res-UNetとAttention Res-UNetアーキテクチャは、特に細かな画像の処理において、より滑らかな収束と優れた性能を示しています。また、この研究では |
2310.01574 | Potential Ways to Detect Unfairness in HRI and to Re-establish Positive
Group Dynamics | This paper focuses on the identification of different algorithm-based biases
in robotic behaviour and their consequences in human-robot mixed groups. We
propose to develop computational models to detect episodes of microaggression,
discrimination, and social exclusion informed by a) observing human coping
behaviours that are used to regain social inclusion and b) using system
inherent information that reveal unequal treatment of human interactants. Based
on this information we can start to develop regulatory mechanisms to promote
fairness and social inclusion in HRI. | Astrid Rosenthal-von der Pütten, Stefan Schiffer | 2023-09-27T09:42:52 | http://arxiv.org/abs/2310.01574v1 | # Potential Ways to Detect Unfairness in HRI and to Re-establish Positive Group Dynamics
###### Abstract
This paper focuses on the identification of different algorithm-based biases in robotic behaviour and their consequences in human-robot mixed groups. We propose to develop computational models to detect episodes of microaggression, discrimination, and social exclusion informed by a) observing human coping behaviours that are used to regain social inclusion and b) using system-inherent information that reveals unequal treatment of human interactants. Based on this information we can start to develop regulatory mechanisms to promote fairness and social inclusion in HRI.
human-robot interaction, group dynamics, social rejection, bias, inclusion
## I Introduction
Social robots are envisioned to be part of our lives as service providers, team members, and companions. Depending on the robots' tasks and purpose, they will play a more or less active role in our social groups and potentially shape group dynamics, for better and for worse. Previous research demonstrated that robots can positively influence social dynamics in small groups. In free play situations, a robot was able to mitigate conflict between children over toys by providing information on how to compromise [1]. A robotic microphone positively influenced group discussions by encouraging quieter discussion partners to participate more [2]. Similarly, a robot giving self-disclosure statements encouraged stressed students to speak up in a support group session and improved perceptions of trust among the members of the support group [3]. However, robots can also cause feelings of social exclusion by leaving humans out of interactions (e.g., not tossing a ball to the human [4]) or communication (e.g., speaking in a "robotic language" [5], or bluntly rejecting the human [6]), causing negative consequences such as experiencing negative emotions, the feeling of being ignored or being meaningless, and lowered self-esteem. While we assume that developer teams of social robots do not intend to create robots that socially exclude individuals, social exclusion can still arise in interactions in human-robot groups because robots may have software components that are biased against certain groups of humans (e.g., women, PoC) or because the robot is unaware of the social situation it finds itself in and unknowingly behaves in a socially inadequate way.
In this paper we want to briefly revisit i) the role groups play in our (human) lives and how group membership can lead to inter-group bias, ii) the psychological consequences of social rejection (caused by biased behavior), iii) sources of algorithmic bias, and iv) how to use system information to detect bias and start repair mechanisms.
## II Related work on HRI groups and algorithmic bias
### _What groups mean to us_
Groups are highly important to individuals [7]. Since membership in groups is one defining part of an individual's self-concept, and consequently an individual's self-esteem is partly dependent upon group membership, strategies to protect the group and differentiate it from other groups are important for the individual. Positive distinctiveness of the in-group from other groups can be achieved by simply evaluating groups differently in favour of the in-group - also referred to as inter-group bias, which is "the systematic tendency to evaluate one's own membership group (the in-group) or its members more favourably than a non-membership group (the out-group) or its members" [8]. It manifests as favouring the in-group (in-group favouritism) or derogating the out-group (out-group derogation), or both. In-group favouritism entails the extension of trust, positive regard, cooperation, and empathy to in-group members, but not to members of the out-group, and thus is an initial form of discrimination. Inter-group bias extends to robots. For instance, humans show in-group favouritism for an in-group robot in online studies [9, 10] and assigned "painful" noise blasts to out-group humans to spare in-group robots in scenarios where interactants were in different rooms [11]. Since humans show inter-group bias in human-robot mixed groups, negative emotional and social consequences potentially arise for other humans when a robot is favoured instead of them. Moreover, the robots could also be the source of social rejection due to algorithmic biases, which will be discussed further below.
### _What happens when we feel excluded from a group_
Inter-group bias can be perceived as a sign of social exclusion or social rejection. According to the Temporal-Need-Threat-Model by Williams [12], social exclusion causes a
reflexive pain response accompanied by negative affect (e.g., sadness, anger) and triggers threats to four fundamental needs: belonging, self-esteem, control over one's social environment, and meaningful existence. In a reflective stage, individuals' attention is directed to the exclusion episode and they reflect on its meaning and relevance. This may lead to coping responses such as compliance and conformity, attracting attention, provoking, or attempts to control others in order to fortify the threatened needs. Persistent exposure to ostracism over time consumes the resources necessary to motivate the individual to fortify threatened needs. Eventually, this leads to resignation, alienation, helplessness, and depression. Since humans are hypersensitive to ostracism and tend to over-detect it [13], it is extremely likely that humans detect ostracism in interactions with robots as well and experience and engage in the described reflexive and reflective processes. Indeed, recent studies have explored this and found that participants felt excluded when robots talked in a "robot language" [5] or when a robot stated it did not want to interact with the human again [6]. Although the need for a paradigm shift from studying dyadic human-robot interactions in laboratory settings to studying group interactions in complex environments has been identified and advocated for [14], research on human-robot mixed groups is still scarce. Social psychological phenomena such as social exclusion and ostracism, arising as negative consequences of a robot's unequal adaptation to group members through machine learning, represent a new perspective in research on HRI groups that the community has only recently identified as important.
### _Sources of algorithmic bias in HRI_
The general notion of unfair or fair AI has been discussed intensively in recent years. In our modern, digitalized world, we engage more and more in interactions with algorithms and artificially intelligent systems that learn and adapt based on these interactions. Our visits, views, clicks, and buying decisions provide training data for recommender systems on shopping websites (e.g., Amazon) or video streaming applications (e.g., Netflix). Recently, voice agents have entered our homes, providing us with helpful information and services while using these interactions as training data to learn and adapt to us, and generalizing this knowledge to predict the preferences and intentions of groups of users. Especially in the latter area of voice agents, similar biases may emerge when algorithms try to categorize users into groups and provide these groups with personalized interactions. Recent research demonstrated in many application fields (e.g., financial credit, job application management) that algorithms often discriminate against certain groups of people, for instance based on gender or skin tone, and thereby exhibit unintended and unexpected biases usually originating from biased training data. While the lack of diversity in the training data sets used in machine learning originates from different sources, it unequivocally causes a bias towards certain types of users at the cost of others. A recently identified topic [15] is the potential negative consequences arising in HRI from robots that show unintended biases in favour of certain group members and thereby discriminate against others. Under the term Fair AI, researchers call on the computer science community to "identify sources of bias, [to] de-bias training data and [to] develop artificial-intelligence algorithms that are robust to skews in the data" [16]. Since computer vision and machine learning are core technologies for robotic systems, it has been proposed that a similar threat is posed to HRI [17].
Interestingly, concerns about the negative effects of biased robotic systems are often seen from a more global societal perspective. For instance, autonomous cars could put people of colour at greater risk due to biased person recognition, and medical or service robots might reinforce certain discriminatory practices due to biased algorithmic decision-making [15]. However, besides the issues already identified, new forms of biases are likely to emerge when the training data for machine learning consist of interactions with multiple humans over a longer time, as we have discussed in previous work [18]. Robots are expected to learn and adapt to their users, ideally while in operation during run-time. Hence, robots learning from humans means that robots learn from interactions, and the more interactions, the better the learning outcome. But humans might have more or less time, or might be more or less motivated, to provide these interactions that are needed for learning. Thus, training data sets differ in quantity and quality, which has consequences for the learning outcome (e.g., knowing the user's preferences) and the robot's ability to adapt to different users.
Let us consider the following family scenario, in which the user who spends more time at home potentially provides the largest training data base for the robot, is best known to the system, and whose preferences can be easily determined and served. A user who spends less time at home might receive recommendations and interactions matching his/her preferences less often. Or let us consider a working environment, in which the robot's implemented goal is to maximize team performance. The robot will monitor the performance of every single team member and their contribution to team performance. Based on the maximization goal, the robot might decide to distribute more resources to those team members who are high performers in the task, thereby discriminating against low performers. Very likely, low performers will experience negative emotions, feel threatened in their self-esteem and their need to belong to the group, and will try to regain social inclusion. Recent work tapped into this issue of unequal adaptation to users based on performance and algorithm goals in experimental studies. For instance, in a collaborative tower construction task, a robot distributed building blocks unequally between two participants, which led to lower satisfaction of the human team members with the team relationship [19]. In a collaborative Tetris game, fair distribution (in contrast to unfair distribution) of resources led participants to trust the system more and resulted in higher overall team performance [20]. However, emotional responses and consequences for the self-perception and self-esteem of the neglected participant were not assessed.
These first results and the scenarios described above demonstrate that, besides the now commonly known problems of bias in natural language processing or face recognition, interaction-based algorithmic learning can also result in, for instance, perceived (inter-group) bias and social exclusion of individuals, with severe negative outcomes for the emotional state of the individual and the social dynamics of the group.
## III How to overcome biased HRI and reach better inclusion
### _First Step - Recognizing the potential for biases in your own work_
Researchers in the field of HRI have become more aware of the potential that their developments and systems might be affected by biases. Earlier this year a group of HRI scholars discussed "how pursuing a very typical, data-driven approach to the development of a robot listener behavior (production of backchannels, which can serve to indicate attentiveness) resulted in models that acted differently with participants with different gender identities" [21]. In their paper, the authors discuss design guidelines that may be applied to avoid embedding gender biases into robot social behavior, such as carefully examining training data sets before using them for modelling. According to Ntoutsi et al. [22], this recommendation falls under pre-processing methods that mitigate bias by focusing on the data, i.e., by creating so-called balanced data sets. This can be done using different approaches, such as equal sampling from different groups or altering the given data's classification, i.e., adapting training sample weights [23].
### _Second Step - Mitigating bias in machine learning before system deployment_
Besides the pre-processing methods to mitigate bias as mentioned before, Ntoutsi et al. [22] also consider so-called in-processing methods focusing on the ML algorithm, and post-processing methods focusing on the ML model. Both types of approaches concentrate on the machine learning process and/or the inspection and adaptation of the resulting model. For instance, in the latter case Ntoutsi et al. refer to previous work that post-hoc changed the confidence of CPAR classification rules [24] or the probabilities in Naive Bayes models [25].
_Third Step - How to use system information to detect bias during interaction and start repair mechanisms_
All the approaches specified above have in common that developers or researchers are actively involved in curating data or changing the algorithm's specifications, which cannot be done during run-time. Moreover, if the system keeps learning from continuous interactions, a "de-biased" algorithm can become biased again, for instance because human interactants behave in stereotypical ways. We have proposed that i) information on biased components and ii) certain system information produced during ongoing interactions with humans can be used to inform the system about potential emerging biases. For instance, it is commonly known that speech recognition performance is biased in favour of people speaking accent-free standard languages due to better training on that user type. This known pre-existing bias can be taken into account when designing interactions with humans. The system should also be enabled to draw conclusions from internal data to detect bias. For instance, recognition of human faces or behaviours, as well as predictions about human behaviour, is usually hypothesis-based, with specifications of the likelihood and confidence that a hypothesis is true or false. Consistently lower likelihoods connected with one user could be used (together with other information) as an indicator of bias. As described above, most systems are biased in their speech recognition performance in favour of people speaking accent-free High German, due to better training on that user type, in contrast to people speaking local dialects or with foreign accents. A robot could use system information that indicates this bias occurring, for instance a higher number of hypotheses (cf. n-best lists, [26]; [27]) and/or lower confidence (cf. [28]) in speech recognition or computer vision for a specific user (e.g., an interactant with a local dialect or foreign language accent). Based on this, the robot would initiate a regulatory mechanism such as apologizing for misunderstandings and asking the user to speak more slowly.
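The following is a purely illustrative sketch of this idea: per-user recognition statistics (n-best list length, confidence) are tracked at run-time and a repair behaviour is triggered when they indicate consistently poorer recognition for one user. All field names, thresholds, and the repair routine are hypothetical and not part of any existing system.

```python
# Hypothetical run-time bias indicator based on ASR confidence and n-best list size.
from collections import defaultdict

confidence_history = defaultdict(list)

def potential_bias_episode(user_id, n_best_hypotheses, confidence,
                           conf_threshold=0.6, window=20):
    confidence_history[user_id].append(confidence)
    recent = confidence_history[user_id][-window:]
    mean_conf = sum(recent) / len(recent)
    # Consistently low confidence or long n-best lists for one user may indicate that the
    # recognizer is poorly adapted to this user (e.g., dialect or foreign accent).
    return len(recent) == window and (mean_conf < conf_threshold or len(n_best_hypotheses) > 5)

def repair_behaviour(robot):
    # Hypothetical regulatory mechanism: acknowledge the misunderstanding instead of
    # silently favouring better-recognized users.
    robot.say("Sorry, I keep misunderstanding you. Could you speak a bit more slowly?")
```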
_Fourth Step - How to use user behavior to detect bias during interaction and start repair mechanisms_
As explained above, there is empirical evidence that when humans detect signs of social rejection or social exclusion by a robot, they experience negative emotions and their fundamental needs are threatened [29, 4, 5, 6]. This may lead to coping responses such as compliance and conformity, attracting attention, provoking, or attempts to control others in order to fortify the threatened needs. Current studies on social exclusion in HRI scenarios predominantly look into self-reported experiences. Future work should systematically investigate which coping behaviours excluded humans exert in trying to regain social inclusion. One approach is to use behaviour analysis to identify patterns of verbal and nonverbal behaviour, and interactional strategies, that a robot might detect as a sign that social exclusion occurred. Based on this classification, a computational model could be implemented to detect episodes of social exclusion, informed by observing human coping behaviours that are used to regain social inclusion as well as the aforementioned system information.
_Fifth Step - Develop Socially Interactive Agents with Capacity to Re-establish positive Group Dynamics_
The work is not done once we manage to detect biases. We further need a good concept of how to resolve social exclusion episodes for the human and re-establish positive group dynamics. This means that there is a need to i) develop conversational or interactional strategies for maintaining positive social group dynamics that can be triggered when potential bias is detected, and ii) research which conversational and interactional strategies are effective and regarded as socially adequate in different situations and group constellations.
## IV Conclusion
In this paper we outlined why social robots should take into account the social dynamics in a human-robot mixed group as well as the (negative) social consequences of their own behaviour in these groups. We discussed why and in which ways biases can arise in HRI and how we can either de-bias systems or enable the system to automatically detect bias and engage in repair mechanisms. We advocate for considering this perspective throughout the development process of a new system.
| This paper focuses on the identification of algorithm-based biases in robotic behavior and their consequences in human-robot mixed groups. We propose to develop computational models to detect episodes of microaggression, discrimination, and social exclusion, informed by a) observing human coping behaviours that are used to regain social inclusion and b) using system-inherent information that reveal unequal treatment of human interactants. Based on this information, we can start to develop regulatory mechanisms to promote fairness and social inclusion in HRI.
|
2309.15462 | DTC: Deep Tracking Control | Legged locomotion is a complex control problem that requires both accuracy
and robustness to cope with real-world challenges. Legged systems have
traditionally been controlled using trajectory optimization with inverse
dynamics. Such hierarchical model-based methods are appealing due to intuitive
cost function tuning, accurate planning, generalization, and most importantly,
the insightful understanding gained from more than one decade of extensive
research. However, model mismatch and violation of assumptions are common
sources of faulty operation. Simulation-based reinforcement learning, on the
other hand, results in locomotion policies with unprecedented robustness and
recovery skills. Yet, all learning algorithms struggle with sparse rewards
emerging from environments where valid footholds are rare, such as gaps or
stepping stones. In this work, we propose a hybrid control architecture that
combines the advantages of both worlds to simultaneously achieve greater
robustness, foot-placement accuracy, and terrain generalization. Our approach
utilizes a model-based planner to roll out a reference motion during training.
A deep neural network policy is trained in simulation, aiming to track the
optimized footholds. We evaluate the accuracy of our locomotion pipeline on
sparse terrains, where pure data-driven methods are prone to fail. Furthermore,
we demonstrate superior robustness in the presence of slippery or deformable
ground when compared to model-based counterparts. Finally, we show that our
proposed tracking controller generalizes across different trajectory
optimization methods not seen during training. In conclusion, our work unites
the predictive capabilities and optimality guarantees of online planning with
the inherent robustness attributed to offline learning. | Fabian Jenelten, Junzhe He, Farbod Farshidian, Marco Hutter | 2023-09-27T07:57:37 | http://arxiv.org/abs/2309.15462v2 | DTC: Deep Tracking Control - A Unifying Approach to Model-Based Planning and Reinforcement-Learning for Versatile and Robust Locomotion
###### Abstract
**Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled using trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing due to intuitive cost function tuning, accurate planning, and most importantly, the insightful understanding gained from more than one decade of extensive research. However, model mismatch and violation of assumptions are common sources of faulty operation and may hinder successful sim-to-real transfer. Simulation-based reinforcement learning, on the other hand, results in locomotion policies with unprecedented robustness and recovery skills. Yet, all learning algorithms struggle with sparse rewards emerging from environments where valid footholds are rare, such as gaps or stepping stones. In this work, we propose a hybrid control architecture that combines the advantages of both worlds to simultaneously achieve greater robustness, foot-placement accuracy, and terrain generalization. Our approach utilizes a model-based planner to roll out a reference motion during training. A deep neural network policy is trained in simulation, aiming to track the optimized footholds. We evaluate the accuracy of our locomotion pipeline on sparse terrains, where pure data-driven methods are prone to fail. Furthermore, we demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts. Finally, we show that our proposed tracking controller generalizes across different trajectory optimization methods not seen during training. In conclusion, our work unites the predictive capabilities and optimality guarantees of online planning with the inherent robustness attributed to offline learning.**
## Introduction
Trajectory optimization (TO) is a commonly deployed instance of optimal control for designing motions of legged systems and has a long history of successful applications in rough environments since the early 2010s [1, 2]. These methods require a model
## 1 Introduction
Figure 1: **Examples for deployment.** The proposed control pipeline combines robustness properties inherent to learning-based approaches with accurate foothold planning attributed to model-based methods. This marriage allows legged robots to be deployed in environments where steppable contact surfaces are sparse (bottom left) and environmental uncertainties are high (top right).
of the robot's kinematics and dynamics during runtime, along with a parametrization of the terrain. Until recently, most approaches have used simple dynamics models such as single rigid body [3] or inverted pendulum dynamics [4, 5], or have ignored the dynamics altogether [6]. Research has shifted towards more complex formulations, including centroidal [7] or full-body dynamics [8]. The resulting trajectories are tracked by a whole-body control (WBC) module, which operates at the control frequency and utilizes full-body dynamics [9]. Despite the diversity and agility of the resulting motions, there remains a significant gap between simulation and reality due to unrealistic assumptions. Most problematic assumptions include perfect state estimation, occlusion-free vision, known contact states, zero foot-slip, and perfect realization of the planned motions. Sophisticated hand-engineered state machines are required to detect and respond to various special cases not accounted for in the modeling process. Nevertheless, highly dynamic jumping maneuvers performed by Boston Dynamics' bipedal robot Atlas demonstrate the potential power of TO.
Reinforcement learning (RL) has emerged as a powerful tool in recent years for synthesizing robust legged locomotion. Unlike model-based control, RL does not rely on explicit models. Instead, behaviors are learned, most often in simulation, through random interactions of agents with the environment. The result is a closed-loop control policy, typically represented by a deep neural network, that maps raw observations to actions. Handcrafted state-machines become obsolete because all relevant corner cases are eventually visited during training. End-to-end policies, trained from user commands to joint target positions, have been deployed successfully on quadrupedal robots such as ANYmal [10, 11]. More advanced teacher-student structures have significantly improved the robustness, enabling legged robots to overcome obstacles through touch [12] and perception [13]. While locomotion on gaps and stepping stones is theoretically possible, good exploration strategies are required to learn from the emerging sparse reward signals. So far, these terrains could only be handled by specialized policies, which intentionally overfit to one particular scenario [14] or a selection of similar terrain types [15, 16, 17, 18]. Despite promising results, distilling a unifying locomotion policy may be difficult and has only been shown with limited success [19].
Some of the shortcomings that appear in RL can be mitigated using optimization-based methods. While the problem of sparse gradients still exists, two important advantages can be exploited: First, cost-function and constraint gradients can be computed with a small number of samples. Second, poor local optima can be avoided by pre-computing footholds [5, 8], pre-segmenting the terrain into step-pable areas [7, 20], or by smoothing out the entire gradient landscape [21]. Another advantage of TO is the ability to plan actions ahead and predict future interactions with the environment. If model assumptions are generic enough, this allows for great generalization across diverse terrain geometries [7, 21].
The sparse gradient problem has been addressed extensively in the learning community. A notable line of research has focused on learning a specific task while imitating expert behavior. The expert provides a direct demonstration for solving the task [22, 23], or is used to impose a style while discovering the task [24, 25, 26]. These approaches require collecting expert data, commonly done offline, either through re-targeted motion capture data [24, 25, 26] or a TO technique [22, 23]. The reward function can now be formulated to be dense, meaning that agents can collect non-trivial rewards even if they do not initially solve the task. Nonetheless, the goal is not to preserve the expert's accuracy but rather to lower the sample and reward complexity by leveraging existing knowledge.
To further decrease the gap between the expert and the policy performance, we speculate that the latter should have insight into the expert's intentions. This requires online generation of expert data, which can be conveniently achieved using any model-based controller. Unfortunately, rolling out trajectories is often orders of magnitude more expensive than a complete learning iteration. To circumvent this problem, one possible alternative is to approximate the expert with a generative model, e.g., by sampling footholds from a uniform distribution [15, 16], or from a neural network [17, 27, 28]. However, for the former group, it might be challenging to capture the distribution of an actual model-based controller, while the latter group still does not solve the exploration problem itself.
In this work, we propose to guide exploration through the solution of TO. As such data will be available both on- and offline, we refer to it as "reference" and not expert motion. We utilize a hierarchical structure introduced in deep loco [28], where a high-level planner proposes footholds at a lower rate, and a low-level controller follows the footholds at a higher rate. Instead of using a neural network to generate the foothold plan, we leverage TO. Moreover, we do not only use the target footholds as an indicator for a rough high-level direction but as a demonstration of optimal foot placement.
The idea of combining model-based and model-free control is not new in the literature. For instance, supervised [29] and unsupervised [30, 31] learning has been used to warm-start nonlinear solvers. RL has been used to imitate [22, 23] or correct [32] motions obtained by solving TO problems. Conversely, model-based methods have been used to check the feasibility of learned high-level commands [27] or to track learned acceleration profiles [33]. Compared to [32], we do not learn corrective joint torques around an existing WBC, but instead, learn the mapping from reference signals to joint positions in an end-to-end fashion. To the author's best knowledge, our approach constitutes the first proposition for a tracking controller fully learned in simulation.
To generate the reference data, we rely on an efficient TO method called terrain-aware motion generation for legged systems (TAMOLS) [21]. It optimizes over footholds and base pose simultaneously, thereby enabling the robot to operate at its kinematic limits. We let the policy observe only a small subset of the solution, namely planar footholds, desired joint positions, and the contact schedule. We found that these observations are more robust under the common pitfalls of model-based control, while still providing enough information to solve the locomotion task. In addition, we limit computational costs arising from solving the optimization problems by utilizing a variable update rate. During deployment, the optimizer runs at the fastest possible rate to account for model uncertainties and disturbances.
Our approach incorporates elements introduced in [14], such as time-based rewards and position-based goal tracking. However, we reward desired foothold positions at planned touch-down instead of rewarding a desired base pose at an arbitrarily chosen time. Finally, we use an asymmetric actor-critic structure similar to [22], where we provide privileged ground truth information to the value function and noisified measurements to the network policy.
We trained more than \(4000\) robots in parallel for two weeks on challenging ground covering a surface area of more than \(76000\,\mathrm{m}^{2}\). Throughout the entire training process, we generated and learned from about \(23\) years of optimized trajectories. The combination of offline training and online re-planning results in an accurate and agile tracking controller with exceptional robustness properties. As showcased in Fig. 1 and movie 1, with our hybrid control pipeline, ANYmal [34] can skillfully traverse parkours with high precision, and confidently overcome uncertain environments with high robustness. Remarkably, one policy can solve several categories of terrain types, such as gaps, stepping stones, stairs, boxes, and hills. Moreover, without the need for any post-training, the tracking policy can be deployed zero-shot with different TO methods at different update rates. The contributions of our work are therefore twofold: Firstly, we enable the deployment of model-based planners in rough and uncertain real-world environments, while, secondly, creating a single unifying locomotion policy that generalizes beyond the limitations imposed by state-of-the-art RL methods.
## Results
In order to evaluate the effectiveness of our proposed pipeline, hereby referred to as Deep Tracking Control (DTC), we compared it with four different approaches: two model-based controllers, TAMOLS [21] and a nonlinear model predictive control (MPC) presented in [7], and two data-driven methods, as introduced in [13] and [11]. We refer to those as baseline-to-1 (TAMOLS), baseline-to-2 (MPC), baseline-rl-1 (teacher/student policy), and baseline-rl-2 (RL policy), respectively. These baselines mark the state-of-the-art in MPC and RL prior to this work and they have been tested and deployed under various conditions. If not noted differently, all experiments were conducted in the real world.
### Evaluation of Robustness
We conducted three experiments to evaluate the robustness of our hybrid control pipeline. The intent is to demonstrate survival skills on slippery ground, and recovery reflexes when visual data is not consistent with proprioception or is absent altogether. We rebuilt harsh environments that are likely to be encountered on sites of natural disasters, where debris might further break down when stepped on, and on construction sites, where oil patches create slippery surfaces.
In the first experiment, we placed a rectangular cover plate with an area of \(0.78\times 1.19\,\mathrm{m}^{2}\) on top of a box with the same length and width, and height \(0.37\,\mathrm{m}\) (Fig. 2 A). The cover plate was shifted forward by half of the box's length. ANYmal was then steered over the cover plate, which pitched down as soon as the robot's center of mass passed beyond the edge of the box. Because the depth cameras face only forward and backward, the plate's movement was not detected visually and could only be perceived through proprioceptive sensors. Despite the error between map and odometry reaching up to \(0.4\,\mathrm{m}\), the robot managed to successfully balance itself. This experiment was repeated three times with consistent outcomes.
In our second experiment (Fig. 2 B) we created an obstacle parkour with challenging physical properties. A large wooden box with a sloped front face was placed next to a wet and slippery whiteboard. We increased the difficulty by placing a soft foam box in front, and a rolling transport cart on top of the wooden box. The robot was commanded to walk over the objects with random reference velocities for approximately \(45\) seconds, after which the objects were moved back to their original locations to account for any potential displacement. This experiment was repeated five times. Despite not being trained on movable or deforming obstacles, the robot demonstrated its recovery skills in all five trials without any falls.
The tracking policy was trained with perceptive feedback, meaning that the policy and the motion planner had partial or complete insight into the local geometrical landscape. Nevertheless, the locomotion policy was still capable of overcoming many obstacles completely blind. To simulate a scenario with damaged depth sensors, we let ANYmal blindly walk over a stair with two treads, each \(0.18\,\mathrm{m}\) high and \(0.29\,\mathrm{m}\) wide (Fig. 2 C). The experiment was repeated three times up and down, with an increasing heading velocity selected from \(\{\pm 0.5,\pm 0.75,\pm 1.0\}\,\mathrm{m}/\mathrm{s}\). In some cases, a stair tread was higher than the swing motion of a foot. Thanks to a learned swing reflex, the stair set could be successfully cleared in all trials. We note that the same stair set was passed by a blindfolded version of baseline-rl-1 [13], which was trained in a complex teacher/student environment. In contrast, our method relies on an asymmetric actor/critics structure, achieving a similar level of robustness. Accompanying video clips can be found in the supplementary movie S1.
### Evaluation of Accuracy
We demonstrate the precision of foothold tracking by devising a complex motion that required the robot to perform a turn-in-place maneuver on a small surface of \(0.94\times 0.44\,\mathrm{m}^{2}\). The robot was commanded to walk up a slope onto a narrow table, then execute a complete \(360\,\mathrm{deg}\) turn, and finally descend onto a pallet.
Figure 2: **Evaluation of robustness.****(A)** ANYmal walks along a loose cover plate that eventually pitches forward (left to right, top to bottom). The third row shows ANYmal’s perception of the surroundings during the transition and recovery phase. **(B)** The snapshots are taken at critical time instances when walking on slippery ground, just before complete recovery. The transport cart is visible in the second image. **(C)** ANYmal climbs upstairs with disabled perception (top to bottom). The collision of the right-front end-effector with the stair tread triggers a swing reflex, visualized in orange.
Figure 3: **Evaluation of tracking performance.****(A)** ANYmal climbs up a narrow table, turns, and descends back down to a box. The second image in the second row shows the robot’s perception of the environment. **(B)** Euclidean norm of the planar foothold error, averaged over \(20\,\mathrm{s}\) of operation using a constant heading velocity. The solid/dashed curves represent the average/maximum tracking errors. **(C)** Same representation as in (B), but the data was collected with baseline-to-2. **(D)** DTC deployed with baseline-to-2, enabling ANYMal to climb up a box of \(0.48\,\mathrm{m}\).
Some snapshots of the experiment are provided in Fig. 3 A, while the full video is contained in movie S2.
To evaluate the quality of the foothold tracking, we collected data while ANYmal walked on flat ground. Each experiment lasted for approximately \(20\,\mathrm{s}\) and was repeated with eight different heading velocities selected from \(\{\pm 1.0,\pm 0.8,\pm 0.6,\pm 0.4\}\,\mathrm{m/s}\). We measured the tracking error as the smallest horizontal distance between a foot and its associated foothold during a stance phase. As shown in Fig. 3 B, the footholds could be tracked with a very high precision of \(2.3\,\mathrm{cm}\) and a standard deviation of \(0.48\,\mathrm{cm}\) when averaged over the broad spectrum of heading velocity commands.
### Deployment with MPC
The maximum height that DTC in combination with TAMOLS can reliably overcome is about \(0.40\,\mathrm{m}\). The policy might hesitate to climb up taller objects due to the risk of potential knee joint collisions with the environment. This limitation is inherent to the chosen TO method, which only considers simplified kinematic constraints. We, therefore, deployed DTC with the planner of baseline-to-2, a method that takes into account the full kinematics of the system. To allow for zero-shot generalization, we implemented the same trotting gait as experienced during training. With this enhanced setup, ANYmal could climb up a box of height \(0.48\,\mathrm{m}\). This is \(50\,\%\) higher than what baseline-rl-1 can climb up, and \(380\,\%\) more than what was reported for baseline-rl-2. The box climbing experiment was successfully repeated five times. The results are shown in movie S2, and for one selected trial in Fig. 3 D. Furthermore, we measured the tracking error on flat ground. Despite the wider stance configuration of baseline-to-2, the error was found to be only \(0.03\,\mathrm{m}\) on average (Fig. 3 C).
The above two results seem to be surprising at first glance but are easy to explain when considering the observation space and the training environment. While the base-pose trajectory is considerably more detailed for baseline-to-2 due to frequency-loop shaping and increased system complexity, the foothold patterns are nevertheless quite similar. Thus, good generalization is facilitated by the specific choice of observations, which hides the optimized base pose from the policy. Some terrains (type l as shown in Fig. 8 D) can be seen as a combination of gaps and boxes, where each box is surrounded by a gap. During training, TAMOLS placed the footholds sufficiently far away from the box to avoid stepping into the gap. This allowed the policy to learn climbing maneuvers without knee joint collisions. Baseline-to-2, being aware of the spatial coordinates of the knees, naturally produces a similar foothold pattern, even in the absence of the gap.
Figure 4: **Benchmark against model-based control.****(A)** DTC successfully traverses an obstacle parkour (left to right) in simulation with a heading velocity of \(1\,\mathrm{m/s}\). Prior to our work, this parkour has been crossed by baseline-to-2 with a heading velocity of \(0.8\,\mathrm{m/s}\). **(B)** Baseline-to-1 falls after stepping into a gap hidden from the perception (left to right). **(C)** ANYmal successfully overcomes a trapped floor using our hybrid control architecture (left to right).
With our hybrid control architecture, the robot was able to successfully navigate through the trap (Fig. 4 C). The robustness is rooted in the ability to ignore both perception and reference motion while relying only on proprioception. Such behavior is learned in simulation by experiencing simulated map drift. The experiment was repeated five times with baseline-to-1, five times with baseline-to-2, and five times with our method, consistently leading to similar results. The video clips corresponding to the above experiments can be found in movie S3. The movie is further enriched with a comparison of baseline-to-2 against DTC on soft materials, which impose very similar challenges.
### Benchmark Against RL Control
While RL policies are known for their robustness, they may struggle in environments with limited interaction points. We demonstrate typical failure cases in two experiments utilizing baseline-rl-1. In the first experiment (Fig. 5 A), ANYmal was tasked to cross a small gap of \(0.1\,\mathrm{m}\) with a reference heading velocity of \(0.2\,\mathrm{m}\mathrm{/}\mathrm{s}\). The model-free controller did not avoid the gap, and thus could not reach the other side of the platform. In the second experiment, we connected two elevated boxes with a \(1.0\,\mathrm{m}\)-long beam of height \(0.2\,\mathrm{m}\) (Fig. 5 B). The robot was commanded to walk from the left to the right box but failed to make use of the beam.
In comparison, our hybrid policy achieves a \(100\,\%\) success rate for the same gap size over ten repetitions. To further demonstrate the locomotion skills of DTC, we made the experiments more challenging. We replaced the small gap with four larger gaps, each \(0.6\,\mathrm{m}\) wide and evenly distributed along the path (Fig. 5 C). Similarly, we increased the length of the beam to a total of \(1.8\,\mathrm{m}\) (Fig. 5 D). Despite the increased difficulty, our approach maintained a \(100\,\%\) success rate across four repetitions of each experiment. Video clips of those experiments can be found in movie S4.
Using a specialized policy, ANYmal has already crossed a \(0.6\,\mathrm{m}\) wide gap within a pre-mapped environment [14]. Most notably, our locomotion controller, being neither specialized nor fine-tuned for this terrain type, crossed a sequence of four gaps with the same width while relying on online-generated maps only.
The limitations of baseline-rl-1 were previously demonstrated [7] on the obstacle parkour of Fig. 4 A, showing its inability to cross the stepping stones. We showcase the generality of our proposed control framework by conducting three experiments on stepping stones in the real world, each with an increased level of difficulty. The first experiment (Fig. 6 A) required ANYmal to traverse a field of equally sized stepping stones, providing a contact surface of \(0.2\times 0.2\,\mathrm{m}^{2}\) each. The robot passed the \(2.0\,\mathrm{m}\) long field \(10\) times. Despite the varying heading velocity commands, the robot always accurately hit the correct stepping stones, as indicated by the solution of the TO. For the second experiment (Fig. 6 B), we increased the height of two randomly selected stones. The parkour was successfully crossed four out of four times. In the final experiment (Fig. 6 C), we distributed three elevated platforms \(a\), \(b\), and \(c\), connected by loose wooden blocks of sizes \(0.31\times 0.2\times 0.2\,\mathrm{m}^{3}\) and \(0.51\times 0.2\times 0.2\,\mathrm{m}^{3}\). This environment poses significant challenges as the blocks may move and flip over when stepped on. Following the path \(a\to b\to a\to b\to c\to a\), the robot missed only a single stepping stone, which, however, did not lead to failure. Video clips of the stepping stones experiments are provided in movie S5.
### Simulation-Based Ablation Study
During training, we compute a new solution to the TO problem after variable time intervals, but mainly after each foot touch-down. While such a throttled rate greatly reduces computational costs, it also leads to poor reactive behavior in the presence of quickly changing external disturbances, dynamic obstacles, or map occlusion. Moreover, the optimizer was updated using privileged observations, whereas, in reality, the optimizer is subject to elevation map drift, wrongly estimated friction coefficients, and unpredicted external forces.
Figure 5: **Benchmark against RL.****(A)** Baseline-rl-1 attempts to cross a small gap. ANYmal initially manages to recover from miss-stepping with its front legs but subsequently gets stuck as its hind legs fall inside the gap. **(B)** Using baseline-rl-1, the robot stumbles along a narrow beam. **(C)** With DTC, the robot is able to pass four consecutive large gaps (left to right) without getting stuck or falling. **(D)** ANYmal is crossing a long beam using our proposed control framework.
Figure 6: **Evaluation of the locomotion performance on stepping stones.****(A)** ANYmal reliably crosses a field of flat stepping stones (left to right). **(B)** The robot crosses stepping stones of varying heights (left to right). The two tall blocks are highlighted in blue. **(C)** ANYmal navigates through a field of loosely connected stepping stones, following the path \(a\to b\to a\to b\to c\to a\).
Figure 7: **Simulation results and ablation studies**. **(A)** Success and failure rates of DTC, recorded for different update rates of the optimizer. The upper limit of \(50\,\mathrm{Hz}\) is imposed by the policy frequency. **(B)** Comparison against baseline policies. Left: Evaluation on all \(120\) terrains. Right: Evaluation on terrains where valid footholds are dense (white background) and sparse (gray background). **(C)** Impact of elevation map drift on the locomotion performance, quantified by tracking error (left), success rate on rough (middle), and on flat ground (right). **(D)** Average terrain level (left) and average foothold reward (right) scored during training.
To compensate for such modeling errors, we deploy the optimizer in MPC fashion. In the following, we investigate the locomotion performance as a function of the optimizer update rate. Using the experimental setup outlined in supplementary section S4, we collected a total of six days of data in simulation. A robot was deemed "successful" if it could walk from the center to the border of its assigned terrain patch, "failed" if its torso made contact with the environment within its patch, and "stuck" otherwise. We report success and failure rates in Fig. 7 A. Accordingly, when increasing the update rate from \(1\,\mathrm{Hz}\) to \(50\,\mathrm{Hz}\), the failure rate dropped by \(7.11\,\%\) while the success rate increased by \(4.25\,\%\).
In the second set of experiments, we compared our approach to baseline-rl-2 as well as the same policy trained within our training environment. We refer to the emerging policy as baseline-rl-3. More details regarding the experimental setup can be found in supplementary section S5. As depicted in Fig. 7 B (left), our approach exhibits a substantially higher success rate than baseline-rl-2. By learning on the same terrains, baseline-rl-3 can catch up but still does not match our performance. The difference mainly comes from the fact that the retrained baseline still fails to solve sparse-structured terrains. To highlight this observation, we evaluated the performance on four terrain types with sparse stepping locations ("stepping stones", "beams", "gaps", and "pallets"), and on four types with dense stepping locations ("stairs", "pit", "rough slope", and "rings"). On all considered terrain types, our approach clearly outperforms baseline-rl-2 by a huge margin (Fig. 7 B, right), thereby demonstrating that learned locomotion generally does not extrapolate well to unseen scenarios. We perform equally well as baseline-rl-3 on dense terrains, but score significantly higher on sparse-structured terrains. This result suggests that the proposed approach itself is effective and that the favorable locomotion skills are not merely a consequence of the specific training environment.
In an additional experiment, we investigated the impact of erroneous predictions of the high-level planner on locomotion performance. We did so by adding a drift value to the elevation map, sampled uniformly from the interval \((0,0.5)\,\mathrm{m}\). Contrary to training, the motion was optimized over the perturbed height map. Other changes to the experimental setup are described in the supplementary section S6. As visualized in Fig. 7 C, we collected tracking error, success, and failure rates with simulated drift on flat and rough ground. The tracking error grows mostly linearly with the drift value. On flat ground, the slope of the error curve decreases at around \(0.1\,\mathrm{m}\) of drift. On rough terrains, the success rate remains constant for drift values smaller than \(0.1\,\mathrm{m}\), and decreases linearly for larger values. On the other hand, success and failure rates are not impacted by drift on flat ground.
We found that providing joint positions computed for the upcoming touch-down event greatly improves convergence time and foothold tracking performance. This signal encodes the foothold location in joint space, thus providing a useful hint for foothold tracking. It also simplifies the learning process, as the network is no longer required to implicitly learn the inverse kinematics (IK). Evidence for our claims is given in Fig. 7 D, showing two relevant learning curves. Tracking accuracy is represented by the foothold rewards, while technical skills are quantified using the average terrain level [11]. Both scores are substantially higher if the footholds can be observed in both task and joint space.
## Discussion
This work demonstrates the potential of a hybrid locomotion pipeline that combines accurate foot placement and dynamic agility of state-of-the-art TO with the inherent robustness and reflex behaviors of novel RL control strategies. Our approach enables legged robots to overcome complex environments that either method alone would struggle with. As such terrains are commonly found in construction sites, mines, and collapsed buildings, our work could help advance the deployment of autonomous legged machines in the fields of construction, maintenance, and search-and-rescue.
We have rigorously evaluated the performance in extensive real-world experiments over the course of about half a year. We included gaps, stepping stones, narrow beams, and tall boxes in our tests, and demonstrated that our method outperformed the RL baseline controller on every single terrain. Next, we evaluated the robustness on slippery and soft ground, each time outperforming two model-based controllers.
Furthermore, we have shown that the emerging policy can track the motion of two different planners utilizing the same trotting gait. This was possible because the observed footholds seem to be mostly invariant under the choice of the optimizer. However, certain obstacles may encourage the deployed planner to produce footprint patterns that otherwise do not emerge during training. In this case, we would expect a degraded tracking performance.
In addition to our main contribution, we have demonstrated several other notable results. (1) Our policy, which was trained exclusively with visual perception, is still able to generalize to blind locomotion. (2) A simple multilayer perceptron (MLP) trained with an asymmetric actor/critic setup achieves similarly robust behaviors as much more complex teacher/student trainings [12, 13]. (3) Our locomotion policy can handle a lot of noise and drift in the visual data without relying on complicated gated networks, which might be difficult to tune and train [13].
Contrary to our expectations, the proposed training environment was found to be not more sample efficient than similar unifying RL approaches [11, 13]. The large number of epochs required for convergence suggests that foothold accuracy is something intrinsically complicated to learn.
We see several promising avenues for future research. (1) Many successful data-driven controllers have the ability to alter the stride duration of the trotting gait. We expect a further increase in survival rate and technical skills if the network policy could suggest an arbitrary contact schedule to the motion optimizer. Moreover, a truly hybrid method, in which the policy can directly modify the cost function of the planner, may be able to generate more diversified motions. (2) Our results indicate that IK is difficult to learn. To increase the sample efficiency and improve generalization across different platforms, a more sophisticated network structure could exploit prior knowledge of analytical IK. (3) Another potential research direction may focus on leveraging the benefits of sampling trajectories from an offline buffer. This could significantly reduce the training time and allow for the substitution of TAMOLS with a more accurate TO method, or even expert data gathered from real animals.
## Materials and Methods
To motivate the specific architectural design, we first identify the strengths and weaknesses of the two most commonly used control paradigms in legged locomotion.
TO amounts to open-loop control, which produces suboptimal solutions in the presence of stochasticity, modeling errors, and small prediction windows. Unfortunately, these methods introduce many assumptions, mostly to reduce computation time or achieve favorable numerical properties. For instance, the feet are almost always pre-selected interaction points to prevent complex collision constraints, contact and actuator dynamics are usually omitted or smoothed out to circumvent stiff optimization problems, and the contact schedule is often pre-specified to avoid the combinatorial problem imposed by the switched system dynamics. Despite a large set of strong assumptions, real-time capable planners are not always truly real-time. The reference trajectories are updated at around \(5\,\mathrm{Hz}\) [31] to \(100\,\mathrm{Hz}\) [7] and realized at \(400\,\mathrm{Hz}\) to \(1000\,\mathrm{Hz}\). In other words, these methods do not plan fast enough to catch up with the errors they make. While structural [2] or environmental [7, 20] decomposition may further contribute to the overall suboptimality, it was found useful for extracting good local solutions on sparse terrains. Because the concept of planning is not restricted to the tuning domain, model-based approaches tend to generalize well across different terrain geometries [7, 21]. Moreover, since numerical solvers perform very cheap and sparse operations on the elevation map, the map resolution can be arbitrarily small, facilitating accurate foothold planning.
RL leads to policies that represent global closed-loop control strategies. Deep neural networks are large-capacity models, and as such, can represent locomotion policies without introducing any assumption about the terrain or the system. They exhibit good interpolation between visited states but do not extrapolate well to unseen environments. Despite their large size, the inference time is usually relatively small. The integration of an actuator model has been demonstrated to improve sim-to-real transfer [10], while the stochasticity in the system dynamics and training environment can effectively be utilized to synthesize robust behaviors [12, 13]. Contrary to model-based controllers, the elevation map is typically chosen to be small and sparse [11, 13] to avoid immense memory consumption during training.
In summary, TO might be better suited if good generalization and high accuracy are required, whereas RL is the preferred method if robustness is of concern or onboard computational power is limited. As locomotion combines challenges from both of these fields, we formulate the goal of this work as follows: RL shall be used to train a low-level tracking controller that provides significantly more robustness than classical inverse dynamics, while the accuracy and planning capabilities of model-based TO shall be leveraged on a higher level to synthesize a unifying locomotion strategy that supports diverse and generalizing motions.
## Reference Motions
Designing a TO problem for control always involves a compromise that trades off physical accuracy and generalization against good numerical conditioning, low computation time, convexity, smoothness, availability of derivatives, and the necessity of a high-quality initial guess. In our work, we generate the trajectories using TAMOLS [21]. Unlike other similar methods, it requires neither terrain segmentation nor pre-computation of footholds, and its solutions are robust under varying initial guesses. The system dynamics and kinematics are simplified, allowing for fast updates. During deployment, we also compare against baseline-to-2, which builds on more complex kinodynamics. Due to the increased computation time and in particular the computationally demanding map-processing pipeline, this method is not well-suited to be used directly within the learning process.1
Footnote 1: The training time is expected to be about eight times larger.
We added three crucial features to TAMOLS: First, we enabled parallelization on the CPU, which allows multiple optimization problems to be solved simultaneously. Second, we created a Python interface using pybind11 [35], enabling it to run in a Python-based environment. Finally, we assume that the measured contact state always matches the desired contact state. This renders the TO independent of contact estimation, which typically is the most fragile module in a model-based controller.
The optimizer requires a discretized 2.5D representation of its environment, a so-called elevation map, as input. We extract the map directly from the simulator by sampling the height across a fixed grid. For both training and deployment, we use a fixed trotting gait with a stride duration of \(0.93\,\mathrm{s}\) and a swing phase of \(0.465\,\mathrm{s}\), and set the resolution of the grid map to \(0.04\times 0.04\,\mathrm{m}^{2}\).
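To make the map interface concrete, the following Python sketch (our own illustration, not code from this work; the map extent and the synthetic terrain function are assumptions) samples a 2.5D height grid at the stated \(0.04\,\mathrm{m}\) resolution.

```python
import numpy as np

def extract_elevation_map(height_fn, center_xy, size_xy=(2.0, 2.0), res=0.04):
    """Sample a 2.5D elevation map on a fixed grid around `center_xy`;
    `height_fn(x, y)` stands in for the simulator's ground-height query."""
    nx, ny = int(size_xy[0] / res), int(size_xy[1] / res)
    xs = center_xy[0] + (np.arange(nx) - nx / 2) * res
    ys = center_xy[1] + (np.arange(ny) - ny / 2) * res
    grid = np.zeros((nx, ny))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            grid[i, j] = height_fn(x, y)  # terrain height at the cell center
    return grid

# Example with a synthetic, smooth terrain function.
terrain = lambda x, y: 0.1 * np.sin(2.0 * x) * np.cos(2.0 * y)
elevation_map = extract_elevation_map(terrain, center_xy=(0.0, 0.0))
```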
## Overview of the Training Environment
The locomotion policy \(\pi(\mathbf{a}\mid\mathbf{o})\) is a stochastic distribution of actions \(\mathbf{a}\in\mathcal{A}\) that are conditioned on observations \(\mathbf{o}\in\mathcal{O}\), parametrized by an MLP. The action space comprises target joint positions that are tracked using a PD controller, following the approach in [10] and related works [12, 13, 14].
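The following minimal Python sketch illustrates this action interface; the joint count, the PD gains, and the zero-valued placeholders are illustrative assumptions rather than the values used in this work.

```python
import numpy as np

N_JOINTS = 12  # assumed: three actuated joints per leg on a quadruped

def pd_torque(q_target, q, q_dot, kp=80.0, kd=2.0):
    """Joint-level PD law that tracks the target joint positions output by the policy."""
    return kp * (q_target - q) - kd * q_dot

action = np.zeros(N_JOINTS)          # placeholder for the sampled network action
q_measured = np.zeros(N_JOINTS)      # current joint positions
q_dot_measured = np.zeros(N_JOINTS)  # current joint velocities
tau = pd_torque(action, q_measured, q_dot_measured)  # torques sent to the actuators
```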
Given the state \(\mathbf{s}\in\mathcal{S}\), we extract the solution at the next time step \(\mathbf{x}^{\prime}(\mathbf{s})\in\mathcal{X}\subseteq\mathcal{S}\) from the optimizer.
Figure 8: **Method.****(A)** The optimized solution provides footholds \(\mathbf{p}_{i}^{*}\), desired base pose \(\mathbf{b}^{*}\), twist \(\dot{\mathbf{b}}^{*}\), and acceleration \(\ddot{\mathbf{b}}^{*}\) (extracted one policy step \(\Delta t\) ahead), as well as desired joint positions \(\mathbf{q}^{*}\). Additionally, a height scan \(h\) is sampled between the foot position \(\mathbf{p}_{i}\) and the corresponding foothold \(\mathbf{p}_{i}^{*}\). **(B)** Training environment: The optimizer runs in parallel with the simulation. At each leg touch-down, a new solution \(\mathbf{x}^{\prime}\) is generated. The policy \(\pi\) drives the system response \(\mathbf{s}^{\prime}\) toward the optimized solution \(\mathbf{x}^{\prime}(\mathbf{s})\), which is encouraged using the reward function \(r\). Actor observations are perturbed with the noise vector \(\mathbf{n}\), while critics and the TO receive ground truth data. **(C)** Deployment: Given the optimized footholds, the network computes target joint positions that are tracked using a PD control law. The state estimator (state) returns the estimated robot state, which is fed back into the policy and the optimizer. **(D)** The list of terrain types includes a) stairs, b) combinations of slopes and gaps, c) pyramids, d) slopped rough terrain, e) stepping stones, f) objects with randomized poses, g) boxes with tilted surfaces, h) rings, i) pits, j) beams, k) hovering objects with randomized poses, and l) pallets.
The extracted solution includes four footholds \(\mathbf{p}_{i=0,\ldots,3}^{*}\), joint positions \(\mathbf{q}^{*}\) at touch-down time, and the base trajectory evaluated at the next time step. The base trajectory consists of the base pose \(\mathbf{b}^{*}(\Delta t)\), twist \(\dot{\mathbf{b}}^{*}(\Delta t)\), and linear and angular acceleration \(\ddot{\mathbf{b}}^{*}(\Delta t)\). More details can be found in Fig. 8 A. We then sample an action from the policy. It is used to forward simulate the system dynamics, yielding a new state \(\mathbf{s}^{\prime}\in\mathcal{S}\), as illustrated in Fig. 8 B.
To define a scalar reward \(r(\mathbf{s},\mathbf{s}^{\prime},\mathbf{x}^{\prime},\mathbf{a})\), we use a monotonically decreasing function of the error between the optimized and measured states, i.e., \(r\propto\mathbf{x}^{\prime}(\mathbf{s})\ominus\mathbf{x}(\mathbf{s}^{\prime})\). The minus operator \(\ominus\) is defined on the set \(\mathcal{X}\), the vector \(\mathbf{x}^{\prime}(\mathbf{s})\) is the optimized state, and \(\mathbf{x}(\mathbf{s}^{\prime})\) is the simulator state projected onto the corresponding subset. The policy network can also be understood as a learned model-reference adaptive controller with the optimizer being the reference model.
In this work, we use an asymmetrical actor/critic method for training. The value function approximation \(V(\mathbf{o},\tilde{\mathbf{o}})\) uses privileged observations \(\tilde{\mathbf{o}}\in\tilde{\mathcal{O}}\) as well as policy observations \(\mathbf{o}\).
## Observation Space
The value function is trained on policy observations and privileged observations, while the policy network is trained on the former only [22]. All observations are given in the robot-centric base frame. The definition of the observation vector is given below, while noise distributions and dimensionalities of the observation vectors can be found in supplementary sections S2 and S3, respectively.
### Policy Observations
The policy observations comprise proprioceptive measurements such as base twist, gravity vector, joint positions, and joint velocities. The history only includes previous actions [11]. Additional observations are extracted from the model-based planner, including planar coordinates of foothold positions (\(xy\) coordinates), desired joint positions at touch-down time, desired contact state, and time left in the current phase. The latter two are per-leg quantities that fully describe the gait pattern. Footholds only contain planar coordinates since the height can be extracted from the height scan.
The height scan, which is an additional part of the observation space, enables the network to anticipate a collision-free swing leg trajectory. In contrast to similar works, we do not construct a sparse elevation map around the base [11, 27] or the feet [13]. Instead, we sample along a line connecting the current foot position with the desired foothold (Fig. 8 A). This approach has several advantages: (1) The samples can be denser by only scanning terrain patches that are most relevant for the swing leg, (2) it prevents the network from extracting other information from the map, which is typically exposed to most uncertainty (e.g., occlusion, reflection, odometry drift, discretization error, etc.), and (3) it allows us to conveniently model elevation map drift as a per-foot quantity, i.e., each leg can have its own drift value.
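A minimal sketch of this line scan is given below; the number of samples and the elevation-map accessor are our own assumptions.

```python
import numpy as np

def line_height_scan(map_height, foot_xy, foothold_xy, n_samples=10):
    """Sample terrain heights along the segment from the current foot position to
    the desired foothold; `map_height(x, y)` stands in for an elevation-map lookup."""
    ts = np.linspace(0.0, 1.0, n_samples)
    points = (1.0 - ts)[:, None] * np.asarray(foot_xy) + ts[:, None] * np.asarray(foothold_xy)
    return np.array([map_height(x, y) for x, y in points])

# Per-foot map drift can be modeled simply by adding a constant offset to this scan.
scan = line_height_scan(lambda x, y: 0.0, foot_xy=(0.3, 0.2), foothold_xy=(0.55, 0.25))
```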
We use analytical IK to compute the desired joint positions. As the motion optimizer may not provide a swing trajectory, as is the case for TAMOLS, we completely skip the swing phase. This means that the IK is computed with the desired base pose and the measured foot position for a stance leg, and the target foothold for a swing leg.
It is worth noting that we do not provide the base pose reference as observation. As shown in the results chapter, this was found to reduce sensitivity to mapping errors and renders the policy independent of the utilized planner. Finally, to allow the network to infer the desired walking direction, we add the reference twist (before optimization) to the observation space.
### Privileged Observations
The privileged observations contain the optimized base pose, base twist, and base linear and angular acceleration, extracted one time step ahead. In addition, the critics can observe signals confined to the simulator, such as the external base wrench, external foot forces, the measured contact forces, friction coefficients, and elevation map drift.
### Reward Functions
The total reward is computed as a weighted combination of several individual components, which can be categorized as follows: (1) "tracking" of reference motions, (2) encouragement of "consistent" behavior, and (3) other "regularization" terms necessary for successful sim-to-real transfer. The reward functions are explained below, whereas weights and parameters are reported in Table S3.
#### Base Pose Tracking
To achieve tracking of the reference base pose trajectory, we use
\[r_{Bn}=e^{-\sigma_{Bn}\cdot\|\mathbf{b}^{*}(t+\Delta t)^{(n)}\ominus\mathbf{b}(t)^{(n)} \|^{2}}, \tag{1}\]
where \(n=\{0,1,2\}\) is the derivative order, \(\mathbf{b}(t)\) is the measured base pose, \(\mathbf{b}^{*}(t+\Delta t)\) is the desired base pose sampled from the reference trajectory one policy step \(\Delta t\) ahead, and \(\ominus\) denotes the quaternion difference for base orientation, and the vector difference otherwise. We refer to the above reward function as a "soft" tracking task because large values can be scored even if the tracking error does not perfectly vanish.
To further analyze the reward function, we decompose the base trajectory into three segments. The "head" starts at time zero, the "tail" stops at the prediction horizon, and the "middle" connects these two segments with each other. A logarithmic reward function would prioritize the tracking of the trajectory head, while a linear penalty would focus on making progress along the whole trajectory at once. In contrast, the exponential shape of the reward function splits the tracking task into several steps. During the initial epochs, the tracking error of the trajectory middle and tail will likely be relatively large and thus does not contribute significantly to the reward gradient. As a result, the network will minimize the tracking error of the trajectory head. Once its impact on the gradient diminishes, the errors corresponding to the trajectory middle will dominate the gradient landscape. In the final training stages, tracking is mostly improved around the trajectory tail.
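A minimal Python sketch of Eq. (1) for a single derivative order is given below; the \(\sigma_{Bn}\) values are illustrative assumptions, and the quaternion metric used for the base orientation is replaced by a plain vector difference for brevity.

```python
import numpy as np

def base_tracking_reward(b_des, b_meas, sigma):
    """Eq. (1) for one derivative order: exp(-sigma * ||b*(t+dt) - b(t)||^2).
    For the pose itself the orientation error would use a quaternion difference;
    a plain vector difference is shown here for simplicity."""
    err = np.asarray(b_des) - np.asarray(b_meas)
    return np.exp(-sigma * float(err @ err))

# One term per derivative order n = 0, 1, 2 (pose, twist, acceleration); sigmas are assumed.
sigmas = {0: 5.0, 1: 1.0, 2: 0.1}
r_base = sum(base_tracking_reward(np.zeros(6), 0.05 * np.ones(6), s) for s in sigmas.values())
```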
#### Foothold Tracking
We choose a logarithmic function
\[r_{pi}=-\ln(||\mathbf{p}_{i}^{*}-\mathbf{p}_{i}||^{2}+\epsilon), \tag{2}\]
to learn foothold tracking, where \(\mathbf{p}_{i}\) is the current foot position of leg \(i\in\{0,\dots,3\}\), \(\mathbf{p}_{i}^{*}\) is the corresponding desired foothold, and \(0<\epsilon\ll 1\) is a small number ensuring that the function is well defined. The above reward function may be termed a "hard" tracking task, as the maximum value can only be scored if the error reaches zero. As the tracking improves, the gradients will become larger, resulting in even tighter tracking toward the later training stages.
A dense reward structure typically encourages a stance foot to be dragged along the ground to further minimize the tracking error. To prevent such drag motions from emerging, the above reward is given for each foot at most once during one complete gait cycle: more specifically, if and only if the leg is intended to be in contact and the norm of the contact force indicates a contact, i.e., if \(||\mathbf{f}_{i}||>1\), then the reward is given to the agent.
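The sketch below combines Eq. (2) with the gating rule described above; the once-per-gait-cycle bookkeeping flag and the toy numbers are our own illustration.

```python
import numpy as np

def foothold_reward(p_des, p_meas, eps=1e-6):
    """Eq. (2): logarithmic ("hard") foothold tracking reward."""
    err = np.asarray(p_des, dtype=float) - np.asarray(p_meas, dtype=float)
    return -np.log(float(err @ err) + eps)

def gated_foothold_reward(p_des, p_meas, desired_contact, contact_force, already_rewarded):
    """Give the reward at most once per gait cycle, and only when the leg is meant
    to be in contact and the measured contact force indicates an actual contact."""
    if already_rewarded or not desired_contact or np.linalg.norm(contact_force) <= 1.0:
        return 0.0, already_rewarded
    return foothold_reward(p_des, p_meas), True

r, rewarded = gated_foothold_reward([0.40, 0.10], [0.41, 0.12], True, [0.0, 0.0, 120.0], False)
```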
#### Consistency
In RL for legged locomotion, hesitating to move over challenging terrains is a commonly observed phenomenon that prevents informative samples from being gathered and thus impedes the agent's performance. This behavior can be explained by insufficient exploration: The majority of agents fail to solve a task, while a small number of agents achieve higher average rewards by refusing to act. To overcome this local optimum, we propose to encourage consistency by rewarding actions that follow through on what was intended by previous actions.
In our case, we measure consistency as the similarity between two consecutive motion optimizations. If the solutions are similar, the agent is considered to be "consistent". We measure similarity as the Euclidean distance between two adjacent solutions and write
\[r_{c}=\sum_{\delta tj+t_{0}\in(T_{a}\cap T_{b})}-\delta t\,||\mathbf{b}_{a}^{*}(\delta tj+t_{0,a})\ominus\mathbf{b}_{b}^{*}(\delta tj+t_{0,b})||-w_{p}\,||\mathbf{p}_{a}^{*}-\mathbf{p}_{b}^{*}||. \tag{3}\]
Here, \(\mathbf{p}_{t}^{*}\) with \(t\in\{a,b\}\) is a vector of stacked footholds, \(w_{p}>0\) is a relative weight, \(\delta t=0.01\,\mathrm{s}\) is the discretization time of the base trajectory, and \(t_{0}\) is the time elapsed since the optimization was started. The index \(a\) refers to the most recent solution, while \(b\) refers to the previous solution. It is important to note that the two solution vectors \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), from which we extract the base and footholds, are only defined on their respective time intervals given by the optimization horizon \(\tau_{h}\), i.e., \(t_{a}\in T_{a}=[0,\tau_{h,a}]\) and \(t_{b}\in T_{b}=[0,\tau_{h,b}]\).
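The following sketch mirrors our reading of Eq. (3): the base term is accumulated over the overlapping horizon of the two solutions, while the foothold term is applied once; the trajectory representation and the weights are assumptions.

```python
import numpy as np

def consistency_reward(base_a, base_b, feet_a, feet_b, dt=0.01, w_p=1.0):
    """Eq. (3): penalize dissimilarity between two consecutive optimized solutions.
    `base_a`/`base_b` are base-pose samples spaced `dt` apart on their respective
    horizons; `feet_a`/`feet_b` are the stacked footholds of the two solutions."""
    n = min(len(base_a), len(base_b))  # samples falling into the overlap of T_a and T_b
    base_term = -dt * sum(
        np.linalg.norm(np.asarray(base_a[j]) - np.asarray(base_b[j])) for j in range(n))
    foot_term = -w_p * np.linalg.norm(np.asarray(feet_a) - np.asarray(feet_b))
    return base_term + foot_term

r_c = consistency_reward([[0.0, 0.0, 0.5]] * 100, [[0.0, 0.01, 0.5]] * 120,
                         [0.4, 0.1, -0.4, 0.1], [0.41, 0.1, -0.4, 0.12])
```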
#### Regularization
To ensure that the robot walks smoothly, we employ two different penalty terms enforcing complementary constraints. The first term, \(r_{r1}=-\sum_{i}|\mathbf{v}_{i}^{T}\mathbf{f}_{i}|\), discourages foot-scuffing and end-effector collisions by penalizing power measured at the feet. The second term, \(r_{r2}=-\sum_{i}(\dot{\mathbf{q}}_{i}^{T}\mathbf{\tau}_{i})^{2}\), penalizes joint power to prevent arbitrary motions, especially during the swing phase. Other regularization terms are stated in the supplementary section S3.
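Both penalties can be written compactly as below; the toy per-foot and per-joint vectors are only for illustration.

```python
import numpy as np

def regularization_penalties(foot_vel, foot_force, joint_vel, joint_torque):
    """r_r1 = -sum_i |v_i^T f_i| penalizes mechanical power at the feet;
    r_r2 = -sum_i (qdot_i^T tau_i)^2 penalizes joint power during arbitrary motions."""
    r_r1 = -sum(abs(float(np.dot(v, f))) for v, f in zip(foot_vel, foot_force))
    r_r2 = -sum(float(np.dot(qd, tau)) ** 2 for qd, tau in zip(joint_vel, joint_torque))
    return r_r1, r_r2

# Toy call with a single foot and a single leg (three joints).
r1, r2 = regularization_penalties(
    [np.array([0.1, 0.0, -0.2])], [np.array([0.0, 0.0, 150.0])],
    [np.array([1.0, -2.0, 0.5])], [np.array([10.0, 5.0, -3.0])])
```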
### Training Environment
To train the locomotion policy, we employ a custom version of Proximal Policy Optimization (PPO) [36] and a training environment that is mostly identical to the one introduced in [11]. It is explained in more detail in supplementary section S2. Simulation and back-propagation are performed on the GPU, while the optimization problems are solved on the CPU.
#### Termination
We use a simple termination condition where an episode is terminated if the base of the robot makes contact with the terrain.
#### Domain Randomization
We inject noise into all observations except for those designated as privileged. At each policy step, a noise vector \(\mathbf{n}\) is sampled from a uniform distribution and added to the observation vector, the only exceptions being the desired joint positions and the height scan.
For the elevation map, we add noise before extracting the height scan. The noise is sampled from an approximate Laplace distribution, where large values are less common than small ones. We perturb the height scan with a constant offset, which is sampled from another approximate Laplace distribution for each foot separately. Both perturbations discourage the network from relying extensively on perceptive feedback and help it generalize to various perceptive uncertainties caused by odometry drift, occlusion, and soft ground.
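A minimal sketch of these perturbations is given below; the noise scales are assumptions, and an exact Laplace distribution stands in for the "approximate Laplace" distribution mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_observation_noise(obs, scale=0.05):
    """Additive uniform noise applied to a (non-privileged) observation vector."""
    return obs + rng.uniform(-scale, scale, size=obs.shape)

def laplace_noise(size, scale=0.02):
    """Heavy-tailed perturbation in which small values dominate; used here both for
    per-sample height-scan noise and for a constant per-foot offset."""
    return rng.laplace(loc=0.0, scale=scale, size=size)

height_scan = np.zeros(10)
noisy_scan = height_scan + laplace_noise(height_scan.shape) + laplace_noise(1)  # cell noise + per-foot offset
noisy_obs = add_observation_noise(np.zeros(48))
```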
All robots are artificially pushed by adding a twist offset to the measured twist at regular time instances. Friction coefficients are randomized per leg once at initialization time. To render the motion robust against disturbances, we perturb the base with an external wrench and the feet with external forces. The latter slightly stiffens up the swing motion but improves tracking performance in the presence of unmodeled joint frictions and link inertia. The reference twist is resampled at constant time intervals and then held constant.
The solutions for the TO problems are obtained using ground truth data, which include the true friction coefficients, the true external base wrench, and the noise-free height map. In the presence of simulated noise, drift, and external disturbances, the policy network is therefore trained to reconstruct a base trajectory that the optimizer would produce given the ground truth data. However, there is a risk that the network learns to remove the drift from the height scan by analyzing the desired joint positions. During hardware deployment, such a reconstruction will fail because the optimizer is subject to the same height drift. To mitigate this issue, we introduce noise to the desired joint position observations, sampled from a uniform distribution with boundaries proportional to the drift value.
### Terrain Curriculum
We use a terrain curriculum as introduced in [11]. Before the training process, terrain patches of varying types and difficulties are generated. As an agent acquires more skills and can navigate the current terrain, its level is upgraded, i.e., it will be re-spawned on the same terrain type, but with a harder difficulty. We have observed that the variety of terrains encountered during training heavily influences the sim-to-real transfer. We have thus included a total of \(12\) different terrain types with configurable parameters (Fig. 8 D), leading to a total of \(120\) distinguishable terrain patches. The terrain types classify different locomotion behaviors, such as climbing ("stairs", "pits", "boxes", "pyramids"), reflexing ("rough", "rings", "flying objects"), and walking with large steps ("gaps", "pallets", "stepping stones", "beams", "objects with randomized poses"). Our terrain curriculum consists of \(10\) levels, where one of the configurable parameters is modulated to increase or decrease its difficulty. This results in a total of \(1200\) terrain patches, each with a size of \(8\times 8\,\mathrm{m}^{2}\), summing up to a total area of \(76800\,\mathrm{m}^{2}\), which is approximately the size of \(14\) football fields or \(10\) soccer fields.
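The promote/demote logic of such a curriculum can be sketched as follows; the demotion rule and the success criterion are assumptions in the spirit of the cited curriculum [11], not a description of the exact implementation.

```python
N_TERRAIN_TYPES, N_LEVELS = 12, 10

def update_level(level, traversed_patch, fell):
    """An agent that crosses its patch is re-spawned on a harder level of the same
    terrain type; an agent that fails may be moved down (assumed demotion rule)."""
    if traversed_patch:
        return min(level + 1, N_LEVELS - 1)
    if fell:
        return max(level - 1, 0)
    return level

assert update_level(3, traversed_patch=True, fell=False) == 4
```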
### Training
Solving the TO problem at the policy frequency during training was found to provoke poor local optima. In such a case, the optimizer adapts the solution after each policy step. If the agent is not able to follow the reference trajectory, the optimizer will adapt to the new state such that the tracking problem becomes feasible again. This means that the agent can exhibit "lazy" behavior and still collect some rewards. We prevent such a local optimum by updating the optimizer only at a leg touch-down (i.e., after \(0.465\) seconds). This also greatly reduces learning time because computational costs are reduced by a factor of \(23\) compared to recomputing the trajectories at the policy frequency. Whenever a robot fell (on average, once every \(18\) seconds), was pushed (after \(10\) seconds), or its twist commands changed (three times per episode), the optimized trajectories were no longer valid. To guarantee that the locomotion policy generalizes across different update rates, we additionally recompute the solution in all those scenarios.
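The variable update rate can be summarized by a simple trigger condition; the sketch below is our paraphrase of the rule described above.

```python
def needs_replan(touch_down: bool, fell: bool, pushed: bool, command_changed: bool) -> bool:
    """A new TO solution is computed at each leg touch-down, and additionally whenever
    a fall, a push, or a changed twist command invalidates the current reference."""
    return touch_down or fell or pushed or command_changed

# With a swing phase of 0.465 s and a policy step of 0.02 s, the nominal trigger fires
# roughly every 23 policy steps rather than at every step.
assert needs_replan(False, False, False, True)
```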
We trained the policy with a massive parallelization of \(64^{2}=4096\) robots, for a total of \(90000\) epochs. Each epoch consisted of \(45\) learning iterations where each iteration covered a duration of \(0.02\) seconds. Considering the variable update rate explained previously, this resulted in a total of \(8295\) days (or \(23\) years) of optimized trajectories. The policy can be deployed after about one day of training (\(6000\) epochs), reaches \(90\,\%\) of its peak performance after three days (\(20000\) epochs), and is fully converged after two weeks (\(90000\) epochs).
In comparison, the baseline-rl-1 policy was trained for \(4000\) epochs with \(1000\) parallelized robots over \(5\) consecutive days. Each epoch lasted for \(5\) seconds, resulting in a throughput of \(46\) simulated seconds per second. Our policy was trained for \(14\) days, with each epoch lasting for \(0.9\) seconds, leading to a throughput of \(27\) simulated seconds per second. Thus, despite generating \(1.6\) years of desired motions per day, our approach has only a \(1.7\) times lower throughput than the baseline.
### Deployment
We deploy the policy at a frequency of \(50\,\mathrm{Hz}\) zero-shot without any fine-tuning. The motion optimizer runs at the largest possible rate in a separate thread. For TAMOLS with a trotting gait, this is around \(400\,\mathrm{Hz}\) and for baseline-to-2 around \(100\,\mathrm{Hz}\) (both faster than the policy frequency). At each step, the policy queries the most recent solution from the thread pool and extracts it \(\Delta t=0.02\,\mathrm{s}\) ahead of the most recent time index.
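Conceptually, the deployment setup boils down to a thread-safe "latest solution" buffer that the optimizer writes and the 50 Hz policy loop reads; the sketch below is our own minimal illustration of that pattern, not the actual onboard software.

```python
import threading
import time

class LatestSolution:
    """Holds the most recent TO solution; the optimizer thread overwrites it at its
    own rate, while the policy loop only ever reads the newest entry."""
    def __init__(self):
        self._lock = threading.Lock()
        self._solution, self._stamp = None, 0.0

    def write(self, solution):
        with self._lock:
            self._solution, self._stamp = solution, time.monotonic()

    def read(self):
        with self._lock:
            return self._solution, self._stamp

holder = LatestSolution()
holder.write({"footholds": [(0.4, 0.1), (0.4, -0.1)]})
solution, stamp = holder.read()
dt_ahead = 0.02  # the policy extracts the reference 0.02 s ahead of the newest time index
```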
For our experiments, we used three different ANYmal robots [34], two of version C and one of version D, for which we trained different policies. ANYmal C is by default equipped with four Intel RealSense D435 depth cameras, whereas ANYmal D has eight depth cameras of the same type. For the second version C robot, the depth cameras were replaced with two identical Robosense Bpearl dome LiDAR sensors. Motion optimization and the forward propagation of the network policy are done on a single Intel Core i7-8850H machine. Elevation mapping [37] runs on a dedicated onboard Nvidia Jetson.
## Concluding Statement
In this work, we emphasized that TO and RL share complementary properties and that no single best method exists to address the open challenges in legged locomotion. The proposed control architecture leverages this observation by combining the planning capabilities of the former and the robustness properties of the latter. It does by no means constitute a universal recipe for integrating the two approaches in an optimal way for a generic problem. Moreover, one could even extend the discussion with self- and unsupervised learning, indirect optimal control, dynamic programming, and stochastic optimal control. Nevertheless, our results may motivate future research to incorporate the aspect of planning into the concept of RL.
## Supplementary materials
Sections S1 to S6; Tables S1 to S3
| Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled by trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing because of intuitive cost-function tuning, accurate planning, generality, and the deep insights gained from more than a decade of research. However, model mismatch and violated assumptions are common sources of faulty operation. Simulation-based reinforcement learning, on the other hand, produces locomotion policies with unprecedented robustness and recovery skills. Yet, the learning algorithms struggle with the sparse rewards that emerge in environments with scarce footholds, such as gaps and stepping stones. In this work, we propose a hybrid control architecture that combines both worlds to achieve greater
2309.05671 | tSPM+; a high-performance algorithm for mining transitive sequential
patterns from clinical data | The increasing availability of large clinical datasets collected from
patients can enable new avenues for computational characterization of complex
diseases using different analytic algorithms. One of the promising new methods
for extracting knowledge from large clinical datasets involves temporal pattern
mining integrated with machine learning workflows. However, mining these
temporal patterns is a computational intensive task and has memory
repercussions. Current algorithms, such as the temporal sequence pattern mining
(tSPM) algorithm, are already providing promising outcomes, but still leave
room for optimization. In this paper, we present the tSPM+ algorithm, a
high-performance implementation of the tSPM algorithm, which adds a new
dimension by adding the duration to the temporal patterns. We show that the
tSPM+ algorithm provides a speed up to factor 980 and a up to 48 fold
improvement in memory consumption. Moreover, we present a docker container with
an R-package, We also provide vignettes for an easy integration into already
existing machine learning workflows and use the mined temporal sequences to
identify Post COVID-19 patients and their symptoms according to the WHO
definition. | Jonas Hügel, Ulrich Sax, Shawn N. Murphy, Hossein Estiri | 2023-09-08T17:47:31 | http://arxiv.org/abs/2309.05671v1 | tSPM+; a high-performance algorithm for mining transitive sequential patterns from clinical data
## Abstract
The increasing availability of large clinical datasets collected from patients can enable new avenues for computational characterization of complex diseases using different analytic algorithms. One of the promising new methods for extracting knowledge from large clinical datasets involves temporal pattern mining integrated with machine learning workflows. However, mining these temporal patterns is a computationally intensive task and has memory repercussions. Current algorithms, such as the temporal sequence pattern mining (tSPM) algorithm, are already providing promising outcomes, but still leave room for optimization. In this paper, we present the tSPM+ algorithm, a high-performance implementation of the tSPM algorithm, which adds a new dimension by adding the duration to the temporal patterns. We show that the tSPM+ algorithm provides a speedup of up to a factor of 980 and an up to 48-fold improvement in memory consumption. Moreover, we present a Docker container with an R-package. We also provide vignettes for an easy integration into already existing machine learning workflows and use the mined temporal sequences to identify Post COVID-19 patients and their symptoms according to the WHO definition.
## Introduction
While the primary functionality of Electronic health records (EHRs) is to capture patient data for billing and communication purposes, as a research data source, EHRs can provide insights into patient journeys and the understanding of complex diseases [1]. Leveraging this information has become feasible through the rapid growth in the availability of computational power and the development of new analysis methods. This allows for new methods regarding disease prevention, control, population health management [2, 3], diagnosis of (rare) diseases [4, 5, 6], treatment options [7, 8, 9, 10, 11] and drug development [8, 12] by harnessing big data analytics.
There are a few challenges, such as harmonization and interoperability [13], noisiness [14, 15], availability of computational power, models and data [16, 15] and privacy and security [16, 17], that need to be addressed when working with big data in healthcare. Nevertheless, the large amount of healthcare data presents a valuable resource that, once properly utilized, has the potential to transform patient healthcare, research, and population health [18, 19]. While we have not yet fully tapped into the immense potential of big healthcare data, there are already successful approaches in place, such as machine learning, association rule mining and temporal pattern mining, that are making a significant impact.
This paper presents multiple significant contributions. We introduce an optimized and enhanced implementation of the transitive sequential pattern mining (tSPM) algorithm [20, 21], referred to as tSPM+, for mining transitive sequential patterns from time-stamped clinical data. Estiri et al. [20, 21] introduced an innovative approach for mining transitive (temporal) sequence patterns (tSPM) from electronic health records, which proves beneficial for enhancing signal detection in various machine learning models [20, 21, 22]. In the year 2021, the tSPM algorithm was recognized as a significant contribution to the field of clinical informatics [23].
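To make the notion of transitive sequences concrete, the toy Python sketch below enumerates ordered code pairs and their elapsed time from a single synthetic patient timeline; this reflects our reading of the tSPM/tSPM+ construction and is not code from the released package, and the codes and dates are fabricated for illustration only.

```python
from itertools import combinations

# Toy patient timeline: (days since first encounter, diagnosis code).
records = [(0, "U07.1"), (14, "R53.83"), (60, "R06.02"), (120, "F41.9")]

def transitive_sequences(recs):
    """Enumerate ordered code pairs (A -> B) where A is recorded before B, keeping the
    elapsed time between them; non-adjacent pairs are included, which is what makes
    the sequencing 'transitive'."""
    for (t1, a), (t2, b) in combinations(sorted(recs), 2):
        if a != b:
            yield (a, b, t2 - t1)  # the sequence and its duration in days

for seq in transitive_sequences(records):
    print(seq)  # e.g. ('U07.1', 'R53.83', 14), ('U07.1', 'R06.02', 60), ...
```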
This implementation is based on a C++ library wrapped within an R-package, delivering notable improvements in both speed and memory consumption compared to the previous implementation. Specifically, tSPM+ exhibits a speedup of up to a factor of \(\sim\)920 and a \(\sim\)48-fold reduction in memory consumption. Additionally, the R-package provides functionality to split the dbmart into chunks with an adaptive size to fit the available memory limitations.
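The chunking idea can be illustrated with a short sketch (written in Python here); it is not the R package's actual interface, and the per-patient row counts and memory budget are assumptions.

```python
def chunk_patients(rows_per_patient, mem_budget_rows):
    """Greedily group patients into chunks whose approximate row count stays below a
    budget derived from the available memory; a patient is never split across chunks."""
    chunks, current, current_rows = [], [], 0
    for patient, n_rows in rows_per_patient.items():
        if current and current_rows + n_rows > mem_budget_rows:
            chunks.append(current)
            current, current_rows = [], 0
        current.append(patient)
        current_rows += n_rows
    if current:
        chunks.append(current)
    return chunks

print(chunk_patients({"p1": 500, "p2": 800, "p3": 300, "p4": 900}, mem_budget_rows=1500))
# -> [['p1', 'p2'], ['p3', 'p4']]
```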
The substantial acceleration of the algorithm unlocks new potential use cases, particularly in leveraging temporal sequences and their durations to simplify complex tasks such as identifying patients with rare or complex diseases, including conditions like Post COVID-19, commonly known as long COVID [24]. To demonstrate the application of tSPM+ in such scenarios, we provide a detailed vignette illustrating the implementation of one of these tasks. Specifically, we showcase how to identify patients with Post COVID-19 and associated symptoms within a synthetic database.
Furthermore, we highlight the seamless integration of the tSPM+ algorithm into existing machine learning workflows. By outlining the steps required to incorporate tSPM+ effectively, we offer researchers a straightforward approach to harness the algorithm's capabilities within their established frameworks. To facilitate easy access and reproducibility of our work, we provide a Docker container encompassing an RStudio instance pre-installed with tSPM+, synthetic data, and the accompanying vignettes. This container grants researchers and readers an accessible entry point and ensures easy reproducibility of our findings.
## Background
### Association rule mining
The field of data mining has witnessed significant advancements in extracting knowledge and patterns from extensive databases [25, 26, 27, 28, 29, 30]. One specific area within data mining is association rule mining (ARM), which aims to extract rules that capture associations, correlations, or frequent patterns among entries in a given database [29, 31]. Since its introduction by Agrawal et al. [29] in 1993, initially for analyzing market basket data, ARM has evolved into an active and extensive research area in data science, encompassing diverse data sources [25, 26, 27, 28, 30, 31, 32, 33, 34, 35]. Recently, Shahin et al. [25] conducted a systematic review and identified three commonly employed ARM algorithms: Apriori [29], FP-Growth [36] and Eclat [37]. Over the years, these algorithms have undergone numerous enhancements and adaptations [38, 39, 40, 41, 42, 43].
Although general association rule mining typically overlooks temporal relationships among individual data entries [44], EHR data inherently possesses temporal dependencies. Consequently, temporal pattern mining techniques are employed to account for such relationships. Sequential pattern mining (SPM) represents a subtype of temporal pattern mining that incorporates the order of entries in the database, including their temporal aspects, while extracting frequent patterns [45]. Within the healthcare domain, SPM serves as a prevalent technique for decision support systems and as input for machine learning algorithms. Leveraging sequential patterns, instead of considering individual entries, facilitates enhanced signal detection for certain machine learning algorithms, making it a widely adopted approach in healthcare [20, 21, 46, 47, 48, 49, 9]. In some cases, SPM algorithms account for the duration of the sequences. Notably, temporal pattern mining encompasses more than just sequential pattern mining and includes extensive subfields such as time series data analysis [50].
While ARM and SPM algorithms offer distinct perspectives on data analysis, they both suffer from shared drawbacks [51]. Their application to larger databases demands substantial computational resources due to their inherent complexity [51]. Moreover, the reliability and accuracy of their outcomes rely heavily on the quality of the input data, making the presence of noise and incomplete data, which are prevalent in medical datasets, particularly influential. Furthermore, the well-established challenge of safeguarding data privacy in the medical domain must be carefully considered when employing ARM and SPM algorithms for medical data analysis. However, overcoming these obstacles can yield valuable insights and enable the exploration of complex research inquiries, ultimately contributing to the enhancement of patient care and well-being [21, 22, 36, 40, 46, 48, 50, 51].
### Transitive sequential pattern mining (tSPM) algorithm
Implemented in the R programming language, the tSPM algorithm operates on patient data structured as a simple table, encompassing the patient number, date, and clinical representations from the database, each denoting the clinical feature space X, hence
referred to as 'phenX' in abbreviation. This table adheres to the MLHO [52] format and is referred to as a dbmart.
The tSPM algorithm [20, 21] comprises three key steps. First, it extracts all phenX entries for each patient, sorting them based on their dates to establish a temporal order.
Second, tSPM iterates through the sorted phenX entries and generates sequences that start with the current phenX and end with another phenX having a later date. This process mines \(n(n-1)/2\) sequences per patient, where \(n\) represents the number of entries for the patient in the dbmart. Given an average of ~400 entries per patient and a cohort of 5000 patients, the tSPM algorithm generates a staggering 399,000,000 sequences.
Consequently, the inclusion of a third, optional step becomes highly recommended, involving sparsity screening to mitigate the sequence count. Estiri et al. utilize the Minimize Sparsity Maximize Relevance (MSMR) algorithm [20], which applies a straightforward sparsity screening and uses joint mutual information to discard sparse sequences prevalent only in small patient subsets. Fig. 1 shows the pseudocode for the tSPM algorithm.
Subsequently, Estiri et al. employ the extracted sequences as input for various machine learning tasks [20, 21, 22], consistently outperforming alternative approaches. While the combination of tSPM and machine learning tasks yields superior signal detection compared to the conventional approach of using phenX as direct input for machine learning [22], the tSPM algorithm leaves potential for improvement concerning memory consumption and runtime. Furthermore, it is important to note that the tSPM algorithm does not provide information regarding the duration of a sequence, specifically the time difference between the dates of the two phenX entries.
Figure 1: The pseudocode of the basic tSPM algorithm.

In the following sections, we present tSPM+, an optimized implementation of the tSPM algorithm as a C++ library, available as an R package. This yields substantial speed and memory improvements compared to the original version and allows for more complex use cases. These are described in two vignettes, in which we highlight a seamless integration into a machine learning workflow as well as a scenario that leverages the mined sequences for Post COVID-19 detection.
For accessibility and reproducibility, we provide a Docker container with tSPM+, synthetic data and the aforementioned vignettes, ensuring easy access and replication.
## Methods
While the original implementation of the tSPM algorithm achieved good results, we recognized the need for a more performant implementation. Optimizing its performance enables us to sequence more patient data, allowing for more complex analyses and revealing more useful and precise information for downstream phenotype modeling. Additionally, integrating the duration of the sequences adds a new dimension to our analyses and enables even more complex use cases, such as the implementation of the long COVID definition.
### Transitive Sequential Pattern Mining plus (tSPM+) algorithm
The tSPM+ algorithm follows the same fundamental principles as the tSPM algorithm. It constructs sequences by combining each entry of a patient with all subsequent entries, as outlined in the tSPM section. Notably, the algorithm also captures the duration of these sequences, i.e., the time intervals between entry dates, expanding the potential of the generated sequences. Consequently, the data must adhere to the MLHO format to support these functionalities. To optimize memory efficiency, the algorithm either discards the description column in the preprocessing step or requires its removal beforehand.
To facilitate an efficient implementation, we have developed the tSPM+ algorithm as a high-performance C++ library. This implementation can be directly integrated into low-level C++ programs or encapsulated within higher-level languages such as R or Python. The C++ library encompasses not only the tSPM+ algorithm itself but also additional auxiliary functions that have demonstrated utility when working with sequences.
By implementing the tSPM+ algorithm as a C++ library, we capitalize on the advantages of native data formats and of performing faster and more efficient operations compared to higher-level languages. Consequently, we decided to store the data in a numeric representation, albeit with the trade-off of requiring lookup tables for the later translation back to the original forms. During the creation of these lookup tables, we assign a running number, starting from 0, to each unique phenX and patient ID. This number is stored as a 32-bit unsigned integer, enabling us to use the patient ID as an index into arrays. Crucially, this numeric representation allows storing each phenX pair via a simple, easily reversible hash function. To construct a sequence, we pad the end phenX with leading zeros to a 7-digit number and append it to the start phenX, creating a unique numeric sequence ID for each phenX pair. This representation can be effortlessly reverted to its original form and is interpretable by humans, provided the number of digits reserved for the end phenX is known. Furthermore, it allows us to store the sequence as a 64-bit integer (long long). For a more detailed explanation of the sequence creation process, refer to Figure 2.
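To make the encoding concrete, the following minimal C++ sketch (not the library's exact code) shows how a sequence ID can be built and reversed; multiplying the start phenX by 10,000,000 is the arithmetic equivalent of appending the zero-padded, 7-digit end phenX.

```cpp
#include <cstdint>

// Pack a (startPhenX, endPhenX) pair into one reversible 64-bit sequence ID by
// "appending" the end phenX as a zero-padded 7-digit number to the start phenX.
uint64_t encodeSequence(uint32_t startPhenX, uint32_t endPhenX) {
    return static_cast<uint64_t>(startPhenX) * 10000000ULL + endPhenX;
}

// Recover both phenX from the sequence ID: the last seven decimal digits encode the end phenX.
uint32_t startPhenXOf(uint64_t sequenceID) { return static_cast<uint32_t>(sequenceID / 10000000ULL); }
uint32_t endPhenXOf(uint64_t sequenceID)   { return static_cast<uint32_t>(sequenceID % 10000000ULL); }
```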
The duration of a sequence can be stored in multiple ways. We decided to store the duration in days by default, but the unit can be changed via a parameter. Using days allows us to incorporate the duration into the number that represents the sequence. Therefore, we utilize cheap bit-shift operations to shift the duration onto the last bits of the sequence. Nevertheless, we decided to also store the duration in an extra variable to ease the program flow, but we leverage this feature in some helper functions, e.g., when calculating the duration sparsity. Since the duration is stored in days using unsigned 32-bit integers, it further reduces the memory footprint.
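As an illustration of this bit-shift trick, the sketch below packs a duration (in days) into the low bits of the 64-bit sequence value and unpacks it again; the 16-bit width reserved for the duration is an assumption made for this example, not the library's documented layout.

```cpp
#include <cstdint>

constexpr unsigned kDurationBits = 16;                          // assumed bit width for the duration
constexpr uint64_t kDurationMask = (1ULL << kDurationBits) - 1;

// Shift the sequence ID up and place the duration (in days) in the lowest bits.
uint64_t packDuration(uint64_t sequenceID, uint32_t durationDays) {
    return (sequenceID << kDurationBits) | (durationDays & kDurationMask);
}

uint32_t unpackDurationDays(uint64_t packed) { return static_cast<uint32_t>(packed & kDurationMask); }
uint64_t unpackSequenceID(uint64_t packed)   { return packed >> kDurationBits; }
```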
While the numeric representation contributes significantly to the substantial memory reduction, its benefits extend to the use of numerical operators, which allow for fast comparison of individual values. Nevertheless, most of the acceleration arises from parallelization with OpenMP [53]. Parallelizing the tSPM+ algorithm is straightforward: the sequences for multiple patients are created simultaneously in different threads. This requires sorting the dbmart by patient ID first and date second, to ensure that each patient forms one contiguous chunk of entries. For efficient parallel sorting we leverage the in-place super scalar sample sort (ips4o) algorithm from Axtmann et al. [54]. Additionally, the entries in each chunk are chronologically arranged, enabling the creation of all sequences for a phenX by iterating over all subsequent phenX in the same chunk. Consequently, to harness parallelization, we distribute the patient chunks over multiple threads, storing the created sequences in thread-specific vectors. This strategic design mitigates resource-intensive cache invalidations, thus optimizing performance.
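The following sketch illustrates this parallelization scheme with OpenMP; it assumes the dbMart is already sorted by patient ID and date, that `patientStart` holds the start index of each patient chunk plus a final sentinel, and it uses hypothetical type names rather than the library's actual ones.

```cpp
#include <cstdint>
#include <vector>
#include <omp.h>

struct Entry    { uint32_t patientID; uint32_t phenX; uint32_t day; };  // day: days since some epoch
struct Sequence { uint32_t patientID; uint64_t sequenceID; uint32_t durationDays; };

// Mine all transitive sequences; each patient chunk is processed independently in one thread,
// and sequences are collected in thread-local vectors to avoid cache invalidations.
std::vector<Sequence> mineSequences(const std::vector<Entry>& dbMart,
                                    const std::vector<std::size_t>& patientStart) {
    std::vector<std::vector<Sequence>> perThread(omp_get_max_threads());

    #pragma omp parallel for schedule(dynamic)
    for (long p = 0; p < static_cast<long>(patientStart.size()) - 1; ++p) {
        auto& local = perThread[omp_get_thread_num()];
        for (std::size_t i = patientStart[p]; i < patientStart[p + 1]; ++i)
            for (std::size_t j = i + 1; j < patientStart[p + 1]; ++j)   // all later entries of the patient
                local.push_back({dbMart[i].patientID,
                                 static_cast<uint64_t>(dbMart[i].phenX) * 10000000ULL + dbMart[j].phenX,
                                 dbMart[j].day - dbMart[i].day});
    }
    std::vector<Sequence> merged;                                        // merge thread-local buffers
    for (auto& v : perThread) merged.insert(merged.end(), v.begin(), v.end());
    return merged;
}
```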
Merging these vectors results in one huge vector of sequences, many of which are sparse. Sparse sequences occur only for a small number of patients. By removing them and keeping only significant sequences, we preempt overfitting in subsequent machine learning applications. The simplest way to identify sparse sequences is to count the occurrences of each sequence and remove it when the count is less than a threshold. To optimize performance in the parallel processing, we again leverage the ips4o algorithm from Axtmann et al. [54] to sort the sequences by their ID.

Figure 2: The workflow to mine the transitive sequences. At first, the data is extracted from the database and transformed into the MLHO format. After transforming it to numeric, the dbMart is sorted and the sequences are created for each patient. Each phenX for a patient is coded in a different color. We highlight the parts (substrings and duration) of the created sequence in the color of the corresponding phenX to visualize how the phenX can easily be extracted from the sequence.
Afterwards, a more sophisticated approach is applied to methodically mark sparse sequences before removing them. We first determine the start position of each sequence within the vector, allowing us to divide it into equal chunks for concurrent processing on multiple parallel threads. In each thread, we iteratively calculate the count of each sequence by subtracting its start position from that of the next sequence. If this count is less than the sparsity threshold, we label the sequence for removal by assigning the maximal possible value to its patient number. Once all sequences are labeled, we sort them by their patient ID. Subsequently, we determine the first occurrence of the maximal integer value as patient ID and erase all entries from this position onward.
This strategy reduces the number of memory allocations to a minimum of one. Additionally, the sequence chunks are large enough to mitigate cache invalidations when altering patient numbers. We finish by shrinking the vector to its new size, retaining only the non-sparse sequences and effectively refining the result.
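A single-threaded sketch of this sparsity screening is shown below (the chunked, multi-threaded counting is omitted for brevity); it assumes the sequences are already sorted by sequence ID and redeclares an illustrative `Sequence` struct so the example is self-contained.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

struct Sequence { uint32_t patientID; uint64_t sequenceID; uint32_t durationDays; };

// Remove sparse sequences in place: count the occurrences of each sequence ID, mark sparse
// ones by setting their patient ID to the maximal value, sort by patient ID, and shrink once.
void removeSparseSequences(std::vector<Sequence>& seqs, std::size_t minCount) {
    constexpr uint32_t kRemoved = std::numeric_limits<uint32_t>::max();
    std::size_t start = 0;
    while (start < seqs.size()) {
        std::size_t end = start;
        while (end < seqs.size() && seqs[end].sequenceID == seqs[start].sequenceID) ++end;
        if (end - start < minCount)                        // too few occurrences -> mark as sparse
            for (std::size_t i = start; i < end; ++i) seqs[i].patientID = kRemoved;
        start = end;
    }
    std::sort(seqs.begin(), seqs.end(),
              [](const Sequence& a, const Sequence& b) { return a.patientID < b.patientID; });
    auto firstRemoved = std::find_if(seqs.begin(), seqs.end(),
              [](const Sequence& s) { return s.patientID == kRemoved; });
    seqs.erase(firstRemoved, seqs.end());                  // one shrink keeps only non-sparse sequences
}
```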
### R package
In order to enhance accessibility to the underlying low-level C++ library, we developed a user-friendly R package. It encapsulates the performant C++ functions, making them easily available and usable in the R environment. Rcpp [55] and RcppParallel [56] are widely adopted R packages for interfacing with C++ functionality and are often harnessed to speed up and parallelize R packages. Consequently, we chose them to facilitate a seamless integration of the tSPM+ C++ library.
Given that tSPM+ is exclusively applicable to numeric data, the R package incorporates a utility function to convert alphanumeric dbmarts to their numeric counterparts and to create the corresponding look-up tables.
Furthermore, the R package provides a utility function to enable the adaptive partitioning of the dbmart based on the available memory and the number of created sequences. Applying this approach segregates the data into manageable chunks, which can be sequenced separately. Thereby it enables the sequencing of phenotypes on resource-constrained platforms, such as laptops. This functionality is particularly relevant since the maximum number of entries in an R vector is limited to \(2^{31}-1\) [57]. This threshold can be reached quickly when sequencing substantial patient cohorts with tens of thousands of patients.
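The chunking logic can be sketched as follows; the formula and the function name are illustrative assumptions, using the quadratic growth of sequences per patient and R's vector limit of \(2^{31}-1\) elements as the binding constraint.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Estimate how many chunks a dbMart must be split into so that the expected number
// of mined sequences per chunk stays below a given element limit (e.g., R's 2^31 - 1).
std::size_t estimateNumChunks(std::size_t numPatients, double avgEntriesPerPatient,
                              double elementLimit = 2147483647.0) {
    const double seqPerPatient    = std::max(1.0, avgEntriesPerPatient * (avgEntriesPerPatient - 1.0) / 2.0);
    const double patientsPerChunk = std::max(1.0, std::floor(elementLimit / seqPerPatient));
    return static_cast<std::size_t>(std::ceil(numPatients / patientsPerChunk));
}
```

With the cohort sizes used in our benchmarks, this estimate shows why partitioning (or reducing the cohort) becomes necessary once the mined sequence count approaches the R vector limit.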
To enhance usability, the R package is accompanied by two instructive vignettes. Each vignette encompasses illustrated code and comprehensive explanations of significant use cases for transitive temporal sequences. These use cases can be reproduced either with the provided synthetic example data or with data from the dependency packages linked in the vignettes. The first vignette guides the user through integrating the sequencing process into the MLHO machine learning workflow [52, 58]. In contrast, the second vignette showcases the synergistic utilization of sequences and utility functions to address current challenges, for example, implementing complex disease definitions such as the WHO definition of Post COVID-19.
## Benchmarks
We performed multiple benchmarks to measure the performance of tSPM+: one to compare tSPM+ with tSPM and another to analyze the achievable performance. Since not only the characteristics of the data set, such as the maximum or average number of phenX per patient and the number of patients, but also the scheduling of the operating system and background processes might influence the performance of the algorithms, we performed 10 iterations of all benchmarks and report the average as well as the min/max values for memory consumption and speed. All benchmarks were performed on a machine with Ubuntu 22.04.2 LTS, 2 Intel(r) Xeon(r) Gold 5220R CPUs @ 2.2GHz, each with 24 cores and 48 threads, and 256 GB of available memory. We used R 4.1.2 and compiled the source code using gcc version 11.3.0. We used the time [59] program to measure the runtime and maximal memory consumption for each iteration of the tSPM(+) calls.
The benchmark is orchestrated through a bash script, which executes the different R scripts iteratively for a total of 10 cycles. These scripts encompass:
1. tSPM without sparsity screening
2. tSPM with sparsity screening
3. tSPM+ in-memory with sparsity screening
4. tSPM+ file-based with sparsity screening
5. tSPM+ in-memory without sparsity screening
6. tSPM+ file-based without sparsity screening.
Within each R script, the data was loaded and the corresponding algorithm invoked. The measurement protocol included the total runtime and memory consumption as mentioned before and, additionally, the runtime measurements for data loading, sequencing, and sparsity screening, if applicable, within the R scripts.
On the one hand, we include the transformation into a numeric representation in the benchmark, because it is a preprocessing step that distinguishes the tSPM and tSPM+ algorithms. On the other hand, we excluded the transformation into the MLHO format from the measurements because it is required by both algorithms.
The bash and R scripts are embedded in the available docker container, as well as in the corresponding GitHub Repo ([https://github.com/JonashHuegel/tSPMPlus_benchmarks](https://github.com/JonashHuegel/tSPMPlus_benchmarks)).
Furthermore, this repository stores a detailed list of the used R packages and their dependencies, including the corresponding version numbers. Despite the potentially reduced runtime and memory demands of the pure C++ implementation, we benchmark the R version of the tSPM+ algorithm to enhance comparability with the original tSPM implementation.
### Comparison Benchmark
We compare the performance of the original tSPM algorithm with the tSPM+ algorithm on real-world data that was already used with the old tSPM algorithm in an earlier AD study [22], to evaluate the performance in a real-world setting.
We used the patient data from 4985 patients with an average of 471 entries per patient from the Mass General Brigham Biobank. The Mass General Brigham Institutional Review Board (protocol# 2017P000282) allows the use of the biobank data as per the Biobank Consent signed by all participants in the MGB Biobank.
Following the protocol of the previous study [22], we only kept the first occurrence of a phenX per patient, i.e., when a discarded phenX occurs in a later sequence for a patient, we do not store that sequence. We did this to account for the number of created sequences and the required computational resources of the original tSPM algorithm. Deviating from the previous study, we employed only the sparsity screening from the MSMR function [20] with the tSPM algorithm, but excluded the joint mutual information step for selecting the most relevant features. The tSPM+ library provides a native sparsity function, hence we applied it in the benchmark.
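A minimal sketch of this preprocessing step is shown below; the struct and function names are illustrative, and the dbMart is assumed to be sorted by patient and date so that the first entry encountered per (patient, phenX) pair is the earliest one.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Entry { uint32_t patientID; uint32_t phenX; uint32_t day; };

// Keep only the first (earliest) occurrence of each phenX per patient.
std::vector<Entry> keepFirstOccurrences(const std::vector<Entry>& dbMart) {
    std::vector<Entry> filtered;
    std::unordered_set<uint64_t> seen;   // encodes (patientID, phenX) pairs
    for (const auto& e : dbMart) {
        const uint64_t key = (static_cast<uint64_t>(e.patientID) << 32) | e.phenX;
        if (seen.insert(key).second) filtered.push_back(e);
    }
    return filtered;
}
```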
### Performance Benchmark
The second benchmark measures the achievable performance and is performed on the 100k COVID-19 synthetic data set from Synthea(tm) [60, 61]. After extracting data for ~125,000 synthetic patients and reducing it to 35,000 patients with an average of 318 entries, we stored it in the MLHO format. The reduction of the dataset was deemed necessary because the C++ tSPM+ algorithm mined an excessive number of sequences, causing a failure during the transformation into an R dataframe. This arises from R limiting the number of elements per vector to \(2^{31}-1\) [57]. While employing adaptive partitioning is a viable approach, we consciously opted against it: implementing it would introduce extra iterations of the sequencing process without substantial benefits while increasing the runtime linearly.
## Results
### Implementations of tSPM+
#### The C++ library
The C++ library is implemented in C++17 and is published on GitHub ([https://github.com/JonasHuegel/tspm_cop_backend](https://github.com/JonasHuegel/tspm_cop_backend)) under the MIT license. While this implementation is not a directly usable command line tool, it is accompanied by a runnable example file that demonstrates how to include the library in other programs. Moreover, the library encompasses a native function for the sparsity screening and a broad array of additional utility functions allowing fast operations on the sequences. These functions facilitate tasks such as extracting sequences with a given start phenX, end phenX, or a specified minimum duration. Another function combines these and allows extracting all sequences that end with a phenX that is the end phenX of at least one sequence with a given start phenX.
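As an illustration of how such utility functions can operate directly on the numeric encoding, the sketch below filters mined sequences by their start phenX; the names are hypothetical and do not mirror the library's actual API.

```cpp
#include <cstdint>
#include <vector>

struct Sequence { uint32_t patientID; uint64_t sequenceID; uint32_t durationDays; };

// Return all sequences whose start phenX matches, recovered from the numeric sequence ID
// (the last seven decimal digits encode the end phenX, the remaining digits the start phenX).
std::vector<Sequence> sequencesStartingWith(const std::vector<Sequence>& sequences,
                                            uint64_t startPhenX) {
    std::vector<Sequence> result;
    for (const auto& s : sequences)
        if (s.sequenceID / 10000000ULL == startPhenX)
            result.push_back(s);
    return result;
}
```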
The tSPM+ implementation offers two distinct operational modes. The first mode is file based, creating a file storing all generated sequences for each patient. The second mode operates completely in memory, providing the sequences as one comprehensive vector.
### The R package
The R-package is published on GitHub ([https://github.com/JonasHuegel/tSPMPlus_R](https://github.com/JonasHuegel/tSPMPlus_R)) under the MIT license and encompasses the C++ library as a git submodule. The R package is accompanied by two vignettes and a synthetic dbmart providing examples on how to leverage the outstanding opportunities of the tSPM+ algorithm.
#### Integrating tSPM+ in the MLHO Machine Learning workflow
Integrating the mined sequences into existing machine learning workflows is necessary to leverage their full potential. Consequently, the first vignette encompasses instructions for integrating tSPM+ into the MLHO machine learning framework. It builds on the original MLHO vignette [58] and demonstrates how to leverage the sequences for classification tasks instead of raw EHR entries. In the first step, we load the example data from the MLHO package, convert it to numeric, and hand it over to the tSPM+ function call to extract the sequences and perform the sparsity screening. The created, non-sparse sequences are handed over to the MSMR algorithm, which extracts the 200 most significant sequences. Following the original vignette, we use MLHO to train the classifier on the remaining relevant sequences. Finally, the vignette demonstrates how the sequences reported as significant for the classification task can be translated back to their descriptions to become fully human-readable again.
#### Leveraging tSPM+ to identify Post COVID-19 patients
The second vignette encompasses a more complex use case of temporal sequences. In this vignette, we highlight how the transitive sequences and their durations can be leveraged to identify which patient has which Post COVID-19 symptom according to the WHO definition. To be considered a Post COVID-19 symptom, a symptom must occur after a COVID-19 infection, be ongoing for at least two months, and not be explainable by an alternative diagnosis for that patient. Usually the symptoms appear 3 months after the infection or later, but this is not a mandatory criterion for Post COVID-19 [24].
We utilize a modified version of the synthetic Synthea COVID-19 data set [60], which is included in the R package, as example data. At first, we demonstrate how to transform this alphanumeric dbmart to numeric. Afterwards, we leverage a utility function of the tSPM+ library to extract all sequences that end with a phenX that is, for at least one patient, the end phenX of a sequence starting with COVID-19. From this set we exclude all sequences that did not start with COVID-19. Then we exclude candidate sequences on a patient level that either occur only once or whose maximal difference in duration among sequences with the same end phenX is less than 2. All remaining sequences are candidates, which now need to be excluded by other sequences from a patient. Therefore, we mine all sequences that end with a candidate phenX and compute pairwise correlations between each sequence and the (end phenX, duration bucket) tuple. If a patient has a sequence with a high correlation, even if it is not causation, and the corresponding candidate phenX, we remove the candidate phenX for this patient. After we remove each candidate phenX for which the patient has at least one other sequence that ends with this candidate phenX with high correlation and significance, the remaining candidates are Post COVID-19 symptoms for the corresponding patient. Finally, the vignette demonstrates how to convert the numeric sequences back to human-readable descriptions.
## Benchmark results
### Comparison Benchmark
The tSPM+ algorithm massively outperforms the old tSPM implementation in computation time as well as in memory consumption in the comparison benchmark.
The tSPM+ implementation that only utilizes files achieved a speedup by a factor of ~920: from ~12,900 seconds, a little more than three and a half hours, to ~14 seconds, and a memory reduction from ~62.62 GB to ~1.3 GB. The tSPM+ implementation working in memory needs ~60 seconds and 43.34 GB of memory, an improvement by a factor of ~210 in speed and ~1.4 in memory usage, respectively. We note that for the in-memory approach, half of the memory was allocated during the transformation from the C++ data structure into an R data frame and could be avoided when using the library in a C++ program.
The large difference between the file-based and in-memory implementations of tSPM+ is completely equalized when we consider the sparsity screening process. Both implementations require around 25 GB of memory and run in around one minute, ~56 and ~64 seconds respectively. They therefore clearly outperform the old tSPM implementation, which has a runtime of ~19,020 seconds and a memory consumption of ~205 GB, providing a speedup by a factor of ~297 and an eightfold improvement regarding memory consumption.
Table 1: Average, min, and max values for the memory consumption and runtime of all implementations during the comparison benchmark. We provide a more detailed enumeration for each run in the appendix.

| Algorithm | Sparsity screening | Mode | Memory min (GB) | Memory max (GB) | Memory avg (GB) | Runtime min (hh:mm:ss) | Runtime max (hh:mm:ss) | Runtime avg (hh:mm:ss) |
|---|---|---|---|---|---|---|---|---|
| tSPM | without | In-memory | 62.27 | 62.82 | 62.62 | 3:30:08 | 3:37:26 | 3:34:09 |
| tSPM | included | In-memory | 201.09 | 207.60 | 205.23 | 5:10:42 | 5:24:08 | 5:17:27 |
| tSPM+ | without | In-memory | 43.34 | 43.34 | 43.34 | 00:00:58 | 00:01:11 | 00:01:01 |
| tSPM+ | included | In-memory | 25.89 | 25.89 | 25.89 | 00:01:01 | 00:01:07 | 00:01:04 |
| tSPM+ | included | File-based | 22.26 | 28.10 | 24.34 | 00:00:52 | 00:00:59 | 00:00:56 |
| tSPM+ | without | File-based | 1.33 | 1.33 | 1.33 | 00:00:13 | 00:00:14 | 00:00:14 |

### Performance Benchmark

When running the performance benchmark with 100k patients with an average of 318 entries per patient, every run failed with an error at the end, when converting the C++ data structure into an R dataframe. This happens because R limits vectors to \(2^{31}-1\) entries, while we mined 7,195,858,303 (close to \(2^{33}\)) sequences.

Therefore, we reran the benchmark with only 35k patients and report the corresponding runtimes and memory consumption.

As in the comparison benchmark, the file-based tSPM+ algorithm without sparsity screening is the fastest, with an average runtime of ~37 seconds and a memory consumption of ~2 GB, outperforming the in-memory approach, which required 109 GB of memory and had a runtime of ~214 seconds.

Again, this massive lead is lost when the sparsity screening is applied. The file-based tSPM+ algorithm as well as the in-memory version with sparsity screening require an average of ~108 GB of memory. The speed advantage melts down to a difference of ~8 seconds, with a runtime of ~288 seconds for the in-memory approach and ~280 seconds for the file-based approach. Table 2 shows the min, max, and average runtime and memory consumption of the performance benchmark. We report the more detailed runtimes in the appendix.
### Performance on End User devices
Additionally, we ran the tSPM+ algorithms on several end-user devices (laptops or workstations). Even on devices with only 4 to 8 cores and less than 16 GB of memory, we were able to run the tSPM+ algorithm to sequence more than 1000 patients with ~400 entries per patient in less than 5 minutes.
### Reproducibility and availability of the source code and examples
By integrating the source code from the above-mentioned GitHub repositories into a Docker container, we provide low-level access to the tSPM+ algorithms as well as ensure the reproducibility of our benchmarks. The Docker container is based on rocker:rstudio and provides an RStudio instance where tSPM+, tSPM, MLHO, and all dependencies are already pre-installed and ready to use. Furthermore, the Docker container encompasses both vignettes and their required data. Therefore, it provides two examples demonstrating how to use tSPM+ on synthetic data, and additionally a straightforward approach for deploying tSPM+ and MLHO on one's own data. The buildfile and the container are available in the following GitHub repository: [https://github.com/JonashIuegel/tSPMPlusDocker](https://github.com/JonashIuegel/tSPMPlusDocker). Additionally, we froze the versions of the code and the Docker container and provide them online at [62].
Table 2: Average, min, and max values for the memory consumption and runtime of all tSPM+ implementations during the performance benchmark. We provide a more detailed enumeration for each run in the appendix.

| Algorithm | Sparsity screening | Mode | Memory min (GB) | Memory max (GB) | Memory avg (GB) | Runtime min (hh:mm:ss) | Runtime max (hh:mm:ss) | Runtime avg (hh:mm:ss) |
|---|---|---|---|---|---|---|---|---|
| tSPM+ | without | In-memory | 109.63 | 109.63 | 109.63 | 00:03:10 | 00:04:53 | 00:03:34 |
| tSPM+ | included | In-memory | 106.61 | 108.16 | 108.01 | 00:04:07 | 00:05:12 | 00:04:48 |
| tSPM+ | included | File-based | 108.17 | 108.20 | 108.18 | 00:03:56 | 00:04:59 | 00:04:40 |
| tSPM+ | without | File-based | 2.01 | 2.19 | 2.12 | 00:00:31 | 00:31:00 | 00:03:40 |

## Discussion

In summary, the tSPM+ algorithm significantly outperforms the original tSPM. A fraction of the speedup is achieved by replacing slow string operations and comparisons with faster numeric ones. Consequently, we require 128 bits, or 16 bytes, to store a sequence (8 bytes for the sequence and 4 bytes each for the duration and the patient ID). This is significantly smaller than storing all of this information as strings (characters).
To allow efficient parallelization, we added additional sorting steps, which can also be performed efficiently in parallel [54]. After the sorting, we can access and modify the data in a linear fashion, avoiding costly cache invalidations and other (scheduling) operations, e.g., memory allocations and copying. This approach is commonly used in other high-performance implementations [63, 64, 65]. A good example of this procedure is the sparsity screening, where we first sort the mined sequences by their sequence ID and then only need to iterate over the sequences and count for how many patients they occur.
According to the developers of the ips4o algorithm, it is currently not possible to compile their algorithm on Windows [66, 54]. Nevertheless, linking it against the RcppParallel library [56], which encompasses the Intel oneAPI Threading Building Blocks library [67], ensures the compilation.
Nevertheless, the tSPM+ algorithm has some limitations. The largest one is that it currently only works with discrete data. Non-discrete data such as weight can be used if it is discretized by creating a new phenX for different value ranges. Moreover, since the algorithm only works on numeric data, it requires that the original information is stored in look-up tables, which either require memory or have to be written to files. Moreover, tSPM+ requires the transformation to numeric data as a preprocessing step, and the transformation back to human-readable sequences after the sequences have been mined and processed in the use cases. While the integration into R provides several advantages, it adds additional overhead, especially when transforming the data from the C++ data structure into an R dataframe, which limits the maximum number of sequences that can be mined per run to \(2^{31}-1\).
The tSPM+ implementation empowers researchers to perform high-throughput sequencing of phenotypes without requiring large-scale servers. By demonstrating that the tSPM+ algorithm performs well on end-user devices, we enable data scientists and other researchers to develop and test AI/ML pipelines with integrated sequencing on devices with less compute power.
Another advantage of the low resource consumption is that it is possible to sequence large numbers of patients and provide the mined sequences for use in AI models to examine complex diseases.
For example, limited by computational efficiency, Estiri et al. [22] were only able to sequence the first occurrence of each phenotype in their Alzheimer's classification task; tSPM+ would now allow sequencing all occurrences of a phenX instead of only the first. Furthermore, tSPM+ provides the duration of these sequences, adding a new dimension to the analyses.
In their current review, Xie et al. [68] identify the integration of temporal dimension, especially of entries that might occur multiple times per patient, as a current challenge when using EHR data in deep learning. Using sequences mined with tSPM+ might provide an efficient approach to solve this challenge.
Moreover, as we have shown with our Post COVID-19 vignette, we empower researchers to leverage transitive sequences to implement complex disease definitions without writing complex SQL queries to extract this information from the databases. However, simplifying complex database queries by utilizing temporal sequences is not a novel approach; already in 2008, Wang et al. [69] worked with temporal sequences to avoid complex database queries.
Chaichana et al. [70] analyzed how Post COVID-19 was defined in all Post COVID-19 studies published until the beginning of 2023. According to them, there is an urgent need for an easy implementation of a uniform Post COVID-19 definition, since most of the studies used diverging Post COVID-19 definitions. This might be due to the complexity of defining Post COVID-19 by exclusion and the challenge of implementing this definition in algorithms.
We showed in the vignette that there might be a simple way to fulfill this need. This approach still requires clinical validation, which is why we are currently working on a larger multi-site study to evaluate it. This approach might also be applicable to other large COVID-19 data sets, such as the German NAPKON study [71, 72].
McDermott et al. [16] emphasize the need for reproducible models and implementations of machine learning approaches in healthcare. By providing not only example data but also a Docker container and two vignettes, we contribute to this need and make our work easily reproducible for others.
Moreover, McDermott et al. [16] stress the danger of applying AI approaches only to in-house data sets or the "same" public data sets when considering generalization. By providing the vignette on how to integrate tSPM+ with MLHO [52], we enable an easier transfer of the tSPM+ sequencing approach and the MLHO AI models to different data sets. The transfer requires the conversion of the data into the MLHO format. Furthermore, by providing the R package with the synthetic data from Synthea [60, 61], we removed the barrier of a non-shareable data set, allowing others to reproduce most of our results.
## Conclusion
In this work, we presented an efficient, extended, high-throughput implementation of the original tSPM algorithm. We provide an R package and a Docker container for low-level access to this algorithm, as well as a high-performance C++ library that can be included in different languages.
The massive performance boost of tSPM+ allows for new use cases, like the aforementioned implementation of the Post COVID-19 definition. This library enables more researchers to analyze their patient data to solve complex research questions.
By providing two vignettes and a Docker container with relevant use cases and sample data, we reduce the entry barrier for other scientists, especially clinicians, who might not be as proficient in programming as data and computer scientists and simply desire an easy-to-use tool to analyze their EHR data using AI.
Further enhancements of the algorithm, such as the integration of non-discrete data, would enable additional dimensions of information and are worth further investigation.
Additionally, the Post COVID-19 use case requires thorough validation, e.g., by a complete study of its own, which would grant urgently required insights into this complex disease.
Finally, tSPM+ adds a new dimension with the sequence durations and is not limited to using only the first occurrence of a clinical record as a phenX for the sequencing. Therefore, it might be worth repeating previous analyses from older publications, e.g., regarding Alzheimer's Disease, to extract more knowledge and obtain more detailed information about the diseases.
The application of tSPM+ is not limited to Alzheimer's Disease and COVID-19, but is also applicable to data from other disease trajectories with a temporal component, e.g., cancer and cardiovascular diseases.
## Acknowledgment
J.Hugel's work was partially funded by a fellowship within the IFI programme of the German Academic Exchange Service (DAAD) and by the Federal Ministry of Education and Research (BMBF).
This work is partially funded by the National Institute on Aging (RF1AG074372) and National Institute of Allergy and Infectious Diseases (R01AI165535), the VolkswagenStiftung (ZN3424) and the German Research Foundation (426671079).
2309.13007 | ReConcile: Round-Table Conference Improves Reasoning via Consensus among
Diverse LLMs | Large Language Models (LLMs) still struggle with natural language reasoning
tasks. Motivated by the society of minds (Minsky, 1988), we propose ReConcile,
a multi-model multi-agent framework designed as a round table conference among
diverse LLM agents. ReConcile enhances collaborative reasoning between LLM
agents via multiple rounds of discussion, learning to convince other agents to
improve their answers, and employing a confidence-weighted voting mechanism
that leads to a better consensus. In each round, ReConcile initiates discussion
between agents via a 'discussion prompt' that consists of (a) grouped answers
and explanations generated by each agent in the previous round, (b) their
confidence scores, and (c) demonstrations of answer-rectifying human
explanations, used for convincing other agents. Experiments on seven benchmarks
demonstrate that ReConcile significantly improves LLMs' reasoning -- both
individually and as a team -- surpassing prior single-agent and multi-agent
baselines by up to 11.4% and even outperforming GPT-4 on three datasets.
ReConcile also flexibly incorporates different combinations of agents,
including API-based, open-source, and domain-specific models, leading to an 8%
improvement on MATH. Finally, we analyze the individual components of
ReConcile, demonstrating that the diversity originating from different models
is critical to its superior performance. Code:
https://github.com/dinobby/ReConcile | Justin Chih-Yao Chen, Swarnadeep Saha, Mohit Bansal | 2023-09-22T17:12:45 | http://arxiv.org/abs/2309.13007v3 | # ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
###### Abstract
Large Language Models (LLMs) still struggle with complex reasoning tasks. Motivated by the _society of minds_(Minsky, 1988), we propose ReConcile, a multi-model multi-agent framework designed as a round table conference among diverse LLM agents to foster diverse thoughts and discussion for improved consensus. ReConcile enhances the reasoning capabilities of LLMs by holding multiple rounds of discussion, learning to convince other agents to improve their answers, and employing a confidence-weighted voting mechanism. In each round, ReConcile initiates discussion between agents via a 'discussion prompt' that consists of (a) grouped answers and explanations generated by each agent in the previous round, (b) their uncertainties, and (c) demonstrations of answer-rectifying human explanations, used for convincing other agents. This discussion prompt enables each agent to revise their responses in light of insights from other agents. Once a consensus is reached and the discussion ends, ReConcile determines the final answer by leveraging the confidence of each agent in a weighted voting scheme. We implement ReConcile with ChatGPT, Bard, and Claude2 as the three agents. Our experimental results on various benchmarks demonstrate that ReConcile significantly enhances the reasoning performance of the agents (both individually and as a team), surpassing prior single-agent and multi-agent baselines by 7.7% and also outperforming GPT-4 on some of these datasets. We also experiment with GPT-4 itself as one of the agents in ReConcile and demonstrate that its initial performance also improves by absolute 10.0% through discussion and feedback from other agents. Finally, we also analyze the accuracy after every round and observe that ReConcile achieves better and faster consensus between agents, compared to a multi-agent debate baseline.1
Footnote 1: Our code is available at: [https://github.com/dinobby/ReConcile](https://github.com/dinobby/ReConcile)
## 1 Introduction
A large body of work has focused on improving the reasoning capabilities of Large Language Models (LLMs) by imitating various human cognitive processes. These include phenomena like reflecting on and critiquing one's own predictions, being receptive to feedback, and learning from feedback. Of note, self-reflection is an introspective process that allows the model to improve its outputs by generating feedback from the model itself (Madaan et al., 2023; Shinn et al., 2023). However, self-reflection suffers from Degeneration-of-Thought - when the model is overly confident in its answer, it is unable to generate novel thoughts even after multiple rounds of feedback (Liang et al., 2023). Moreover, it is difficult for the model to refine knowledge that it already does not contain.
In order to promote more diverse thoughts, past work has drawn inspiration from the _society of minds_ in multi-agent systems (Minsky, 1988; Zhuge et al., 2023). Communication between multiple agents plays a vital role in complex decision-making. This has prompted recent developments of multi-agent debating frameworks (Liang et al., 2023; Du et al., 2023), in which multiple agents participate in a multi-round debate to arrive at a common final answer. Despite the increased reasoning diversity obtained through the process of a debate, multiple agents have typically been limited
to different instances of the same underlying model, ChatGPT (OpenAI, 2022).2 This results in an inherent model bias, a restricted knowledge scope, and a lack of external feedback from other models due to identical pre-training data and model architectures across all agents. Relatedly, ensemble methods like self-consistency generate the most consistent answer via sampling diverse reasoning paths from the same model (Wang et al., 2023) but do not incorporate any internal or external feedback. In general, when multiple agents propose diverse solutions to a problem, the success of such a multi-agent system is fundamentally reliant on the ability to estimate each agent's confidence and accordingly, convince other agents (with explanations) to reach a better consensus. This puts forward the following question: if multiple diverse LLMs are asked to collaboratively solve a task, are they capable of discussing their solutions with each other such that a better consensus is reached?
Footnote 2: In this work, we refer to multi-agent as multiple instances of the same underlying model (e.g., ChatGPT), whereas multi-model model-agent refers to different models (e.g., ChatGPT, Bard and Claude2) as agents.
We aim to solve complex reasoning problems by learning from diverse insights and external feedback, originating from agents that belong to different model families. Collaborative processes such as brainstorming, group meetings, and discussions play a pivotal role in reaching a consensus and arriving at more refined solutions to complex problems (Li et al., 2022). Effective discussion also entails the selection of stances, voting, convincing, exchange of information, and a diversity of opinions. This leads us to propose ReConcile, a novel method of round-table conference for consensus among diverse LLM agents. ReConcile consists of multiple discussion rounds between diverse LLM agents who try to _convince3_ each other to either rectify their answers or become more _confident_ of their initial correct answers (see Fig. 1 for a broad overview). The central motivation of ReConcile stems from the fact that in a collaborative environment, all participants engaging in a discussion hold their own opinions at the beginning, and a consensus within is achieved through various communicative aspects, including convincing others, voting for a decision, and the adjustment of positions along with their associated confidence levels.
Footnote 3: When we say that an agent tries to convince another agent, we mean that it learns (based on corrective explanations) to defend or argue for its stance while still being receptive to the other agent’s argument such that a better consensus is reached.
Given a reasoning problem, ReConcile begins with each agent first generating an answer, its associated uncertainty, and a corresponding explanation (as a Chain-of-Thought (Wei et al., 2022)) for the answer. Then all agents enter a multi-round discussion phase. Each discussion round comprises of all agents generating a revised explanation and answer based on all other agents' explanations
Figure 1: An illustration of the main difference between ReConcile and existing methods. While most current self-refine and debating techniques rely on multiple instances of a single model (such as ChatGPT), our method incorporates models from different families, including ChatGPT, Bard, and Claude2. Additionally, our approach emphasizes critical elements of effective discussion, including convincing another agent to improve their answers and incorporating the estimated confidence of all agents. For illustrative simplicity, we depict only one agent contemplating how to convince the other two, but in the actual implementation, all participants would engage in this process.
and answers from the previous round. The goal of the revised response is to convince other agents to reach a better consensus. In particular, ReConcille initiates a discussion by designing a _discussion prompt_ for each agent, that lets it condition on (1) grouped answers from all agents, (2) corresponding explanations generated in the previous round, and (3) demonstrations of samples with human explanations (that rectify an agent's initial incorrect answer) that can convince other agents. We leverage them in an in-context learning framework to teach models to generate their own convincing explanations (see Fig. 3). Even in cases where an agent initially offers an incorrect answer and explanation, it can consider another agent's convincing explanation and amend its response accordingly. In each round of the discussion, we estimate an agent's uncertainty via a confidence-estimation prompt (Tian et al., 2023; Xiong et al., 2023). Once all agents converge to the same answer (i.e., a consensus has been reached), we employ these confidences to compute a weighted vote as the final answer.
We primarily develop ReConcille with three state-of-the-art LLMs, Bard (Anil et al., 2023), Claude2 (Anthropic, 2023), and ChatGPT (OpenAI, 2022). We show our method's efficacy on multiple commonsense reasoning (StrategyQA (Geva et al., 2021), ECQA (Aggarwal et al., 2021)) and mathematical reasoning (AQua (Ling et al., 2017) and GSM8K (Cobbe et al., 2021)) benchmarks. Our first result demonstrates that across all four datasets, ReConcille outperforms prior single-agent (e.g., Self-Refine (Madan et al., 2023) and Self-consistency (Wang et al., 2023b)) and multi-agent baselines (Debate (Du et al., 2023) and Judge (Liang et al., 2023)). On the commonsense reasoning benchmarks, ReConcille exhibits especially strong results, outperforming a much stronger model like GPT-4 by up to 3.4%. We find that ReConcille not only improves the overall team performance, but also leads to significant gains for each agent individually. We also conduct detailed analyses of the individual components of ReConcille and demonstrate that leveraging diverse LLM agents and the usage of convincing samples lead to maximum improvements. In particular, convincing samples lead to a 4% improvement as compared to general human explanations without our novel answer-rectifying selection criterion. Convincing samples also generally benefit prior methods like the multi-agent debate (Du et al., 2023). We also analyze the accuracy at the end of each discussion round and show that ReConcille not only improves performance after each round compared to multi-agent debate but also reaches faster consensus (i.e., in a lesser number of rounds), thus pointing to its efficiency.
Finally, as an initial investigation, we implement an alternative version of ReConcille, wherein ChatGPT is replaced with GPT-4 as one of the agents. Note that GPT-4 is a much stronger model in terms of its performance compared to the other agents in consideration here (Zheng et al., 2023; OpenAI, 2023) and is also substantially more expensive. However, our results demonstrate that even when one agent (GPT-4) possesses considerably greater strength than others (Bard and Claude2), collaborative discussions facilitated by our framework individually benefit all agents, even improving GPT-4's initial accuracy by large margins (e.g., an absolute 10.0% on StrategyQA).
In summary, our primary contributions are as follows:
* We propose ReConcille, a novel method for improving reasoning with diverse Large Language Models involved in a Round Table Conference.
* We study the role of confidence estimation and discussion in multi-agent systems and the ability of an agent to convince other agents (by learning from corrective explanations) to reach a better consensus.
* We conduct extensive experiments on multiple reasoning datasets, involving math and commonsense and show that ReConcille improves upon prior single-agent and multi-agent baselines and also outperforms GPT-4 on some benchmarks. We also experiment with a version of ReConcille using GPT-4 as an agent and show that mutual discussion among diverse agents significantly improves GPT-4's initial accuracy.
* We analyze the effect of individual components in ReConcille and observe that the usage of diverse LLM agents and convincing samples leads to significant gains. ReConcille also improves the efficiency of discussion, reaching a faster and better consensus compared to a multi-agent debate baseline.
## 2 Motivation and Problem Setup
When confronted with complex reasoning tasks or open-ended questions, humans often resort to collective brainstorming, discussions, and leveraging the power of group intelligence, also referred to as the _wisdom of the crowd_ or _the society of minds_(Minsky, 1988). Taking inspiration from this, we propose ReConcile that harnesses multiple Large Language Models (LLMs) in a multi-agent discussion procedure with confidence estimation and convincing explanations to improve the overall reasoning capabilities.
We assume that we are given a test problem \(Q\) and there are \(n\) agents \(\mathcal{A}=\{A_{i}\}_{i=1}^{n}\) participating in a round table discussion. Each agent is a distinct LLM, potentially trained with different pre-training data and model architectures. All agents are capable of generating an answer and a corresponding explanation (as a Chain-of-Thought (Wei et al., 2022)) for the test problem. For each agent \(A_{i}\), we utilize a small number of \(k\) demonstrations of convincing samples \(C_{i}=\{c_{j}^{(i)}\}_{j=1}^{k}\). Each convincing sample \(c_{j}^{(i)}=(q_{j}^{(i)},a_{j}^{(i)},e_{j}^{(i)})\) for an agent \(A_{i}\) is an instance of a question \(q_{j}^{(i)}\), gold answer \(a_{j}^{(i)}\), and a human explanation \(e_{j}^{(i)}\) that helps rectify an agent's initial incorrect answer (see more details in Sec 3.2). The objective of ReConcile is to improve the team performance on a given task by holding multiple rounds of discussion between the agents, quantifying the uncertainty associated with each agent, and convincing the other agents to reach a better consensus. Note that convincing samples serve as an additional performance enhancer: even when the dataset lacks human explanations or one opts not to utilize them, our method can still yield performance improvements independently of this technique (more details in Sec. 5.3).
## 3 ReConcile: A Group-Discuss-And-Convince Framework
ReConcile operates in three phases. In phase 1, all agents generate their initial responses. In phase 2, ReConcile initiates a multi-round discussion between the agents. Once the discussion
Figure 2: Overview of ReConcile with ChatGPT, Bard, and Claude2. Our method consists of three phases: (1) Initial Response Generation: Each agent generates an initial answer along with explanations. (2) Multi-Round Discussion: Each model is presented with a discussion prompt (as illustrated on the left) and subsequently generates an updated answer and explanation. The discussion terminates when a predefined stopping criterion is satisfied (e.g., all agents reaching a consensus or reaching a maximum round limit). (3) Final answer generation: The final answer is determined by a weighted vote at the end of each round and upon termination of the whole discussion process. The left part of the figure shows the discussion prompt for an agent, consisting of (a) the responses (grouped answers and explanations) of all agents from the previous round, (b) estimated confidence for the answers, and (c) demonstrations of convincing samples. The detailed prompts are shown in Appendix A.1.
terminates, in phase 3, ReConcile generates the final answer. The overview of our method is demonstrated in Fig. 2, and the workflow is illustrated in Algorithm 1.
```
Require: Test Problem Q, Discussion Rounds R, Agents A = {A_i}_{i=1..n}, Convincing Samples C = {C_i}_{i=1..n}
function ReConcile(Q, R, A, C)
    r ← 0
    while r ≤ R and not Consensus(Q, {a_i^(r-1)}_{i=1..n}) do
        S ← [], P ← []
        for each A_i in A do
            if r = 0 then
                P_t ← (Q, C)                                   ▷ Initial prompt consists of question and convincing samples
                a_i^(0), e_i^(0), p_i^(0) ← A_i(P_t)           ▷ Generate initial answer, explanation, and confidence
            else
                P_D ← (Q, a_i^(r-1), e_i^(r-1), p_i^(r-1), C)  ▷ Discussion prompt
                a_i^(r), e_i^(r), p_i^(r) ← A_i(P_D)
            end if
            S ← S + [a_i^(r)], P ← P + [p_i^(r)]               ▷ Append each agent's answer and confidence
        end for
        â^(r) ← WeightedVote(S, P)                             ▷ Get final answer through a confidence-weighted vote
        r ← r + 1
    end while
    return â^(r)
end function
```
**Algorithm 1** ReConcile: A Group-Discuss-And-Convince Framework
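For concreteness, the loop in Algorithm 1 can be sketched in a few lines of Python. This is an illustrative skeleton only: `call_agent` is a hypothetical stand-in for an LLM API call that returns an (answer, explanation, confidence) triple, and the prompt strings are heavily simplified compared to the actual prompts in Fig. 5.

```python
from collections import Counter
from typing import Callable, List, Tuple

# An agent is any callable mapping a prompt to (answer, explanation, confidence).
AgentFn = Callable[[str], Tuple[str, str, float]]

def reconcile(question: str, agents: List[AgentFn],
              convincing: List[str], max_rounds: int = 3) -> str:
    n = len(agents)
    answers, explanations, confidences = [""] * n, [""] * n, [0.0] * n

    for r in range(max_rounds + 1):
        prev = list(zip(answers, explanations, confidences))  # previous-round snapshot
        for i, call_agent in enumerate(agents):
            if r == 0:
                # Phase 1: initial prompt = question + convincing samples
                prompt = question + "\n\nConvincing samples:\n" + "\n".join(convincing)
            else:
                # Phase 2: discussion prompt = previous-round responses + confidences
                prompt = question + "\n\nPrevious round:\n" + "\n".join(
                    f"[conf={c:.2f}] answer: {a}; explanation: {e}" for a, e, c in prev)
            answers[i], explanations[i], confidences[i] = call_agent(prompt)
        if len(set(answers)) == 1:   # consensus reached: stop the discussion early
            break

    # Phase 3: confidence-weighted vote (Sec. 3.3 additionally rescales the confidences)
    votes = Counter()
    for a, c in zip(answers, confidences):
        votes[a] += c
    return votes.most_common(1)[0][0]
```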
### Phase 1: Initial Response Generation
ReConcile operates with each agent \(A_{i}\) initially generating an answer \(a_{i}^{(0)}\), an explanation \(e_{i}^{(0)}\), and an associated confidence \(p_{i}^{(0)}\in[0,1]\) for the generated answer. Each agent conditions on a zero-shot prompt that instructs it to reason about the problem 'step-by-step'. See 'Phase 1' in Fig. 2; the prompt is shown in Fig. 5.
### Phase 2: Multi-round Discussion
ReConcile then enters a discussion phase, consisting of \(R\) rounds (see 'Phase 2' in Fig. 2). Discussion for round \(r\) proceeds as follows. For each agent \(A_{i}\), ReConcile develops a discussion prompt \(\mathcal{D}_{i}^{(r)}\) (as shown in Fig. 5), consisting of the following three components.
* **Grouped responses of all agents from the previous round.**\(\mathcal{D}_{i}^{(r)}\) consists of the answers \(\{a_{j}^{(r-1)}\}_{j=1}^{n}\) and explanations \(\{e_{j}^{(r-1)}\}_{j=1}^{n}\) of all agents from the previous round \((r-1)\). In order to foster better discussions, ReConcile summarizes this information by grouping the responses into distinct answer categories and appends all plausible explanations for each answer, as illustrated on the left side of Fig. 2 and Fig. 5.
* **Confidence associated with the answers.** All agents are not equally confident of their answers. Hence, an effective discussion should also take each agent's uncertainty into account. Since our agents are black-box models, we estimate each agent's confidence \(p_{i}^{(r)}\) in round \(r\) by directly prompting the agent to verbally quantify its uncertainty (Xiong et al., 2023b). This is also shown in our prompt in Fig. 5.
* **Convincing samples from all other agents.** Finally, the prompt also contains convincing samples \(C_{j}\) for all other agents \(A_{j\neq i}\). When an agent tries to reassess its reasoning in light of the reasoning provided by other agents, we hypothesize that it should benefit from conditioning on demonstrations that can convince other agents. In order to obtain such convincing samples for an agent \(A_{j}\), we choose a small number of samples (4 in our experiments) from the training set for which the agent's initial answer is wrong but conditioning on the corresponding human explanation, rectifies the answer.4 See Fig. 3 for an illustration of the process.
Based on the above components, the prompt is formally defined as:
\[\mathcal{D}_{i}^{(r)}=\{a_{j}^{(r-1)},e_{j}^{(r-1)},p_{j}^{(r-1)},C_{j\neq i}\}_{j =1}^{n}\]
Each agent \(A_{i}\) in discussion round \(r\) then conditions on the corresponding discussion prompt \(D_{i}^{(r)}\) to generate an updated answer \(a_{i}^{(r)}\), explanation \(e_{i}^{(r)}\), and associated confidence \(p_{i}^{(r)}\), to be used in the next round. In each round, an agent can review the stances and opinions on the table, reassess all agents' reasoning processes and solutions, and subsequently provide an updated response. Demonstrations of convincing explanations enable the agent to generate explanations that are more likely to convince other agents to reach a better consensus in the following round.
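The grouping step described above can be illustrated with a minimal helper that buckets the previous round's responses by distinct answer; the function name and string layout below are our own simplification of the prompt format in Fig. 2.

```python
from collections import defaultdict

def group_responses(answers, explanations, confidences):
    """Bucket previous-round responses by distinct answer, as on the left of Fig. 2."""
    groups = defaultdict(list)
    for answer, expl, conf in zip(answers, explanations, confidences):
        groups[answer].append((expl, conf))
    lines = []
    for answer, items in groups.items():
        lines.append(f"Answer: {answer} ({len(items)} agent(s))")
        for expl, conf in items:
            lines.append(f"  - explanation (confidence {conf:.2f}): {expl}")
    return "\n".join(lines)

# Example with three agents, two of which agree:
print(group_responses(["yes", "no", "yes"],
                      ["reason A", "reason B", "reason C"],
                      [0.9, 0.8, 0.7]))
```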
### Phase 3: Final Answer Generation
For each data point, ReConcile continues the discussion for a maximum of \(R\) rounds or terminates it as soon as a consensus is reached (i.e., all agents agree on the same answer). At the end of any round \(r\), ReConcile generates the final answer \(\hat{a}^{(r)}\) for that round using a weighted voting scheme (see the right side of Fig. 2). In particular, it converts the model's confidence into a weight and employs this weight in a weighted voting scheme to determine the final answer. Directly using confidence scores as the voting weights is less effective due to the overconfidence problem of LLMs (Xiong et al., 2023; Tian et al., 2023; Mielke et al., 2022). Specifically, LLMs tend to produce consistently high confidence scores, which can make it challenging to discern subtle distinctions in confidence levels across different outputs. To address this issue, we employ the following simple yet effective rescaling technique to adjust the confidence scores \(p_{i}^{(r)}\), facilitating better differentiation of confidence levels.
\[f(p_{i}^{(r)})=\begin{cases}1.0,&\text{if }p_{i}^{(r)}=1.0\\ 0.8,&\text{if }0.9\leq p_{i}^{(r)}<1.0\\ 0.5,&\text{if }0.8\leq p_{i}^{(r)}<0.9\\ 0.3,&\text{if }0.6<p_{i}^{(r)}<0.8\\ 0.1,&\text{otherwise}\end{cases}\]
Figure 3: Method for choosing convincing samples for each agent. For illustration, we show one such sample for each agent. A convincing sample for ChatGPT consists of a question, gold answer, and a ‘corrective’ human explanation that can rectify its initial incorrect answer. Then the other two agents (Bard and Claude2) use it during the discussion for in-context learning to revise their respective answers and explanations to convince ChatGPT.
where \(p_{i}^{(r)}\) is the original confidence for agent \(A_{i}\) in round \(r\) and \(f(p_{i}^{(r)})\) is the corresponding adjusted score. As we will show later in the experiments, this simple recalibration method, used as a weighing scheme for obtaining the final answer, works well in practice across multiple datasets. Fig. 9 in the Appendix also shows that it helps reduce the Expected Calibration Error (ECE), a popular calibration metric (Naeini et al., 2015). While we note that recalibration can also be achieved through a learned model (e.g., Platt Scaling (Platt et al., 1999)), we refrain from using such models because ReConcile is primarily designed as a few-shot method, and developing a recalibration model would necessitate access to a substantial number of annotated samples.
We use \(f(p_{i}^{(r)})\) to perform a weighted vote to generate the final answer as follows.
\[\hat{a}^{(r)}=\operatorname*{arg\,max}_{a}\sum_{i}f(p_{i}^{(r)})\mathbb{1} \left(\hat{a}_{i}^{(r)}=a\right)\]
where \(a\) is a distinct answer generated by any of the agents.
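A minimal sketch of the rescaling rule and the weighted vote in plain Python (the function names are ours; ties between equally weighted answers are broken arbitrarily by `max`):

```python
def rescale_confidence(p: float) -> float:
    """Adjust a verbalized confidence score following the piecewise rule above."""
    if p == 1.0:
        return 1.0
    if 0.9 <= p < 1.0:
        return 0.8
    if 0.8 <= p < 0.9:
        return 0.5
    if 0.6 < p < 0.8:
        return 0.3
    return 0.1

def weighted_vote(answers, confidences):
    """Return the answer with the largest sum of rescaled confidence weights."""
    scores = {}
    for a, p in zip(answers, confidences):
        scores[a] = scores.get(a, 0.0) + rescale_confidence(p)
    return max(scores, key=scores.get)

# Example: two agents say "yes" (conf 0.85 each), one says "no" (conf 0.95).
print(weighted_vote(["yes", "yes", "no"], [0.85, 0.85, 0.95]))  # -> "yes" (0.5 + 0.5 > 0.8)
```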
## 4 Experimental Setup
### Implementation Details of ReConcile
We primarily implement ReConcile with three state-of-the-art LLMs: ChatGPT, Bard, and Claude2, engaging them in up to three rounds of discussion. Later, in Section 5.1, we also develop and experiment with a version of ReConcile that replaces ChatGPT with GPT-4 as one of the agents. Henceforth, we will refer to the initial predictions made by the agents as 'Round 0' of discussion. During decoding, we set the temperature to 0.7 for ChatGPT and Bard and use the default setting for Claude2. All implementations involving ChatGPT are using _gpt-3.5-turbo-0613_ from Azure OpenAI.5 We retrieve results from Claude2 by posting requests to their webpage6, and for Bard, we use _chat-bison-001_ from PaLM2 API7 to obtain responses. For each agent, we use four demonstrations of convincing samples.
Footnote 5: [https://oai.azure.com/](https://oai.azure.com/)
Footnote 6: [https://claude.ai/chats](https://claude.ai/chats)
Footnote 7: [https://developers.generativeai.google/products/palm](https://developers.generativeai.google/products/palm)
### Tasks and Metrics
We evaluate ReConcile on two commonsense reasoning and two math reasoning tasks. These include (1) StrategyQA (Geva et al., 2021), a task of implicit reasoning for multi-hop questions, (2) ECQA (Aggarwal et al., 2021), a commonsense reasoning dataset, (3) GSM8K (Cobbe et al., 2021), a benchmark of math word problems, and (4) AQuA (Ling et al., 2017), a dataset of algebraic word problems. Owing to the costly nature of conducting experiments with black-box models and the limit imposed on the number of API calls, we follow many prior works (Du et al., 2023; Bian et al., 2023; Besta et al., 2023; Yao et al., 2023) and experiment with a subset of 100 samples (from the validation set for StrategyQA and the test set for all other datasets). We report accuracy and its associated standard deviation for all tasks. For each experiment, we conduct at least three runs on the same test samples with the same prompts, primarily accounting for the variance caused due to the decoding strategy.
## 5 Results and Analysis
### ReConcile improves reasoning over single-agent and multi-agent baselines
Our first experiment evaluates the overall reasoning capabilities of ReConcile. Initially, we focus on the version of ReConcile with ChatGPT, Bard, and Claude2 as the three agents. Then, later in this section, we also report our findings when using a stronger GPT-4 model as an agent. We compare ReConcile to prior works that can be broadly grouped into three categories:
* **Vanilla single-agent methods.** Our first set of baselines includes zero-shot Chain-of-Thought prompting with GPT-4, ChatGPT, Bard, and Claude2, which instructs the model to answer the question 'step-by-step' (Kojima et al., 2022).
* **Advanced single-agent methods.** Next, we compare with (1) a Self-Refine (SR) baseline that iteratively generates feedback leveraging the model itself and then uses that feedback to refine the output (Madaan et al., 2023), (2) a Self-Consistency (SC) baseline that samples multiple reasoning paths and generates the most consistent answer (Wang et al., 2023b), and (3) their combination, SR+SC, that first conducts multiple iterations of refinement, followed by a majority vote of the refined answers. We implement these baselines on top of ChatGPT.
* **Multi-agent methods with a single backbone model.** Our final baselines are two recently proposed multi-agent debating methods. In particular, we compare with Du et al. (2023), who propose a multi-agent debate between multiple instances of ChatGPT and Liang et al. (2023), who additionally include a judge to monitor the debate process.
For fair comparisons, all iterative methods (either involving refinement, debate, or discussion) go through 3 rounds of iteration and all multi-agent methods are implemented with three agents. We report our results in Table 1. Our primary observation is that across all four datasets, ReConcile, developed with ChatGPT, Bard, and Claude2 as the agents, improves upon all single-agent and multi-agent baselines that are also built on top of these agents (see last row). On commonsense reasoning tasks like StrategyQA and ECQA, our method also outperforms GPT-4 (without using it as an agent). Note that between all single agents, GPT-4 exhibits significantly better performance on all four benchmarks. Therefore, ReConcile's ability to match or surpass it while leveraging the three comparatively weaker agents (ChatGPT, Bard, and Claude2) shows the promise of our framework. On the math reasoning tasks (GSM8K and AQuA), ReConcile matches or closes the gap to GPT-4, which is also the state-of-the-art LLM on GSM8K. GPT-4's especially strong results on GSM8K could be attributed in part to the inclusion of some of GSM8K's training samples in GPT-4's pre-training data (OpenAI, 2023).
As also shown in prior works, advanced single-agent methods are better than their vanilla counterparts (see 'Method Category' column in Table 1) (Wang et al., 2023b; Madaan et al., 2023). Multi-agent debate with ChatGPT (Du et al., 2023) improves results further, especially on the math datasets. On the other hand, debate with multiple instances of Bard and Claude2 is not effective. We hypothesize that the feedback, originating from multiple instances of the same underlying model (Bard or Claude2), is not diverse enough. While multi-agent debate with Bard and Claude2 is not effective, when they team up with ChatGPT in a multi-round discussion, ReConcile outperforms debate frameworks that are built on top of these agents.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Method Category} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{Datasets} \\ \cline{3-6} & & StrategyQA & ECQA & GSM8K & AQuA \\ \hline \multirow{4}{*}{Vanilla Single-agent} & _GPT-4_ & _75.6\(\pm\)4.7_ & _73.3\(\pm\)0.4_ & _90.7\(\pm\)1.7_ & _65.7\(\pm\)4.6_ \\ & ChatGPT & 67.3\(\pm\)3.6 & 66.0\(\pm\)1.8 & 73.7\(\pm\)3.1 & 44.7\(\pm\)0.5 \\ & Bard & 69.3\(\pm\)4.4 & 56.8\(\pm\)2.7 & 58.7\(\pm\)2.6 & 33.7\(\pm\)1.2 \\ & Claude2 & 73.7\(\pm\)3.1 & 66.7\(\pm\)2.1 & 79.3\(\pm\)3.6 & 60.3\(\pm\)1.2 \\ \hline \multirow{2}{*}{Advanced Single-agent} & Self-Refine (w/ ChatGPT) & 66.7\(\pm\)2.7 & 61.8\(\pm\)1.8 & 74.3\(\pm\)2.5 & 45.3\(\pm\)2.2 \\ & Self-Consistency (w/ ChatGPT) & 73.3\(\pm\)2.1 & 79.0\(\pm\)1.8 & 80.7\(\pm\)1.5 & 54.0\(\pm\)2.9 \\ & SR + SC (w/ ChatGPT) & 72.2\(\pm\)1.9 & 71.9\(\pm\)2.1 & 81.3\(\pm\)1.7 & 58.3\(\pm\)3.7 \\ \hline \multirow{4}{*}{Single-model Multi-agent} & Debate (w/ ChatGPT) & 66.7\(\pm\)3.1 & 62.7\(\pm\)1.2 & 83.0\(\pm\)2.2 & 65.3\(\pm\)3.1 \\ & Debate (w/ Bard) & 65.3\(\pm\)2.5 & 66.3\(\pm\)2.1 & 56.3\(\pm\)1.2 & 29.3\(\pm\)2.2 \\ \cline{1-1} & Debate (w/ Claude2) & 71.3\(\pm\)2.2 & 68.3\(\pm\)1.7 & 70.7\(\pm\)8.6 & 62.7\(\pm\)2.6 \\ \cline{1-1} & Debate+Judge (w/ ChatGPT) & 69.7\(\pm\)2.1 & 63.7\(\pm\)2.5 & 74.3\(\pm\)2.5 & 57.3\(\pm\)2.1 \\ \hline \hline Multi-model Multi-agent & ReConcile (ChatGPT, Bard, Claude2) & **79.0\(\pm\)1.6** & **74.7\(\pm\)0.4** & **85.3\(\pm\)2.2** & **66.0\(\pm\)0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of ReConcile (w/ ChatGPT, Bard, Claude2) with vanilla single-agent methods, improved single-agent methods (such as self-refine, self-consistency), and recently proposed multi-agent debating frameworks. Across all four reasoning benchmarks, ReConcile outperforms all prior single-agent and multi-agent methods. On commonsense reasoning benchmarks (StrategyQA and ECQA), ReConcile also outperforms GPT-4. All results are on a random subset of 100 samples. Notably, we obtain further improvements on StrategyQA at **89% (by absolute 10%)** when using GPT-4 as an agent in ReConcile (see Sec. 5.1 and Table 3 for details).
It obtains maximum improvements of 7.7% accuracy on the commonsense reasoning tasks compared to the strongest baseline, Multi-agent debate with Claude2. Improvements in the math reasoning tasks are relatively moderate because of ChatGPT's initially strong performance on these tasks.
**ReConcile also improves reasoning capabilities of all agents individually.** So far, we have demonstrated that the final team performance of the agents improves through the discussion process. Next, we investigate the round-wise accuracy of each individual agent on the StrategyQA dataset in Table 2. In addition to the accuracy obtained by each agent individually, we also report the team performance, using three different voting mechanisms. These are: (1) our proposed weighted vote, (2) simple majority vote, and (3) choosing the agent with the maximum confidence. We observe that after the initial response generation, both the individual and the team accuracy increase for at least two rounds when using the weighted voting and majority voting mechanisms. Note that simply choosing the most confident agent proves ineffective. Finally, as the discussion progresses further to round 3, each agent's performance tends to saturate (see more details about ReConcile's faster and better consensus in Sec. 5.4).
**Using GPT-4 as an agent in ReConcile.** In the above section, we showed the effectiveness of ReConcile using ChatGPT, Bard, and Claude2 as the three agents to even outperform GPT-4 in some cases. Based on our results in Table 1 and prior work (OpenAI, 2023; Zheng et al., 2023), GPT-4 is likely the strongest (and also the most expensive) LLM out of all the models we experiment with. Next, as an initial investigation, we also study the potential of GPT-4 to participate in a multi-round discussion with comparatively weaker agents. To this end, we implement ReConcile with GPT-4, Bard, and Claude2 (i.e., replacing ChatGPT with GPT-4). In Table 3, we report the accuracy obtained by each agent at the end of each discussion round. In addition, we provide the zero-shot performance of each agent as an additional baseline. Note that the zero-shot results are different from ReConcile's Round 0 results because of the differences in their respective prompts: the latter incorporates convincing samples. With increasing rounds, the accuracy of each agent improves, showing that all models benefit from mutual discussions. GPT-4's absolute improvement by 10% is particularly encouraging because it is the strongest participant, and our result highlights the potential for a stronger agent to obtain useful external feedback from comparatively weaker agents and thereby augment its own capability. To further validate that this improvement is indeed due to the _discussion process of_ ReConcile _with other agents_, we compare GPT-4's final accuracy (\(89.0_{\pm 1.4}\)) with 3 rounds of Debate and Self-Refine baselines (both also implemented with GPT-4). We observe that both of these baselines yield significantly lower accuracies, at \(78.0_{\pm 0.8}\) and \(83.7_{\pm 1.2}\) respectively. In summary, our ReConcile method holds the potential to involve agents with diverse capabilities in round-table discussions, such that all agents improve individually. Note that the weighted voting scheme becomes less effective in such scenarios and tends to converge towards the dominant agent (GPT-4, in this case).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multirow{2}{*}{ChatGPT} & \multirow{2}{*}{Bard} & \multirow{2}{*}{Claude2} & \multicolumn{4}{c}{Team Accuracy} \\ \cline{4-7} & & & & Weighted Vote & Majority Vote & Max Conf \\ \hline Round 0 & 71.0\(\pm\)2.1 & 71.7\(\pm\)0.9 & 73.7\(\pm\)1.7 & 74.3\(\pm\)1.2 & 74.2\(\pm\)0.9 & 72.7\(\pm\)1.4 \\ Round 1 & 71.3\(\pm\)0.9 & 77.7\(\pm\)1.2 & 75.3\(\pm\)0.8 & 77.0\(\pm\)0.9 & 76.3\(\pm\)1.2 & 74.0\(\pm\)1.7 \\ Round 2 & 76.7\(\pm\)0.8 & 77.3\(\pm\)1.4 & 77.7\(\pm\)0.9 & 79.0\(\pm\)0.5 & 77.1\(\pm\)1.3 & 74.7\(\pm\)2.1 \\ Round 3 & 77.0\(\pm\)0.9 & 76.7\(\pm\)0.8 & 77.0\(\pm\)1.2 & 78.7\(\pm\)1.2 & 78.0\(\pm\)0.5 & 74.7\(\pm\)1.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The round-wise accuracy of ChatGPT, Bard, and Claude2 and their team performance (using different aggregation methods) with ReConcile on the StrategyQA dataset.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & GPT-4 & Claude2 & Bard \\ \hline Zero-shot & 75.6\(\pm\)4.7 & 73.7\(\pm\)3.1 & 69.3\(\pm\)4.4 \\ \hline Round 0 & 79.0\(\pm\)3.7 & 72.0\(\pm\)0.8 & 75.0\(\pm\)0.8 \\ Round 1 & 87.7\(\pm\)1.2 & 75.0\(\pm\)0.8 & 76.0\(\pm\)0.8 \\ Round 2 & 88.3\(\pm\)0.9 & 76.7\(\pm\)0.9 & **78.7\(\pm\)0.9** \\ Round 3 & **89.0\(\pm\)1.4** & **79.3\(\pm\)1.2** & 77.6\(\pm\)2.6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy of GPT-4, Claude2, and Bard on StrategyQA after every discussion round when engaged in a discussion via ReConcile. GPT-4’s final accuracy (\(89.0_{\pm 1.4}\)) after round 3 also outperforms Debate (\(78.0_{\pm 0.8}\)) and Self-Refine (\(83.7_{\pm 1.2}\)) baselines with GPT-4.
This is why we primarily focus on studying agents with similar capabilities (e.g., ChatGPT, Bard, and Claude2) in this paper, and all our other analyses in the following sections are also with this setup.
### Ablations of ReConcile: All Components are Beneficial
In Table 4, we evaluate the effect of individual components of ReConcile on the StrategyQA dataset. In particular, we compare ReConcile with four of its variants: (1) **w/o Multiple Models**: Instead of using different models as different agents, we use ChatGPT as the backbone for all three agents, (2) **w/o Grouping**: We simply concatenate the generated responses from different agents without summarizing and grouping their answers, (3) **w/o Convincingness**: We remove convincing samples from the initial prompt and the discussion prompt, and (4) **w/o Confidence Estimation**: We do not use any confidence estimates during the discussion and compute majority vote as the final answer. We show that each component has a positive impact on ReConcile with varying capacities. The effect of using different models as different agents is particularly significant and we observe a 6.8% improvement compared to only using ChatGPT as all three agents in ReConcile. This reinforces our hypothesis that diverse LLMs have complementary strengths and when put together in a round table discussion, they can learn from diverse external feedback and refine their responses to reach a better consensus. Next, grouping answers is beneficial too, demonstrating that summarizing the stances of each agent fosters better discussion. We also show that using convincing samples leads to a 4.5% improvement in accuracy. We analyze the role of convincing samples in more detail in Sec. 5.3. Finally, estimating the confidence of each agent and using it to weigh the agents' answers outperforms majority voting.
### Convincing Samples Improve Both ReConcile and Multi-agent Debate
In this section, we conduct a comprehensive analysis of the role of convincing samples that are, essentially, demonstrations of answer-rectifying human explanations. Recall that ReConcile selects a sample as convincing if the corresponding human explanation helps rectify an agent's initially incorrect answer. Based on this, Table 4 showed that at the cost of collecting a small number of human explanations (four in our case), we can obtain significant improvements (ReConcile row versus 'w/o Convincingness' row). Next, we consider a scenario where no human explanations are present. Table 5 shows that ReConcile, even without access to any convincing samples, outperforms the multi-agent debate baseline by absolute 7.8 points (first versus second row, 74.5% v.s. 66.7%). If random human explanations (i.e., general explanations that may not necessarily ensure answer rectification, as per Fig. 3) are available (third row), we obtain some small improvements; but our convincing samples that are selected based on our novel answer-rectification criterion (last row) improve the results substantially. In Appendix A.2 and A.3, we show two illustrative examples of the discussion process without and with convincing samples respectively.
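The selection criterion for convincing samples (Fig. 3) can be sketched as follows; `call_agent` is again a hypothetical wrapper returning an (answer, explanation, confidence) triple, and the exact prompt wording is simplified.

```python
def select_convincing_samples(call_agent, train_set, k=4):
    """Pick up to k samples whose gold human explanation flips the agent's wrong answer."""
    convincing = []
    for question, gold_answer, human_expl in train_set:
        initial_answer, _, _ = call_agent(question)
        if initial_answer == gold_answer:
            continue                                   # only initially-wrong samples qualify
        revised_answer, _, _ = call_agent(f"{question}\nHuman explanation: {human_expl}")
        if revised_answer == gold_answer:              # the explanation rectifies the answer
            convincing.append((question, gold_answer, human_expl))
            if len(convincing) == k:
                break
    return convincing
```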
Next, in Table 6, we show that convincing samples, in general, can boost other multi-agent frameworks as well. On top of Multi-agent Debate (with ChatGPT agents), the inclusion of convincing samples leads to improved results compared to the original setup and one with random human explanations. To summarize, being able to convince another agent is a generic concept that can be applied to other multi-agent systems.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & Accuracy \\ \hline Debate (Du et al., 2023) & 66.7\(\pm\)3.1 \\ Debate (w/ Random Expl) & 68.7\(\pm\)2.2 \\ Debate (w/ Convincing Expl) & 69.5\(\pm\)1.7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Convincing samples improve other multi-agent frameworks like Debate.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & Accuracy \\ \hline ReConcile & 79.0\(\pm\)1.6 \\ w/o Multiple Models & 72.2\(\pm\)2.1 \\ w/o Grouping & 76.7\(\pm\)2.5 \\ w/o Convincingness & 74.5\(\pm\)1.7 \\ w/o Conf Estimation & 77.7\(\pm\)1.3 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablations of ReConcile on the StrategyQA dataset.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & Accuracy \\ \hline Multi-agent Debate & 66.7\(\pm\)3.1 \\ RC (w/o Convincing Expl) & 74.5\(\pm\)1.7 \\ RC (w/ Random Expl) & 75.0\(\pm\)2.5 \\ RC (w/ Convincing Expl) & 79.0\(\pm\)1.6 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of the role of convincing explanations on StrategyQA. ReConcile (RC) without convincing samples outperforms multi-agent debate and with it obtains further gains.
### Analysis per Discussion Round: ReConcile Reaches Faster and Better Consensus
ReConcile terminates discussion as soon as a consensus is reached, i.e., all agents have converged to the same answer. Extending the number of discussion rounds will be costlier due to the increased API calls to black-box models. Hence, achieving faster consensus while maintaining comparable accuracy improvements is more efficient. To study this, in Fig. 4a, we plot the accuracy trends at the end of each discussion round; in Fig. 4b, we plot the fraction of samples for which consensus has been reached after each discussion round; and finally, in Fig. 4c, we analyze accuracy as a function of consensus. From the first plot, we make two important observations: (1) ReConcile improves reasoning performance for two discussion rounds, following which the accuracy saturates, (2) Compared to the debate baselines, ReConcile is not only superior after every round but also peaks at the highest accuracy of 79.0% versus 71.3% for the baselines. Next, from Fig. 4b, our observations are also two-fold: (1) In the initial rounds (round 0 and 1), ReConcile's consensus percentage is lower because the discussion takes place between diverse LLM agents. Diverse agents lead to more differences in opinions initially. (2) However, as the discussion proceeds, ReConcile establishes consensus for all samples by round 3, while in the debate baseline, 13% of the samples do not converge even after round 4. Finally, Fig. 4c shows that for the fraction of samples that enter the multi-round discussion phase (i.e., their initial answers did not have a consensus), accuracy is positively correlated with consensus percentage. In other words, as a greater number of samples reach a consensus, accuracy proportionally improves, effectively pointing to better consensus. In summary, we demonstrate that ReConcile reaches _faster_ and _better_ consensus compared to prior multi-agent baselines, in spite of starting with more diverse responses from different models.
## 6 Related Work
### Reasoning with Large Language Models
Progress in Large Language Models has led to the development of advanced prompting and fine-tuning techniques for reasoning. Representative methods include Chain-of-Thought (CoT) (Kojima et al., 2022; Wei et al., 2022; Wang et al., 2023a) and Tree-of-Thought prompting (Yao et al., 2023), self-consistency (Wang et al., 2023b), meta-reasoning over multiple paths (Yoran et al., 2023), use of scratchpads (Nye et al., 2021), training verifiers (Cobbe et al., 2021), self-reflection (Shinn et al., 2023; Madaan et al., 2023; Wang and Zhao, 2023), and fine-tuning via bootstrapping models (Zelikman et al., 2022; Lewkowycz et al., 2022). Eliciting reasoning from a single agent, while promising, is prone to model bias, degeneration-of-thought, and is fundamentally limited by a lack of diverse insights about the problem due to the absence of external feedback (Liang et al., 2023).
Figure 4: Analysis of the three aspects of ReConcile: discussion round, accuracy, and consensus percentage. (a) Comparison of ReConcile with Debate baselines showing the accuracy after each discussion or debate round. (b) Comparison of ReConcile with Debate baselines showing the fraction of samples for which a consensus is reached (i.e., all agents have converged to the same answer) after each round. (c) Analysis of accuracy as a function of consensus percentage: we separately study the samples that enter the discussion without an initial consensus (‘w/ Discussion’ in the plot).
### Reasoning in Multi-Agent Systems
A recent line of work has explored student-teacher frameworks with the goal of distilling reasoning capabilities from a stronger teacher to a weaker student (Magister et al., 2023; Fu et al., 2023; Ho et al., 2023; Saha et al., 2023; Mukherjee et al., 2023). As opposed to a teacher teaching weaker agents, we are interested in developing a multi-agent system where different LLM agents have their unique strengths and try to collaboratively improve the performance on a reasoning task by convincing each other (using corrective human explanations for in-context learning) to reach a better consensus. Among notable prior works, researchers have proposed multi-agent debating frameworks (Du et al., 2023; Liang et al., 2023; Chan et al., 2023; Xiong et al., 2023) but such efforts are still largely limited to multiple instances of the same underlying language model. We argue that relying on a single model limits the potential of complementary benefits from different model families and the advantage of ensemble learning. Different models possess varied strengths and weaknesses and consequently, combining the contributions of each model holds the promise of improved robustness and overall accuracy. Moreover, estimating the confidence of each agent and being able to defend or improve one's opinions become more prominent components in such multi-model multi-agent systems because of the individual differences. Overall, Table 7 summarizes ReConcile's key differences compared to prior single-agent and multi-agent reasoning methods.
### Ensembling Large Pretrained Models
Large pretrained models, by virtue of being trained on different data and with architectural variations, exhibit distinct capabilities. This has led to the development of ensembles (Sagi and Rokach, 2018) in multimodal learning (Zeng et al., 2023; Li et al., 2022). Mixture of Experts, a popular ensemble learning technique, trains multiple smaller specialized models to improve robustness and overall accuracy (Jacobs et al., 1991; Shazeer et al., 2017; Du et al., 2022). Specific to language models, Self-Consistency (Wang et al., 2023) generates diverse reasoning paths using CoT and chooses the most consistent answer as the final output. Jiang et al. (2023) propose LLM-Blender, a method to rank and fuse generations from different models. Different from these, we study communication via explanations between distinct LLM agents and their ability to discuss and convince each other in order to improve collective reasoning.
## 7 Conclusion
We presented ReConcile, a multi-agent framework for improving reasoning with diverse LLM agents, engaged in multiple rounds of discussion via confidence estimation and generating explanations that can convince other agents. ReConcile demonstrated strong results on multiple commonsense and mathematical reasoning benchmarks, consistently outperforming prior single-agent and multi-agent baselines and even improving upon GPT-4 on some benchmarks. Moreover, when GPT-4 was used as one of the agents, ReConcile improved its initial accuracy by 10 absolute points. We also showed that compared to a multi-agent debate baseline, ReConcile helps establish better and faster consensus between agents.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Refine & Ensemble & Multi-Agent & Multi-Model & Convince & Confidence \\ \hline Self-Refine (SR) (Madan et al., 2023) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ Self-Consistency (SC) (Wang et al., 2023) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ SR + SC & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ Multi-Agent Debate (Du et al., 2023) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ Multi-Agent Debate (Judge) (Liang et al., 2023) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ \hline ReConcile (Ours) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Summary highlighting the main differences between prior work and ReConcile. \(\blacksquare\) means supported and \(\square\) means not supported. ReConcile supports multi-agent discussion between multiple models with confidence estimation and convincingness. * = Note that Du et al. (2023) primarily experiment with multiple instances of ChatGPT as different agents, and conduct an initial investigation with 20 samples using ChatGPT and Bard as the two agents (but without convincingness or confidence voting components).
ReConcile shows the promise of leveraging diverse language agents in a collaborative setup to discuss and accomplish complex tasks.
## Limitations
Given that the current best open-source models often face difficulties with lengthy instructions and prompts (Zheng et al., 2023), our framework employs three prominent API-based models as agents. However, we note that we lack complete knowledge of the data that these models have been exposed to, their scales in terms of parameters, and due to their API access, we also do not possess complete control over their behavior. Depending on API-based models also necessitates the need to prompt these models to obtain their confidence estimates. While this approach proves effective as evidenced by our results, we note that these estimates remain post-hoc in nature. Nevertheless, it is worth highlighting that this limitation could potentially be mitigated in the future should a new state-of-the-art open-sourced model emerge, demonstrating robust capabilities in adhering to instructions, handling extensive prompts, and adapting from feedback. Moreover, we are also making our code, prompts, and result logs publicly available to enable replication of our findings.
## Acknowledgments
We thank Peter Hase and Elias Stengel-Eskin for useful feedback and suggestions regarding experiments. This work was supported by NSF-CAREER Award 1846185, NSF-AI Engage Institute DRL-2112635, DARPA MCS Grant N66001-19-2-4031, Accelerate Foundation Models Research program, and a Google PhD Fellowship. The views contained in this article are those of the authors and not of the funding agency.
Large Language Models (LLMs) still struggle with natural language reasoning tasks. Inspired by Minsky (1988), we propose ReConcile, a multi-model multi-agent framework structured as a round-table discussion among diverse LLM agents. ReConcile fosters collaborative reasoning among LLM agents, learning over multiple discussion rounds to improve upon the other agents' answers, and reaches a better consensus by using a confidence-weighted voting mechanism. In each round, ReConcile uses a 'discussion prompt' that presents the grouped answers, explanations, and confidence scores generated by each agent in the previous round, as well as answer-rectifying human explanations used to convince the other agents |
2303.18109 | Contests in two fronts | Within the framework of Game Theory, contests study decision-making in those
situations or conflicts when rewards depend on the relative rank between
contenders rather than their absolute performance. By relying on the formalism
of Tullock success functions, we propose a model where two contenders fight in
a conflict on two fronts with different technology levels associated: a front
with large resource demand and another with lower resource requirements. The
parameter of the success function in each front determines the resource demand
level. Furthermore, the redistribution or not of resources after a tie defines
two different games. We solve the model analytically through the best-response
map dynamics, finding a critical threshold for the ratio of the resources
between contenders that determines the Nash Equilibrium basin and,
consequently, the peace and fighting regimes. We also perform numerical
simulations that corroborate and extend these findings. We hope this study will
be of interest to areas as diverse as economic conflicts and geopolitics. | A. de Miguel-Arribas, J. Morón-Vidal, L. M. Floría, C. Gracia-Lázaro, L. Hernández, Y. Moreno | 2023-03-31T14:56:38 | http://arxiv.org/abs/2303.18109v1 | # Contests in two fronts
###### Abstract
Within the framework of Game Theory, contests study decision-making in those situations or conflicts when rewards depend on the relative rank between contenders rather than their absolute performance. By relying on the formalism of Tullock success functions, we propose a model where two contenders fight in a conflict on two fronts with different technology levels associated: a front with large resource demand and another with lower resource requirements. The parameter of the success function in each front determines the resource demand level. Furthermore, the redistribution or not of resources after a tie defines two different games. We solve the model analytically through the best-response map dynamics, finding a critical threshold for the ratio of the resources between contenders that determines the Nash Equilibrium basin and, consequently, the peace and fighting regimes. We also perform numerical simulations that corroborate and extend these findings. We hope this study will be of interest to areas as diverse as economic conflicts and geopolitics.
## I Introduction
Contest Theory is a mathematical tool to model situations where two or more agents riskily compete, at a cost, for a prize [1; 2; 3; 4; 5]. The strategic behavior in contests has attracted the attention of academia for many years [6; 7; 8], and has applications ranging from economics to conflict resolution and geopolitics. Actually, contests are studied in areas as diverse as labor economics, industrial organization, public economics, political science, rent-seeking, patent races, military combats, sports, or legal conflicts [9; 10; 7].
Formally, a contest is characterized by a set of agents, their respective possible efforts, a tentative payoff for each contestant (the prize), and a set of functions for the individual probabilities of obtaining the prize that takes the agents' efforts as parameters. The prize may, or not, be divisible, and contestants may or not have the same valuation of the prize [5].
A case of special interest is the contests in rent-seeking, which study those situations where there is no contribution of productivity nor added value [2; 11]. Therefore, all the contenders' effort is devoted to winning the contest and so obtaining the whole payoff or the greatest possible share of it. This theoretical framework is applied to study issues such as elimination tournaments [12], conflicts [13; 14], political campaigns [15] or lobbying [16; 11].
In this regard, wars also constitute contests, where contenders compete for resources without adding productivity or value, being the appropriation of resources the main cause for war [17; 18; 19], and therefore they are amenable to being theoretically studied as strategic tournaments [20; 21; 22; 23; 24; 25]. Similarly, in economic contests, resources allocation, and redistribution play also a key role in the strategic decision-making [26; 10].
Despite a large amount of research on contest theory, most theoretical work is limited to one-front contests. Nevertheless, real-world competitions many times take place on two or more fronts. For example, a company fighting against a bigger one may be tempted to devote its resources (or some of them) to low-cost marketing instead of the costlier conventional one. This low-cost advertising, so-called _guerrilla marketing_[27; 28], constitutes an active field of study [29; 30; 31; 32]. Some examples of this guerrilla marketing are ambient advertising [33; 32], stealth marketing [34], word-of-mouth marketing [35], social media marketing [36], evangelism marketing [37], viral marketing [38], or marketing buzz [39].
In this work, we focus on either armed or economic conflicts susceptible to being simultaneously fought on two front lines: one corresponding to a costly front (conventional war, costly marketing) and the other one to a low-cost front (guerrilla warfare/marketing). To that end, we rely on the formalism of Tullock's combats success functions by proposing two simultaneous fronts sustained by the same pair of contenders. Each of these fronts is characterized by a value of the parameter \(\gamma\) of the Tullock function. The parameter \(\gamma\) represents the technology associated with that front, i.e., the influence of the resources invested on the winning probability. The whole interaction constitutes a zero-sum game: the sum of the resources invested in both fronts makes up the total prize of the game or combat. That prize will go to the contender winning on both fronts if that is the case. Otherwise, i.e., if each contender wins in a front, we propose two scenarios, each constituting a different game. First, we consider those situations in which contenders recover their investments in case of a tie. This setup, hereafter the keeping resources game (KR), mimics those real-world conflicts where, after a tie, the previous _status quo_ is recovered, as mergers and acquisitions attempts in economics or, regarding army conflicts, abortable invasion temptations. The second setup, hereafter the redistributing resources game (RR), captures the cases where, after a tie, each contender gains all the resources invested in the front she won, like an open-ended long-term economic competition or war.
In both setups, a contender will fight if her expected gains overcome her current resources. Then, peace takes place when no contender has the incentive to fight. Otherwise, the combat may repeat until i) one of the contenders wins on both fronts, taking all the resources, or ii) no contender has the incentive to fight.
We solve the system theoretically under the best-response dynamics, showing the existence, for both games, of two regimes regarding the ratio \(r\) of contenders' resources: one with a Nash equilibrium and another without it. We also perform numerical simulations that confirm and extend the analytical results. In both games, the values of Tullock's technology parameters determine an \(r\) threshold value, \(r_{th}\), which points to the boundary between those regimes. This threshold demarcates the separation between war and peace: in the presence of a Nash equilibrium, the combat takes place and otherwise does not. Remarkably, in the KR game, peace takes place for high resource differences. Conversely, in the RR game, peace is reached for low differences.
The rest of the paper is organized as follows. The details of the model, together with combat functions and the best-response maps, are defined in Section II. In sections III and IV, we study the KR and RR games, respectively. The repeated combats are studied in Section V. Finally, Section VI tries to summarize and contextualize the results together with prospective remarks.
## II The model
Conflicts are not always amenable to reaching an agreement or peaceful solution, and "win or lose" scenarios (such as a war [23] or an economic contest [13]) often emerge as the way out to their resolution. A useful, simple probabilistic description of the expected outcome of combat is provided by the formalism of contest success functions (CSF). A CSF [40] is a function of the quantified efforts, or resources, invested by the contenders, that gives the probability of winning the contest. Though CSFs are in general defined for a number of contenders larger than two, we will restrict consideration to dyadic contests, and denote both contenders as \(\mathbf{1}\) and \(\mathbf{2}\).
Let \(x\) be the resources of Contender \(\mathbf{1}\) and \(y\) those of Contender \(\mathbf{2}\). The CSF function called Tullock, for a positive parameter \(\gamma\), meets the requirement that the winning probability \(p\) of contender \(\mathbf{1}\) is invariant under the re-scaling of both contenders' resources, i.e., for all \(\lambda>0\), \(p(\lambda x,\lambda y)=p(x,y)\). Explicitly, the Tullock function:
\[p_{\gamma}(x,y)=\frac{x^{\gamma}}{x^{\gamma}+y^{\gamma}} \tag{1}\]
gives the winning probability of contender \(\mathbf{1}\). A basic assumption behind this result is that win and lose (from a contender perspective) are a mutually exclusive complete set of events, so that \(p_{\gamma}(x,y)=1-p_{\gamma}(y,x)\).
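As a quick numerical illustration of Eq. (1) and its scale invariance, a minimal Python sketch (the function name is ours):

```python
from math import isclose

def tullock(x: float, y: float, gamma: float) -> float:
    """Winning probability of contender 1 with resources x against resources y, Eq. (1)."""
    return x**gamma / (x**gamma + y**gamma)

# Scale invariance: p(lambda*x, lambda*y) = p(x, y) for any lambda > 0.
print(isclose(tullock(2.0, 1.0, 5.0), tullock(20.0, 10.0, 5.0)))             # True
# A large gamma amplifies a resource advantage; a small gamma dampens it.
print(round(tullock(2.0, 1.0, 5.0), 3), round(tullock(2.0, 1.0, 0.5), 3))    # 0.97 0.586
```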
Regarding the consequences of the contest outcome, one assumes that the winner's benefit is the sum \(x+y\) of both resources, and the loser obtains nothing, zero benefits. From the, admittedly narrow, assumption of perfect rationality (i.e. the behavior is determined by the optimization of benefits), the decision to fight should be taken by a contender only if its expected gain after the contest is higher than its current resources.
In this regard, the parameter \(\gamma\) of the Tullock CSF turns out to play a very important role, because when \(\gamma>1\), it is easy to see that whenever \(x>y\), the expected gain for the contender \(\mathbf{1}\) after the combat, \(p_{\gamma}(x,y)(x+y)>x\)
and then the (richer) contender \(\mathbf{1}\) has an incentive to fight, while if \(\gamma<1\), the expected gain for the richer contender is lower than their resources before the combat, \(p_{\gamma}(x,y)(x+y)<x\), and thus it is the poorer contender who should rationally decide to fight.
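For completeness, this claim can be checked directly: assuming \(x>y>0\),

\[p_{\gamma}(x,y)\,(x+y)>x\;\Longleftrightarrow\;x^{\gamma}(x+y)>x\left(x^{\gamma}+y^{\gamma}\right)\;\Longleftrightarrow\;x^{\gamma}y>x\,y^{\gamma}\;\Longleftrightarrow\;\left(\frac{x}{y}\right)^{\gamma-1}>1\;,\]

which holds if and only if \(\gamma>1\); for \(\gamma<1\) the inequality reverses, and an analogous computation shows that it is then the poorer contender whose expected gain, \((1-p_{\gamma}(x,y))(x+y)\), exceeds her pre-combat resources \(y\).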
Following the acutely descriptive terms introduced in [20], we will call _rich-rewarding_ a Tullock CSF with parameter \(\gamma>1\), and _poor-rewarding_ a Tullock CSF with \(\gamma<1\). In this reference, [20], where contests refer to events of "real" war among nations, a (highly costly) conventional war would be described by a rich-rewarding CSF, while guerrilla warfare would better be described by a poor-rewarding CSF Tullock function, which led the authors to refer to \(\gamma\) as "technology parameter", and ponder its relevance to the expectations and chances for peaceful coexistence among nations or coalitions. Correspondingly, in economic contests, a rich-rewarding CSF corresponds to a competition in a conventional (costly) scenario and a poor-rewarding CSF to either a low-cost strategy or a guerrilla marketing scenario [27].
It is not hard to think of a conflict whose resolution is a war on several simultaneous fronts, each characterized by different Tullock parameters, where the "rulers" (decision-makers) of the two conflicting entities are faced with making a decision on the fraction of available resources that should be invested in each front. We will consider here a war between two contenders which is conducted on two fronts, each one characterized by a different Tullock CSF. In the rich-rewarding front, the Tullock parameter is fixed to a value \(\gamma_{r}>1\), while in the poor-rewarding front, the Tullock parameter is \(\gamma_{p}<1\). Note that due to the scaling property of the Tullock function, the resources, \(x\) and \(y\), of the contenders can be rescaled to \(1\) and \(r<1\), respectively, if we assume \(x>y\), without loss of generality. After the rescaling, the contender \(\mathbf{1}\) has resources \(1\), of which a fraction \(\alpha_{1}\) is invested in the rich-rewarding front (and then a fraction \(1-\alpha_{1}\) is invested in the poor-rewarding front). The resources of the contender \(\mathbf{2}\) are \(r<1\), and its investment in the rich-rewarding front is \(\alpha_{2}r\) (and then its investment in the poor-rewarding front is \((1-\alpha_{2})r\)). Note that \(0\leq\alpha_{1}\), \(\alpha_{2}\leq 1\).
In the sequel, we will fix the values of the Tullock parameters, \(\gamma_{r}>1\) (for the CSF of the rich-rewarding front) and \(\gamma_{p}<1\) (poor-rewarding front), to some arbitrary values. We simplify a bit the notation for the winning probability of contender \(\mathbf{1}\) at each front:
\[p(\alpha_{1},\alpha_{2})=\frac{\alpha_{1}^{\gamma_{r}}}{\alpha_{1}^{\gamma_{ r}}+(\alpha_{2}r)^{\gamma_{r}}}\;,\;\;\;\;\;q(\alpha_{1},\alpha_{2})=\frac{(1- \alpha_{1})^{\gamma_{p}}}{(1-\alpha_{1})^{\gamma_{p}}+((1-\alpha_{2})r)^{ \gamma_{p}}}\;, \tag{2}\]
and furthermore, we will simply write \(p\) and \(q\) whenever the arguments are unambiguous. The following relations concerning the partial derivatives of \(p\) and \(q\) are easily obtained:
\[\alpha_{1}\frac{\partial p}{\partial\alpha_{1}}=-\alpha_{2}\frac{\partial p} {\partial\alpha_{2}}=\gamma_{r}p(1-p)\;, \tag{3}\]
\[(1-\alpha_{1})\frac{\partial q}{\partial\alpha_{1}}=-(1-\alpha_{2})\frac{ \partial q}{\partial\alpha_{2}}=-\gamma_{p}q(1-q)\;. \tag{4}\]
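These identities can be checked symbolically; a quick sanity-check sketch, assuming SymPy is available (the printed expressions are expected to simplify to zero):

```python
import sympy as sp

a1, a2, r, gr, gp = sp.symbols('alpha_1 alpha_2 r gamma_r gamma_p', positive=True)
p = a1**gr / (a1**gr + (a2*r)**gr)
q = (1 - a1)**gp / ((1 - a1)**gp + ((1 - a2)*r)**gp)

# Eq. (3): alpha_1 dp/dalpha_1 = -alpha_2 dp/dalpha_2 = gamma_r p (1 - p)
print(sp.simplify(a1*sp.diff(p, a1) - gr*p*(1 - p)))         # expected: 0
print(sp.simplify(-a2*sp.diff(p, a2) - gr*p*(1 - p)))        # expected: 0

# Eq. (4): (1-alpha_1) dq/dalpha_1 = -(1-alpha_2) dq/dalpha_2 = -gamma_p q (1 - q)
print(sp.simplify((1 - a1)*sp.diff(q, a1) + gp*q*(1 - q)))   # expected: 0
print(sp.simplify(-(1 - a2)*sp.diff(q, a2) + gp*q*(1 - q)))  # expected: 0
```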
We will consider the outcomes in both fronts as independent events, in the usual sense, so that e.g. the probability that contender \(\mathbf{1}\) reaches victory on both fronts is the product \(pq\). Also, whenever a contender wins on both fronts she obtains resources \(1+r\), and her opponent receives zero resources. In the event of a tie, in which each contender reaches victory in only one front and is defeated in the other, we will consider two different rules that define two different games:
**KR**: In the KR (keeping resources) game, if none of the contenders wins in both fronts, each one keeps their initial resources after the tie.
**RR**: In the RR (redistributing resources) game, each contender receives the sum of the resources invested in the front where she has reached victory.
We denote by \(u_{i}(\alpha_{1},\alpha_{2})\) (\(i=1,2\)), the expected gain of the contender \(i\).
We call \(\beta_{1}\) the best-response map of contender \(\mathbf{1}\), defined as follows:
\[u_{1}(\beta_{1}(s),s)=\max_{\alpha_{1}}u_{1}(\alpha_{1},s)\;, \tag{5}\]
i.e. \(\beta_{1}(s)\) is the value of \(\alpha_{1}\) that maximizes the expected gain of contender \(\mathbf{1}\) for the fraction of resources \(\alpha_{2}=s\) of contender \(\mathbf{2}\) in the rich-rewarding front. Correspondingly, we denote by \(\beta_{2}\) the best-response map of contender \(\mathbf{2}\):
\[u_{2}(t,\beta_{2}(t))=\max_{\alpha_{2}}u_{2}(t,\alpha_{2})\;. \tag{6}\]
The best-response maps \(\beta_{i}\) (\(i=1,2\)) are determined by the three parameters (\(r\), \(\gamma_{r}\), \(\gamma_{p}\)) that define each particular KR (or RR) game. One should not expect them to be smooth 1d functions of the unit interval, for the max operation might introduce, in general, non-analyticities (e.g., jump discontinuities).
An ordered pair \((\bar{\alpha}_{1},\bar{\alpha}_{2})\) is a Nash equilibrium if the following two conditions are satisfied:
\[\bar{\alpha}_{1}=\beta_{1}(\bar{\alpha}_{2})\quad\text{and}\quad\bar{\alpha}_ {2}=\beta_{2}(\bar{\alpha}_{1})\;, \tag{7}\]
or, equivalently,
\[\bar{\alpha}_{1}=\beta_{1}(\beta_{2}(\bar{\alpha}_{1}))\quad\text{and}\quad \bar{\alpha}_{2}=\beta_{2}(\beta_{1}(\bar{\alpha}_{2}))\;. \tag{8}\]
When the contenders' choices of resources' assignments are a Nash equilibrium, none of them has any incentive to deviate.
## III Keeping resources when trying
In this section, we study the KR game. In this game, i) if none of the contenders wins on both fronts (i.e., a tie), both keep their initial resources, while ii) if one of them wins on both fronts, the final resources are \(1+r\) for the winner and zero for the loser. Thus, the expected gain after the contest, \(u_{1}\), for the contender \(\mathbf{1}\) is:
\[u_{1}(\alpha_{1},\alpha_{2})=pq(1+r)+(p(1-q)+q(1-p))=pq(r-1)+p+q\;, \tag{9}\]
and the expected gain, \(u_{2}\), for the contender \(\mathbf{2}\) is, in turn:
\[u_{2}(\alpha_{1},\alpha_{2})=1+r-u_{1}(\alpha_{1},\alpha_{2})=1+r-pq(r-1)-p-q\;, \tag{10}\]
where we have omitted the dependence of \(p\) and \(q\) on \(\alpha_{1}\) and \(\alpha_{2}\), the fractions of resources invested in the rich-rewarding front.
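As a numerical companion to Eqs. (2) and (9)-(10), the following sketch evaluates the KR payoffs for interior allocations \(0<\alpha_{1},\alpha_{2}<1\) (the function names are ours; the example uses the parameter values of the figures below):

```python
def p_rich(a1, a2, r, g_r):
    """Eq. (2), rich-rewarding front: winning probability of contender 1."""
    return a1**g_r / (a1**g_r + (a2*r)**g_r)

def q_poor(a1, a2, r, g_p):
    """Eq. (2), poor-rewarding front: winning probability of contender 1."""
    return (1 - a1)**g_p / ((1 - a1)**g_p + ((1 - a2)*r)**g_p)

def u1_kr(a1, a2, r, g_r, g_p):
    """Expected gain of contender 1 in the KR game, Eq. (9)."""
    p, q = p_rich(a1, a2, r, g_r), q_poor(a1, a2, r, g_p)
    return p*q*(r - 1) + p + q

def u2_kr(a1, a2, r, g_r, g_p):
    """Expected gain of contender 2 in the KR game, Eq. (10); the total prize is 1 + r."""
    return 1 + r - u1_kr(a1, a2, r, g_r, g_p)

# Example with r = 0.5, gamma_r = 5, gamma_p = 0.5:
print(round(u1_kr(0.6, 0.4, 0.5, 5.0, 0.5), 3),
      round(u2_kr(0.6, 0.4, 0.5, 5.0, 0.5), 3))   # approximately 1.265 and 0.235
```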
### The best-response maps
First, we now obtain the main features of the best-response map \(\beta_{1}(s)\) of the contender \(\mathbf{1}\), for which we focus attention on its expected gain \(u_{1}\) (equation (9)) as a function of its first argument \(\alpha_{1}\), for fixed arbitrary values of its second argument \(\alpha_{2}=s\).
\[u_{1}(\alpha_{1},s)=(r-1)\frac{\alpha_{1}^{\gamma_{r}}}{(\alpha_{1}^{\gamma_{ r}}+(sr)^{\gamma_{r}})}\frac{(1-\alpha_{1})^{\gamma_{p}}}{((1-\alpha_{1})^{ \gamma_{p}}+((1-s)r)^{\gamma_{p}})}+\frac{\alpha_{1}^{\gamma_{r}}}{(\alpha_{1} ^{\gamma_{r}}+(sr)^{\gamma_{r}})}+\frac{(1-\alpha_{1})^{\gamma_{p}}}{((1- \alpha_{1})^{\gamma_{p}}+((1-s)r)^{\gamma_{p}})}\]
For \(s=0\), one has
\[u_{1}(\alpha_{1},0)=1+r\frac{(1-\alpha_{1})^{\gamma_{p}}}{(1-\alpha_{1})^{ \gamma_{p}}+r^{\gamma_{p}}}\;; \tag{11}\]
this is a monotone decreasing function, thus taking its maximum value at the origin. However \(\alpha_{1}=0\) and \(s=0\) corresponds to the situation in which none of the contenders invests in the rich-rewarding front, and then the expected gain for the contender \(\mathbf{1}\) is \(u_{1}(0,0)=(1+r)(1+r^{\gamma_{p}})^{-1}\), i.e. the product of the total resources and the probability of victory in the poor-rewarding front. This is lower than the limit of expression (11) when \(\alpha_{1}\to 0^{+}\):
\[u_{1}(0^{+},0)=1+r\frac{1}{1+r^{\gamma_{p}}}>\frac{1+r}{1+r^{\gamma_{p}}}=u_{1} (0,0)\;. \tag{12}\]
In other words, the best response of contender \(\mathbf{1}\) to \(s=0\) is to invest as small as possible a positive quantity, say \(\beta_{1}(0)=0^{+}\).
Next, let us consider small positive values of \(s\). For values of \(\alpha_{1}\) such that \(0<s\ll\alpha_{1}<1\), the expected gain \(u_{1}(\alpha_{1},s)\) is essentially given by \(u_{1}(\alpha_{1},0)\), equation (11):
\[u_{1}(\alpha_{1},s)\simeq 1+r\frac{(1-\alpha_{1})^{\gamma_{p}}}{(1-\alpha_{1} )^{\gamma_{p}}+r^{\gamma_{p}}}\quad\mbox{for}\;\;\alpha_{1}\gg s>0\;. \tag{13}\]
However, for lower values of \(\alpha_{1}\), \(u_{1}(\alpha_{1},s)\) differs significantly from (13). In particular, for \(\alpha_{1}=0\) the expected gain is
\[u_{1}(0,s)=\frac{1}{1+((1-s)r)^{\gamma_{p}}}\;, \tag{14}\]
and \(u_{1}(\alpha_{1},s)\) is a decreasing function at the origin:
\[\frac{\partial u_{1}}{\partial\alpha_{1}}\bigg{|}_{\alpha_{1}=0}\equiv u_{1} ^{\prime}(0,s)=-\frac{\gamma_{p}((1-s)r)^{\gamma_{p}}}{(1+((1-s)r)^{\gamma_{ p}})^{2}}\;. \tag{15}\]
It can be shown that when \(\alpha_{1}\) increases from zero, the function \(u_{1}(\alpha_{1},s)\) shows a local minimum, followed by a local maximum before it approaches (13). Both the locations of the minimum and the maximum tend to zero as \(s\to 0\); in this limit, the value of \(u_{1}\) at the maximum converges to \(u_{1}(0^{+},0)\), see equation (12), while its value at the minimum tends to \(u_{1}(0,0^{+})=\frac{1}{1+r^{\gamma_{p}}}<u_{1}(0^{+},0)\). Thus the location of the local maximum gives the value of the best-response map \(\beta_{1}(s)\), for (13) is monotone decreasing. To illustrate this, Panel \(\mathbf{a}\) of Figure 1 displays, for an exemplifying choice \(r=0.5\), \(\gamma_{r}=5\), \(\gamma_{p}=0.5\), the numerical results corresponding to the expected gain \(u_{1}(\alpha_{1},s)\) of Contender \(\mathbf{1}\), as a function of the fraction \(\alpha_{1}\) that she invested in the rich-rewarding front, for two fixed values \((0,\;0.035)\) of Contender \(\mathbf{2}\)'s invested fraction \(s\) in the rich-rewarding front. Note the non-monotonous behavior for \(s>0\) (purple line). The inset highlights the local minimum and maximum for \(s>0\).
The conclusion of the analysis for small values of \(s\ll 1\) is that the best-response map \(\beta_{1}(s)\) is a well-behaved monotone increasing function in this region of \(s\) values. For generic, not too small values of \(s\), the qualitative features of the function \(u_{1}(\alpha_{1},s)\) remain the same: it shows a negative slope at the origin, a local minimum followed by a local maximum, and a divergent \((-\infty)\) slope at \(\alpha_{1}=1\) (so that it is ensured that \(\beta_{1}(s)<1\) for all values of \(s\)). However, its maximum value is no longer guaranteed to occur at its local maximum, for it can perfectly occur at the origin, as the position of the local maximum increases with \(s\) (and then the value of \(u_{1}\) decreases there) while the value of \(u_{1}(0,s)\) increases, see equation (14). In other words, the continuity of \(\beta_{1}(s)\) is not guaranteed. To explore this shape, we have computed the best-response map \(\beta_{1}(s)\) for the specific set of values \(r=0.5\), \(\gamma_{r}=5\), and \(\gamma_{p}=0.5\). Panel \(\mathbf{b}\) of Figure 1 displays the numerical results for the best response of Contender \(\mathbf{1}\) to Contender \(\mathbf{2}\)'s rich-rewarding-front investment ratio \(s\). As shown, for these values of the contenders' resources ratio and Tullock function parameters, \(\beta_{1}(s)\) is a well-behaved monotone increasing function in the whole range \(0<s<1\).
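Curves such as the one in Panel \(\mathbf{b}\) can be approximated by a dense grid search over \(\alpha_{1}\); a sketch reusing the `u1_kr` helper defined above (the grid resolution is an arbitrary choice, and the endpoints are excluded to avoid the \(0/0\) indeterminacy at the origin):

```python
import numpy as np

ALPHA1 = np.linspace(1e-6, 1 - 1e-6, 20001)   # dense grid, excluding 0 and 1

def best_response_1(s, r=0.5, g_r=5.0, g_p=0.5):
    """Grid-search approximation of beta_1(s) for the KR game."""
    return ALPHA1[np.argmax(u1_kr(ALPHA1, s, r, g_r, g_p))]

for s in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"beta_1({s:.2f}) ~ {best_response_1(s):.3f}")
```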
To obtain the main features of the best-response map \(\beta_{2}(t)\) of the contender \(\mathbf{2}\), we analyze its expected gain \(u_{2}\) as a function of its second variable \(\alpha_{2}\) for fixed values of \(\alpha_{1}=t\).
\[u_{2}(t,\alpha_{2})=1+r-(r-1)\frac{t^{\gamma_{r}}}{(t^{\gamma_{r}}+(\alpha_{2 }r)^{\gamma_{r}})}\frac{(1-t)^{\gamma_{p}}}{((1-t)^{\gamma_{p}}+((1-\alpha_{2}) r)^{\gamma_{p}})}-\frac{t^{\gamma_{r}}}{(t^{\gamma_{r}}+(\alpha_{2}r)^{\gamma_{r}})}- \frac{(1-t)^{\gamma_{p}}}{((1-t)^{\gamma_{p}}+((1-\alpha_{2})r)^{\gamma_{p}})}\]
For \(t=0\), \(u_{2}(0,\alpha_{2})\) is a monotone decreasing function of \(\alpha_{2}\):
\[u_{2}(0,\alpha_{2})=1+r-\frac{1}{1+((1-\alpha_{2})r)^{\gamma_{p}}}\;. \tag{16}\]
However, in a similar way as we discussed above for the function \(u_{1}(\alpha_{1},0)\), due to the discontinuity of \(u_{2}(0,\alpha_{2})\) at the origin, i.e.
\[u_{2}(0,0^{+})\equiv\lim_{\alpha_{2}\to 0}u_{2}(0,\alpha_{2})=1+r-\frac{1}{1+r^{ \gamma_{p}}}>\frac{(1+r)r^{\gamma_{p}}}{1+r^{\gamma_{p}}}=u_{2}(0,0)\;, \tag{17}\]
the best response of contender \(\mathbf{2}\) to \(t=0\) is to invest as small as possible a positive quantity, say \(\beta_{2}(0)=0^{+}\).
The analysis of \(u_{2}(t,\alpha_{2})\) for small positive values of \(t\) is similar to that of \(u_{1}(\alpha_{1},s)\) for small positive values of \(s\), and leads to analogous conclusions, i.e. the function \(u_{2}(t,\alpha_{2})\) shows a local minimum followed by a local maximum before approaching expression (16). The location of this local maximum gives the best-response map \(\beta_{2}(t)\), and thus this map is a well-behaved monotone increasing function for small positive values of \(t\).
These qualitative features of \(u_{2}(t,\alpha_{2})\) remain unaltered for generic, not too small, values of \(t\). Also, its maximum cannot occur at \(\alpha_{2}=1\) because its slope there diverges to \(-\infty\). And again, there is no guarantee that the best-response map is given by the location of the local maximum of \(u_{2}(t,\alpha_{2})\), for \(u_{2}(t,0)\) keeps growing with increasing values of \(t\), so that an eventual jump discontinuity where \(\beta_{2}(t)\) drops to zero may occur. As for Contender \(\mathbf{1}\), we have numerically explored the expected gain and best response of Contender \(\mathbf{2}\). Panels \(\mathbf{a}\) (top) and \(\mathbf{b}\) (bottom) of Figure 2 display, for \(r=0.5\), \(\gamma_{r}=5\), \(\gamma_{p}=0.5\), the expected gain \(u_{2}(t,\alpha_{2})\) of Contender \(\mathbf{2}\) versus the fraction \(\alpha_{2}\) she invested in the rich-rewarding front, for three fixed values of the relative investment of Contender \(\mathbf{1}\) in the rich-rewarding front. As predicted, the numerical results confirm the non-monotonic behavior for \(t>0\). Panel \(\mathbf{c}\) (right) displays the best-response map \(\beta_{2}(t)\) for Contender \(\mathbf{2}\), showing the aforementioned discontinuity, \(\beta_{2}(t)\) dropping to zero at \(t\simeq 0.585\) for the chosen values (\(r=0.5\), \(\gamma_{r}=5\), \(\gamma_{p}=0.5\)).
### The Nash equilibrium
The previous characterization of the best-response maps, \(\beta_{1}(s)\) and \(\beta_{2}(t)\), leads to the conclusion that a Nash equilibrium \((\bar{\alpha}_{1},\bar{\alpha}_{2})\) of a KR game must be an interior point of the unit square, i.e. \(0<\bar{\alpha}_{1}\), \(\bar{\alpha}_{2}<1\). Indeed, on one hand, \(\beta_{1}(s)\neq 1\) for all \(s\), and \(\beta_{2}(t)\neq 1\), for all \(t\). On the other hand, \(\beta_{i}(0)\) (\(i=1,2\)) is a small positive quantity and then it is ensured that \(\beta_{j}(\beta_{i}(0))\) is a positive quantity. The important consequence is that any Nash equilibrium of a KR game must solve for the system of equations:
\[\frac{\partial u_{1}(\alpha_{1},\alpha_{2})}{\partial\alpha_{1}}=0\;,\quad \frac{\partial u_{2}(\alpha_{1},\alpha_{2})}{\partial\alpha_{2}}=0\;. \tag{18}\]
Figure 1: KR game with parameters \(r=0.5\), \(\gamma_{r}=5\), and \(\gamma_{p}=0.5\). Panel \(\mathbf{a}\) (left) shows the graphs of the contender \(\mathbf{1}\) expected gain, \(u_{1}\), as a function of its investment fraction \(\alpha_{1}\) in the rich-rewarding front (RRF), for \(s=0\) (red) and \(s=0.035\) (purple), where \(s\) is the contender \(\mathbf{2}\) invested fraction of resources in RRF. The local minimum of \(u_{1}(\alpha_{1},s=0.035)\) is shown in the inset. Panel \(\mathbf{b}\) (right) shows the best-response map \(\beta_{1}(s)\). See text for details.
Using the equalities (3) and (4), the system (18) is written as:
\[\frac{\alpha_{1}}{1-\alpha_{1}}=f(\alpha_{1},\alpha_{2})\;,\quad\frac{\alpha_{2}} {1-\alpha_{2}}=f(\alpha_{1},\alpha_{2})\;, \tag{19}\]
where \(f(\alpha_{1},\alpha_{2})\) is the following function:
\[f(\alpha_{1},\alpha_{2})=\frac{\gamma_{r}p(1-p)}{\gamma_{p}q(1-q)}\;\frac{1+(r -1)q}{1+(r-1)p}\;. \tag{20}\]
First, one sees that \(\bar{\alpha_{1}}=\bar{\alpha_{2}}\equiv\bar{\alpha}\). Then, due to the scaling property of the Tullock functions, (19) becomes a simple linear equation
\[\frac{\bar{\alpha}}{1-\bar{\alpha}}=\bar{f}\equiv\frac{\gamma_{r}(1+r^{\gamma _{p}})(1+r^{1-\gamma_{p}})}{\gamma_{p}(1+r^{\gamma_{r}})(1+r^{1-\gamma_{r}})}\;, \tag{21}\]
with a unique solution (for fixed \(r\), \(\gamma_{r}\) and \(\gamma_{p}\) values) given by
\[\bar{\alpha}=\frac{\bar{f}}{1+\bar{f}}\;. \tag{22}\]
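Equation (22) is straightforward to evaluate numerically; the short snippet below is a direct transcription, where the arguments gr and gp stand for \(\gamma_{r}\) and \(\gamma_{p}\) (the function name is ours).

```python
def alpha_bar(r, gr, gp):
    # Interior stationary point of the KR game, equations (21)-(22).
    f_bar = (gr * (1 + r**gp) * (1 + r**(1 - gp))) / (gp * (1 + r**gr) * (1 + r**(1 - gr)))
    return f_bar / (1 + f_bar)

if __name__ == "__main__":
    for r in (0.1, 0.5, 0.85):
        print(f"r = {r:.2f}  ->  alpha_bar = {alpha_bar(r, 5.0, 0.5):.4f}")
```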
In Figure 3 we show the graph of the function \(\bar{\alpha}(r)\) for three different pairs of values of the technology parameters \((\gamma_{r},\gamma_{p})\). Still, we should be aware that it is not guaranteed that for fixed values of \(r\), \(\gamma_{r}\), and \(\gamma_{p}\), the pair \((\bar{\alpha},\bar{\alpha})\) is a Nash equilibrium. So far, we have only shown that \(\bar{\alpha}\) is a local maximum of \(u_{1}(\alpha_{1},\bar{\alpha})\) and a local maximum of \(u_{2}(\bar{\alpha},\alpha_{2})\); because any Nash equilibrium of a KR game must be an interior point, this is a necessary condition, but not a sufficient one.
The solution \((\bar{\alpha},\bar{\alpha})\) of the system of equations (18) is a Nash equilibrium of the KR game if the following conditions are satisfied:
C1.- \(\bar{\alpha}\) is a global maximum of \(u_{1}(\alpha_{1},\bar{\alpha})\), i.e.:
\[u_{1}(\bar{\alpha},\bar{\alpha})>u_{1}(0,\bar{\alpha})\;,\;\;\mbox{and}\;\; \;u_{1}(\bar{\alpha},\bar{\alpha})>u_{1}(1,\bar{\alpha})\;. \tag{23}\]
C2.- \(\bar{\alpha}\) is a global maximum of \(u_{2}(\bar{\alpha},\alpha_{2})\), i.e.:
\[u_{2}(\bar{\alpha},\bar{\alpha})>u_{2}(\bar{\alpha},0)\;,\;\;\mbox{and}\;\; \;u_{2}(\bar{\alpha},\bar{\alpha})>u_{2}(\bar{\alpha},1)\;. \tag{24}\]
Figure 2: KR game with parameters \(r=0.5\), \(\gamma_{r}=5\), and \(\gamma_{p}=0.5\). Left panels (**a** and **b**) show the graphs of the contender **2** expected gain, \(u_{2}\), as a function of its investment fraction \(\alpha_{2}\) in the rich-rewarding front (RRF), for \(t=0\) (Panel **a**, blue line), \(t=0.035\) (Panel **a**, purple), and \(t=0.6\) (Panel **b**), where \(t\) is the contender **1** invested fraction of resources in RRF. Panel **c** shows the best-response map \(\beta_{2}(t)\). See text for details.
It is straightforward to check that
\[u_{1}(\bar{\alpha},\bar{\alpha})=\frac{1}{1+r^{\gamma_{r}}}+\frac{1}{1+r^{\gamma_ {p}}}+(r-1)\frac{1}{1+r^{\gamma_{r}}}\ \frac{1}{1+r^{\gamma_{p}}}>1\;,\]
while
\[u_{1}(0,\bar{\alpha})=q(0,\bar{\alpha})<1\;,\;\;\mbox{and}\;\;\;u_{1}(1,\bar{ \alpha})=p(1,\bar{\alpha})<1\;,\]
and one concludes that the conditions C1 are satisfied for all values of the game parameters \(r\), \(\gamma_{r}\) and \(\gamma_{p}\).
On the contrary, one can easily find values of the game parameters where the conditions C2 do not hold, as well as other values for which they do. As an illustrative example, we show in Figure 4 the graphs of the best-response maps for the Tullock parameters \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\), relative to \(r=0.5\) (top panels, **a** and **b**) and \(r=0.85\) (bottom, **c** and **d**), left panels (**a** and **c**) corresponding to \(\beta_{1}(\beta_{2}(\alpha_{1}))\) and right ones (**b** and **d**) to \(\beta_{2}(\beta_{1}(\alpha_{2}))\). An inner intersection of the curve with the black main diagonal indicates the existence of a Nash equilibrium. As shown in this example (\(\gamma_{r}=5,\gamma_{p}=0.5\)), for \(r=0.85\), there is a Nash equilibrium, while for \(r=0.5\), there is not. Our extensive exploration of the \((\gamma_{r},\gamma_{p})\) plane strongly suggests that there is a threshold value \(r_{th}(\gamma_{r},\gamma_{p})\), which depends on the Tullock parameters, such that for \(r>r_{th}\) both conditions C2 are satisfied. In this case, the corresponding KR game has a Nash equilibrium, where both contenders invest a fraction \(\bar{\alpha}(r,\gamma_{r},\gamma_{p})\) of their resources in the rich-rewarding front.
The existence of a Nash equilibrium given by the pair \((\bar{\alpha},\bar{\alpha})\) for large enough values of the parameter \(r\) can be proved by a continuation argument from the "equal resources" limit \(r=1\), where one can directly check that the conditions C2 hold. Indeed, in this limit \(\bar{f}=\gamma_{r}/\gamma_{p}\), and then
\[\bar{\alpha}(r=1)=\frac{\gamma_{r}}{\gamma_{r}+\gamma_{p}}<1\;,\;\;\mbox{and }\;\;\;u_{2}(\bar{\alpha},\bar{\alpha})=1\;,\]
Figure 3: KR game. Graph of the function \(\bar{\alpha}(r)\) for three different pairs of values (shown in legend) of the technology parameters \((\gamma_{r},\gamma_{p})\). The point \((\alpha_{1},\alpha_{2})=(\bar{\alpha},\bar{\alpha})\) corresponds to the local maxima of the expected gain \(u_{1}(\alpha_{1},\bar{\alpha})\), \(u_{2}(\bar{\alpha},\alpha_{2})\), for both contenders, where \(\alpha_{1}\) (resp., \(\alpha_{2}\)) is the fraction invested by Contender **1** (resp., **2**) in the rich-rewarding front. See the text for further details.
while
\[u_{2}(\bar{\alpha},0)=\left(1+\left(\frac{\gamma_{p}}{\gamma_{r}+\gamma_{p}} \right)^{\gamma_{p}}\right)^{-1}<1\;,\;\;\text{and}\;\;\;u_{2}(\bar{\alpha},1)= \left(1+\left(\frac{\gamma_{r}}{\gamma_{r}+\gamma_{p}}\right)^{\gamma_{r}} \right)^{-1}<1\;,\]
and thus conditions C2 are satisfied in the equal resources limit.
In Figure 5, Panel **a** shows, for \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\), the graph of \(u_{1}(\bar{\alpha},\bar{\alpha})\) as a function of \(r\), along with \(u_{1}(0,\bar{\alpha})\) and \(u_{1}(1,\bar{\alpha})\), to illustrate the conditions C1. Panel **b** displays \(u_{2}(\bar{\alpha},\bar{\alpha})\), \(u_{2}(\bar{\alpha},0)\), and \(u_{2}(\bar{\alpha},1)\), showing that conditions C2 are only satisfied simultaneously for \(r>0.77635\) (dashed vertical line).
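The threshold can also be located numerically by scanning \(r\) and testing conditions C2 directly; a sketch is given below, using the expression for \(u_{2}\) written after equation (15) and \(\bar{\alpha}\) from (22). The scan granularity and helper names are ours; for \(\gamma_{r}=5\), \(\gamma_{p}=0.5\) it should place the onset of C2 near \(r\approx 0.776\), as in Panel \(\mathbf{b}\) of Figure 5.

```python
import numpy as np

def tullock(x, y, g):
    if x == 0 and y == 0:
        return 0.5
    return x**g / (x**g + y**g)

def u2_KR(t, a2, r, gr, gp):
    # Expected gain of Contender 2 in the KR game: u2 = 1 + r - (r - 1) p q - p - q.
    p = tullock(t, a2 * r, gr)
    q = tullock(1 - t, (1 - a2) * r, gp)
    return 1 + r - (r - 1) * p * q - p - q

def alpha_bar(r, gr, gp):
    f_bar = (gr * (1 + r**gp) * (1 + r**(1 - gp))) / (gp * (1 + r**gr) * (1 + r**(1 - gr)))
    return f_bar / (1 + f_bar)

def c2_holds(r, gr=5.0, gp=0.5):
    ab = alpha_bar(r, gr, gp)
    u_eq = u2_KR(ab, ab, r, gr, gp)
    return u_eq > u2_KR(ab, 0.0, r, gr, gp) and u_eq > u2_KR(ab, 1.0, r, gr, gp)

if __name__ == "__main__":
    rs = np.linspace(0.01, 0.999, 2000)
    satisfied = [r for r in rs if c2_holds(r)]
    print("conditions C2 first hold near r =", round(min(satisfied), 4))
```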
## IV Redistributing resources when tying
In this section, we analyze the RR game, in which ties are followed by a redistribution of resources among the contenders that depends on their investments on each front. Specifically, each contender collects the sum of the investments employed in the front where she reached victory. Thus the expected gain is:
\[u_{1}=(\alpha_{1}+\alpha_{2}r)p+\left((1-\alpha_{1})+(1-\alpha_{2})r\right)q\;, \tag{25}\]
Figure 4: KR game with \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\). Plots of the composition of players’ best-response maps for \(r=0.5\) (top panels, **a** and **b**) and \(r=0.85\) (bottom panels, **c** and **d**). Left panels (**a** and **c**) show \(\beta_{1}(\beta_{2}(\alpha_{1}))\), while \(\beta_{2}(\beta_{1}(\alpha_{2}))\) is shown in right panels (**b** and **d**). The main diagonal (in dashed black) is plotted to visualize the existence for \(r=0.85\) of a Nash equilibrium, and its absence for \(r=0.5\).
\[u_{2}=(\alpha_{1}+\alpha_{2}r)(1-p)+\left((1-\alpha_{1})+(1-\alpha_{2})r\right)(1- q)\;, \tag{26}\]
where the winning probabilities, \(p\) and \(q\), of contender \(\mathbf{1}\) in each front are given by equation (2).
### The best-response maps
First, let us consider the expected gain \(u_{1}\) of contender \(\mathbf{1}\) as a function of \(\alpha_{1}\), for a fixed value of \(\alpha_{2}=s\),
\[u_{1}(\alpha_{1},s)=(\alpha_{1}+sr)\frac{\alpha_{1}^{\gamma_{r}}}{\alpha_{1}^ {\gamma_{r}}+(sr)^{\gamma_{r}}}+\left((1-\alpha_{1})+(1-s)r\right)\frac{(1- \alpha_{1})^{\gamma_{p}}}{(1-\alpha_{1})^{\gamma_{p}}+((1-s)r)^{\gamma_{p}}}\;. \tag{27}\]
For \(s=0\), we have
\[u_{1}(\alpha_{1},0)=\alpha_{1}+(1-\alpha_{1}+r)\frac{(1-\alpha_{1})^{\gamma_{ p}}}{(1-\alpha_{1})^{\gamma_{p}}+r^{\gamma_{p}}}\;. \tag{28}\]
Note that, contrary to the situation in the KR game, analyzed in the previous section III.1, this is a continuous function at the origin:
\[u_{1}(0^{+},0)=u_{1}(0,0)=\frac{1+r}{1+r^{\gamma_{p}}}<1\;. \tag{29}\]
As \(u_{1}(1,0)=1\), it is plain that \(\beta_{1}(0)\neq 0\). Furthermore, the first derivative of \(u_{1}(\alpha_{1},0)\), given by
\[u_{1}^{\prime}(\alpha_{1},0)=\frac{r^{\gamma_{p}}}{(1-\alpha_{1})^{\gamma_{p} }+r^{\gamma_{p}}}\left(1-\left(1+\frac{r}{1-\alpha_{1}}\right)\frac{\gamma_{p }(1-\alpha_{1})^{\gamma_{p}}}{(1-\alpha_{1})^{\gamma_{p}}+r^{\gamma_{p}}} \right)\;, \tag{30}\]
is positive at the origin,
\[u_{1}^{\prime}(0,0)=\frac{r^{\gamma_{p}}}{1+r^{\gamma_{p}}}\left(1-\frac{ \gamma_{p}(1+r)}{1+r^{\gamma_{p}}}\right)>0\;, \tag{31}\]
and diverges to \(-\infty\) as \(\alpha_{1}\to 1\), as
\[u_{1}^{\prime}(1^{-},0)\sim-(1-\alpha_{1})^{\gamma_{p}-1}\;; \tag{32}\]
Figure 5: Illustration of conditions C1 (Panel **a**) and C2 (Panel **b**) for the KR game. Panel **a** depicts the expected gain \(u_{1}(r)\) evaluated at \((\alpha_{1},\alpha_{2})=(\bar{\alpha},\bar{\alpha})\) (blue curve), \((0,\bar{\alpha})\) (purple) and \((1,\bar{\alpha})\) (red). Similarly, Panel **b** depicts the expected gain \(u_{2}(r)\) evaluated at \((\alpha_{1},\alpha_{2})=(\bar{\alpha},\bar{\alpha})\) (blue curve), \((\bar{\alpha},0)\) (purple) and \((\bar{\alpha},1)\) (red). The vertical dashed line at \(r=0.77635\) marks the point where conditions C1 and C2 start to be simultaneously satisfied.
thus \(\beta_{1}(0)\neq 1\), and \(\beta_{1}(0)\) must be an interior point \(0<\alpha_{1}^{*}<1\). Then one concludes that \(\alpha_{1}^{*}\) must solve for the equation
\[u_{1}^{\prime}(\alpha_{1},0)=0\;. \tag{33}\]
From (30), with the change of variable \(z\equiv r/(1-\alpha_{1})\), we can simply write (33) in terms of \(z\) as
\[\gamma_{p}(1+z)=1+z^{\gamma_{p}}\;. \tag{34}\]
Note that, as \(s=0\), the variable \(z\) is nothing but the ratio of the resources invested by the contenders in the poor-rewarding front: this ratio is \((1-\alpha_{2})y/[(1-\alpha_{1})x]=(1-\alpha_{2})r/(1-\alpha_{1})\), which reduces to \(z=r/(1-\alpha_{1})\) for \(\alpha_{2}=s=0\). Moreover, it is not difficult to realize that the equation (34) has a unique positive solution, say \(z^{*}\). Indeed, let us call \(f(z)\) its LHS and \(g(z)\) its RHS; clearly \(f(0)<g(0)\), while at very large values of \(z\gg 1\), \(z\gg z^{\gamma_{p}}\), so that \(f(z)>g(z)\). Then, there exists at least one solution of (34), and because \(f(z)\) is linear and \(g(z)\) is a concave function (recall \(\gamma_{p}<1\)), the solution is unique.
It is worth remarking that \(z^{*}\) is solely determined by the value of the Tullock parameter, \(\gamma_{p}\), of the poor-rewarding front, and that \(z^{*}(\gamma_{p})\) is a monotone decreasing function of its argument. Thus, as \(\gamma_{p}<1\), the value of \(z^{*}\) is bounded below by \(z^{*}(1^{-})\simeq 3.590175>1\), after carefully noticing that the correct limit when \(\gamma_{p}\to 1\) of the equation (34) is \(1+z=z\ln z\).
The unique solution of the equation (33), \(\alpha_{1}^{*}=1-r/z^{*}\) is clearly, due to (31) and (32), the maximum of \(u_{1}(\alpha_{1},0)\), and then,
\[\beta_{1}(0)=1-\frac{r}{z^{*}}\;. \tag{35}\]
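Since equation (34) has no closed form for general \(\gamma_{p}\), \(z^{*}\) has to be obtained numerically; a minimal root-finding sketch (with our own bracket choice and function names) is:

```python
from scipy.optimize import brentq

def z_star_eq34(gp):
    # Unique positive root of gamma_p * (1 + z) = 1 + z**gamma_p, equation (34).
    h = lambda z: gp * (1 + z) - (1 + z**gp)
    return brentq(h, 1e-9, 1e6)  # h < 0 near the origin and h -> +inf, so this bracket suffices

if __name__ == "__main__":
    gp, r = 0.5, 0.5
    zs = z_star_eq34(gp)
    print(f"z*({gp}) = {zs:.6f}   beta_1(0) = 1 - r/z* = {1 - r / zs:.6f}")  # eq. (35)
```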
For very small values of \(s\ll 1\), if \(\alpha_{1}\) is also small, \(u_{1}(\alpha_{1},s)\) differs qualitatively from \(u_{1}(\alpha_{1},0)\). Though for the expected gain \(u_{1}\) we have that \(u_{1}(0^{+},0)=u_{1}(0,0)=u_{1}(0,0^{+})\), the first derivative of \(u_{1}(\alpha_{1},s)\) at \(\alpha_{1}=0\),
\[u_{1}^{\prime}(0,s)=-\frac{1}{1+((1-s)r)^{\gamma_{p}}}\left(1+\gamma_{p}(1+( 1-s)r)\frac{((1-s)r)^{\gamma_{p}}}{1+((1-s)r)^{\gamma_{p}}}\right)<0\;, \tag{36}\]
converges, as \(s\to 0\) to the limit
\[u_{1}^{\prime}(0,0^{+})=-\frac{1}{1+r^{\gamma_{p}}}\left(1+\gamma_{p}(1+r) \frac{r^{\gamma_{p}}}{1+r^{\gamma_{p}}}\right)<0<u_{1}^{\prime}(0,0)\;, \tag{37}\]
where the last inequality comes from (31). Then, \(u_{1}(\alpha_{1},s)\) is a decreasing function at the origin as soon as \(s\neq 0\), showing a local minimum that detaches from \(0\) with increasing values of \(s\). Also, it has a local maximum whose location \(\alpha_{1}^{*}(s)\) is a smooth continuation of \(\alpha_{1}^{*}=1-r/z^{*}\), the maximum of \(u_{1}(\alpha_{1},0)\), because for \(\alpha_{1}\gg s\) both functions are uniformly very close to each other. That local maximum is the value of the best-response map \(\beta_{1}(s)\), for small values of \(s\).
For larger values of \(s\) the qualitative features of \(u_{1}(\alpha_{1},s)\) remain the same. The location of its local maximum \(\alpha_{1}^{*}(s)\) increases with \(s\), and, as \(u_{1}(0,s)\) is a decreasing function of \(s\), its value remains lower than \(u_{1}(\alpha_{1}^{*}(s),s)\). Then \(\beta_{1}(s)=\alpha_{1}^{*}(s)\) increases smoothly, and approaches the value \(1\), as \(s\to 1\), with no jump discontinuities.
Now we turn our attention to the expected gain \(u_{2}\) of the contender \({\bf 2}\) as a function of \(\alpha_{2}\) for fixed values of its first argument \(\alpha_{1}=t\).
\[u_{2}(t,\alpha_{2})=(t+\alpha_{2}r)\frac{(\alpha_{2}r)^{\gamma_{r}}}{(\alpha _{2}r)^{\gamma_{r}}+t^{\gamma_{r}}}+(1-t+(1-\alpha_{2})r)\,\frac{((1-\alpha_ {2})r)^{\gamma_{p}}}{((1-\alpha_{2})r)^{\gamma_{p}}+(1-t)^{\gamma_{p}}}\;. \tag{38}\]
For \(t=0\) we have
\[u_{2}(0,\alpha_{2})=\alpha_{2}r+(1+(1-\alpha_{2})r)\frac{((1-\alpha_{2})r)^{ \gamma_{p}}}{((1-\alpha_{2})r)^{\gamma_{p}}+1}\;. \tag{39}\]
This is a continuous function at the origin,
\[u_{2}(0,0^{+})=u_{2}(0,0)=\frac{(1+r)r^{\gamma_{p}}}{1+r^{\gamma_{p}}}>r\;, \tag{40}\]
while, at the other endpoint,
\[u_{2}(0,1)=r\;, \tag{41}\]
so that it is assured that \(\beta_{2}(0)<1\). The derivative of \(u_{2}(0,\alpha_{2})\) is easily calculated as
\[u_{2}^{\prime}(0,\alpha_{2})=\frac{r}{1+((1-\alpha_{2})r)^{\gamma_{p}}}\left(1- (1+(1-\alpha_{2})r)\frac{\gamma_{p}((1-\alpha_{2})r)^{\gamma_{p}-1}}{1+((1- \alpha_{2})r)^{\gamma_{p}}}\right)\;; \tag{42}\]
it diverges to \(-\infty\) at \(\alpha_{2}=1\), and takes the value, at the origin,
\[u_{2}^{\prime}(0,0^{+})=\frac{r}{1+r^{\gamma_{p}}}\left(1-\frac{(1+r)\gamma_{p }r^{\gamma_{p}-1}}{1+r^{\gamma_{p}}}\right)\;. \tag{43}\]
After the change of variable \(z=(1-\alpha_{2})r\), the equation \(u_{2}^{\prime}(0,\alpha_{2})=0\) can be re-written as
\[\gamma_{p}(1+z)=z+z^{1-\gamma_{p}}\;, \tag{44}\]
and note that, for \(\alpha_{1}=0\), the variable \(z\) is just the ratio of resources invested by the contenders in the poor-rewarding front: \(z=[(1-\alpha_{2})/(1-\alpha_{1})]r=(1-\alpha_{2})y/(1-\alpha_{1})x\).
An argument similar to the one used above with the equation (34) convinces oneself that the equation (44) has a unique positive solution \(z^{*}(\gamma_{p})\), that depends solely on the Tullock parameter \(\gamma_{p}\), and it is a monotone increasing function of this parameter. As a consequence, the value of \(z^{*}\) is bounded above by \(z^{*}(1^{-})\simeq 0.278465\), after noticing that the correct limit when \(\gamma_{p}\to 1\) of the equation (44) is \(1+z=-\ln z\).
Thus, provided the condition \(r>z^{*}(\gamma_{p})\) holds, the solution of equation \(u_{2}^{\prime}(0,\alpha_{2})=0\) is
\[\alpha_{2}^{*}(r,\gamma_{p})=1-\frac{z^{*}(\gamma_{p})}{r}\;, \tag{45}\]
We are led to the conclusion that for values of \(r<z^{*}(\gamma_{p})\) the function \(u_{2}(0,\alpha_{2})\) is a monotone decreasing function of \(\alpha_{2}\) and then \(\beta_{2}(0)=0\), while for \(r>z^{*}(\gamma_{p})\) the best-response to \(t=0\) is \(\beta_{2}(0)=\alpha_{2}^{*}(r,\gamma_{p})\), the location of the local maximum of \(u_{2}(0,\alpha_{2})\), given by the equation (45), which increases continuously from zero at \(r=z^{*}(\gamma_{p})\) up to the value \(1-z^{*}\) at \(r=1\).
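A numerical counterpart of equations (44)-(45), with our own bracket choice for the root finder, is sketched below; for \(\gamma_{p}=0.5\) it should give \(z^{*}\simeq 0.1716\) and a strictly positive \(\beta_{2}(0)\) only for \(r>z^{*}\), consistently with the regimes discussed below.

```python
from scipy.optimize import brentq

def z_star_eq44(gp):
    # Unique positive root of gamma_p * (1 + z) = z + z**(1 - gamma_p), equation (44).
    h = lambda z: gp * (1 + z) - z - z**(1 - gp)
    return brentq(h, 1e-12, 1.0)  # h(0+) = gamma_p > 0 and h(1) = 2*(gamma_p - 1) < 0

def beta2_at_zero(r, gp):
    # Best response of Contender 2 to t = 0 in the RR game, equation (45).
    zs = z_star_eq44(gp)
    return 0.0 if r <= zs else 1.0 - zs / r

if __name__ == "__main__":
    gp = 0.5
    print("z* =", round(z_star_eq44(gp), 4))       # ~0.1716 for gamma_p = 0.5
    for r in (0.05, 0.1, 0.85):
        print(f"r = {r}: beta_2(0) = {beta2_at_zero(r, gp):.4f}")
```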
For very small values of \(t\) and \(\alpha_{2}\gg t\), \(u_{2}(t,\alpha_{2})\) is essentially given by \(u_{2}(0,\alpha_{2})\). However, for small values of \(\alpha_{2}\) both functions are quite different. To see this, consider the partial derivative of \(u_{2}(t,\alpha_{2})\) with respect to \(\alpha_{2}\):
\[u_{2}^{\prime}(t,\alpha_{2}) = \frac{r(\alpha_{2}r)^{\gamma_{r}}}{(\alpha_{2}r)^{\gamma_{r}}+t^{ \gamma_{r}}}+(t+\alpha_{2}r)\frac{r\gamma_{r}(\alpha_{2}r)^{\gamma_{r}-1}t^{ \gamma_{r}}}{((\alpha_{2}r)^{\gamma_{r}}+t^{\gamma_{r}})^{2}} \tag{46}\] \[{}-\frac{r((1-\alpha_{2})r)^{\gamma_{p}}}{((1-\alpha_{2})r)^{ \gamma_{p}}+(1-t)^{\gamma_{p}}}-(1-t+(1-\alpha_{2})r)\frac{r\gamma_{p}((1- \alpha_{2})r)^{\gamma_{p}-1}(1-t)^{\gamma_{p}}}{(((1-\alpha_{2})r)^{\gamma_{p }}+(1-t)^{\gamma_{p}})^{2}}\;.\]
which takes the value, at \(\alpha_{2}=0\),
\[u_{2}^{\prime}(t,0)=-\frac{r^{\gamma_{p}}}{r^{\gamma_{p}}+(1-t)^{\gamma_{p}}} \left(r+(1-t+r)\frac{\gamma_{p}(1-t)^{\gamma_{p}}}{r^{\gamma_{p}}+(1-t)^{\gamma _{p}}}\right)\;. \tag{47}\]
We then see that, unlike \(u_{2}^{\prime}(0,0^{+})\) whose sign depends on the \(r\) value, its limit when \(t\to 0\) is negative, for all values of \(r\):
\[u_{2}^{\prime}(0^{+},0)=-\frac{r^{\gamma_{p}}}{1+r^{\gamma_{p}}}\left(r+\frac{ \gamma_{p}(1+r)}{1+r^{\gamma_{p}}}\right)<0\;, \tag{48}\]
and, furthermore, from (43) we have \(u_{2}^{\prime}(0^{+},0)<u_{2}^{\prime}(0,0^{+})\) for all values of \(r\). Thus the function \(u_{2}(t,\alpha_{2})\) decreases initially faster than \(u_{2}(0,\alpha_{2})\). Also, it is initially convex, before changing to concave in an interval of \(\alpha_{2}\) values of the scale of \(t/r\). Depending on the values of \(r\), one observes three different behaviors for the location of its maximum, \(\beta_{2}(t)\), for very small values of \(t\): In the regime of very small values of \(r\), \(\beta_{2}(t)=0\). For values of \(r>z^{*}(\gamma_{p})\), \(\beta_{2}(t)\)
increases slowly from its value \(\alpha_{2}^{*}(r,\gamma_{p})\). In an intermediate regime of not too small values of \(r<z^{*}(\gamma_{p})\), the map \(\beta_{2}(t)\) increases from zero with a relatively large slope.
For larger values of \(t\), the best-response map \(\beta_{2}(t)\) drops to a zero value in the last two regimes, while remaining at zero in the first regime. In other words, in the RR game, the best response of the contender \(\mathbf{2}\) to any not-too-small (compared to \(r\)) investment of its (richer) opponent in the rich-rewarding front is to invest all of its resources in the poor-rewarding front.
To illustrate these findings, in Figure 6, we show the graphs corresponding to the best-response maps for both contenders. Red dots display the maps for Contender \(\mathbf{1}\) and blue ones for Contender \(\mathbf{2}\). Panels \(\mathbf{a}\) (top), \(\mathbf{b}\) (center), and \(\mathbf{c}\) (bottom) correspond to \(r=0.05\), \(r=0.1\), and \(r=0.85\), respectively. On the one hand, the red dots show the predicted smooth increase of \(\beta_{1}\) with \(\alpha_{2}\). On the other hand, the blue dots display the three regimes predicted for \(\beta_{2}(\alpha_{1})\): i) for very small values of \(r\) (here, \(r=0.05\)), \(\beta_{2}(\alpha_{1})=0\) for any \(\alpha_{1}\); ii) for intermediate values of \(r\) (\(r<z^{*}(\gamma_{p})\simeq 0.1715\), here \(r=0.1\)), \(\beta_{2}(\alpha_{1})\) shows an increase from zero, through a steep slope, and then, through a discontinuity, goes to zero; iii) finally, for large values of \(r\) (\(r>z^{*}(\gamma_{p})\), here \(r=0.85\)), \(\beta_{2}(\alpha_{1})\) increases from a strictly positive value \(\alpha_{2}^{*}(r,\gamma_{p})\) for \(\alpha_{1}=0\), according to equation (45), and finally, through a discontinuity, goes to zero.
### The Nash equilibrium
The analysis of the best-response map \(\beta_{2}(t)\) of the contender \(\mathbf{2}\) indicates its marked overall preference for investing all its resources in the poor-rewarding front. On the other hand, we have also shown that the best-response of the contender \(\mathbf{1}\) to that eventuality is \(\beta_{1}(0)=1-\frac{r}{z^{*}}\), equation (35). Consequently, if it is the case that
\[\beta_{2}\left(1-\frac{r}{z^{*}}\right)=0\;, \tag{49}\]
we are led to the conclusion that the pair \((1-\frac{r}{z^{*}},0)\) is a Nash equilibrium of the RR game. Let us remark here that \(z^{*}(\gamma_{p})\) is bounded below by \(z^{*}(1^{-})\simeq 3.590175>1\), so that \(1-\frac{r}{z^{*}}\) is bounded below by \(0.721462\), not a small quantity.
The expected gain \(u_{2}(t,\alpha_{2})\), at \(t=1-\frac{r}{z^{*}}\), is given by
\[u_{2}\left(1-\frac{r}{z^{*}}\,,\alpha_{2}\right)=\left(1-\frac{r}{z^{*}}+ \alpha_{2}r\right)\,\frac{(\alpha_{2}r)^{\gamma_{r}}}{(1-r/z^{*})^{\gamma_{r} }+(\alpha_{2}r)^{\gamma_{r}}}+\frac{r}{z^{*}}(1+z^{*}(1-\alpha_{2}))\frac{(z^ {*}(1-\alpha_{2}))^{\gamma_{p}}}{1+(z^{*}(1-\alpha_{2}))^{\gamma_{p}}}\;. \tag{50}\]
For small values of \(r\), the dominant term in (50) is the second term (linear in \(r\)) in the RHS, because \(\gamma_{r}>1\). This term is maximum at \(\alpha_{2}=0\), and this proves that at least for small values of \(r\), one has \(\beta_{2}(1-\frac{r}{z^{*}})=0\).
To exemplify these findings, Figure 7 displays the best-response maps corresponding to the Tullock parameters \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\) for \(r=0.5\) (panels \(\mathbf{a}\) and \(\mathbf{b}\)) and \(r=0.85\) (\(\mathbf{c}\) and \(\mathbf{d}\)). Left panels (\(\mathbf{a}\) and \(\mathbf{c}\)) show the \(\beta_{1}(\beta_{2}(\alpha_{1}))\)
Figure 7: RR game with \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\). The top panels (\(\mathbf{a}\) and \(\mathbf{b}\)) display the composition of players’ best-response maps for \(r=0.5\), and the bottom panels (\(\mathbf{c}\) and \(\mathbf{d}\)) for \(r=0.85\). Correspondingly, left panels (\(\mathbf{a}\) and \(\mathbf{c}\)) show \(\beta_{1}(\beta_{2}(\alpha_{1}))\), while \(\beta_{2}(\beta_{1}(\alpha_{2}))\) is shown in right panels (\(\mathbf{b}\) and \(\mathbf{d}\)). The main diagonal (in dashed black) is plotted to visualize the existence of Nash equilibrium for \(r=0.5\), and its absence for \(r=0.85\).
maps and right ones (**b** and **d**) the \(\beta_{2}(\beta_{1}(\alpha_{2}))\) ones. Nash equilibria would be denoted by an inner intersection of the curve with the black main diagonal. Our extensive numerical exploration in the parameter plane \((\gamma_{r},\gamma_{p})\) indicates the existence of an upper bound \(r_{th}(\gamma_{r},\gamma_{p})\) such that if \(r<r_{th}^{RR}\), the equation (49) holds, and then the pair \((1-\frac{r}{z^{*}},0)\) is a Nash equilibrium of the RR game. As a numerical example, for \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\), we find the value \(r_{th}^{RR}=0.790541\). Figure 8 depicts the landscape of threshold values \(r_{th}\) for both games, KR (Panel **a**) and RR (Panel **b**).
## V Repeated combat
Let us note that, as we have shown above in section III, the expected gain of contender \(\mathbf{1}\) in the KR game, \(u_{1}(\bar{\alpha},\bar{\alpha})>1\), is greater than its initial resources; therefore, whenever the Nash equilibrium exists for the KR game, there is an incentive for her to fight. In a similar way, in section IV we have seen that, provided a Nash equilibrium exists for an RR game, contender \(\mathbf{1}\) earns \(1-\frac{r}{z^{*}}\) with certainty in the rich-rewarding front; moreover, since its investment in the poor-rewarding front, \(\frac{r}{z^{*}}<r\), is lower than its opponent's investment, its expected gain in that front is larger than its investment. Hence there is an incentive for contender \(\mathbf{1}\) to fight in an RR game.
As a consequence, for both KR and RR games, it seems rather natural to assume that in the eventuality that combat ends in a tie, the combat will be repeated, until either _a_) one of the contenders reaches a victory in both fronts or, _b_) as it may happen in the RR game where resources are redistributed when tying, a Nash equilibrium no longer exists after the tie.
First, we analyze in subsection V.1 the repeated KR game, where we will reach a somewhat surprisingly simple result, namely that the repeated KR game is equivalent to a non-repeated game in one front with a Tullock CSF whose parameter is the sum of those of the fronts' CSFs, \(\gamma_{r}\) and \(\gamma_{p}\). In subsection V.2 we study the repeated RR game and show that, under the condition that a game is played (i.e. combat takes place) only if a Nash equilibrium exists, and contrary to the KR game, it is not equivalent to a single non-repeated game in one front, for there is a non-zero probability of reaching a situation in which a Nash equilibrium does not exist.
### Repeated KR game
Assuming a Nash equilibrium \((\bar{\alpha},\bar{\alpha})\) of the KR game exists, see equations (21) and (22), let us simply denote by \(\bar{p}\) (resp. \(\bar{q}\)) the probability of victory, at the Nash equilibrium values of investments, of the contender **1** in the
Figure 8: Heat maps showing the threshold value \(r_{th}\) for the KR game (Panel **a**) and the RR game (Panel **b**) in the space \((\gamma_{r},\gamma_{p})\). Results have been obtained through numerical exploration. Recall that in the KR game it is only when \(r<r_{th}^{KR}\) that the Nash equilibrium disappears and the contenders have no incentive to fight, whereas in the RR game it is when \(r>r_{th}^{RR}\) that peace sets in.
rich-rewarding (resp. poor-rewarding) front, i.e.
\[\bar{p}=(1+r^{\gamma_{r}})^{-1}\;,\;\;\text{and}\;\;\;\bar{q}=(1+r^{\gamma_{p}})^{ -1}\;, \tag{51}\]
so that the probability of a tie is \(\bar{p}(1-\bar{q})+\bar{q}(1-\bar{p})=\bar{p}+\bar{q}-2\bar{p}\bar{q}\).
In a KR game, the situation after an eventual tie is just the initial one, and these probabilities are thus unchanged. Now, the probability \(p_{\infty}\) that the repeated combats end in a victory of contender \(\mathbf{1}\) is
\[p_{\infty}=\sum_{k=0}^{\infty}(\bar{p}+\bar{q}-2\bar{p}\bar{q})^{k}\bar{p} \bar{q}=\frac{\bar{p}\bar{q}}{(1-\bar{p}-\bar{q}+2\bar{p}\bar{q})}=\frac{1}{1+ r^{(\gamma_{r}+\gamma_{p})}}\;. \tag{52}\]
This somewhat unexpectedly simple result can be stated in the following way: provided a Nash equilibrium exists for a KR game with Tullock parameters \(\gamma_{r}\) and \(\gamma_{p}\), the repeated game is equivalent to a single combat with a Tullock parameter \(\gamma_{r}+\gamma_{p}\), i.e. a single combat with a CSF that is more rich-rewarding than any of the original ones. Indeed, on second thought, given that the incentive to fight a single combat is on the rich contender's side, the result should not come as much of a surprise, for repetition can only increase the (cumulative) expected gain. Nonetheless, we find it remarkable that the set of Tullock functions is, in this particular (and admittedly loose, in need of precision) sense, a closed set under the "repetition operation".
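The closed form (52) is easy to check by simulation; the sketch below replays tied KR combats until one contender wins on both fronts. The Monte Carlo loop, the sample size, and the function names are ours.

```python
import numpy as np

def p_infty_closed_form(r, gr, gp):
    # Equation (52): probability that Contender 1 eventually wins the repeated KR game.
    return 1.0 / (1.0 + r**(gr + gp))

def p_infty_simulated(r, gr, gp, n_games=200_000, seed=0):
    rng = np.random.default_rng(seed)
    p_bar = 1.0 / (1.0 + r**gr)   # per-round victory probability on the rich front, eq. (51)
    q_bar = 1.0 / (1.0 + r**gp)   # per-round victory probability on the poor front, eq. (51)
    wins_1 = 0
    for _ in range(n_games):
        while True:
            win_rich = rng.random() < p_bar
            win_poor = rng.random() < q_bar
            if win_rich and win_poor:
                wins_1 += 1
                break
            if not win_rich and not win_poor:
                break          # Contender 2 wins on both fronts
            # otherwise the round is a tie; in the KR game nothing changes, so repeat
    return wins_1 / n_games

if __name__ == "__main__":
    r, gr, gp = 0.5, 5.0, 0.5
    print("closed form :", p_infty_closed_form(r, gr, gp))
    print("Monte Carlo :", p_infty_simulated(r, gr, gp))
```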
### Repeated RR game
Assuming that a Nash equilibrium exists for a RR game, a tie occurs whenever the contender \(\mathbf{2}\) reaches victory in the poor-rewarding front. Thus the probability of a tie in a single combat is
\[p_{t}=\frac{(z^{*})^{\gamma_{p}}}{1+(z^{*})^{\gamma_{p}}}\;, \tag{53}\]
Figure 9: Map \(\tau(r)\) giving the rescaled resources of contender \(\mathbf{2}\) after a tie in the RR game, starting from a resource ratio of \(r\) in the initial game. The dashed diagonal line \(\tau(r)=r\) marks the boundary where resources after a tie would be the same as before.
where it should be noted that (as \(z^{*}>1\)) \(p_{t}>1/2\). In other words, a tie has a larger probability than a victory of the contender \(\mathbf{1}\). Also, note that this probability is independent of the resources \(r\) of the contender \(\mathbf{2}\). Consequently, though the resources of the contenders change after a tie, this probability remains unchanged, provided there is a Nash equilibrium after redistributing resources.
After a tie occurs, the contender \(\mathbf{1}\) resources become \(1-\frac{r}{z^{*}}\), while those of contender \(\mathbf{2}\) are now \(r(1+\frac{1}{z^{*}})\). For the analysis of the repeated RR game, it is convenient to rescale the new resources of the contenders, so that the rescaled resources are \(1\) for the contender \(\mathbf{1}\) and
\[\tau(r)=\frac{r(1+z^{*})}{z^{*}-r} \tag{54}\]
for the contender \(\mathbf{2}\). The map defined by equation (54) is a continuous monotone (thus invertible) increasing map with a slope larger than \(1\) for all \(r\). Figure 9 depicts the graph of this map.
If it is the case that \(\tau(r)<r_{th}^{RR}\), a Nash equilibrium for the "rescaled" RR game exists, and then the contender \(\mathbf{1}\), despite its recent defeat and the fact that she owns lower resources than before, has the incentive to fight and thus the combat is repeated. Otherwise, if \(r_{th}^{RR}<\tau(r)<(r_{th}^{RR})^{-1}\), there is no Nash equilibrium after the tie, and none of the contenders has the incentive to fight. The eventuality that \(\tau(r)>(r_{th}^{RR})^{-1}\) (being \(r<r_{th}^{RR}\)) would require \(r_{th}^{RR}>z^{*}/(1+z^{*})\), a condition that we have never found in our extensive numerical exploration of the \(r_{th}^{RR}\) values in the plane \((\gamma_{r},\gamma_{p})\). This observation excludes the possibility that a repeated RR game could end in a final victory of contender \(\mathbf{2}\). Incidentally, there are situations where for some interval of values of \(r<r_{th}^{RR}\), \(\tau(r)>1\); in these cases, the repeated RR game ends with interchanged (rich-poor) contenders' role.
We have been led to the conclusion that there are two mutually exclusive outcomes for a repeated RR game, namely either a victory of the contender \(\mathbf{1}\) or a final situation of survival of the two contenders with no Nash equilibrium, that we will briefly call peace. For fixed values of \(\gamma_{r}\) and \(\gamma_{p}\), we define the function \(\rho(r)\) (for \(r\) in the open interval \((0,1)\)) as the probability that a repeated RR game where the ratio of the resources is \(r\) ends in peace. This function can be computed once the values of \(z^{*}(\gamma_{p})\) and \(r_{th}^{RR}(\gamma_{r},\gamma_{p})\) have been numerically determined.
Figure 10: Probability of tying in the RR game as a function of the resource ratio \(r\) in the first round of the game. Every red dot represents the probability of a tie event in a Monte Carlo simulation averaged over \(10^{5}\) realizations. Horizontal dashed lines represent the analytical result as given by \(\rho(r)\). Theory and simulations match perfectly.
The function \(\rho(r)\) is piecewise constant, i.e. a staircase. It takes the value \(1\) for \(r_{th}^{RR}<r<1\). If \(\tau^{-1}(r_{th}^{RR})<r<r_{th}^{RR}\), a tie occurs with probability \(p_{t}\), after which peace is reached, so \(\rho(r)=p_{t}\) for \(r\) in this interval, and so on. Then
\[\rho(r)=\left\{\begin{array}{ll}1&\mbox{if $r_{th}^{RR}<r<1$}\\ p_{t}^{n}&\mbox{if $\tau^{-n}(r_{th}^{RR})<r<\tau^{-n+1}(r_{th}^{RR})$, $n=1,2,...$}\end{array}\right.\]
In Figure 10 it is shown how this expression matches the stochastic simulations performed on the RR game with \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\).
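The staircase \(\rho(r)\) can be evaluated by simply iterating the map \(\tau\) until the no-equilibrium region is reached, as sketched below; the threshold \(r_{th}^{RR}=0.790541\) is the value quoted above for \(\gamma_{r}=5\), \(\gamma_{p}=0.5\), and the constant and function names are ours.

```python
from scipy.optimize import brentq

GP = 0.5
Z_STAR = brentq(lambda z: GP * (1 + z) - (1 + z**GP), 1e-9, 1e6)  # eq. (34); ~5.828 for gamma_p = 0.5
P_TIE = Z_STAR**GP / (1 + Z_STAR**GP)                              # eq. (53)
R_TH_RR = 0.790541                                                 # threshold quoted for (gamma_r, gamma_p) = (5, 0.5)

def tau(r):
    # Resource-ratio map after a tie in the RR game, equation (54).
    return r * (1 + Z_STAR) / (Z_STAR - r)

def rho(r):
    # Probability that the repeated RR game ends in peace (the staircase of Figure 10).
    n = 0
    while r <= R_TH_RR:   # a Nash equilibrium exists, so another combat is fought
        r = tau(r)        # if it ends in a tie (probability P_TIE), resources are redistributed
        n += 1
    return P_TIE**n

if __name__ == "__main__":
    for r0 in (0.1, 0.3, 0.5, 0.7, 0.85):
        print(f"r = {r0:.2f}  ->  rho(r) = {rho(r0):.4f}")
```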
The computation of the expected gain \(u_{2}^{\rm rep}(r)\) of contender \(\mathbf{2}\) for the repeated RR game requires undoing the rescaling of resources made at each iteration of the map \(\tau\). The rescaling factor for the \(i\)-th iteration is \(1-\tau^{i-1}(r)/z^{*}\), and thus if \(\tau^{-n}(r_{th}^{RR})<r<\tau^{-n+1}(r_{th}^{RR})\), after \(n\) repeated tying contests ending in a peaceful situation, the final resources of contender \(\mathbf{1}\) will be
\[\Pi_{i=1}^{n}\left(1-\frac{\tau^{i-1}(r)}{z^{*}}\right)\;,\]
and then
\[u_{2}^{\rm rep}(r)=p_{t}^{n}\left(1+r-\Pi_{i=1}^{n}\left(1-\frac{\tau^{i-1}(r )}{z^{*}}\right)\right)\;\mbox{ if $\tau^{-n}(r_{th}^{RR})<r<\tau^{-n+1}(r_{th}^{RR})$, $n=1,2,...$ }. \tag{55}\]
Figure 11 shows the staircase form for \(u_{2}^{\rm rep}\) together with the boundaries marked by the inverse map \(\tau^{-n}(r_{th}^{RR})\), \(n=1,2,...\) Computations have been done, as usual, for \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\).
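Equation (55) can be evaluated with the same ingredients, accumulating the rescaling factors along the tie sequence; a self-contained sketch (again with \(r_{th}^{RR}=0.790541\) for the parameters of Figure 11, and with names of our choosing) is:

```python
from scipy.optimize import brentq

GP = 0.5
Z_STAR = brentq(lambda z: GP * (1 + z) - (1 + z**GP), 1e-9, 1e6)  # eq. (34)
P_TIE = Z_STAR**GP / (1 + Z_STAR**GP)                              # eq. (53)
R_TH_RR = 0.790541

def tau(r):
    return r * (1 + Z_STAR) / (Z_STAR - r)                         # eq. (54)

def u2_repeated(r0):
    # Expected gain of Contender 2 in the repeated RR game, equation (55).
    prod, r, n = 1.0, r0, 0
    while r <= R_TH_RR:            # a Nash equilibrium exists, so another combat is fought
        prod *= 1 - r / Z_STAR     # rescaling factor of the i-th iteration
        r = tau(r)
        n += 1
    return P_TIE**n * (1 + r0 - prod)

if __name__ == "__main__":
    for r0 in (0.1, 0.3, 0.5, 0.7):
        print(f"r = {r0:.2f}  ->  u2_rep = {u2_repeated(r0):.4f}")
```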
## VI Concluding remarks
In this work, we have explored the resolution of conflicts under the probabilistic framework of Tullock's combat success functions and game theory. These functions depend on the ratio of the contenders' resources, \(r=y/x\), and a
Figure 11: Expected gain \(u_{2}^{\rm rep}\) of contender \(\mathbf{2}\) for the repeated RR game as a function of the resource ratio \(r\) at the beginning of the game. Straight dashed vertical lines mark the successive \(n\) jumps performed with map \(\tau^{-n}(r_{th}^{RR})\). Results shown for \(\gamma_{r}=5\) and \(\gamma_{p}=0.5\).
parameter \(\gamma\), called the technology parameter. In particular, we have focused on conflicts taking place simultaneously on two fronts. Each front is characterized by a different value of \(\gamma\), one front being rich-rewarding (\(\gamma_{R}>1\)), where the richer contender has incentives to fight, and the other poor-rewarding (\(0<\gamma_{P}<1\)), where the poorer may take the lead. We define the game or combat in such a way that if a contender wins on both fronts, she takes all of the adversary's resources plus her initial resources, \(1+r\) in total, the loser is defeated, and the game is over. Not all resolutions lead to a total victory: if a contender wins one front but loses the other, a tie happens. In case of a tie, different scenarios are possible in order to reward/punish the contenders and allow for the next round. Here, we proposed two such scenarios, which give rise to two different games. In the keeping resources (KR) game, after a tie, both contenders conserve their original resources and a new round simply takes place. In the redistributing resources (RR) game, the winner of each front gains all the resources deployed at that front. These different rules give rise to different conflict dynamics and resolutions.
Just by performing elementary mathematical analysis on the expected gain functions and the best-response maps for each player we can almost fully characterize each game. However, in order to gain a full understanding of the situation, the analytical results were checked and extended with numerical analysis and extensive simulations of the conflict dynamics.
We highlight the following main results. For both games there exists a threshold value of the resource ratio \(r\) separating a regime where a Nash equilibrium exists in the best-response dynamics between contenders from a regime where it does not. This threshold is solely determined by the pair of Tullock technology parameters \((\gamma_{R},\gamma_{P})\). When that equilibrium exists, combat takes place, whereas if not, the contenders remain at peace. In the KR game, it is found that the peaceful regime occurs for \(r\in[0,r_{th}^{KR})\), whereas in the RR game this happens for \(r\in(r_{th}^{RR},1)\).
In particular, for the KR game, when a Nash equilibrium exists, it is found that the investment fractions maximizing the contenders' expected gains are identical, \(\bar{\alpha}\). The existence of a Nash equilibrium is subject to a set of conditions: the value of \(\bar{\alpha}\) must be a global maximum for both expected gains, and it turns out that for a certain set of values of \(r\) this condition does not hold. It is also found that, provided a Nash equilibrium exists with Tullock parameters \(\gamma_{R}\) and \(\gamma_{P}\) for the KR game, the repeated game is equivalent to a game with a single front where the technology parameter is \(\gamma_{R}+\gamma_{P}\), the sum of the parameters at both fronts, and thus to a more rich-rewarding front.
In the RR game, when a Nash equilibrium exists, it is found that the investment fraction at the rich-rewarding front for Contender **2** is always zero, while for Contender **1** an analytical expression is found (not holding in the peaceful regime, indeed). In this game, a tie occurs whenever Contender **2** reaches victory in the poor-rewarding front. As resources are redistributed after a tie, the repeated RR game involves a richer behavior than the KR game. It is found that, provided a Nash equilibrium exists, the tie outcome occurs with a probability \(p_{t}\) higher than that of a total victory of Contender **1** (winning at both fronts); this probability is independent of \(r\) and ultimately determined by \(\gamma_{P}\). Redistribution after a tie always leads to the enrichment of the poorer contender and impoverishment of the richer one, and thus the resource ratio after every round tends to increase. If repetition continues, eventually \(r>r_{th}^{RR}\), and thus there is no incentive to fight for any contender. While there is a possibility of surpassing \(r=1\) from a high enough \(r<r_{th}^{RR}\), these jumps cannot overcome \(r=1/r_{th}^{RR}\), and thus nonexistence of a Nash equilibrium still holds. We conclude for this repeated game that there are two mutually exclusive outcomes, namely either a victory of Contender **1** or a final situation of survival of the two contenders with no Nash equilibrium, a state of peace, where the contenders' resource difference has diminished. This repeated RR game dynamic is nicely represented in the staircase diagram, formulated analytically and perfectly reproduced by simulations, which depicts the probability of reaching a tie as a function of the resource ratio \(r\) in the first round.
Throughout this analysis, we have assumed perfect rationality of the contenders together with perfect information. We recognize that these assumptions may in general be too rigid to translate our analysis and conclusions into practical applications. Thus, a clear direction for future work is to relax some of these hypotheses. Another readily possible extension of the model could be to include more realism in the management and deployment of resources by the contenders. Finally, and most importantly, we have restricted ourselves to thoroughly analyzing the conflict involving just two agents and thus pairwise interactions. Of course, reality is more complex, and a conflict may involve an arbitrarily large number of entities or contenders, each with its own particularities while interacting in complex ways (i.e. higher-order interactions). For this, the frameworks of complex networks and hypergraphs arise as very suggestive tools to extend this conflict dynamics to large heterogeneous systems.
###### Acknowledgements.
A.dM.A. is funded by an FPI Predoctoral Fellowship of MINECO. We acknowledge partial support from the Government of Aragon, Spain, and "ERDF A way of making Europe" through grant E36-20R (FENOL) to A.dM.A, C.G.L, M.F. and Y. M., from Ministerio de Ciencia e Innovacion, Agencia Espanola de Investigacion (MCIN/ AEI/10.13039/501100011033) Grant No. PID2020-115800GB-I00 to A.dM.A, C.G.L., M.F. and Y.M.
Within the framework of game theory, contests study decision-making in situations where rewards depend on the contenders' relative rank rather than on their absolute performance. Building on the formalism of Tullock success functions, we model a confrontation between two contenders with very different resources, fought simultaneously on two battlefields whose success-function parameters differ, one rewarding the richer contender and the other the poorer one. The redistribution of resources in case of a tie defines different games. We solve the model through a dynamical analysis of the best-response maps, obtaining a critical threshold of the resource ratio that determines the basin of existence of the Nash equilibrium, and, as a result...
2309.15476 | Dynamic Multi-Scale Context Aggregation for Conversational Aspect-Based
Sentiment Quadruple Analysis | Conversational aspect-based sentiment quadruple analysis (DiaASQ) aims to
extract the quadruple of target-aspect-opinion-sentiment within a dialogue. In
DiaASQ, a quadruple's elements often cross multiple utterances. This situation
complicates the extraction process, emphasizing the need for an adequate
understanding of conversational context and interactions. However, existing
work independently encodes each utterance, thereby struggling to capture
long-range conversational context and overlooking the deep inter-utterance
dependencies. In this work, we propose a novel Dynamic Multi-scale Context
Aggregation network (DMCA) to address the challenges. Specifically, we first
utilize dialogue structure to generate multi-scale utterance windows for
capturing rich contextual information. After that, we design a Dynamic
Hierarchical Aggregation module (DHA) to integrate progressive cues between
them. In addition, we form a multi-stage loss strategy to improve model
performance and generalization ability. Extensive experimental results show
that the DMCA model outperforms baselines significantly and achieves
state-of-the-art performance. | Yuqing Li, Wenyuan Zhang, Binbin Li, Siyu Jia, Zisen Qi, Xingbang Tan | 2023-09-27T08:17:28 | http://arxiv.org/abs/2309.15476v1 | Dynamic Multi-Scale Context Aggregation for Conversational Aspect-Based Sentiment Quadruple Analysis
###### Abstract
Conversational aspect-based sentiment quadruple analysis (DiaASQ) aims to extract the quadruple of target-aspect-opinion-sentiment within a dialogue. In DiaASQ, a quadruple's elements often cross multiple utterances. This situation complicates the extraction process, emphasizing the need for an adequate understanding of conversational context and interactions. However, existing work independently encodes each utterance, thereby struggling to capture long-range conversational context and overlooking the deep inter-utterance dependencies. In this work, we propose a novel Dynamic Multi-scale Context Aggregation network (DMCA) to address the challenges. Specifically, we first utilize dialogue structure to generate multi-scale utterance windows for capturing rich contextual information. After that, we design a Dynamic Hierarchical Aggregation module (DHA) to integrate progressive cues between them. In addition, we form a multi-stage loss strategy to improve model performance and generalization ability. Extensive experimental results show that the DMCA model outperforms baselines significantly and achieves state-of-the-art performance1.
Footnote 1: The code is available at [https://github.com/qdCassie-Li/DMCA](https://github.com/qdCassie-Li/DMCA)
Yuqing Li\({}^{1,2}\) Wenyuan Zhang\({}^{1,2}\) Binbin Li \({}^{1}\) Siyu Jia\({}^{1}\) Zisen Qi\({}^{1}\) Xingbang Tan\({}^{1}\)\({}^{1}\) Institute of Information Engineering, Chinese Academy of Sciences
\({}^{2}\) School of Cyber Security, University of Chinese Academy of Sciences
Conversational sentiment quadruple extraction, sentiment analysis, dialogue systems
## 1 Introduction
In recent years, sentiment analysis of reviews has gained increasing attention. Broad applications include stance detection [1][2], document-level [3][4] and aspect-based [5][6][7] sentiment analysis. Recent research [8] has broadened the scope of sentiment analysis to incorporate dialogue-level reviews, called the conversational aspect-based sentiment quadruple analysis (DiaASQ), which reflects more realistic dialogue-driven user review scenarios. DiaASQ aims to predict the quads \(\{(\mathbf{t},\mathbf{a},\mathbf{o},\mathbf{s})\}\) from a dialogue. As shown in Fig. 1, multiple speakers express their reviews around several targets (iPhone 7 and Xiaomi 5). They emphasize different aspects (power consumption and system), while expressing their respective opinions (high and smooth). The sentiment is determined based on the opinion of the target.
In contrast to sentiment tuple extraction, which focuses on independent sentences [9][10], DiaASQ expands the extraction perspective to the dialogue. Uniquely, a quadruple might span several utterances, so a comprehensive understanding of the dialogue and the context of utterances is crucial. Despite the efforts of previous research [8] to mitigate this limitation through attention mechanisms and positional encoding techniques, it still faces challenges in capturing the semantic interactions and rich contextual information in multi-turn dialogues. Relevant works [11][12] have proposed methods for PLMs to adapt to longer inputs, but they mainly focus on attention mechanisms [13] or network architectures, rather than capturing critical information from dialogues. Fixed-size sliding window methods are commonly used for processing long dialogues [14][15], but they overlook the benefits of multi-scale windows, which can capture richer context.
In this paper, we propose a novel **D**ynamic **M**ulti-scale **C**ontext **A**ggregation network (DMCA) for DiaASQ, as shown in Fig. 2. **Firstly**, we employ a flexible sliding window scheme to create variable-sized utterance windows. This
Figure 1: Conversational aspect-based sentiment quadruple analysis task with its corresponding outputs. Utterances are represented as nodes in the tree, with the color indicating the speaker, and the structure presents reply relationships.
approach facilitates the comprehensive capturing of dialogue context, ranging from a single utterance to broader spans. **Secondly**, we introduce a **D**ynamic **H**ierarchical **A**ggregation (DHA) module. The goal of DHA is to enhance dialogue quadruple prediction by aggregating the output logits from multi-scale windows, eliminating the necessity for intricate network designs. Specifically, DHA hierarchically uses logits from smaller windows as a basis to aggregate and update the logits of larger windows that encompass these smaller windows. This process continues until aggregated logits are obtained at the dialogue level. **Furthermore**, we introduce multi-stage losses to jointly optimize different levels of aggregation, including window-level, thread-level, and dialogue-level. We conduct extensive experiments on two public benchmark datasets, and the results prove that DMCA significantly outperforms comparative methods.
The main contributions are summarized as follows: 1) We introduce the DMCA network to improve the extraction of dialogue quadruples by utilizing multi-scale context. 2) Without relying on complex network architectures, we design the Dynamic Hierarchical Aggregation module (DHA) along with multi-stage losses to optimize the decision-making process. 3) Extensive experiments show that the DMCA significantly outperforms state-of-the-art methods.
## 2 Methodology
### Problem Definition and Preliminaries
A dialogue is denoted as \(\{(u_{i},s_{i},r_{i})\}|_{i=1}^{|D|}\), where utterance \(u_{i}\) is uttered by the speaker \(s_{i}\) and is in response to \(u_{r_{i}}\). \(|D|\) denotes the total number of utterances. Based on the aforementioned input, the goal of the task is to predict all the sentiment quadruples \(Q=\{(\textbf{t},\textbf{a},\textbf{o},\textbf{s})\}\), where each quadruple contains: target(\(t\)), aspect(\(a\)), opinion(\(o\)), and sentiment polarity(\(s\)). Here, sentiment polarity \(\in\{pos,neg,other\}\).
_Tagging schema._ To transform the extraction of dialogue quadruples into a unified grid tagging task, we follow the tagging strategy of previous work [8]. Specifically, the dialogue quadruple extraction task is broken down into three joint tasks: detection of _entity boundaries (ent)_, _entity relation pairs (pair)_, and _sentiment polarity (pol)_. In the _entity boundaries_ detection phase, 'tgt', 'asp', and 'opi' are used to respectively represent the head and tail relations of the target, aspect, and opinion items between any word pairs within the window. In the _entity relation pair_ detection phase, the labels 'h2h' and 't2t' are used to align the head and tail markers between two types of entities. For instance, 'iPhone' (target-head) and 'power' (aspect-head) are connected by 'h2h', while '7' (target-tail) and 'consumption' (aspect-tail) are connected by 't2t'. Sentiment labels \(\{pos,neg,other\}\) are obtained in _sentiment polarity_ detection. By combining the results derived from these three tasks, we can efficiently extract the complete dialogue quadruples.
### DMCA Model
#### 2.2.1 Multi-scale context windows generation
A set of utterances within a dialogue with a complete reply relationship is defined as a thread [8]. Studies [16][17] have delved into the independence between dialogue threads. To effectively capture the rich context, we use a sliding window method to construct multi-scale windows for each thread. Firstly, we analyze the dialogue structure using the reply records \(\{r_{i}\}_{i=1}^{|D|}\), treating each dialogue branch as an independent thread. This gives rise to a collection of threads \(T=\{T_{t}\}_{t=1}^{|T|}\), where \(|T|\) represents the number of threads and each thread \(T_{t}=\{u_{1},u_{j},\cdots,u_{j+\ell_{t}-1}\}\) consists of \(\ell_{t}\) utterances. For each thread, we use a flexible sliding window schema to generate continuous subsets from the thread. We denote these subsets as windows, represented by \(W^{t}=\{W^{t}_{w}\}_{w=1}^{|W^{t}|}\). The size of these windows varies from 1 to \(\ell_{t}\). Therefore, for each thread, the total number of windows \(|W^{t}|\) is determined by the formula \(|W^{t}|=\frac{1}{2}(\ell_{t}^{2}+\ell_{t})\). We have verified that all generated windows meet the input requirements of the PLM. Consider the illustration in Fig. 1, where \(T_{1}=\{u_{1},u_{2},u_{3}\}\). It can produce 6 distinct windows: \(\{u_{1}\}\), \(\{u_{2}\}\), \(\{u_{3}\}\), \(\{u_{1},u_{2}\}\), \(\{u_{2},u_{3}\}\) and \(\{u_{1},u_{2},u_{3}\}\).
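A minimal sketch of this construction is given below. The reply_to encoding, the helper names, and the toy example are ours; it only illustrates the thread extraction and the \(\frac{1}{2}(\ell_{t}^{2}+\ell_{t})\) contiguous windows per thread.

```python
def build_threads(reply_to):
    # Each root-to-leaf reply chain of the dialogue tree is treated as one thread.
    # reply_to[i] is the index of the utterance that utterance i replies to (-1 for the root).
    children = {}
    for i, parent in enumerate(reply_to):
        children.setdefault(parent, []).append(i)
    threads, stack = [], [(0, [0])]
    while stack:
        node, path = stack.pop()
        kids = children.get(node, [])
        if not kids:                 # a leaf utterance closes one thread
            threads.append(path)
        for k in kids:
            stack.append((k, path + [k]))
    return threads

def multi_scale_windows(thread):
    # All contiguous sub-spans of a thread: l*(l+1)/2 windows for a thread of length l.
    l = len(thread)
    return [thread[i:j + 1] for i in range(l) for j in range(i, l)]

if __name__ == "__main__":
    # Toy dialogue: utterances 1 and 2 reply in a chain under the root 0; 3 and 4 form another branch.
    reply_to = [-1, 0, 1, 0, 3]
    for t in build_threads(reply_to):
        print("thread", t, "->", multi_scale_windows(t))
```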
Secondly, we encode windows to obtain representations:
\[\textbf{H}^{t}_{w}=[\textbf{h}_{[CLS]},\textbf{h}_{1},\cdots\textbf{h}_{N_{w}},\textbf{h}_{[SEP]}]=\text{Encoder}\left(W^{t}_{w}\right), \tag{1}\]
\[W^{t}_{w}=\{[CLS];u_{1};u_{j};\cdots u_{j+k-1};[SEP]\}. \tag{2}\]
We use RoBERTa [18] as Encoder. \(\textbf{H}^{t}_{w}\in\mathbb{R}^{N_{w}\times D_{h}}\) denotes the representation of \(W^{t}_{w}\). \(N_{w}\) is the number of tokens in the window and \(D_{h}\) is hidden size.
Subsequently, we obtain the output logits of the word pair matrix, denoted as \(\mathcal{S}_{w}=\{s_{ij}\,|\,i,j\in[1,N_{w}]\}\). Additionally, we introduce a window-level cross-entropy loss \(\mathcal{L}_{w}\) to supervise predictions at a more granular level for each window:
\[\widetilde{\textbf{h}}_{i}=\widetilde{\textbf{W}}\textbf{h}_{i}+ \widetilde{\textbf{b}}, \tag{3}\] \[s_{ij}=\left(\widetilde{\textbf{h}}_{i}\right)^{T}\widetilde{ \textbf{h}}_{j},\] (4) \[p_{ij}=\text{Softmax}(s_{ij}),\] (5) \[\mathcal{L}_{w}=-\sum_{w=1}^{|W|}\sum_{i=1}^{N_{w}}\sum_{j=1}^{N_ {w}}y_{ij}\log(p_{ij}), \tag{6}\]
where \(s_{ij}\in\mathbb{R}^{K}\), \(K\) represents the predefined number of categories in the decoding table and \(y_{ij}\) represents the truth label. \(\widetilde{\textbf{W}}\) and \(\widetilde{\textbf{b}}\) are trainable parameters.
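For concreteness, the window-level objective of Eqs. (5)-(6) amounts to a grid cross-entropy over word pairs; the sketch below assumes the scoring step has already produced a logit tensor of shape \((N_{w},N_{w},K)\), and the array and function names are ours.

```python
import numpy as np

def grid_cross_entropy(logits, labels):
    # Word-pair grid loss of Eqs. (5)-(6) for one window.
    # logits: array of shape (N, N, K) with a score s_ij per word pair and class.
    # labels: integer array of shape (N, N) with the gold class y_ij of each pair.
    z = logits - logits.max(axis=-1, keepdims=True)            # numerically stable softmax (Eq. 5)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    n = labels.shape[0]
    gold = probs[np.arange(n)[:, None], np.arange(n)[None, :], labels]
    return -np.log(gold + 1e-12).sum()                          # negative log-likelihood (Eq. 6)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, K = 5, 6                       # e.g. 6 classes for the entity-boundary task
    logits = rng.normal(size=(N, N, K))
    labels = rng.integers(0, K, size=(N, N))
    print("window-level loss:", grid_cross_entropy(logits, labels))
```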
#### 2.2.2 Dynamic Hierarchical Aggregation module
Windows of different scales capture distinct information: smaller windows focus on local details, while larger ones emphasize contextual understanding. We introduce the Dynamic Hierarchical Aggregation (DHA) module to aggregate predicted logits from these windows, avoiding the need for
designing complex network architectures. This aggregation process is categorized into thread-level and dialogue-level.
_Thread-level Aggregation._ The predicted logits of all windows within the \(t\)-th thread are denoted as \(\mathcal{S}=\{\mathcal{S}_{i}\mid u_{i}\in T_{t}\}\). Adding a superscript \(l\) indicates the number of utterances comprising the window. DHA utilizes the \(\mathcal{S}_{i}^{l}\) from a smaller window \(W_{i}^{t}\) to aggregate and augment the \(\mathcal{S}_{j}^{l+1}\) of a larger overlapping window \(W_{j}^{t}\), ensuring that \(W_{i}^{t}\subseteq W_{j}^{t}\). Specifically, we extract logits corresponding to \(W_{i}^{t}\) from \(\mathcal{S}_{j}^{l+1}\) to form \(\hat{\mathcal{S}}_{i}^{l}\). To enhance the predictions in the larger window, we select logits among \(\mathcal{R}_{i}^{l}\), \(\hat{\mathcal{S}}_{i}^{l}\), and \(\mathcal{R}_{i}^{l}+\hat{\mathcal{S}}_{i}^{l}\) based on the principle of minimizing cross-entropy. These selected logits are then aggregated using a weighted summation approach. This process updates \(\mathcal{S}_{j}^{l+1}\) to \(\mathcal{R}_{j}^{l+1}\). The definition of this dynamic aggregation process is as follows:
\[\mathcal{R}_{j}^{l+1}=\mathcal{S}_{j}^{l+1}\oplus\alpha\cdot \mathcal{F}_{i}^{l}, \tag{7}\] \[\mathcal{F}_{i}^{l}=\operatorname*{arg\,min}_{x\in\mathcal{X}_{i }^{l}}CrossEntropy(x,y),\] (8) \[\mathcal{X}_{i}^{l}=\{\mathcal{R}_{i}^{l},\hat{\mathcal{S}}_{i}^ {l},\mathcal{R}_{i}^{l}+\hat{\mathcal{S}}_{i}^{l}\}, \tag{9}\]
where \(\oplus\) denotes the broadcast addition. \(\alpha\) is a predefined parameter. Padding(\(\cdot\)) implies zero-padding. \(y\) denotes corresponding truth labels. The initial value for \(\mathcal{R}_{i}^{l}\) is set as \(\mathcal{S}_{i}^{l}\).
Through the dynamic hierarchical process, we obtain the aggregated thread-level logits as: \(\mathcal{T}_{t}\mathcal{R}=\mathcal{R}_{|W^{t}|}^{\ell_{t}}\). The thread-level loss \(\mathcal{L}_{t}\) is calculated in a manner analogous to Eq. 6. Notably, DHA is only used during the training phase since it requires label information (Eq. 8). For validation and test, we adopt Static Hierarchical Aggregation (SHA). The SHA approach hierarchically aggregates the logits of overlapping windows through a direct sum operation. SHA is defined as:
\[\mathcal{R}_{j}^{l+1}=\mathcal{S}_{j}^{l+1}\oplus\mathcal{R}_{i}^{l} \tag{10}\]
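The following numpy sketch illustrates one aggregation step in both variants, Eqs. (7)-(10). The span bookkeeping, the helper names, and the toy shapes are ours; in the actual model the candidate selection of Eq. (8) uses the gold grid labels and is therefore restricted to training.

```python
import numpy as np

def grid_ce(logits, labels):
    # Mean cross-entropy of a (N, N, K) logit grid against integer labels (N, N).
    z = logits - logits.max(-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(-1, keepdims=True)
    n = labels.shape[0]
    gold = probs[np.arange(n)[:, None], np.arange(n)[None, :], labels]
    return -np.log(gold + 1e-12).mean()

def aggregate_step(S_large, R_small, span, labels_small, alpha=1.0, dynamic=True):
    # One hierarchical aggregation step.
    # S_large:      logits of the larger window, shape (N, N, K).
    # R_small:      already-aggregated logits of a smaller window it contains.
    # span:         (start, end) token offsets of the small window inside the large one.
    # labels_small: gold labels for the small window (needed only by the dynamic variant).
    a, b = span
    S_hat = S_large[a:b, a:b]                     # logits the large window assigns to the small span
    if dynamic:                                   # DHA, Eq. (8): pick the lowest-CE candidate
        candidates = [R_small, S_hat, R_small + S_hat]
        best = min(candidates, key=lambda c: grid_ce(c, labels_small))
        update = alpha * best
    else:                                         # SHA, Eq. (10): plain sum at inference time
        update = R_small
    R_large = S_large.copy()
    R_large[a:b, a:b] += update                   # "broadcast addition" onto the overlapping block
    return R_large

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K = 6
    S_large = rng.normal(size=(7, 7, K))          # e.g. a two-utterance window of 7 tokens
    R_small = rng.normal(size=(4, 4, K))          # a contained one-utterance window of 4 tokens
    labels_small = rng.integers(0, K, size=(4, 4))
    out = aggregate_step(S_large, R_small, span=(0, 4), labels_small=labels_small)
    print(out.shape)
```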
_Dialogue-level Aggregation._ After the aggregation process at the thread level, we obtain refined logits for each thread. Since these threads overlap only at the root utterance \(u_{1}\), we utilize the SHA method to derive final dialogue-level logits \(\mathcal{DR}\in\mathbb{R}^{N\times N\times K}\) and subsequently obtain \(\mathcal{L}_{d}\).
\[\mathcal{DR}=Padding(\mathcal{T}_{1}\mathcal{R})\oplus\cdots\oplus Padding(\mathcal{T}_{|T|}\mathcal{R}) \tag{11}\]
\[\mathcal{L}_{d}=-\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}y_{ij}\log(p_{ij}) \tag{12}\]
where \(N\) denotes the number of tokens in the dialogue.
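A possible sketch of the dialogue-level step is to scatter-add each refined thread block into the \(N\times N\) dialogue matrix; the token-position bookkeeping below is an assumption of the sketch, since the paper only states that threads overlap at the root utterance.

```python
import torch

def dialogue_aggregate(thread_logits, thread_token_ids, num_tokens):
    """Combine refined thread-level logits into the N x N dialogue matrix (Eq. 11).

    thread_logits[t]:    (n_t, n_t, K) refined logits of thread t
    thread_token_ids[t]: dialogue positions of thread t's tokens (assumed bookkeeping)
    """
    K = thread_logits[0].size(-1)
    DR = thread_logits[0].new_zeros(num_tokens, num_tokens, K)
    for logits, ids in zip(thread_logits, thread_token_ids):
        idx = torch.as_tensor(ids, device=logits.device)
        DR[idx[:, None], idx[None, :]] += logits   # implicit zero-padding elsewhere
    return DR
```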
#### 2.2.3 Training
During the training process, we jointly incorporate three distinct stages of loss: \(\mathcal{L}_{w}\), \(\mathcal{L}_{t}\), and \(\mathcal{L}_{d}\). These losses are employed to minimize errors at different aggregation stages. For each task \(\psi\), the loss can be calculated as follows:
\[\mathcal{L}^{\psi}=\mathcal{L}_{d}^{\psi}+\eta\mathcal{L}_{t}^{\psi}+\zeta \mathcal{L}_{w}^{\psi} \tag{13}\]
where \(\psi\in\{ent,pair,pol\}\), \(\eta\) and \(\zeta\) are predefined weights.
The final objective function is determined by the sum of the loss for the three tasks:
\[\mathcal{L}=\mathcal{L}^{ent}+\mathcal{L}^{pair}+\mathcal{L}^{pol} \tag{14}\]
## 3 Experiments
### Tasks and Datasets
We conduct experiments on two datasets: the Chinese dataset **ZH**[10] and the English dataset **EN**[10]. Both datasets contain speaker and reply-record information for each conversation utterance. Each dataset consists of 1000 dialogues related to electronic product reviews, with an average of 7 utterances and 5 speakers per dialogue. Specifically, the Chinese dataset contains 5,742 quadruples, while the English dataset contains 5,514 quadruples. About 22% of the quadruples in both datasets are cross-utterance.
Figure 2: The overall framework of our DMCA model. The model consists of two key components: 1) a flexible sliding window scheme that captures conversational context at multiple scales and granularities, and 2) a Dynamic Hierarchical Aggregation (DHA) module along with a multi-stage loss strategy that hierarchically aggregates the logits of multi-scale windows. Note: The third dimension of the logits has been omitted from the matrix for clearer visualization.
### Comparison Methods
_Baselines._ Following the comparison in [8], we consider several strong models closely tied to the task as baselines. These models include ExtractClassify [20], SpERT [21], Span-ASTE [9], ParaPhrase [10], and DiaASQ [8].
_Implementation Details._ To encode the **ZH** and **EN** datasets, we adopt Chinese-RoBERTa-wwm-base [22] and RoBERTa-Large [18], respectively. Each training process contains 25 epochs. The parameters for both the DHA module (\(\alpha\)) and the loss (\(\eta\) and \(\zeta\)) are initialized to 1 by default. For the tasks \(\psi\in\{ent,pair,pol\}\), the values of \(K\) are \(\{6,4,3\}\). We use micro F1 and identification F1 [19] as the evaluation metrics.
### Results and Analysis
_Overall Results._ The overall results are shown in Table 1. Our model outperforms the previous best baseline in almost all tasks and datasets. Notably, on ZH dataset, our model surpasses the previous state-of-the-art by an impressive 7.7%.
_Cross-Utterance Results._ To further demonstrate the effectiveness of the DMCA model in addressing cross-utterance quadruple extraction, we conduct a detailed analysis and comparison of the cross-utterance results, as shown in Fig. 3. Our approach outperforms the previous model across all cross-utterance counts, and in particular achieves high performance when cross \(\geq 3\). This indicates that the DMCA model is more effective in handling extraction problems in multi-turn dialogues.
### Ablation
We conduct experiments to assess the impact of the DHA module and the three distinct stage loss functions. As shown in Table 2, the DHA method, which considers the credibility of predictions from multi-scale windows, achieves the highest performance. Without the dynamic weighted aggregation, the performance of the SHA method diminishes. When we remove the aggregation module, the results significantly decline on both datasets, highlighting the success of our DHA. Moreover, as depicted in Table 3, removing any stage of the loss function results in a decrease in performance, particularly for the problem of cross-utterance extraction. This further demonstrates the effectiveness of the multi-stage losses.
## 4 Conclusion
In this paper, we propose a novel DMCA network for conversational aspect-based sentiment quadruple analysis. To address the challenges of encoding long dialogues and extracting cross-utterance quadruples, we construct multi-scale utterance windows to capture rich dialogue context. We also design a DHA module and multi-stage loss strategy to enhance the decision-making logits from these multi-scale windows. Experimental results on two datasets demonstrate the superiority of our DMCA over the state-of-the-art methods.
\begin{table}
\begin{tabular}{l|c c|c c c|c c|c c||c c|c c|c c} \hline \multirow{2}{*}{Model} & \multicolumn{6}{c||}{ZH-dataset} & \multicolumn{6}{c|}{EN-dataset} \\ \cline{2-13} & \multicolumn{3}{c|}{Entity detection} & \multicolumn{3}{c|}{Pair detection} & \multicolumn{3}{c||}{Quads extraction} & \multicolumn{3}{c|}{Entity detection} & \multicolumn{3}{c|}{Pair detection} & \multicolumn{3}{c|}{Quads extraction} \\ & \multicolumn{1}{c}{\(T\)} & \multicolumn{1}{c}{\(A\)} & \multicolumn{1}{c}{\(O\)} & \multicolumn{1}{c}{\(T\).A} & \multicolumn{1}{c}{\(T\).O} & \multicolumn{1}{c||}{\(A\)-\(O\)} & micro-F1 & iden-F1 & \multicolumn{1}{c||}{\(T\)} & \multicolumn{1}{c}{\(A\)} & \multicolumn{1}{c}{\(O\)} & \multicolumn{1}{c}{\(T\).A} & \multicolumn{1}{c}{\(T\).O} & \multicolumn{1}{c}{\(A\)-\(O\)} & micro-F1 & iden-F1 \\ \hline Extract-Classify & 91.11 & 75.24 & 50.06 & 32.47 & 26.78 & 18.90 & 8.81 & 9.25 & 88.31 & 71.71 & 47.90 & 34.31 & 20.94 & 19.21 & 11.59 & 12.80 \\ SpERT & 90.69 & 76.81 & 54.06 & 38.05 & 31.28 & 21.89 & 13.00 & 14.19 & 87.82 & 74.65 & 54.17 & 28.33 & 21.39 & 23.64 & 13.07 & 13.38 \\ ParaPhrase & / & / & 37.81 & 34.32 & 27.76 & 23.27 & 27.98 & / & / & / & 37.22 & 32.19 & 30.78 & 24.54 & 26.76 \\ Span-ASTE & / & / & 44.13 & 34.46 & 32.21 & 27.42 & 30.85 & / & / & / & 42.19 & 30.44 & 45.90 & 26.99 & 28.34 \\ DiaASQ & 90.23 & 76.94 & 59.35 & 48.61 & 43.31 & 45.44 & 34.94 & 37.51 & **88.62** & **74.71** & 60.22 & 47.91 & 45.58 & 42.27 & 33.31 & 36.80 \\ \hline
**Ours(DMCA)** & **92.03** & **77.07** & **60.27** & **56.88** & **51.70** & **52.80** & **42.68** & **45.36** & 88.11 & 73.95 & **63.47** & **53.08** & **50.99** & **52.40** & **37.96** & **41.00** \\ \hline \end{tabular}
\end{table}
Table 1: We report the micro-F1 scores for all tasks and the additional identification F1 (iden-F1) [19] scores for quads extraction. Here, T-A-O stands for Target-Aspect-Opinion, respectively.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c|}{**ZH**} & \multicolumn{2}{c}{**EN**} \\ \cline{2-5} & micro-F1 & iden-F1 & micro-F1 & iden-F1 \\ \hline
** DHA** & **42.68** & **45.36** & **37.96** & **41.00** \\ \hline SHA(Eq. 10) & 42.31 & 44.92 & 37.73 & 39.91 \\ Concat & 41.24 & 43.50 & 34.75 & 37.31 \\ \hline \end{tabular}
\end{table}
Table 2: Results against different aggregation methods. ‘Concat’ denotes the direct concatenation of logits from the largest window across all threads.
Figure 3: Results of cross-utterance quadruples. ‘cross-0’ indicates elements of the quadruple contained in one utterance.
\begin{table}
\begin{tabular}{l c c c} \hline
**Methods** & **Intra** & **Inter** & **Overall** \\ \hline DMCA & **46.23** & **32.73** & **42.68** \\ - w/o \(\mathcal{L}_{w}\) & 46.05(\(\downarrow\)0.18) & 31.78(\(\downarrow\)0.95) & 42.43(\(\downarrow\)0.25) \\ - w/o \(\mathcal{L}_{t}\) & 45.10(\(\downarrow\)1.13) & 27.74(\(\downarrow\)4.99) & 40.57(\(\downarrow\)2.11) \\ - w/o \(\mathcal{L}_{d}\) & 45.17(\(\downarrow\)1.06) & 30.94(\(\downarrow\)1.79) & 41.51(\(\downarrow\)1.17) \\ \hline \end{tabular}
\end{table}
Table 3: Ablation results of DMCA. We report the micro-F1 score for the ZH dataset. ‘Inter’ denotes the score of cross-utterance quadruple extraction. | диалектичний аспект-залежний аналіз враження (DiaASQ) спрямований на збору четверозначного ряду елементів, що стосуються цільового аспекту-оцінки-співвідношення, в рамках діалогу. У DiaASQ елементи цього ряду часто перетинають кілька висловлень. Ця ситуація ускладнює процес збору, підкреслюючи потребу в достатній розуміння діалектичного контексту та взаємодій. Однак, існуючі роботи окремо кодують кожне висловлювання, що зумовлює проблеми з залученням довгострокового діалектичного контексту та знехтування глибокими взаємозв'язками між висловлюваннями. В цьому роботі пропонується новий динамічний багаторівневий модуль для об'єдна |
2309.05665 | Robot Parkour Learning | Parkour is a grand challenge for legged locomotion that requires robots to
overcome various obstacles rapidly in complex environments. Existing methods
can generate either diverse but blind locomotion skills or vision-based but
specialized skills by using reference animal data or complex rewards. However,
autonomous parkour requires robots to learn generalizable skills that are both
vision-based and diverse to perceive and react to various scenarios. In this
work, we propose a system for learning a single end-to-end vision-based parkour
policy of diverse parkour skills using a simple reward without any reference
motion data. We develop a reinforcement learning method inspired by direct
collocation to generate parkour skills, including climbing over high obstacles,
leaping over large gaps, crawling beneath low barriers, squeezing through thin
slits, and running. We distill these skills into a single vision-based parkour
policy and transfer it to a quadrupedal robot using its egocentric depth
camera. We demonstrate that our system can empower two different low-cost
robots to autonomously select and execute appropriate parkour skills to
traverse challenging real-world environments. | Ziwen Zhuang, Zipeng Fu, Jianren Wang, Christopher Atkeson, Soeren Schwertfeger, Chelsea Finn, Hang Zhao | 2023-09-11T17:59:17 | http://arxiv.org/abs/2309.05665v2 | # Robot Parkour Learning
###### Abstract
Parkour is a grand challenge for legged locomotion that requires robots to overcome various obstacles rapidly in complex environments. Existing methods can generate either diverse but blind locomotion skills or vision-based but specialized skills by using reference animal data or complex rewards. However, _autonomous_ parkour requires robots to learn generalizable skills that are both vision-based and diverse to perceive and react to various scenarios. In this work, we propose a system for learning a single end-to-end vision-based parkour policy of diverse parkour skills using a simple reward without any reference motion data. We develop a reinforcement learning method inspired by direct collocation to generate parkour skills, including climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezing through thin slits, and running. We distill these skills into a single vision-based parkour policy and transfer it to a quadrupedal robot using its egocentric depth camera. We demonstrate that our system can empower two different low-cost robots to autonomously select and execute appropriate parkour skills to traverse challenging real-world environments.
Keywords:Agile Locomotion, Visuomotor Control, Sim-to-Real Transfer
## 1 Introduction
Humans and animals possess amazing athletic intelligence. Parkour is an exemplar of the athletic intelligence of many biological beings capable of moving swiftly and overcoming various obstacles in complex environments by running, climbing, and jumping [1]. Such agile and dynamic movements require real-time visual perception and memorization of surrounding environments [2; 3], tight
Figure 1: We present a framework for learning parkour skills on low-cost robots. Our end-to-end vision-based parkour learning system enable the robot to climb high obstacles, leap over large gaps, crawl beneath low barriers, squeeze through thin slits and run. Videos are on the project website.
coupling of perception and action [4; 5], and powerful limbs to negotiate barriers [6]. One of the grand challenges of robot locomotion is building autonomous parkour systems.
Boston Dynamics Atlas robots [7] have demonstrated stunning parkour skills. However, the massive engineering efforts needed for modeling the robot and its surrounding environments for predictive control and the high hardware cost prevent people from reproducing parkour behaviors given a reasonable budget. Recently, learning-based methods have shown robust performance on walking [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 12; 21; 22; 23; 24; 25; 26; 27], climbing stairs [20; 28; 29; 30; 31; 32; 33], mimicking animals [34; 35; 36; 37; 38; 39] and legged mobile manipulation [40; 41; 42] by learning a policy in simulation and transferring it to the real world while avoiding much costly engineering and design needed for robot-specific modeling. Can we leverage learning-based methods for robot parkour but only using low-cost hardware?
There are several challenges for robot parkour learning. First, learning diverse parkour skills (e.g., running, climbing, leaping, crawling, and squeezing through) is challenging. Existing reinforcement learning works craft complex reward functions of many terms to elicit desirable behaviors of legged robots. Often each behavior requires manual tuning of the reward terms and hyper-parameters; thus these works are not scalable enough for principled generation of a wide range of agile parkour skills. In contrast, learning by directly imitating animals' motion capture data can circumvent tedious reward design and tuning [34, 43], but the lack of egocentric vision data and diverse animal MoCap skills prevents the robots from learning diverse agile skills and autonomously selecting skills by perceiving environment conditions. Second, obstacles can be challenging for low-cost robots of small sizes, as illustrated in Figure 2. Third, beyond the challenge of learning diverse skills, visual perception is dynamical and laggy during high-speed locomotion. For example, when a robot moves at 1m/s, a short 0.2 second of signal communication delay will cause a perception discrepancy of 0.2m (7.9 inches). Existing learning-based methods have not demonstrated effective high-speed agile locomotion. Lastly, parkour drives the electric motors to their maximum capacity, so proactive measures to mitigate potential damage to the motors must be included in the system.
This paper introduces a robot parkour learning system for low-cost quadrupedal robots that can perform various parkour skills, such as climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezing through thin slits, and running. Our reinforcement learning method is inspired by direct collocation and consists of two simulated training stages: RL pre-training with soft dynamics constraints and RL fine-tuning with hard dynamics constraints. In the RL pre-training stage, we allow robots to penetrate obstacles using an automatic curriculum that enforces soft dynamics constraints. This encourages robots to gradually learn to overcome these obstacles while minimizing penetrations. In the RL fine-tuning stage, we enforce all dynamics constraints and fine-tune the behaviors learned in the pre-training stage with realistic dynamics. In both stages, we only use a simple reward function that motivates robots to move forward while conserving mechanical energy. After each individual parkour skill is learned, we use DAgger [44; 45] to distill them into a single vision-based parkour policy that can be deployed to a legged robot using only onboard perception and computation power.
The main contributions of this paper include:
* **an open-source system for robot parkour learning**, offering a platform for researchers to train and deploy policies for agile locomotion;
* **a two-stage RL method** for overcoming difficult exploration problems, involving a pre-training stage with soft dynamics constraints and a fine-tuning stage with hard dynamics constraints;
Figure 2: We illustrate the challenging obstacles that our system can solve, including climbing high obstacles of 0.40m (1.53x robot height), leap over large gaps of 0.60m (1.5x robot length), crawling beneath low barriers of 0.2m (0.76x robot height), squeezing through thin slits of 0.28m by tilting (less than the robot width).
* **extensive experiments in simulation and the real world** showing that our parkour policy enables low-cost quadrupedal robots to autonomously select and execute appropriate parkour skills to traverse challenging environments in the open world using only onboard computation, onboard visual sensing and onboard power, including climbing high obstacles of 0.40m (1.53x robot height), leap over large gaps of 0.60m (1.5x robot length), crawling beneath low barriers of 0.2m (0.76x robot height), squeezing through thin slits of 0.28m by tilting (less than the robot width), and running;
* **generalization to different robots**, where we demonstrate that our system with the same training pipeline can power two different robots, A1 and Go1.
## 2 Related Work
**Agile Locomotion.** Model-based control has achieved much success in agile locomotion, from MIT Cheetah robots and A1 robots jumping over or onto obstacles of various heights [46; 47; 48], ETH StarIETH robots jumping vertically [49], CMU Unified Snake robots climbing trees [50], X-RHex robots self-righting using tails [51], ANYmal ALMA robots opening doors [52], ATRIAS robots walking over stepping stones [53; 54], Marc Raibert's One-Legged Hopping Machine [55], and Boston Dynamics Atlas' parkour skills [7]. Recently, learning-based methods have also demonstrated various agile locomotion capabilities, including high-speed running [56; 16; 57; 35], resetting to the standing pose from random states [11; 38; 15; 58], jumping [59; 60; 61], climbing stairs [20; 10; 28; 29; 30; 32; 33], climbing over obstacles [62], walking on stepping stones [29], back-flipping [63], quadrupedal standing up on rear legs [43], opening doors [40; 64; 65; 66], moving with damaged parts [67], catching flying objects [68], balancing using a tail [69], playing football/soccer [70; 71; 72; 73], weaving through poles [74] and climbing ramps [74]. Most of these skills are blind or rely on state estimation, and specialized methods are designed for these individual skills. In contrast, we build a system for learning a single end-to-end vision-based parkour policy for various parkour skills.
**Vision-Based Locomotion.** Classical modular methods rely on decoupled visual perception and control pipelines, where the elevation maps [75; 76; 77; 78; 79; 80; 81], traversability maps [82; 83; 84; 85], or state estimators [86; 87; 88; 89; 90; 91; 92; 93; 94; 95] are constructed as intermediate representations for downstream foothball planning, path planning and control [96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108]. Recently, end-to-end learning-based methods have also incorporated visual information into locomotion, where visual perception is performed using depth sensing [29; 61; 31], elevation maps [28; 109; 110; 111; 112], lidar scans [113], RGB images [32], event cameras [68] or learned neural spaces [30; 33], but none have demonstrated effective high-speed agile locomotion.
## 3 Robot Parkour Learning Systems
Our goal is to build an end-to-end parkour system that directly uses raw onboard depth sensing and proprioception to control every joint of a low-cost robot to perform various agile parkour skills, such as climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezing through thin slits, and running. Unlike prior work where different methods and training schemes are used for different locomotion skills, we aim to generate these five parkour skills automatically and systematically. To achieve this, we develop a two-stage reinforcement learning method that is
Figure 3: Soft dynamics constraints and hard dynamics constraints for each skill. Given soft dynamics constraints, the obstacles are penetrable.
inspired by direct collocation to learn these parkour skills under the same framework. In the RL pre-training stage, we allow robots to penetrate obstacles using an automatic curriculum that enforces soft dynamics constraints. We encourage robots to gradually learn to overcome these obstacles while minimizing penetrations and mechanical energy. In the RL fine-tuning stage, we fine-tune the pre-trained behaviors with realistic dynamics. In both stages, we only use a simple reward function that motivates robots to move forward while conserving mechanical energy. After each individual parkour skill is learned, we use DAgger [44, 45] to distill them into a single vision-based parkour policy that can be deployed. For robust sim-to-real deployment on a low-cost robot, we employ several pre-processing techniques for the depth images, calibrate onboard visual delays, and enforce proactive measures for motor safety.
### Parkour Skills Learning via Two-Stage RL
Since depth images are costly to render, and directly training RL on visual data is not always stable, we use privileged visual information about the environments to help RL to generate specialized parkour skills in simulation. The privileged visual information includes the distance from the robot's current position to the obstacle in front of the robot, the height of the obstacle, the width of the obstacle, and a 4-dimensional one-hot category representing the four types of obstacles. We formulate each specialized skill policy as a gated recurrent neural network (GRU [114]). The inputs to a policy other than the recurrent latent state are proprioception \(s_{t}^{\text{proprio}}\in\mathbb{R}^{29}\) (roll, pitch, base angular velocities, positions and velocities of joints), last action \(a_{t-1}\in\mathbb{R}^{12}\), the privileged visual information \(e_{t}^{\text{vis}}\), and the privileged physics information \(e_{t}^{\text{phy}}\). We use a similar approach to prior work [8, 10] to sample physics properties like terrain friction, center of mass of the robot base, motor strength, etc., to enable domain adaptation from simulation to the real world. The policy outputs the target joint positions \(a_{t}\in\mathbb{R}^{12}\).
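A specialized skill policy of this kind can be sketched as a small recurrent network. The input and output dimensions below follow the text, while the hidden size, the MLP head, and the final tanh squashing are assumptions of the sketch (the tanh output layer is only mentioned later for the distillation loss).

```python
import torch
import torch.nn as nn

class SpecializedSkillPolicy(nn.Module):
    """GRU policy sketch: 29-D proprioception + 12-D last action + privileged info
    in, 12 target joint positions out; sizes other than these are placeholders."""
    def __init__(self, vis_dim: int, phy_dim: int, hidden: int = 256):
        super().__init__()
        in_dim = 29 + 12 + vis_dim + phy_dim
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ELU(),
                                  nn.Linear(128, 12), nn.Tanh())

    def forward(self, proprio, last_action, e_vis, e_phy, h=None):
        x = torch.cat([proprio, last_action, e_vis, e_phy], dim=-1).unsqueeze(1)
        out, h = self.gru(x, h)                 # keep h as the recurrent latent state
        return self.head(out.squeeze(1)), h     # 12 target joint positions
```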
We train all the specialized skill policies \(\pi_{\text{climb}},\pi_{\text{leap}},\pi_{\text{crawl}},\pi_{\text{tilt}},\pi_{\text{run}}\) separately on corresponding terrains shown in Figure 3 using the same reward structure. We use the formulation of minimizing mechanical energy in [35] to derive a general skill reward \(r_{\text{skill}}\) suitable for generating all skills with natural motions, which only consists of three parts, a forward reward \(r_{\text{forward}}\), an energy reward \(r_{\text{energy}}\) and an alive bonus \(r_{\text{alive}}\):
\[r_{\text{skill}}=r_{\text{forward}}+r_{\text{energy}}+r_{\text{alive}},\]
where
\[r_{\text{forward}}=-\alpha_{1}*|v_{x}-v_{x}^{\text{target}}|-\alpha_{2}*|v_{y}|^{2}+\alpha_{3}*e^{-|\omega_{\text{yaw}}|},\] \[r_{\text{energy}}=-\alpha_{4}*\sum_{j\in\text{joints}}|\tau_{j}\dot{q}_{j}|^{2},\quad r_{\text{alive}}=2.\]
Measured at every time step, \(v_{x}\) is the forward base linear velocity, \(v_{x}^{\text{target}}\) is the target speed, \(v_{y}\) is the lateral base linear velocity, \(\omega_{\text{yaw}}\) is the base angular yaw velocity, \(\tau_{j}\) is the torque at joint \(j\), \(\dot{q}_{j}\) is the joint velocity at joint \(j\), and \(\alpha\) are hyperparameters. We set the target speed for all skills to around 1 m/s. We use the second power of motor power at each joint to reduce both the average and the variance of motor power across all joints. See the supplementary for all hyperparameters.
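For concreteness, the per-step skill reward can be written as follows; the \(\alpha\) weights and the 1 m/s target below are placeholders standing in for the hyperparameters given in the paper's supplementary material.

```python
import numpy as np

def skill_reward(vx, vy, yaw_rate, torques, joint_vels,
                 v_target=1.0, a1=1.0, a2=1.0, a3=1.0, a4=1e-4):
    """r_skill = r_forward + r_energy + r_alive (placeholder alpha weights)."""
    r_forward = -a1 * abs(vx - v_target) - a2 * vy**2 + a3 * np.exp(-abs(yaw_rate))
    r_energy = -a4 * np.sum((torques * joint_vels) ** 2)  # squared motor power per joint
    r_alive = 2.0
    return r_forward + r_energy + r_alive
```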
**RL Pre-training with Soft Dynamics Constraints.** As illustrated in Figure 2, the difficult learning environments for parkour skills prevent generic RL algorithms from effectively finding policies that can overcome these challenging obstacles. Inspired by direct collocation with soft constraints, we propose to use soft dynamics constraints to solve these difficult exploration problems. Shown in Figure 3, we set the obstacles to be penetrable so the robot can violate the physical dynamics in the simulation by directly going through
\begin{table}
\begin{tabular}{c c c c} \hline \hline Skill & Obstacle Properties & Training Ranges & Test Ranges \\ & & ([\(l_{\text{easy}},l_{\text{hard}}\)]) & ([\(l_{\text{easy}},l_{\text{hard}}\)]) \\ \hline Climb & obstacle height & [0.2, 0.45] & [0.25, 0.5] \\ Leap & gap length & [0.2, 0.8] & [0.3, 0.9] \\ Crawl & clearance & [0.32, 0.22] & [0.3, 0.2] \\ Tilt & path width & [0.32, 0.28] & [0.3, 0.26] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ranges for obstacle properties for each skill during training, measured in meters.
Figure 4: We show collision points on the robot. Collision points that penetrate obstacles are in red.
the obstacles without getting stuck near them as a result of local minima of RL training with the realistic dynamics, i.e. hard dynamics constraints. Similar to the Lagrangian formulation of direct collocation [115], we develop a penetration reward \(r_{\text{penetrate}}\) to gradually enforce the dynamics constraints and an automatic curriculum that adaptively adjusts the difficulty of obstacles. This idea has also been explored in robot manipulation [116, 117]. Shown in Figure 4, to measure the degree of dynamics constraints' violation, we sample collision points within the collision bodies of the robot in order to measure the volume and the depth of penetration. Since the hips and shoulders of the robot contain all the motors, we sample more collision points around these volumes to enforce stronger dynamics constraints, encouraging fewer collisions of these vulnerable body parts in the real world. Denote a collision point on the collision bodies as \(p\), an indicator function of whether \(p\) violates the soft dynamics constraints as \(\mathbbm{1}[p]\), and the distance of \(p\) to the penetrated obstacle surface as \(d(p)\). The volume of penetration can be approximated by the sum of \(\mathbbm{1}[p]\) over all the collision points, and the average depth of penetration can be approximated by the sum of \(d(p)\). In Figure 4, the collision points violating the soft dynamics constraints (\(\mathbbm{1}[p]=1\)) are in red, and those with \(\mathbbm{1}[p]=0\) are in green. Concretely, the penetration reward is
\[r_{\text{penetrate}}=-\sum_{p}\left(\alpha_{5}*\mathbbm{1}[p]+\alpha_{6}*d(p )\right)*v_{x},\]
where \(\alpha_{5}\) and \(\alpha_{6}\) are two fixed constants. We multiply both the penetration volume and the penetration depth with the forward base velocity \(v_{x}\) to prevent the robot from exploiting the penetration reward by sprinting through the obstacles to avoid high cumulative penalties over time. In addition, we implement an automatic curriculum that adaptively adjusts the difficulty of the obstacles after a reset based on the performance of individual robots simulated in parallel in simulation. We first calculate the performance of a robot based on its penetration reward averaged over the previous episode before the reset. If the penetration reward is over a threshold, we increase the difficulty score \(s\) of obstacles that the robot will face by one unit (0.05); if lower, then we decrease it by one unit. Every robot starts with a difficulty score 0 and the maximum difficulty score is 1. We set the obstacle property for the robot based on its difficulty score by \((1-s)*l_{\text{easy}}+s*l_{\text{hard}}\), where \(l_{\text{easy}}\) and \(l_{\text{hard}}\) are the two limits of the ranges of obstacle properties corresponding to different parkour skills (shown in Table 1). We pre-train the specialized parkour skills with soft dynamics constraints using PPO [118] with the sum of the general skill reward and the penetration reward \(r_{\text{skill}}+r_{\text{penetrate}}\).
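A sketch of the penetration penalty and the per-robot curriculum update is given below; the constants, the threshold value, and the exact difficulty bookkeeping are placeholder assumptions rather than the authors' implementation.

```python
import numpy as np

def penetration_reward(inside, depth, vx, a5=1.0, a6=1.0):
    """r_penetrate for the soft-dynamics pre-training stage.

    inside: boolean array, 1[p] for each sampled collision point
    depth:  penetration depth d(p) for each point (0 if outside the obstacle)
    Scaling by the forward velocity vx discourages sprinting through obstacles.
    """
    return -np.sum(a5 * inside.astype(float) + a6 * depth) * vx

def update_difficulty(score, avg_penetration_reward, threshold, step=0.05):
    """Automatic curriculum: raise difficulty by one unit when penetration is low."""
    score += step if avg_penetration_reward > threshold else -step
    return float(np.clip(score, 0.0, 1.0))

def obstacle_property(score, l_easy, l_hard):
    # Interpolate an obstacle property, e.g. climb height in [0.2, 0.45] m (Table 1)
    return (1.0 - score) * l_easy + score * l_hard
```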
**RL Fine-tuning with Hard Dynamics Constraints.** After the pre-training stage of RL is near convergence, we fine-tune every specialized parkour skill policy on the realistic hard dynamics constraints (shown in Figure 3); hence, no penetrations between the robots and obstacles are possible at the second stage of RL. We use PPO to fine-tune the specialized skills using only the general skill reward \(r_{\text{skill}}\). We randomly sample obstacle properties from the ranges listed in Table 1 during fine-tuning. Since the running skill is trained on terrains without obstacles, we directly train the running skill with hard dynamics constraints and skip the RL pre-training stage with soft dynamics constraints.
### Learning a Single Parkour Policy by Distillation
The learned specialized parkour skills are five policies that use both the privileged visual information \(e_{t}^{\text{vis}}\), and the privileged physics information \(e_{t}^{\text{phy}}\). However, the ground-truth privilege information is only available in the simulation but not in the real world. Furthermore, each specialized policy can only execute one skill and cannot autonomously execute and switch between different parkour skills based on visual perception of the environments. We propose to use DAgger [44, 45] to distill
Figure 5: We bridge the visual gap between simulation and real world by applying pre-processing techniques. We use depth clipping, Gaussian noise and random artifacts in simulation, and depth clipping and hole-filling, spatial and temporal filters in the real world.
a single vision-based parkour policy \(\pi_{\text{parkour}}\) using only onboard sensing from the five specialized skill policies \(\pi_{\text{climb}},\pi_{\text{leap}},\pi_{\text{crawl}},\pi_{\text{tilt}},\pi_{\text{run}}\). We randomly sample obstacle types and properties from Table 1 to form a simulation terrain consisting of 40 tracks and 20 obstacles on each track. Since we have full knowledge of the type of obstacle related to every state \(s_{t}\), we can assign the corresponding specialized skill policy \(\pi_{s_{t}}^{\text{specialized}}\) to teach the parkour policy how to act at a state. For example, we assign the climb policy \(\pi_{\text{climb}}\) to supervise the parkour policy given a high obstacle. We parameterize the policy as a GRU. The inputs except the recurrent latent state are the proprioception \(s_{t}^{\text{proprio}}\), the previous action \(a_{t-1}\) and a latent embedding of the depth image \(I_{t}^{\text{depth}}\) processed by a small CNN. The distillation objective is
\[\underset{\theta_{\text{parkour}}}{\arg\min}\operatorname{\mathbb{E}}_{s_{t},a _{t}\sim\pi_{\text{parkour}},sim}\left[D\left(\pi_{\text{parkour}}\left(s_{t}^ {\text{proprio}},a_{t-1},I_{t}^{\text{depth}}\right),\pi_{s_{t}}^{\text{ specialized}}\left(s_{t}^{\text{proprio}},a_{t-1},e_{t}^{\text{vis}},e_{t}^{\text{phy}} \right)\right)\right],\]
where \(\theta_{\text{parkour}}\) are the network parameters of the parkour policy, \(sim\) is the simulator with hard dynamics constraints, and \(D\) is the divergence function, which is a binary cross-entropy loss for policy networks with tanh as the last layer. Both policies \(\pi_{\text{parkour}}\) and \(\pi_{s_{t}}^{\text{specialized}}\) are stateful. More details of the parkour policy network are in the supplementary.
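The distillation step can be sketched as below, where the obstacle type selects the teacher and the divergence is a binary cross-entropy over tanh-squashed actions mapped to \([0,1]\); this is an illustrative reading of \(D\) and of the teacher lookup, not the released implementation.

```python
import torch
import torch.nn.functional as F

def select_teacher(obstacle_type, teachers):
    # teachers: e.g. {"climb": pi_climb, "leap": pi_leap, "crawl": pi_crawl, ...}
    return teachers[obstacle_type]

def distillation_loss(student_actions, teacher_actions):
    """Binary cross-entropy between student and teacher actions, both in (-1, 1)
    after tanh; actions are mapped to [0, 1] before the BCE (sketch only)."""
    s = (0.5 * (student_actions + 1.0)).clamp(1e-6, 1.0 - 1e-6)
    t = (0.5 * (teacher_actions + 1.0)).detach()
    return F.binary_cross_entropy(s, t)
```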
### Sim-to-Real and Deployment
Although the distillation training in Section 3.2 can bridge the sim-to-real gap in physical dynamics properties such as terrain friction and mass properties of the robot [8; 10], we still need to address the sim-to-real gap in visual appearance between the rendered depth image in simulation and the onboard depth image taken by a depth camera in the real world. Shown in Figure 5, we apply pre-processing techniques to both the raw rendered depth image and the raw real-world depth image. We apply depth clipping, pixel-level Gaussian noise, and random artifacts to the raw rendered depth image, and apply depth clipping, hole filling, spatial smoothing and temporal smoothing to the raw real-world depth image.
The depth images in both simulation and the real world have a resolution of 48 * 64. Due to the limited onboard computation power, the refresh rate of the onboard depth image is 10Hz. Our parkour policy operates at 50Hz in both simulation and the real world to enable agile locomotion skills, and asynchronously fetches the latest latent embedding of the depth image processed by a small CNN. The output actions of the policy are target joint positions which are converted to torques at around 1000Hz through a PD controller with \(K_{p}=50\) and \(K_{d}=1\). To ensure safe deployment, we apply a torque limit of 25Nm by clipping target joint positions: \(\text{clip}(q^{\text{target}},(K_{d}*\dot{q}-25)/K_{p}+q,(K_{d}*\dot{q}+25)/K_{p}+q)\).
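The target-position clipping above corresponds to bounding the PD torque; the sketch below assumes the common convention \(\tau=K_{p}(q^{\text{target}}-q)-K_{d}\dot{q}\), which reproduces the clipping bounds stated in the text.

```python
import numpy as np

def clip_targets(q_target, q, dq, kp=50.0, kd=1.0, tau_max=25.0):
    """Clip target joint positions so that tau = kp*(q_target - q) - kd*dq stays
    within +/- tau_max Nm (sign convention of the onboard PD loop is assumed)."""
    low = (kd * dq - tau_max) / kp + q
    high = (kd * dq + tau_max) / kp + q
    return np.clip(q_target, low, high)
```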
## 4 Experimental Results
**Robot and Simulation Setup.** We use IsaacGym [119] as the simulator to train all the policies. To train the specialized parkour skills, we construct large simulated environments consisting of 40 tracks and 20 obstacles on each track. The obstacles in each track have linearly increasing difficulties based on the obstacle property ranges in Table 1. We use a Unitree A1 and a Unitree Go1 that are equipped with Nvidia Jetson NX for onboard computation and Intel RealSense D435 for onboard visual sensing. More details are in the supplementary.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c|}{Success Rate (\%) \(\uparrow\)} & \multicolumn{5}{c}{Average Distance (m) \(\uparrow\)} \\ & Climb & Leap & Crawl & Tilt & Run & Climb & Leap & Crawl & Tilt & Run \\ \hline Blind & 0 & 0 & 13 & 0 & 100 & 1.53 & 1.86 & 2.01 & 1.62 & 3.6 \\ MLP & 0 & 1 & 63 & 43 & 100 & 1.59 & 1.74 & 3.27 & 2.31 & 3.6 \\ No Distill & 0 & 0 & 73 & 0 & 100 & 1.57 & 1.75 & 2.76 & 1.86 & 3.6 \\ RMA [8] & - & - & - & **74** & - & - & - & - & **2.7** & - \\ Ours (parkour policy) & **86** & **80** & **100** & 73 & 100 & **2.37** & **3.05** & **3.6** & 2.68 & 3.6 \\ \hline Oracles w/o Soft Dyn & 0 & 0 & 93 & 86 & 100 & 1.54 & 1.73 & 3.58 & 1.73 & 3.6 \\ Oracles & 95 & 82 & 100 & 100 & 100 & 3.60 & 3.59 & 3.6 & 2.78 & 3.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: We test our method against several baselines and ablations in the simulation with a max distance of 3.6m. We measure the success rates and average distances of every skill averaged across 100 trials and 3 random seeds. Our parkour policy shows the best performance using only sensors that are available in the real world. We evaluate on the test environments with obstacle properties that are more difficult than those of the training environments shown in Table 1.
**Baselines and Ablations.** We compare our parkour policy with several baselines and ablations. The baselines include **Blind**, **RND**[120], **MLP** and **RMA**[8]. The ablations include **No Distill**, **Oracles w/o Soft Dyn**. We also include **Oracles**, specialized parkour skills conditioned on privileged information in simulation, for the completeness of the comparisons.
* **Blind**: a blind parkour policy baseline distilled from the specialized skills, implemented by setting depth images \(I^{\text{depth}}\) as zeros.
* **RND**: a RL exploration baseline method for training specialized skills with bonus rewards based on forward prediction errors. We train it without our RL pre-training on soft dynamics constraints.
* **MLP**: a MLP parkour policy baseline distilled from the specialized skills. Instead of using a GRU, it uses only the depth image, proprioception and previous action at the current time step without any memory to output actions.
* **RMA**: a domain adaptation baseline that distills a parkour policy on a latent space of environment extrinsics instead of the action space.
* **No Distill**: an ablation training a vision-based parkour policy with GRU directly using PPO with our two-stage RL method but skipping the distillation stage.
* **Oracles w/o Soft Dyn**: an ablation training specialized skill policies using privileged information directly with hard dynamics constraints.
* **Oracles** (w/ Soft Dyn): our specialized skill policies using privileged information trained with our two-stage RL approach.
### Simulation Experiments
**Vision is crucial for learning parkour.** We compare the Blind baseline with our approach. Shown in Table 2, without depth sensing and relying only on proprioception, the distilled blind policy cannot complete any climbing, leaping or tilting trials and can only achieve a 13% success rate on crawling. This is expected, as vision enables sensing of the obstacle properties and prepares the robot to execute agile skills while approaching the obstacles.
**RL pre-training with soft dynamics constraints enables parkour skills' learning.** We compare the RND, Oracles w/o Soft Dyn and ours (Oracles w/ Soft Dyn), all trained using privileged information without the distillation stage. We aim to verify that our method of RL pre-training with soft dynamics constraints can perform efficient exploration. In Figure 7, we measure the average success rates of each method averaged over 100 trials across all the parkour skills that require exploration including climbing, leaping, crawling and tilting. We trained using three random seeds for each method to measure the standard deviations. Our method using RL pre-training with soft dynamics constraints can achieve much faster learning progress and a better final success rate around 95%. We notice that RND struggles to learn meaningful behaviors with scenarios that require fine-grained maneuvers such as crawling through a thin slit, due to its tendency to reach states where future states are difficult to predict. Both RND and Oracles w/o
Figure 6: Real-world indoor quantitative experiments. Our parkour policy can achieve the best performance, compared with a blind policy and built-in MPC controllers. We control the MPC in A1 special mode by teleoperating the robot to lower or tilt the body during crawling and tilting, respectively.
Figure 7: Comparison of specialized oracles trained with soft dynamics constraints with baselines averaged across every skill and three trials.
Soft Dyn cannot make any learning progress on climbing and leaping, the two most difficult parkour skills. More plots showing the success rates for each skill separately are in the supplementary.
**Recurrent networks enable parkour skills requiring memories.** We compare the MLP baseline with ours using a GRU to parameterize the vision-based parkour policy. Shown in Table 2, the MLP baseline cannot learn the climbing and leaping skills and achieves much lower performance on crawling and tilting. Both climbing and leaping require the robot to hold a short-term memory of the past visual perceptions. For example, during climbing when the robot has its front legs on the obstacles, it still needs memory about the spatial dimensions of the obstacle captured in past depth images to control the rear legs to complete the climbing.
**Distillation is effective for Sim2Real.** We compare the RMA baseline and the No Distill baseline with ours. Although RMA can achieve similar performance on one skill that it is trained on, i.e. tilting, RMA fixes the network parameters of the MLP which processes the latent embeddings of the backbone GRU, and directly copies them from the specialized skill to the distilled policy. Consequently, it cannot distill multiple specialized skill policies, which have different MLP parameters, into one parkour policy. No Distill cannot learn climbing, leaping and tilting due to the complexity of training directly from visual observations without privileged information.
### Real-World Experiments
**Emergent Re-trying Behaviors during Climbing.** Our parkour policy has emergent re-trying behaviors in the real world. When trying to overcome a high obstacle but failing at the first trial, the robot will push itself away from the obstacle to ensure adequate run-up space for subsequent attempts. Although we do not program such re-trying behaviors, they nicely emerge out of learning with simple rewards. This behavior is also observed in simulation.
**Indoor Quantitative Experiments.** Shown in Figure 1, we test our parkour policy in a constructed parkour terrain consisting of crawling, climbing, and leaping in sequence. We also conduct quantitative indoor experiments in the real world on the A1 robot. In Figure 6, we compare our vision-based parkour policy with Blind, MPC (A1 default controller) and MPC (A1 special mode). We show the success rates of each method in every skill under varying difficulties averaged over 10 trials each. We change the skill difficulty by modifying the key obstacle properties, such as obstacle heights for climbing and gap length for leaping. In A1 special mode, we directly teleoperate the robot to change its state, such as lowering the body during crawling. We observe that our parkour policy can enable the robot to climb obstacles as high as 0.40m (1.53x robot height) with an 80% success rate, to leap over gaps as large as 0.60m (1.5x robot length) with an 80% success rate, to crawl beneath barriers as low as 0.2m (0.76x robot height) with a 90% success rate, and to squeeze through thin slits of 0.28m by tilting (less than the robot width). Our method has the best performance across all skills. Please refer to our project website for indoor experiment videos.
**Outdoor Experiments.** Shown in Figure 1, we test our robot in various outdoor environments. We observe that the robot controlled by our parkour policy can complete a wide range of agile parkour skills. It can leap over two disconnected stone stools by the river with a 0.6m wide gap. It can continuously climb several stairs, each 0.42m high. It can crawl beneath a camping cart as well as handle slippery grass terrain. Please refer to our project website for outdoor experiment videos.
## 5 Conclusion, Limitations and Future Directions
We present a parkour learning system for low-cost robots. We propose a two-stage reinforcement learning method for overcoming difficult exploration problems for learning parkour skills. We also extensively test our system in both simulation and the real world and show that our system has robust performance for various challenging parkour skills in challenging indoor and outdoor environments. However, the current system requires the simulation environments to be manually constructed. As a result, new skills can only be learned when new environments with different obstacles and appearances are added to the simulation. This reduces how automatically new skills can be learned. In the future, we hope to leverage recent advances in 3D vision and graphics to construct diverse simulation environments automatically from large-scale real-world data. We will also investigate how we can train agile locomotion skills directly from RGB that contains semantic information instead of depth images.
#### Acknowledgments
We would like to thank Wenxuan Zhou and her Emergent Extrinsic Dexterity project [116] for inspiring our training pipeline allowing penetration. We would also like to thank Xiaozhu Lin, Wenqing Jiang, Fan Nie, Ruihan Yang, Xuxin Chen, Tony Z. Zhao and Unitree Robotics (Yunguo Cui) for their help in the real-world experiments. Zipeng Fu is supported by Stanford Graduate Fellowship (Pierre and Christine Lamond Fellowship). This project is supported by Shanghai Qi Zhi Institute and ONR grant N00014-20-1-2675.
| ロボットによる腿の運動を克服する壮大な挑戦であり、複雑な環境において様々な障害物に素早く対処する必要がある。現行の方法では、参考動物データや複雑な報酬を用いて、多様な運動スキルを生成することができるか、視覚に基づくしかし専門性の高いスキルを生成することができる。しかし、自律型のパークourには、ロボットが視覚に基づいた多様なスキルを学習する必要がある。本研究では、参考モーションデータなしに、単一のエンドツーエンドの視覚に基づいたパークourポリシーを学習するためのシステムを提案する。直接共置をインスパイアされた強化学習方法を用いて、高障害物を越える、大きなギャップを飛び越える、低の壁の下に這う、細いスリットをスルーする、ランニングなどのパークourスキルを生成する。これらのスキルを1つの視覚に基づいたパークourポリシーに凝縮し、 |
2309.11610 | Hand Gesture Recognition with Two Stage Approach Using Transfer Learning
and Deep Ensemble Learning | Human-Computer Interaction (HCI) has been the subject of research for many
years, and recent studies have focused on improving its performance through
various techniques. In the past decade, deep learning studies have shown high
performance in various research areas, leading researchers to explore their
application to HCI. Convolutional neural networks can be used to recognize hand
gestures from images using deep architectures. In this study, we evaluated
pre-trained high-performance deep architectures on the HG14 dataset, which
consists of 14 different hand gesture classes. Among 22 different models,
versions of the VGGNet and MobileNet models attained the highest accuracy
rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of
94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models
achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand
gesture recognition on the dataset using an ensemble learning technique, which
combined the four most successful models. By utilizing these models as base
learners and applying the Dirichlet ensemble technique, we achieved an accuracy
rate of 98.88%. These results demonstrate the effectiveness of the deep
ensemble learning technique for HCI and its potential applications in areas
such as augmented reality, virtual reality, and game technologies. | Serkan Savaş, Atilla Ergüzen | 2023-09-20T19:53:05 | http://arxiv.org/abs/2309.11610v1 | # Hand Gesture Recognition with Two Stage Approach Using Transfer Learning and Deep Ensemble Learning
###### Abstract
Human-Computer Interaction (HCI) has been the subject of research for many years, and recent studies have focused on improving its performance through various techniques. In the past decade, deep learning studies have shown high performance in various research areas, leading researchers to explore their application to HCI. Convolutional neural networks can be used to recognize hand gestures from images using deep architectures. In this study, we evaluated pre-trained high-performance deep architectures on the HG14 dataset, which consists of 14 different hand gesture classes. Among 22 different models, versions of the VGGNet and MobileNet models attained the highest accuracy rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of 94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand gesture recognition on the dataset using an ensemble learning technique, which combined the four most successful models. By utilizing these models as base learners and applying the Dirichlet ensemble technique, we achieved an accuracy rate of 98.88%. These results demonstrate the effectiveness of the deep ensemble learning technique for HCI and its potential applications in areas such as augmented reality, virtual reality, and game technologies.
Hand gesture recognition, ensemble learning, deep learning, transfer learning, human computer interaction
## I Introduction
Recent research has led to the development of interfaces and applications to provide more effective communication between users and computers. These interfaces and applications, referred to as Human-Computer Interaction (HCI), incorporate both human and computer factors, drawing from various fields such as information technologies, software, design, human psychology, and human behavior. Designers work on new technology and interface development while researchers investigate new techniques for interaction, usability, and efficiency of the technologies used.
With the advancements in technology, new interaction methods and technologies have emerged in the field of HCI. From simple office programs, dialog boxes, and error messages in the 1980s, HCI studies have expanded with the development of the internet, mobile and portable devices, touch screens, image, motion, and sensation sensors. Today, the most widely studied areas in HCI are mobile devices, touch screens, voice command processing, human motion calculation, image processing, sensors, and interactive systems developed using wearable technologies [1].
Recently, machine learning (ML) studies for computer vision have focused on human gesture recognition and hand gestures (HG). The purpose of these studies is to provide control systems to enhance HCI [2]. To achieve this purpose, identifying hand movements is important for controlling hardware or software [3]. Especially in the last two decades, the application areas using hand recognition systems have increased and become widespread. These systems, which are used in different applications such as augmented reality (AR), virtual reality (VR), extended reality (XR), computer games, internet of things, sign language recognition, robotic systems, etc., [3, 4] have even become the technological themes of science fiction and futuristic movies also.
Interfaces developed in the field of HCI are widely used in industries such as military, tourism, education, communication, health, robotics, entertainment, and others. Interactive and user-controlled educational materials are designed using new technologies in education. In the health sector, systems have been developed that allow users to monitor daily pulse, blood pressure, heart rate, sugar, etc., and systems that enable operations to be performed using remote and robotic systems. In the entertainment industry, digital games and virtual environments that recognize user movements are designed. With advancements in the industrial field, all processes can be monitored and controlled in digital environments. In the military field, simulations are used for armed training, defence, and attack systems. In the tourism industry, museum tours are conducted in virtual environments. In the field of communication, sign language recognition and language translation systems bridge the gap between people. In robotic areas, many systems are controlled by users with motion and voice control. Interfaces developed in the field of HCI are increasingly being used effectively in all areas of our lives [1].
A HG identification system can be created using sensors to recognize hand movements, or markers can be used in this system. This system is called sensor-based, and specialized hardware as gloves are often used, which can be a disadvantage due to the expensive set-up costs. Another methodology for creating HG identification systems is using machine vision to detect hand movement. In these vision-based systems, different information like edges, colour, and hand shapes, etc., is
extracted from images or videos using algorithms [6]. Due to recent advances in ML and deep learning (DL) studies, vision-based systems are being widely used by researchers.
In this study, a two-stage approach is proposed to achieve more accurate HCI rates. In the first stage, fine-tuning was performed to train deep architectures on the dataset determined by the transfer learning method. High-performance pre-trained models were included in the study, and their performances were compared. The most successful models were determined, and in the second stage, they were brought together with the ensemble learning method, and the results were evaluated. The structure of the study is as follows: In the second section, related works are explained, and in the third section, the materials and methodology used in the study are explained. In the fourth section, the results obtained from the tests are explained. Finally, in the fifth and last section, the study is concluded with discussion.
## II Related Works
In recent years, there has been an increase in studies on hand gestures (HG) in response to the growing popularity of applications such as three-dimensional (3D), augmented reality (AR), virtual reality (VR), and extended reality (XR) in technology. In particular, the Meta universe, formed by the merger of Facebook and its sub-brands under the name Meta, has accelerated human-computer interaction (HCI) studies in this area.
Several studies have been conducted on AR applications using HG. Chun and Hollerer [7] developed a marker-based AR application that enabled users to interact with objects on their mobile phone screens. Seo and Lee [8] improved the feel and interaction in AR-based environments by using a depth camera. Hurst and van Wezel [9] used colored markers on fingertips for finger tracking and gesture-based interaction in AR applications on mobile phones, allowing for translation, scaling, and rotation operations on objects. Akman et al. [10] developed a HG recognition system with multiple hand detection and tracking methods using video glasses with two cameras. Similarly, Ng et al. [11] used a stereo camera to obtain hand depth information and played with virtual objects using the extended hand.
Other studies on AR applications using HG include Asad and Slabaugh's [12] study on hand recognition and displaying a virtual object on the recognized hand using a depth camera, and AlAgha and Rasheed's [13] examination of three different techniques that interact with virtual 3D content displayed in AR environments using the NyARToolkit library and Kinect sensor. Adeen et al. [14] presented an animated and intuitive solution to interact with the image projected on flat surfaces, using hand gestures, finger tracking, and hand tracking. Bikos et al. [15] developed an AR chess game that used the thumb and index finger to interact with the virtually developed content. Chang et al. [16] conducted a study on surface drawing and aerial drawing methods, which allow motion input directly on the surfaces of real-world objects and the user's fingertip, respectively, to project onto the real-world model when released, using the HoloLens supported AR platform.
Moreover, virtual environments created using AR technology provide HCI tools, such as applications on tablets or mobile phones, for users/employees to interact with machines, control operating systems, and follow maintenance and assembly processes [17]. Guler and Yucedag [18] developed an industrial maintenance and repair application with AR for computer numerical control (CNC) lathe looms. Their developed model was used in the education system, and an increase in student motivation was observed. Another study of these researchers was on the skeletal system with AR using animated 3D models, menus, voice, and text [19].
Tsai et al. [20] developed a multi-template AR system consisting of three units, namely, a multi-template AR, an online 3D model assembly, and an HG interaction, for an interactive assembly teaching aid. Fucentese and Koch [21] developed an AR-based surgical guidance system to measure the effect of prosthesis alignment and positioning on soft tissue balance during surgery. Furthermore, Guler [22] examined the use of AR training applications for aircraft turbo engine training in the aviation sector.
While these developments regarding AR have been taking place, DL studies have recently begun to be carried out in this field. Different models were used for different purposes in these studies. Nunez Fernandez & Kwolek [23] used the CNN algorithm for the recognition of hands from images in their study. The CNN algorithm is also used for skin detection and hand area position detection [24], 3D hand recognition using hand-skeleton data [25], hand position prediction and hand sign recognition [26], and direct HG recognition [27]. In addition, a 2-level CNN is also used for static HG recognition [27]. The CNN algorithm is also used as 3D-CNN and Recurrent 3D-CNN to recognize the HG of vehicle drivers and for the detection and classification of dynamic HG [29, 30]. A Recurrent 3D-CNN is also used for interaction with wearable devices such as helmets and glasses in another study [31]. In addition to these studies, Deep CNNs have been used for HG recognition from images [32] or from Doppler signals [33]. Besides, motion recognition on multimodal data including image and depth data and skeleton properties was carried out with deep dynamic neural networks [34]. The CNN algorithm is also used as a Region-Based CNN for two types of HG recognition in open and closed positions [35] and as Faster R-CNN for object-detector-based intelligent HG recognition for collaborative robots [36]. In some other studies, hybrid methods were used to increase the performance of the algorithms. CNN + LSTM is used for mixed HG recognition consisting of gestures created with a leap motion controller [37], and a long-term recurrent convolution network is used for a vision-based HG recognition system for smart vehicles [38]. While a deep belief network and CNN were combined in a study for sign language recognition using the Kinect sensor [39], a capsule network + CNN is used for hand gestures from a dataset consisting of 14 different classes in another study [40]. Besides, Koller et al. [41] used Expectation Maximization (EM) & CNN for multimodal sign language recognition, and Cote-Allard et al. [42] used the Continuous Wavelet Transform and CNN for Electromyography (EMG)-based motion recognition.
## III Material and Methodology
The study used the CNN algorithm, a deep neural network algorithm, for image processing on the HG14 dataset1 published on the Kaggle platform. Twenty-two pre-trained, high-performance architectures based on this algorithm were included and fine-tuned by adapting the classification layers of the models to the problem at hand. After the training, validation, and testing stages, the weights of the models were saved. Deep ensemble learning was then applied to combine the most successful models, and the results were compared.
Footnote 1: [https://www.kaggle.com/datasets/gulerosman/hg14-handgesture14-dataset](https://www.kaggle.com/datasets/gulerosman/hg14-handgesture14-dataset)
The HG14 dataset contains 14 different hand gestures as RGB images with a resolution of 256x256, intended for hand interaction and application control in AR applications. There are 1000 images per class and 14000 images in total, captured from the hands of 17 different people against different backgrounds. The dataset was created from a first-person view and does not include RGB-D or depth data. In addition, it was captured directly with an ordinary camera rather than a special camera, infrared device, or other sensors [42]. Fig. 1 presents sample images of each class in the dataset.
The dataset used in the study is divided into three subsets for the training, validation, and testing stages. First, 10% of the 14000 images were randomly selected from each class, reserving a total of 1400 images for testing. Then, 20% of the remaining 12600 images (2520 images) were randomly set aside for validation. The remaining 10080 images were used for training, as sketched below.
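The following sketch shows one way such a stratified 10%/20% split could be reproduced; the function name, the use of scikit-learn, and the random seed are assumptions for illustration, not details reported in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical inputs: image_paths and labels cover all 14,000 HG14 images,
# with 1,000 images per class (labels 0-13).
def split_hg14(image_paths, labels, seed=42):
    # 10% per class for the test set (1,400 images), stratified by class.
    trainval_x, test_x, trainval_y, test_y = train_test_split(
        image_paths, labels, test_size=0.10, stratify=labels, random_state=seed)
    # 20% of the remaining 12,600 images (2,520) for validation.
    train_x, val_x, train_y, val_y = train_test_split(
        trainval_x, trainval_y, test_size=0.20, stratify=trainval_y, random_state=seed)
    # Leaves 10,080 images for training.
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```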
The main purpose of DL algorithms, which have developed rapidly over the last 10 years and are now used in almost all fields in a multidisciplinary way, is to produce generalizable solutions. The most important advantage of deep learning over classical machine learning is that a single model can be adapted to different problems instead of requiring problem-specific solutions. In addition, models that have achieved high performance in different competitions, especially the ImageNet competition in recent years, are made available through the Keras library2. Researchers can therefore use these models in their own studies by applying techniques called transfer learning and fine-tuning.
Footnote 2: [https://keras.io/api/applications/](https://keras.io/api/applications/)
Based on this, 22 models that were successful in the ImageNet competition and are frequently used in the literature were employed with transfer learning in this study. In this method, the ImageNet weights of the models are downloaded and then trained on the HG14 dataset. Since the HG14 dataset contains 14 classes, the classification layer after the feature extraction layers is fine-tuned so that the number of output neurons is reduced to 14. The other fine-tuning settings are as follows: the images were resized to 128x128 resolution, the batch size was set to 20, and the number of epochs to 50. A DropOut layer with a rate of 0.5 was used in the classification head, after which the number of neurons was reduced to 512 and then 14, with ReLU and Softmax as the respective activation functions.
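As a rough illustration of this fine-tuning setup, the sketch below builds one such classification head on top of a pre-trained backbone in Keras; the choice of MobileNet as the example backbone, the pooling layer, and the optimizer are assumptions rather than details reported in the study.

```python
import tensorflow as tf

NUM_CLASSES = 14
IMG_SHAPE = (128, 128, 3)

# Pre-trained ImageNet backbone with its original classifier removed.
backbone = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=IMG_SHAPE)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),  # assumed pooling before the head
    tf.keras.layers.Dropout(0.5),              # dropout in the classification head
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_x, train_y, batch_size=20, epochs=50,
#           validation_data=(val_x, val_y))
```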
The weights obtained after the training and validation stages were saved, and the test process was carried out with these weights. From the saved test results, confusion matrices were created and the results of all runs were plotted. The results of the models were then compared, and an ensemble learning model combining the most successful models was built using the Dirichlet ensemble methodology.
Ensemble learning is the process of merging various learning algorithms to obtain their collective performance, or of enhancing existing models by combining many of them into one reliable model [43]. DL models alone perform well in the majority of applications, but there is always room to employ a collection of DL models for the same goal as an ensemble technique.
The randomized weighted ensemble used in this study is an ensemble technique that weights the prediction of each ensemble member and combines them to calculate a joint prediction (as shown in Equation 1). The weight optimization is performed with a randomized search based on the Dirichlet distribution on a validation dataset [44].
\[w_{1}\mathcal{Y}_{1}+w_{2}\mathcal{Y}_{2}+\cdots+w_{n}\mathcal{Y}_{n}=\mathcal{Y} \tag{1}\]
where \(w_{i}\) is the weight of each member, \(\mathcal{Y}_{i}\) is the output of each member, and \(\mathcal{Y}\) is the weighted-average ensemble output.
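A minimal sketch of this randomized weight search is given below; the number of sampled weight vectors and the use of validation accuracy as the selection criterion are assumptions about the procedure, not values reported in the study.

```python
import numpy as np

def dirichlet_weight_search(member_probs, val_labels, n_trials=1000, seed=0):
    """member_probs: array of shape (n_members, n_samples, n_classes) holding each
    member's predicted class probabilities on the validation set."""
    rng = np.random.default_rng(seed)
    n_members = member_probs.shape[0]
    best_w, best_acc = None, -1.0
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(n_members))             # random weights summing to 1
        ensemble = np.tensordot(w, member_probs, axes=1)  # weighted average (Eq. 1)
        acc = np.mean(ensemble.argmax(axis=1) == val_labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```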
## IV Experimental Results
The training, validation, and testing results of the models used for the first phase of the study are presented in Table I.
Among the models in the table, two model groups are superior to the others: the MobileNet and VGGNet models achieved more successful results than the other pre-trained models. The Loss value in the table is a metric that complements the accuracy ratio in evaluating model performance, measuring the inconsistency between predicted and actual values. It is therefore an important indicator for CNN models; it is non-negative, and the robustness of the model increases as the value of the loss function decreases [45].
Fig. 1: The class examples of the dataset
Of these two groups, the MobileNet models were the most successful, achieving strong results in training accuracy and loss, validation accuracy, and test accuracy and loss. One important finding is that the validation accuracy rates of almost all models are lower than their training and test rates, and validation loss rates are correspondingly high. The training and validation accuracy curves of the four models that achieved the highest accuracy in the study are shown in Fig. 2.
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
 & & \multicolumn{2}{c|}{**Train**} & \multicolumn{2}{c|}{**Validation**} & \multicolumn{2}{c|}{**Test**} \\ \hline
**Model** & **Time/step** & **Loss** & **Accuracy** & **Loss** & **Accuracy** & **Loss** & **Accuracy** \\ \hline
VGG16 & 36s 71ms & 0.2121 & **0.9841** & 5.9066 & 0.7948 & 1.3210 & **0.9464** \\ \hline
VGG19 & 36s 71ms & 0.1848 & **0.9848** & 5.4644 & 0.7956 & **0.9298** & **0.9436** \\ \hline
Xception & 34s 67ms & **0.0973** & 0.9739 & 3.7009 & 0.6369 & 1.1383 & 0.8464 \\ \hline
ResNet50 & 34s 67ms & 0.1049 & **0.9847** & 3.8690 & 0.7647 & 1.0533 & 0.9214 \\ \hline
ResNet50V2 & 31s 62ms & 0.1000 & **0.9863** & 4.2230 & 0.7841 & 1.3900 & 0.9007 \\ \hline
ResNet101 & 36s 72ms & 0.1189 & **0.9823** & 4.1808 & 0.7762 & **0.9118** & 0.9271 \\ \hline
ResNet101V2 & 33s 66ms & **0.0870** & **0.9864** & 4.1413 & 0.7825 & 1.2859 & 0.9021 \\ \hline
ResNet152 & 39s 77ms & 0.1038 & **0.9823** & 3.9453 & 0.7448 & 1.1360 & 0.9071 \\ \hline
ResNet152V2 & 36s 71ms & **0.0897** & **0.9855** & 3.9288 & 0.7476 & 1.5495 & 0.8886 \\ \hline
InceptionV3 & 38s 75ms & 0.3013 & 0.8947 & 2.1544 & 0.6012 & 1.0144 & 0.7664 \\ \hline
InceptionResNetV2 & 43s 86ms & 0.2191 & 0.9271 & 1.8657 & 0.6813 & **0.6885** & 0.8407 \\ \hline
MobileNet & 34s 67ms & **0.0622** & **0.9931** & 3.0226 & **0.8675** & **0.5633** & **0.9679** \\ \hline
MobileNetV2 & 35s 69ms & **0.0506** & **0.9935** & 3.3827 & **0.8341** & **0.7605** & **0.9443** \\ \hline
DenseNet121 & 38s 75ms & 0.1316 & 0.9661 & 2.0020 & 0.7369 & **0.7487** & 0.8850 \\ \hline
DenseNet169 & 39s 78ms & 0.1202 & 0.9719 & 2.1225 & 0.7492 & **0.5843** & 0.9086 \\ \hline
DenseNet201 & 43s 86ms & 0.1118 & 0.9757 & 2.5271 & 0.7718 & **0.5394** & 0.9321 \\ \hline
EfficientNetB0 & 56s 111ms & **0.0849** & **0.9840** & 6.4851 & **0.8599** & 1.8422 & 0.9336 \\ \hline
EfficientNetB1 & 62s 122ms & 0.1265 & **0.9890** & 3.9277 & **0.8774** & 1.5096 & 0.9371 \\ \hline
EfficientNetB2 & 58s 115ms & 0.1426 & **0.9858** & 4.7746 & **0.8333** & 1.3951 & 0.9279 \\ \hline
ConvNeXtTiny & 43s 84ms & **0.0709** & **0.9893** & 3.2226 & **0.8028** & **0.8192** & 0.9300 \\ \hline
ConvNeXtSmall & 61s 120ms & **0.0640** & **0.9897** & 3.9029 & 0.7643 & **0.9123** & 0.9214 \\ \hline
ConvNeXtBase & 70s 138ms & **0.0750** & **0.9876** & 3.2020 & 0.7881 & **0.8759** & 0.9229 \\ \hline
\end{tabular}
Confusion matrices were also obtained for the four models in order to display their per-class prediction results on the HG14 dataset. The results are shown in Figure 3.
After the four most successful models were determined, they were combined with the Dirichlet ensemble weighted-average method and evaluated on the HG14 training and test sets. To assess robustness, the tests were repeated 10 times and the average was reported. The Dirichlet ensemble weighted-average results are given in Table II.
## V Discussion and Conclusion
The study has demonstrated the superiority of the proposed approach in hand gesture identification compared to the state-of-the-art techniques. The importance of HG studies has been emphasized due to the increasing prevalence of technologies such as 3D, AR, VR, and XR. The control of hardware and software is a crucial element in HCI, with hand movements playing a significant role in control systems.
The two-stage approach of the study involved transfer learning and fine-tuning of high-performance pre-trained deep architectures on the HG14 dataset, which contains 14 different hand sign classes. The dataset was divided into three groups: training, validation, and test data. In the first stage, two model groups, MobileNet and VGGNet, were found to outperform the other pre-trained models.
In the second stage, these four models were combined using the Dirichlet ensemble method and applied to classification with the weighted-average method. The test data were used, and the tests were repeated 10 times for reliability. The proposed method achieved more successful results than both state-of-the-art studies and single transfer-learning models.
Future studies could test this approach on different HG datasets and assess the performances of models and deep ensemble learning. The successful models' weights can also be recorded and used in camera systems, game consoles, and other applications.
| Human-computer interaction (HCI) has been a subject of research for many years. In recent years, studies aimed at improving its performance with various methods have been carried out intensively. Over the past 10 years, deep learning research has achieved high performance across many research fields, and a growing number of researchers are exploring its application to HCI. Using convolutional neural networks, hand gestures can be recognized from images. In this study, pre-trained high-performance deep architectures were evaluated on the HG14 dataset, which contains 14 different hand gesture classes. Among the 22 models, versions of VGGNet and MobileNet achieved the highest accuracy; specifically, the VGG16 and VGG19 models reached accuracy rates of 94.64% and 94.36%, respectively. |
2309.17261 | Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware
Diffusion Priors | Reconstructing 3D objects from a single image guided by pretrained diffusion
models has demonstrated promising outcomes. However, due to utilizing the
case-agnostic rigid strategy, their generalization ability to arbitrary cases
and the 3D consistency of reconstruction are still poor. In this work, we
propose Consistent123, a case-aware two-stage method for highly consistent 3D
asset reconstruction from one image with both 2D and 3D diffusion priors. In
the first stage, Consistent123 utilizes only 3D structural priors for
sufficient geometry exploitation, with a CLIP-based case-aware adaptive
detection mechanism embedded within this process. In the second stage, 2D
texture priors are introduced and progressively take on a dominant guiding
role, delicately sculpting the details of the 3D model. Consistent123 aligns
more closely with the evolving trends in guidance requirements, adaptively
providing adequate 3D geometric initialization and suitable 2D texture
refinement for different objects. Consistent123 can obtain highly 3D-consistent
reconstruction and exhibits strong generalization ability across various
objects. Qualitative and quantitative experiments show that our method
significantly outperforms state-of-the-art image-to-3D methods. See
https://Consistent123.github.io for a more comprehensive exploration of our
generated 3D assets. | Yukang Lin, Haonan Han, Chaoqun Gong, Zunnan Xu, Yachao Zhang, Xiu Li | 2023-09-29T14:13:07 | http://arxiv.org/abs/2309.17261v2 | # Consistent123:
###### Abstract
Reconstructing 3D objects from a single image guided by pretrained diffusion models has demonstrated promising outcomes. However, due to utilizing the case-agnostic rigid strategy, their generalization ability to arbitrary cases and the 3D consistency of reconstruction are still poor. In this work, we propose Consistent123, a case-aware two-stage method for highly consistent 3D asset reconstruction from one image with both 2D and 3D diffusion priors. In the first stage, Consistent123 utilizes only 3D structural priors for sufficient geometry exploitation, with a CLIP-based case-aware adaptive detection mechanism embedded within this process. In the second stage, 2D texture priors are introduced and progressively take on a dominant guiding role, delicately sculpting the details of the 3D model. Consistent123 aligns more closely with the evolving trends in guidance requirements, adaptively providing adequate 3D geometric initialization and suitable 2D texture refinement for different objects. Consistent123 can obtain highly 3D-consistent reconstruction and exhibits strong generalization ability across various objects. Qualitative and quantitative experiments show that our method significantly outperforms state-of-the-art image-to-3D methods. See [https://Consistent123.github.io](https://Consistent123.github.io) for a more comprehensive exploration of our generated 3D assets.
Figure 1: **The reconstructed highly consistent 3D assets from a single image of Consistent123.** Rendered 3D models are presented by seven views (middle part) and normals (right part).
## 1 Introduction
Experienced 3D artists can craft intricate 3D models from images; however, this demands hundreds of hours of manual effort. In this study, we aim to efficiently generate a highly consistent 3D model from a single image. This endeavor promises to furnish a potent adjunct for 3D creation and offers a swift means of procuring 3D objects for the construction of virtual three-dimensional environments.
Despite decades of extensive research efforts (Mescheder et al., 2019; Park et al., 2019; Wang et al., 2018; Hanocka et al., 2020; Mildenhall et al., 2020), the task of reconstructing 3D structure and texture from a single viewpoint remains inherently challenging due to its ill-posed nature. To address this challenge, one category of approaches relies on costly 3D annotations obtained through CAD software or tailored domain-specific prior knowledge (Wang et al., 2023; Zhang et al., 2023), e.g., human and clothing templates, which contribute to consistent results while also limiting applicability to arbitrary objects. Another line of work harnesses the generalization ability of 2D generation models like CLIP (Radford et al., 2021) and Stable Diffusion (Rombach et al., 2022). However, Melas-Kyriazi et al. (2023) and Tang et al. (2023) suffer from a severe multi-face issue, that is, a face appears at many views of the 3D model. With a 3D structure prior, Liu et al. (2023) and Qian et al. (2023) avoid the multi-face issue but struggle to obtain consistent reconstructions. None of these methods takes into account the unique characteristics of the object; they apply a fixed strategy to every case. Such case-agnostic approaches have difficulty adapting their optimization strategies to arbitrary objects.
However, our objective is to establish a versatile approach applicable to a broad spectrum of objects, endowed with the capability to dynamically adapt guidance strategy according to the extent of reconstruction progress. To achieve this aim, we draw attention to two pivotal **observations**: **(1)** Across various objects, a case-aware optimization phase, driven solely by 3D structural prior in the early stage, ensures the fidelity and consistency of the eventual reconstruction. **(2)** During the reconstruction process, the initial focus lies on capturing the object's overall structure, followed by the meticulous refinement of geometric shape and texture details, as illustrated in Fig 2.
Considering these, we propose _Consistent123_, a novel approach for turning one image into a highly consistent 3D asset using case-aware 2D and 3D diffusion priors. Consistent123 proceeds in two stages. _Stage 1_: Consistent123 initializes the 3D content solely with the 3D prior, thereby mitigating any disruption from the 2D prior during structure exploitation. This process involves a case-aware boundary judgement, where we periodically render the 3D content from fixed perspectives and measure its similarity with the textual description. Once the changing rate of this similarity falls below a threshold, Consistent123 switches to stage 2. _Stage 2_: Consistent123 optimizes the 3D content with a dynamic prior, namely a combination of the 2D and 3D priors. Our rationale is to reduce the emphasis on the 3D prior over time while accentuating the 2D prior, which serves as the principal guidance for exploring texture intricacies. Consistent123 adaptively tailors a continuous optimization procedure to each input, facilitating the creation of exceptionally coherent 3D assets.
We evaluate Consistent123 on RealFusion15 (Melas-Kyriazi et al., 2023) dataset and our collected C10 dataset. Through quantitative and qualitative analysis, we demonstrate the superiority of Consistent123 when compared to state-of-the-art methods. In summary, our contributions can be summarized as follows:
Figure 2: **The observation of optimization. For each case, the top row shows the optimization process using 2D priors, and the bottom row using 3D priors.**
* We propose a case-aware image-to-3D method, **Consistent123**, which aligns more effectively with the demands of prior knowledge. It places a heightened emphasis on 3D structural guidance in the initial stage and progressively integrates 2D texture details in the subsequent stage.
* Consistent123 incorporates an adaptive detection mechanism, eliminating the necessity for manual adjustments to the 3D-to-2D prior ratio. This mechanism autonomously identifies the conclusion of 3D optimization and seamlessly transitions to a 3D-to-2D reduction strategy, improving its applicability across objects with diverse geometric and textural characteristics.
* Consistent123 demonstrates excellent 3D consistency in contrast to purely 3D, purely 2D, and 3D-2D fusion methodologies. Furthermore, our approach yields superior geometric and textural quality, concurrently addressing the challenge of multi-face problem.
## 2 Related Work
### Text-to-3D Generation
Generating 3D models is a challenging task, often hindered by the scarcity of 3D data. As an alternative, researchers have turned to 2D visual models, which are more readily available. One such approach is to use the CLIP model (Radford et al., 2021), which has a unique cross-modal matching mechanism that can align input text with rendered perspective images. Mohammad Khalid et al. (2022) directly employed CLIP to optimize the geometry and textures of meshes. Jain et al. (2022) and Wang et al. (2022) utilized the neural implicit representation, NeRF (Mildenhall et al., 2020), as the optimization target for CLIP.
Due to the promising performance of the Diffusion model in 2D image generation (Rombach et al., 2022; Ramesh et al., 2022; Wang et al., 2022; He et al., 2023), some studies have extended its application to 3D generation. Poole et al. (2023) directly used a 2D diffusion model to optimize the alignment between various rendered perspectives and text with SDS loss, thereby generating 3D objects that match the input text. Lin et al. (2023) used the two-stage optimization with diffusion model to get a higher resolution result. Seo et al. (2023) generated a 2D image as a reference and introduced a 3D prior based on the generated image. It also incorporated optimization with a prompt embedding to maintain consistency across different perspectives. Richardson et al. (2023) generated textures using a depth-to-image diffusion model and blended textures from various perspectives using a Trimap. Wang et al. (2023) and Xu et al. (2023) bridged the gap between vision and language with CLIP, and achieved a unified 3D diffusion model for text-conditioned and image-conditioned 3D generation. Cao et al. (2023) transformed the observation space to a standard space with a human prior and used a diffusion model to optimize NeRF for each rendered perspective.
### Single Image 3D Reconstruction
Single-image 3D reconstruction has been a challenging problem in the fields of graphics and computer vision, due to the scarcity of sufficient information. To address this issue, researchers have explored various approaches, including the use of 3D-aware GANs and Diffusion models. Some work (Chan et al., 2022; Yin et al., 2022; Xiang et al., 2022; Xie et al., 2023; He et al., 2023) leveraged 3D-aware GANs to perform 3D face generation with GAN inversion techniques (Roich et al., 2022; Wang et al., 2022). Other works used Diffusion models to generate new perspectives in reconstruction. Wang et al. (2023) proposed a 3D diffusion model for high-quality 3D content creation, which is trained on synthetic 3D data. Liu et al. (2023) fine-tuned Stable Diffusion with injected camera parameters on a large 3D dataset to learn novel view synthesis.
Another line of work adopted a 2D diffusion prior to directly optimize a 3D object without the need for large-scale 3D training data. These approaches represent promising avenues for addressing the challenge of single-image 3D reconstruction. As a seminal work, Tang et al. (2023) used an image captioning model (Li et al., 2022) to generate text descriptions of the input image; the authors then optimized the generation of novel views with the SDS loss and introduced a denoised CLIP loss to maintain consistency among views. Meanwhile, Melas-Kyriazi et al. (2023) utilized textual inversion to optimize a prompt embedding from the input image and then employed the SDS loss to optimize the generation of new perspectives. Qian et al. (2023) leveraged a rough 3D prior generated by Zero-1-to-3 (Liu et al., 2023) and combined it with textual inversion to optimize the prompt embedding using the SDS loss with a fixed weighting.
## 3 Methodology
As shown in Fig 3, from the standpoint of viewpoints, the optimization process of Consistent123 can be divided into two parts: the reference view and the novel views. For the reference viewpoint, we primarily employ the input image as the basis for reconstruction, as addressed in Section 3.1. The optimization of novel views unfolds across two distinct stages, which are explored in Section 3.2 and Section 3.3, respectively. The resulting model output consistently exhibits a high degree of 3D consistency and exceptional texture quality.
### Reference View Reconstruction
Given a 2D RGB image, Consistent123 applies a preprocessing step to obtain derived ground truth used in the loss calculation at the reference view. We utilize pretrained models (Eftekhar et al., 2021; Kar et al., 2022) to acquire the foreground image \(\mathbf{I}^{gt}\), the binary mask \(\mathbf{M}^{gt}\), and the object depth \(\mathbf{D}^{gt}\). \(\mathcal{L}_{rgb}\) enforces similarity between the input image and the rendered reference-view image; it is computed with a Mean Squared Error (MSE) loss as follows:
\[\mathcal{L}_{rgb}=\left\|\mathbf{I}^{gt}-\mathcal{G}_{\theta}\left(v^{r} \right)\right\|_{2}^{2} \tag{1}\]
where \(\mathcal{G}_{\theta}\) stands for the representation model being optimized and \(v^{r}\) denotes the reference viewpoint used for rendering. \(\mathcal{L}_{mask}\) likewise employs an MSE loss, expressed as follows:
\[\mathcal{L}_{mask}=\left\|\mathbf{M}^{gt}-\mathbf{M}\left(\mathcal{G}_{\theta }\left(v^{r}\right)\right)\right\|_{2}^{2} \tag{2}\]
where \(\mathbf{M}\left(\cdot\right)\) denotes the operation of extracting the mask of the rendered image. Following prior work in this area that exploits depth priors, we adopt the normalized negative Pearson correlation as the depth loss \(\mathcal{L}_{depth}\). Given these three components of the reference-view reconstruction loss, we combine them as:
\[\mathcal{L}_{rec}=\lambda_{rgb}\mathcal{L}_{rgb}+\lambda_{mask}\mathcal{L}_ {mask}+\lambda_{depth}\mathcal{L}_{depth} \tag{3}\]
where \(\lambda_{rgb}\), \(\lambda_{mask}\) and \(\lambda_{depth}\) are controllable parameters that regulate the ratio of each supervision term. With the help of the merged loss \(\mathcal{L}_{rec}\), we can recover a highly detailed and geometrically correct target at the reference viewpoint.
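The sketch below illustrates how such a reference-view loss could be assembled in PyTorch; the tensor shapes and the specific \(\lambda\) values are assumptions for illustration, not settings taken from the paper.

```python
import torch
import torch.nn.functional as F

def pearson(a, b, eps=1e-8):
    # Pearson correlation between two flattened tensors.
    a, b = a.flatten(), b.flatten()
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def reference_view_loss(render_rgb, render_mask, render_depth,
                        gt_rgb, gt_mask, gt_depth,
                        lam_rgb=1.0, lam_mask=1.0, lam_depth=0.1):
    l_rgb = F.mse_loss(render_rgb, gt_rgb)       # Eq. (1)
    l_mask = F.mse_loss(render_mask, gt_mask)    # Eq. (2)
    l_depth = -pearson(render_depth, gt_depth)   # negative Pearson correlation depth loss
    return lam_rgb * l_rgb + lam_mask * l_mask + lam_depth * l_depth  # Eq. (3)
```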
Figure 3: **The framework of Consistent123.** Consistent123 consists of two stages. In the first stage, we take advantage of 3D prior to optimize the geometry of 3D object. With the help of an optimization boundary judgment mechanism based on CLIP, we ensure the geometry initial optimization process is well conducted. Then, in the second stage, the output from the last stage continues to be optimized by the fusion of 2D prior and 3D prior in a specific ratio based on timestep, which is also named Dynamic Prior. To access a high-consistence and high-quality asset, we employ enhanced representation like Mesh instead of NeRF in the final period of optimization. The eventual result of the framework has correct geometry and exquisite texture from visual observation.
### Optimization Boundary Judgement
The optimization process illustrated in Fig 2 demonstrates the efficiency of 3D structural priors in capturing the shape of the object, and these priors play a crucial role mainly in the initial stage of reconstruction. To ensure comprehensive recovery of the object's shape as depicted in the image, we establish a structural initialization stage, namely stage 1, where only 3D structural priors guide the optimization. The 3D prior loss is expressed as follows:
\[\mathcal{L}_{3D}=\mathbb{E}_{t,\epsilon}\left[w(t)\left(\epsilon_{\phi}\left( \mathbf{z}_{t};\mathbf{\Gamma}^{r},t,R,T\right)-\epsilon\right)\frac{\partial \mathbf{\Gamma}}{\partial\theta}\right] \tag{4}\]
where \(t\) denotes the training timestep, \(\mathbf{z}\) represents the latent variable generated through the encoding of the image \(\mathbf{I}\), \(R\) and \(T\) mean the positional coordinate parameters of the camera. The function \(w(t)\) corresponds to a weighting function, while \(\epsilon_{\phi}\) and \(\epsilon\) respectively denote the noise prediction value generated by the U-Net component of the 2D diffusion model and the ground truth noise. During stage 1, 2D priors are deliberately excluded, effectively mitigating the multi-face issue. The output of this stage is 3D content with high-quality structure, yet it significantly lags in terms of texture fidelity compared to the image representation. That's mainly because of the deficiency of texture information, which is primarily driven by 2D priors.
Consequently, we embed a case-aware CLIP-based detection mechanism within stage 1 to determine whether the shape of the current 3D content has been accurately reconstructed. If so, a transition is made to stage 2, with 2D priors introduced gradually. During the first-stage training, we conduct boundary judgement at specific iterations. Specifically, we periodically perform detection at intervals of \(h\) iterations, set to 20 in our experiments. For each detection step \(k\), we render the current 3D content from different viewpoints, resulting in \(M\) images, and then calculate the average similarity score between these images and textual descriptions using the CLIP model:
\[\mathcal{S}_{CLIP}^{k}\left(y,\mathcal{G}_{\theta}^{k}\right)=\frac{1}{M}\sum _{v\in V}\varepsilon_{CLIP}\left(\mathcal{G}_{\theta}^{k}\left(v\right)\right) \cdot\varphi_{CLIP}\left(y\right) \tag{5}\]
where \(y\) is the description of the reference image, and \(v\) is a rendering perspective belonging to the sampled view set \(V\). \(\varepsilon_{CLIP}\) is the CLIP image encoder and \(\varphi_{CLIP}\) is the CLIP text encoder. To determine whether the shape of the current 3D content has been adequately recovered, we compute the moving average of the changing rate of \(\mathcal{S}_{CLIP}\):
\[R^{k}=\frac{1}{L}\sum_{i=k-L+1}^{k}\left(\mathcal{S}_{CLIP}^{i}-\mathcal{S}_{ CLIP}^{i-1}\right)/\mathcal{S}_{CLIP}^{i-1} \tag{6}\]
where \(L\) is the size of the sliding window. When this rate falls below a threshold \(\delta\), the current 3D content is considered to possess a structure similar to that represented in the image.
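A minimal sketch of this boundary judgement is given below; the CLIP encoders and the renderer are passed in as generic callables (assumed to return L2-normalized feature vectors), so this only illustrates Eqs. (5)-(6) rather than reproducing the authors' implementation.

```python
import numpy as np

VIEWS = [0, 45, 90, 135, 180, 225, 270, 315]  # sampled azimuth angles (degrees)

def clip_score(render_fn, image_encoder, text_feat):
    # Eq. (5): average similarity between rendered views and the text description,
    # assuming all features are L2-normalized numpy vectors.
    feats = [image_encoder(render_fn(v)) for v in VIEWS]
    return float(np.mean([f @ text_feat for f in feats]))

def should_switch(score_history, window=5, delta=0.00025):
    # Eq. (6): moving average of the relative changing rate of the CLIP score.
    if len(score_history) <= window:
        return False
    s = score_history
    rates = [(s[i] - s[i - 1]) / s[i - 1] for i in range(len(s) - window, len(s))]
    return float(np.mean(rates)) < delta
```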
### Dynamic Prior
Recognizing that 3D prior optimization is characterized by consistent structure guidance but weak texture exploration, while 2D prior optimization leads to high texture fidelity but may occasionally diverge from the input image, we posit these two priors exhibit complementarity, each benefiting the quality of the final 3D model. Consequently, in Stage 2, we introduce a 2D diffusion model as the guiding 2D prior to enrich the texture details of the 3D object. Throughout the optimization process, the 2D diffusion model primarily employs Score Distillation Sampling (SDS) loss to bridge the gap between predicted noise and ground truth noise. This concept is elucidated as follows:
\[\mathcal{L}_{2D}=\mathbb{E}_{t,\epsilon}\left[w(t)\left(\epsilon_{\phi}\left( \mathbf{z}_{t};y,t\right)-\epsilon\right)\frac{\partial\mathbf{z}}{\partial \mathbf{\Gamma}}\frac{\partial\mathbf{\Gamma}}{\partial\theta}\right] \tag{7}\]
where \(y\), originating either from user observation or from the output of a caption model, is the text prompt describing the 3D object. However, we have observed that, in stage 2, when the optimization relies solely on the 2D prior, the resulting 3D asset often exhibits an unfaithful appearance. This is because the low-resolution output of stage 1 carries poor low-level information such as color, shading, and texture, which leaves room for the 2D prior to provide high-resolution but unfaithful guidance. Moreover, the degree of alignment between the input text prompt and each individual novel view to be optimized by the 2D prior varies. This variability leads the 2D prior to introduce certain unfaithful details, which we refer to as the 'Over Imagination' issue. Consequently, the eventual output typically maintains a reasonable structure but displays unfaithful novel views, resulting in an inconsistent appearance.
To resolve the above problem, instead of using only the 2D diffusion model in stage 2, we incorporate the 3D and 2D priors via an incremental trade-off scheme, which we call the **Dynamic Prior**. More specifically, we design a timestep-based dynamic integration strategy for the two kinds of priors that gradually introduces exquisite guidance information while maintaining faithfulness to the input image. The dynamic prior loss combining \(\mathcal{L}_{3D}\) and \(\mathcal{L}_{2D}\) is as follows:
\[\mathcal{L}_{DP}=e^{-\frac{t}{T}}\mathcal{L}_{3D}+\left(1-e^{-\frac{t}{T}}\right)\mathcal{L}_{2D} \tag{8}\]
where \(T\) represents the total number of optimization timesteps. As shown in Equation (8), the weighting coefficients of the two losses follow an exponential form that depends on the parameter \(t\). As \(t\) increases, \(\mathcal{L}_{3D}\), which primarily contributes structural information, undergoes a gradual reduction in weight, while \(\mathcal{L}_{2D}\), which is mainly responsible for optimizing texture information, exhibits a progressive increase in influence. We also considered expressing \(\mathcal{L}_{DP}\) with other basis functions, but extensive experiments showed that the expression in Equation (8) yields consistently excellent results; details of this comparison can be found in Section 4.4. Compared to a single prior or a fixed-ratio prior, the outputs of Consistent123 are more consistent and exquisite in terms of texture and geometry.
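A small sketch of this timestep-dependent weighting is shown below; treating the two SDS-style losses as precomputed scalars is a simplification for illustration only.

```python
import math

def dynamic_prior_loss(loss_3d, loss_2d, t, T):
    """Eq. (8): exponentially decay the 3D-prior weight as training progresses."""
    w3d = math.exp(-t / T)   # close to 1 early on, shrinking over time
    return w3d * loss_3d + (1.0 - w3d) * loss_2d

# Example: halfway through training the 3D prior still carries exp(-0.5) ~ 0.61 of the weight.
# total = dynamic_prior_loss(l3d, l2d, t=5000, T=10000)
```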
## 4 Experiments
### Implementation Details
For the diffusion prior, we adopt the open-source Stable Diffusion (Rombach et al., 2022), version 2.1, as the 2D prior, and employ Zero-1-to-3 (Liu et al., 2023) as the 3D prior. We use Instant-NGP (Muller et al., 2022) to implement the NeRF representation, and for mesh rendering we utilize DMTet (Shen et al., 2021), a hybrid SDF-mesh representation. The rendering resolutions are configured as \(128\times 128\) for NeRF and \(1024\times 1024\) for mesh. Following the camera sampling approach adopted in Dreamfusion (Poole et al., 2023), we sample the reference view with a 25% probability and the novel views with a 75% probability. For the case-aware detection mechanism, we sample from 8 viewpoints each time, that is \(V=\{0^{\circ},45^{\circ},90^{\circ},135^{\circ},180^{\circ},225^{\circ},270^{\circ},315^{\circ}\}\). The sliding window size \(L\) is set to 5 and the threshold \(\delta\) to 0.00025. We use the Adam optimizer with a learning rate of 0.001 throughout the reconstruction. For an image, the entire training process with 10,000 iterations takes approximately 30 minutes on a single NVIDIA A100 GPU.
### Comparison with State-of-the-art
**Datasets.** We consider a classic benchmark, RealFusion15, released by RealFusion (Melas-Kyriazi et al., 2023). RealFusion15 consists of 15 images featuring a variety of subjects. In addition, we introduced a C10 dataset consisting of 100 images collected from 10 categories which covers a wider range of items. These 10 categories broadly encompass common objects found in daily life, including fruits, balls, furniture, scenes, flora and fauna, food, transportation, clothing and footwear, cartoon characters, and artwork. Thus, the results on C10 can serve as an effective evaluation of the method's generalization ability.
**Baselines and metrics.** We evaluate Consistent123 against state-of-the-art baselines, including RealFusion (Melas-Kyriazi et al., 2023), Make-it-3D (Tang et al., 2023), Zero-1-to-3 (Liu et al., 2023), and Magic123 (Qian et al., 2023), on both the RealFusion15 and C10 datasets. Like Magic123, we use an improved implementation (Tang, 2022) of Zero-1-to-3, and the original released code for other works. For quantitative evaluation, we adopt three metrics, namely CLIP-similarity (Radford et al., 2021b), PSNR and LPIPS (Zhang et al., 2018). CLIP-similarity quantifies the average CLIP distance between the rendered image and the reference image, serving as a measure of 3D consistency by assessing appearance similarity across novel views and the reference view. PSNR and LPIPS assess the reconstruction quality and perceptual similarity at the reference view.
**Quantitative comparison.** As demonstrated in Table 1, on the RealFusion15 dataset, Consistent123 attains the most favorable results in the CLIP-Similarity metric, which gains an increment of **11.2%**
Figure 4: **Qualitative comparison vs SOTA methods.** The results on the RealFusion15 dataset is shown on top, and results on the C10 dataset on the bottom. We randomly sample 2 novel views to showcase, and reference view and other views are included in Appendix A.1. Please visit [https://Consistent123.github.io](https://Consistent123.github.io) for a more intuitive comparison by watching videos.
compared to the original SOTA, signifying that our method yields the most consistent 3D models. Regarding reference view reconstruction, Consistent123 performs comparably to Magic123 and Zero-1-to-3, and significantly outperforms RealFusion and Make-it-3D. On the C10 dataset, encompassing images from 10 distinct categories, Consistent123 outpaces its counterparts by a substantial margin across all evaluation metrics. Moreover, there is a notable enhancement in CLIP-Similarity, accompanied by an improvement of **2.972** in PSNR and **0.066** in LPIPS when compared to the previously top-performing model, which underscores the robust generalization capability of Consistent123 across diverse object categories.
**Qualitative comparison.** We present a comprehensive set of qualitative results featuring 14 images drawn from the RealFusion15 and C10 datasets in Fig 4. In contrast to our method, RealFusion often yields flat 3D results with colors and shapes that bear little resemblance to the input image. Make-it-3D displays competitive texture quality but grapples with a prominent multi-face issue. For instance, when reconstructing objects like teddy bears and Spongeboy, it introduces facial features at different novel views, which should only appear in the reference view. Zero-1-to-3 and Magic123 produce visually plausible structures, but the consistency of texture among all views, especially in side views, is poor. For example, in the cases of fish and rugby, their textures fail to achieve a smooth transition when observed from the side view. In contrast, our methodology excels in generating 3D models that not only exhibit semantic consistency with the input image but also maintain a high degree of consistency in terms of both texture and geometry across all views.
### Ablation Study of Two Stage Optimization
In this section, we emphasize the significance of boundary judgment. We divide the reconstruction process into three parts, namely: 3D structural initialization, boundary judgment, and dynamic prior-based optimization. In cases where boundary judgment is absent, the optimization process falls into one of two approaches: full 3D structural initialization (boundary at the training endpoint) or full dynamic prior-based optimization (boundary at the training starting point), denoted as Consistent123\({}_{3D}\) and Consistent123\({}_{dynamic}\), respectively. As illustrated in Fig 5, without the guidance of 2D texture priors, Consistent123\({}_{3D}\) produces visually unrealistic colors in the novel view of the car, and in the absence of 3D structural initialization, Consistent123\({}_{dynamic}\) exhibits inconsistency and a multi-face issue on Mona Lisa's face. In contrast, results with boundary judgment show superiority in both texture and structure.
### Ablation Study of Dynamic Prior
The dynamic prior refers to dynamically adjusting the ratio of 2D and 3D priors across timesteps during the optimization process. Depending on the weighting schedule, we compare the optimization effects of three different approaches: exponential (Equation (8)), linear
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \multicolumn{1}{c|}{**Dataset**} & \multicolumn{1}{c}{**Metrics/Methods**} & RealFusion & Make-it-3D & Zero-1-to-3 & Magic123 & Consistent123(ours) \\ \hline \multirow{3}{*}{**RealFusion15**} & CLIP-Similarity\(\uparrow\) & 0.735 & 0.839\({}^{\dagger}\) & 0.759 & 0.747 & **0.844** \\ & PSNR\(\uparrow\) & 20.216 & 20.010 & 25.386 & 25.637 & **25.682** \\ & LPIPS\(\downarrow\) & 0.197 & 0.119 & 0.068 & 0.062 & **0.056** \\ \hline \multirow{3}{*}{**C10**} & CLIP-Similarity\(\uparrow\) & 0.680 & 0.824\({}^{\dagger}\) & 0.700 & 0.751 & **0.770** \\ & PSNR\(\uparrow\) & 22.355 & 19.412 & 18.292 & 15.538 & **25.327** \\ \cline{1-1} & LPIPS\(\downarrow\) & 0.140 & 0.120 & 0.229 & 0.197 & **0.054** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the RealFusion15 and C10 datasets. Make-it-3D uses CLIP similarity to supervise its training, so its value\({}^{\dagger}\) is not considered in the comparison.
Figure 5: **The ablation of two stage optimization.**
and logarithmic (Equation (9)). We compared the metrics of the three dynamic optimization strategies on data from ten different categories. As shown in Table 2, the exponential schedule, which is our adopted method, achieves a higher CLIP-Similarity on most of the categories, which to some extent reflects reconstruction consistency. The actual reconstruction results also support this, as the exponential schedule effectively mitigates the multi-face problem, leading to higher reconstruction quality and better consistency.
\[\mathcal{L}_{linear}=\frac{t}{T}\mathcal{L}_{3D}+\left(1-\frac{t}{T}\right) \mathcal{L}_{2D}\quad,\quad\mathcal{L}_{log}=\log_{2}\frac{t}{T}\,\mathcal{L} _{3D}+\left(1-\log_{2}\frac{t}{T}\right)\mathcal{L}_{2D} \tag{9}\]
The key difference between the exponential schedule and the other two is that it injects 2D priors more quickly. In the preceding optimization stage, 3D priors were used to ensure the correctness of the basic geometric structure of the reconstruction. The purpose of the dynamic prior is to improve the quality and consistency of the reconstruction while maintaining that structure. Since the structure has already been optimized in the first stage, only a small amount of 3D-prior injection is needed during the dynamic prior stage to keep it effective.
### User Study
Due to the absence of ground-truth 3D models, we conducted a perceptual study to compare Consistent123 against SOTA baselines. Participants were tasked with selecting the best result that represents the texture and structure of the object depicted in the image. To quantify the likelihood of participants favoring SOTA methods over Consistent123, we present the corresponding results in Fig 6. Our method demonstrates superior performance compared to the alternatives, exhibiting a **65.7%** advantage in the user study. More details are available in the Appendix A.3.
## 5 Conclusion and Discussion
**Conclusion**. In this study, we introduce Consistent123, a two-stage framework designed for achieving highly detailed and consistent 3D reconstructions from single images. By recognizing the complementary nature of 3D and 2D priors during the optimization process, we have devised a training trade-off strategy that prioritizes initial geometry optimization with 3D priors, followed by the gradual incorporation of exquisite guidance from 2D priors over the course of optimization. Between the two optimization stages, we employ a large-scale pretrained image-text pair model as a discriminator for multi-view samples to ensure that the 3D object gains sufficient geometry guidance before undergoing dynamic prior optimization in stage 2. The formulation of our dynamic prior is determined through the exploration of various foundational function forms, with a subsequent comparison of their categorized experimental results. Our approach demonstrates enhanced 3D consistency, encompassing both structural and textural aspects, as demonstrated on existing benchmark datasets and those we have curated.
**Limitation**. Our study reveals two key limitations. Firstly, during stage 1, heavy reliance on 3D priors influences the 3D object, with reconstruction quality notably affected by the input image's viewpoint. Secondly, output quality depends on the description of asset in stage 2. Finer-grained descriptions enhance output consistency, while overly brief or ambiguous descriptions lead to the 'Over Imagination' issue in Stable Diffusion (Rombach et al., 2022), introducing inaccurate details.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c c} \hline \hline
**Methods** & **Metrics/class** & ball & biont & furniture & cartoon & fruit & statue & food & vehicle & costume & scene & average \\ \hline
\multirow{3}{*}{**log**} & CLIP-Similarity\(\uparrow\) & 0.79 & 0.85 & **0.58** & 0.77 & 0.87 & 0.71 & 0.87 & 0.74 & **0.67** & 0.68 & 0.76 \\
 & PSNR\(\uparrow\) & 26.45 & 25.46 & 23.19 & 23.97 & 24.62 & 22.94 & 27.33 & 24.24 & **26.14** & 21.71 & 24.59 \\
 & LPIPS\(\downarrow\) & **0.04** & 0.06 & **0.12** & **0.06** & 0.06 & 0.11 & **0.03** & 0.07 & 0.06 & 0.10 & 0.07 \\ \hline
\multirow{3}{*}{**linear**} & CLIP-Similarity\(\uparrow\) & 0.82 & 0.85 & 0.55 & 0.74 & **0.88** & 0.73 & **0.88** & 0.72 & 0.65 & 0.70 & 0.76 \\
 & PSNR\(\uparrow\) & 26.32 & 25.51 & 22.96 & 23.43 & 25.31 & 25.71 & **27.41** & 24.57 & 25.36 & 21.63 & 24.96 \\
 & LPIPS\(\downarrow\) & **0.04** & 0.05 & 0.13 & 0.09 & **0.04** & **0.06** & **0.03** & 0.07 & 0.06 & 0.10 & 0.07 \\ \hline
\multirow{3}{*}{**exp**} & CLIP-Similarity\(\uparrow\) & **0.87** & **0.88** & 0.54 & **0.78** & 0.87 & **0.77** & **0.88** & **0.76** & **0.67** & **0.72** & **0.79** \\
 & PSNR\(\uparrow\) & **27.50** & **26.09** & **23.28** & **24.29** & **25.39** & 25.63 & 27.02 & **25.16** & 25.65 & **21.78** & **25.30** \\
 & LPIPS\(\downarrow\) & **0.04** & **0.04** & **0.12** & **0.06** & 0.05 & 0.07 & 0.04 & **0.05** & **0.05** & **0.09** & **0.06** \\ \hline \end{tabular}
\end{table}
Table 2: Ablation Study of Dynamic Prior
Figure 6: **User Study. The collected results of preference.** | Reconstructing 3D objects from a single image guided by pretrained diffusion models has shown promising results. However, because a case-agnostic rigid strategy is used, the generalization ability to arbitrary cases and the 3D consistency of the reconstruction remain poor. In this work, we proposed Consistent123, a case-aware two-stage method that leverages both 2D and 3D diffusion priors to reconstruct a highly consistent 3D asset from a single image. In the first stage, Consistent123 performs sufficient geometric exploration using only 3D structural priors, with a CLIP-based case-aware adaptive detection mechanism embedded in this process. In the second stage, 2D texture priors are introduced and progressively take on a dominant guiding role, delicately sculpting the details of the 3D model |
2306.00198 | An Invariant Learning Characterization of Controlled Text Generation | Controlled generation refers to the problem of creating text that contains
stylistic or semantic attributes of interest. Many approaches reduce this
problem to training a predictor of the desired attribute. For example,
researchers hoping to deploy a large language model to produce non-toxic
content may use a toxicity classifier to filter generated text. In practice,
the generated text to classify, which is determined by user prompts, may come
from a wide range of distributions. In this paper, we show that the performance
of controlled generation may be poor if the distributions of text in response
to user prompts differ from the distribution the predictor was trained on. To
address this problem, we cast controlled generation under distribution shift as
an invariant learning problem: the most effective predictor should be invariant
across multiple text environments. We then discuss a natural solution that
arises from this characterization and propose heuristics for selecting natural
environments. We study this characterization and the proposed method
empirically using both synthetic and real data. Experiments demonstrate both
the challenge of distribution shift in controlled generation and the potential
of invariance methods in this setting. | Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei | 2023-05-31T21:35:08 | http://arxiv.org/abs/2306.00198v1 | # An Invariant Learning Characterization of Controlled Text Generation
###### Abstract
Controlled generation refers to the problem of creating text that contains stylistic or semantic attributes of interest. Many approaches reduce this problem to training a predictor of the desired attribute. For example, researchers hoping to deploy a large language model to produce non-toxic content may use a toxicity classifier to filter generated text. In practice, the generated text to classify, which is determined by user prompts, may come from a wide range of distributions. In this paper, we show that the performance of controlled generation may be poor if the distributions of text in response to user prompts differ from the distribution the predictor was trained on. To address this problem, we cast controlled generation under distribution shift as an invariant learning problem: the most effective predictor should be invariant across multiple text environments. We then discuss a natural solution that arises from this characterization and propose heuristics for selecting natural environments. We study this characterization and the proposed method empirically using both synthetic and real data. Experiments demonstrate both the challenge of distribution shift in controlled generation and the potential of invariance methods in this setting.
## 1 Introduction
The development of large language models (LLMs) has changed the landscape of research in NLP. Simply by conditioning on a prompt, an LLM can produce fluent and readable text. By using different and well-thought-out prompts, it can be adapted to many applications [6, 9, 35, 38, 44, 50].
But this increase in adaptability has also led to a greater need for _controlled generation_, to be able to generate text from an LLM that adheres to certain attributes. For example, suppose we want to use an LLM as a chatbot and deploy it to a large set of users. They might prompt the model in many different ways, such as by asking for advice, information, or just playing with its capabilities. We would like the users to freely explore the chatbot, but we also want to ensure that the text it generates is not toxic -- that is, not rude, disrespectful, or unreasonable. How can we allow users to freely prompt it, but ensure that the LLM does not produce toxic text?
There have been many approaches to solving this problem, each trying to ensure that the text produced by a prompted LLM adheres to the attribute, e.g., that it is not toxic [10, 24, 25, 47, 53]. Here we build on the simple method of filtering. Filtering reduces the problem of controlled generation to one of building a good classifier of the targeted attribute. First we collect a dataset of texts that is labeled as to whether each is toxic, and we use this data to fit a toxicity classifier. When a user prompts the LLM to produce a sample of text, we use the fitted classifier to filter its results. We collect multiple texts from the prompted LLM, but only retain one that is classified as non-toxic.
Filtering is a simple and direct approach to controlled generation, but it is only as effective as the fitted classifier. In this paper, we argue that a classifier that might perform well in a classical ML setting will likely perform worse in the context of a prompted LLM. The reason is that classical ML tacitly assumes that the future unlabeled text comes from a similar distribution as the training data. But, when used in the context of controlled generation, the unlabeled text to classify may come from any distribution as it is determined by a user's prompt. Compounding the problem, we hope the classifier will work well for many different prompts and thus many different distributions of unlabeled texts.
In this paper, we characterize controlled text generation as an out-of-distribution generalization problem. This characterization highlights that distribution shift is an inherent aspect of controlled
text generation and it suggests that methods addressing out-of-distribution generalization can be used in the context of controlled generation. Concretely, we employ recent algorithms for multi-environment learning [1, 27, 29, 36, 41, 46]. These are methods that analyze multiple related datasets, called "environments," to weed out spurious correlations and find patterns that are consistent across distributions of text. We develop two approaches to create these environments from common text classification datasets, and we demonstrate that invariant methods can be effective for controlled text generation.1
Footnote 1: Code is available at: [https://github.com/carolinazheng/invariant-control-gen](https://github.com/carolinazheng/invariant-control-gen).
## 2 Characterizing Controlled Generation
In this section, we review controllable text generation and illustrate the problem of distribution shifts in this setting.
### Controlled Generation
The goal of _controlled generation_ is to produce text that is compatible with certain controllable attributes [37]. For example, a group deploying a chatbot to interact with human users may wish for the bot to generate only non-toxic text. Here the controllable attribute is toxicity. Across all prompts posed by human users, the chatbot should generate only non-toxic text.
Formally, denote deployment distributions of text sequences indexed by a prompt \(h\) by \(p_{h}(x)\). In the chatbot scenario, a prompt \(h\) can index the entire interaction between a user and chatbot up to the current point in time, and \(p_{h}(x)\) provides a probability distribution over the text sequences the chatbot may respond with. Denote the controllable attribute as a binary random variable \(y\), e.g., \(y=1\) indicates the presence of toxic content.
We assume the relationship between text and the controllable attribute is governed by a ground truth conditional distribution \(p^{*}(y|x)\), which is well-defined for all text \(x\). For a prompt \(h\), the true joint distribution of text and attribute follows
\[p^{*}_{h}(x,y)=p_{h}(x)p^{*}(y|x). \tag{1}\]
The goal of controlled generation is to sample text from the deployment distribution, but conditional on the desired controlled value. That is, the text should be sampled from
\[p^{*}_{h}(x|y=0)=\frac{p_{h}(x)p^{*}(y=0|x)}{\int p_{h}(x)p^{*}(y=0|x)dx}. \tag{2}\]
When the relationship between text and attribute \(p^{*}(y|x)\) is known, it is possible to sample from \(p^{*}_{h}(x|y=0)\) either analytically or using Monte Carlo methods.
In practice this relationship is unknown, and the conditional distribution \(p^{*}(y|x)\) is estimated from data. Consider a dataset \(\mathcal{D}=(x_{i},y_{i})\sim p_{\mathcal{D}}\), where
\[p_{\mathcal{D}}(x,y)=p_{\mathcal{D}}(x)p^{*}(y|x). \tag{3}\]
For example, \(p_{\mathcal{D}}(x)\) can be a distribution over Reddit comments or transcripts from talk radio. Note this joint distribution differs from the one in Eq. 1: both are governed by the same relationship between text and attribute, \(p^{*}(y|x)\), but they differ in the distribution of text, \(p_{h}(x)\) vs. \(p_{\mathcal{D}}(x)\). Further, consider a class of predictors \(p_{\theta}(y|x)\), such as logistic regression models or neural network-based classifiers. A model is fit to the data to produce \(p_{\hat{\theta}}(y|x)\). Then, for any prompt \(h\), text from the controlled distribution can be sampled from
\[p_{h,\hat{\theta}}(x|y=0)\propto p_{h}(x)p_{\hat{\theta}}(y=0|x). \tag{4}\]
This quantity is typically sampled using Monte Carlo methods to filter out text that does not meet the desired attribute [52].
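A minimal sketch of this filtering scheme is shown below; `generate_fn` and `toxicity_prob` stand in for a prompted LLM and a fitted attribute classifier \(p_{\hat{\theta}}(y=1|x)\), and the retry budget and threshold are assumptions for illustration.

```python
def filtered_generation(prompt, generate_fn, toxicity_prob,
                        threshold=0.5, max_tries=16):
    """Sample from p_h(x) and keep the first draft classified as non-toxic,
    approximating p_{h,theta}(x | y = 0) by rejection."""
    for _ in range(max_tries):
        draft = generate_fn(prompt)           # x ~ p_h(x), the prompted LLM
        if toxicity_prob(draft) < threshold:  # p_theta(y=1 | x) below threshold
            return draft
    return None  # no acceptable sample within the budget; caller decides fallback
```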
The success of this approach is determined by how well \(p_{\hat{\theta}}(y=0|x)\) models the true distribution \(p^{*}(y=0|x)\). When \(p_{\hat{\theta}}(y|x)\) perfectly models the true distribution, Eq. 2 is identical to Eq. 4 and so text can be generated from the desired distribution. Otherwise, toxic samples may be produced or non-toxic samples may be discarded unnecessarily.
### Distribution Shift
The success of controlled generation via Eq. 4 depends on how similar \(p_{\hat{\theta}}(y|x)\) is to \(p^{*}(y|x)\). Here, we show a change from \(p_{\mathcal{D}}(x,y)\) to \(p_{h}(x,y)\) can lead to failures in controlled generation.
The attribute predictor \(p_{\hat{\theta}}(y|x)\) will perform best on prompts that are similar to the samples it is trained on. In a world where the training distribution \(p_{\mathcal{D}}(x)\) and deployment distributions \(p_{h}(x)\) are the same for all prompts \(h\), an attribute predictor will perform similarly on both distributions: if \(p_{\hat{\theta}}(y|x)\) is accurate for samples \(x\sim p_{\mathcal{D}}(x)\), it will also be accurate for samples \(x\sim p_{h}(x)\).
However, in practice, there are many possible prompts \(h\) and deployment distributions \(p_{h}(x)\) will not be identical; users interacting with a chatbot will pose a wide range of questions and the chatbot should respond to all questions in a non-toxic way. Thus, it is inevitable that the training and deployment distributions will differ for many prompts.
When these distributions are far off, the quality of controlled generations can degrade. If a predictor is trained from samples from one distribution and applied to samples from another, its generalization abilities will suffer [4, 13]. The reason is that the fitted predictors may rely on _spurious correlations_ between text and attribute label that exist in the training distribution \(p_{\mathcal{D}}(x,y)\) but do not exist in the deployment distribution \(p_{h}^{*}(x,y)\)[33].
For example, if training samples are taken from an internet forum, there may be a correlation between the grammatical correctness of a post and its toxicity: civil posts that do not contain toxic content may be grammatically correct, while posts with toxic content may contain grammatical errors. In this sample, the grammatical correctness of a post would be an informative predictor of its toxicity. However, this correlation may not generalize to the deployment distribution. If the deployment distribution is a large language model that only generates grammatically correct text, for example, a predictor based on the internet forum posts would allow toxic posts to be generated as long as they are grammatically correct. Although the relationship between text and toxicity is governed by \(p^{*}(y|x)\) for both distributions, differences in \(p_{\mathcal{D}}(x)\) and \(p_{h}(x)\) may yield a predictor that does not generalize to the deployment distribution.
## 3 Controlled Generation with Invariant Learning
Section 2 describes how the task of controlled generation reduces to finding a predictor \(p_{\hat{\theta}}(y|x)\) to approximate the ground truth relationship between text and attribute, \(p^{*}(y|x)\). The predictor \(p_{\hat{\theta}}(y|x)\) is typically fitted by minimizing the training distribution risk,
\[R_{\mathcal{D}}(\theta)=\mathbb{E}_{p_{\mathcal{D}}(x)p^{*}(y|x)}[-\log p_{ \theta}(y|x)]. \tag{5}\]
However, the predictor \(p_{\hat{\theta}}(y|x)\) that is most effective for a deployment distribution \(p_{h}(x)\) is the minimizer of the deployment distribution risk,
\[R_{h}(\theta)=\mathbb{E}_{p_{h}(x)p^{*}(y|x)}[-\log p_{\theta}(y|x)]. \tag{6}\]
Thus, for a predictor \(p_{\hat{\theta}}(y|x)\) to generalize to many deployment distributions, it should not be trained to minimize the training distribution risk (Eq. 5). Instead, a good predictor \(p_{\hat{\theta}}(y|x)\) should have a low value for \(R_{h}(\hat{\theta})\) for many prompts \(h\). Even if there is only a single deployment distribution of interest, yielding a predictor that performs well for many prompts \(h\) will increase the quality of controlled generations for the single prompt.
Invariant Learning.We cast the task of finding a generalizable predictor as an invariant learning problem. Invariant learning refers to a class of methods developed to address distribution shifts [1, 27, 31, 36, 39, 54]. These methods posit that features are drawn from multiple distributions, or "environments," but the relationship between label and features is invariant across environments. The motivation is that if a predictor is optimal across environments seen during training, then it will generalize better to future unseen environments.
To adapt invariant learning for controlled generation, we note that each deployment distribution \(p_{h}(x)\) defines a new environment, indexed by \(h\). Since the true relationship between text and attribute \(p^{*}(y|x)\) is invariant across distributions of \(x\), the attribute predictor \(p_{\hat{\theta}}(y|x)\) should also be invariant in order to generalize to unseen deployment distributions \(p_{h}(x)\). The optimal invariant predictor will yield the desired controlled generations \(p_{h,\hat{\theta}}(x|y)=p_{h}^{*}(x|y)\).
Formally, we adapt the data generating process from Peters et al. [36] and Arjovsky et al. [1] for controlled generation:
\[x\sim p_{e}(x),\qquad y\sim p^{*}(y|x), \tag{7}\]
where \(e\) denotes an environment. Each environment refers to a different data distribution over text. For example, environments can be different sources of toxic text, e.g., Reddit posts or tweets. Each environment may exhibit spurious correlations between text and toxicity, such as those that depend on grammar or hashtags, that do not hold outside the environment. We assume these environment labels are known; in Section 4 we propose strategies for building environments from text data.
This data generating process gives way to the _invariant risk minimization_ (IRM) objective [1]:
\[\min_{\theta}\sum_{e=1}^{m}R_{e}(\theta)\quad\text{subject to}\quad\theta\in\operatorname*{arg\,min}_{\theta}R_{e}(\theta),\quad\forall e\in\mathcal{E}, \tag{8}\]
where \(R_{e}(\theta)=\mathbb{E}_{p_{e}(x)p^{*}(y|x)}[-\log p_{\theta}(y|x)]\) is the environment risk and \(\mathcal{E}\) refers to the set of all environments. This objective seeks an invariant predictor, \(p_{\hat{\theta}}(y|x)\), that minimizes the risk within each environment. Among all invariant predictors, the objective calls for the one that minimizes the sum of risks across all environments. If a predictor performs similarly across environments, the intuition goes, it is likely not relying on spurious correlations that only hold for a few environments.
Practical Optimization.In practice, solving Eq. 8 is challenging because each constraint calls an inner optimization [1]. Instead, we find invariant predictors by relying on algorithms developed to approximate Eq. 8. These methods add a regularizer to the empirical risk loss (Eq. 5) to encourage invariance. See App. A for a description of the three methods we employ in the empirical study.
These methods all rely on a hyperparameter, \(\beta\), that balances the tradeoff between empirical risk and the invariance regularizer. The best way to select this hyperparameter remains an open question [19]. In Section 6, we consider two ways of selecting \(\beta\). The first is to use a held-out training environment [19], while the second relies on samples from the deployment distribution.
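As a concrete illustration, one of the regularizers used in the experiments, V-REx, penalizes the variance of the per-environment risks. A minimal sketch is shown below, assuming a PyTorch classifier `model` that returns logits and a list of per-environment mini-batches; it illustrates the objective rather than reproducing the exact training code of Section 6.

```python
import torch
import torch.nn.functional as F

def vrex_loss(model, env_batches, beta):
    """Empirical risk plus a V-REx-style penalty: beta times the variance of
    the per-environment risks. `env_batches` is a list of (inputs, labels)
    tuples, one per training environment."""
    risks = []
    for inputs, labels in env_batches:
        logits = model(inputs)
        risks.append(F.cross_entropy(logits, labels))
    risks = torch.stack(risks)
    return risks.mean() + beta * risks.var(unbiased=False)
```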
## 4 Constructing Multiple Environments
Invariant learning relies on multiple data environments. In many settings, labeled environments are not available. This section describes how to build environments from passively collected data.
Recall that a training environment is a collection of data drawn from an environment distribution,
\[p_{e}(x,y)=p_{e}(x)p^{*}(y|x), \tag{9}\]
where \(e\in\mathcal{E}\) indexes an environment. Thus, the relationship between text \(x\) and attribute \(y\) is preserved across environments, but the distribution \(p_{e}(x)\) may differ.
Not all partitions of data samples drawn from \(p_{\mathcal{D}}(x,y)\) will yield useful environments. For a partition to be effective, environments should be heterogeneous so that the predictor learns invariant relationships. If each data point is its own environment, there will not be enough observations in each environment to learn which relationships are spurious and which are invariant. On the other extreme, if the dataset contains a single environment, there will not be enough environments for a classifier to generalize.
We consider two approaches for creating environments. The first uses existing auxiliary labels to split data into environments. The second is a method we propose for creating environments that does not necessarily rely on auxiliary labels.
Auxiliary Labels.Auxiliary labels can be used to partition data into environments. Though training data may actually come from different sources, practitioners collate them into one large dataset. When each source reflects a different distribution of text with its own spurious correlations, partitioning environments based on these domains may yield an effective split. In toxicity data, these environments can correspond to different media platforms: if grammar is a spurious correlation between text and toxicity on Reddit but not in the _New York Times_ comments section, an invariant predictor across these environments will not rely on grammar.
EviaN.In practice, these spurious correlations are typically unknown or difficult to characterize. In these settings, we introduce an approach called **Environments via Negativa** (EviaN). EviaN seeks to partition data into environments so that spurious correlations are erased within environments. EviaN does not require enumerating spurious correlations; instead, it requires practitioners to specify a transformation that corrupts text by destroying the true relationship between text and attribute and preserving a spurious one. An attribute predictor fit to corrupted data is then relying on only spurious correlations. Environments are created by grouping examples with similar corrupted predictions, with the hope that examples with similar predictions contain similar spurious correlations. Thus, a predictor that is trained to be invariant across environments with different levels of the spurious correlation cannot rely on this relationship in its predictions.
EviaN consists of three steps. In the first step, data is corrupted. Assume a text transformation \(s:\mathcal{X}\rightarrow\mathcal{X}\), with \(\mathcal{X}\) denoting the space of all possible text sequences. A corrupted dataset \(\tilde{\mathcal{D}}=\{(\tilde{x}_{i},y_{i})_{i=1}^{n}\}\) is produced by applying the transformation to each data point,
\[(\tilde{x}_{i},y_{i})=(s(x_{i}),y_{i})\qquad\forall x_{i}\in\mathcal{D}. \tag{10}\]
The transformation \(s(\cdot)\) should be designed to remove the invariant relationship between text and attribute. Thus, the information about \(y\) from \(\tilde{x}\) must pertain only to spurious correlations.
In the second step, a predictor \(g_{\hat{\phi}}\) is fit to model the attribute label \(y\) from the corrupted text. For a loss function \(l\) such as cross-entropy,
\[\hat{\phi}=\operatorname*{arg\,min}_{\phi}\tfrac{1}{n}\sum_{i=1}^{n}l(g_{\phi}( \tilde{x}_{i}),y_{i}). \tag{11}\]
The predicted outcome \(\tilde{y}_{i}=g_{\hat{\phi}}(\tilde{x}_{i})\) provides a low-dimensional representation of the spurious correlations encoded in \(\tilde{x}_{i}\).
Finally, data can be partitioned into multiple environments by thresholding \(\tilde{y}_{i}\). Let \(K\) be the number of desired environments and let \(q_{k}\) denote the \(k/K\) quantile of the predicted outcomes, with \(q_{0}\) the minimum predicted outcome. For \(k\in\{1,...,K\}\), if \(\tilde{y}_{i}\in[q_{k-1},q_{k}]\), an environment can be assigned by setting \(e_{i}=k\). With \(e_{i}\) denoting the environment label of the original data point \((x_{i},y_{i})\), an invariant predictor can be fit across the new environments.
A challenge of applying EviaN in practice is finding suitable data transformations. The optimal data transformation is domain specific. Below, we describe two examples of data corruption schemes.
_Word order scrambling._ A possible domain assumption is that an attribute depends on word order. Consider the two statements: "We shouldn't respect people from minority backgrounds" and "Shouldn't we respect people from minority backgrounds." They have the same set of words, but the former is more likely to be labeled as toxic than the latter. If the word order assumption holds, a valid text transformation is "scrambling" the order of words in a sequence by randomly permuting them.
_Metadata prediction._ In some domains, there may be metadata associated with a piece of text that is predictive of the attribute. For example, in a dataset of social media comments, the ID of individual commenters may be predictive of toxicity. This correlation, however, must be spurious since it does not involve the actual text. While individual metadata labels may not be sufficient to render diverse environment splits, when combined into a single prediction, they can provide more insight into spurious correlations in the data.
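A minimal sketch of the EviaN construction with word-order scrambling as the corruption is shown below. The `fit_predictor` argument stands in for any text-classifier training routine and is assumed to return a callable mapping texts to predicted scores; all names are illustrative.

```python
import numpy as np

def scramble(text, rng):
    """Corruption s(.): randomly permute word order, destroying order-dependent
    signal while preserving bag-of-words cues."""
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

def evian_environments(texts, labels, fit_predictor, n_envs=2, seed=0):
    """Assign an environment label to each example by (1) corrupting the text,
    (2) fitting a predictor g on the corrupted data, and (3) splitting examples
    into quantile bins of the corrupted-text predictions."""
    rng = np.random.default_rng(seed)
    corrupted = [scramble(t, rng) for t in texts]
    g = fit_predictor(corrupted, labels)              # any text classifier
    scores = np.asarray(g(corrupted))                 # \tilde{y}_i
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_envs + 1))
    envs = np.searchsorted(edges, scores, side="right") - 1
    return np.clip(envs, 0, n_envs - 1)               # environment index per example
```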
## 5 Related Work
Controlled Generation.Generating text while controlling for specific attributes is a central problem in NLP [37]. Various approaches include modeling the conditional distribution directly [23, 24, 25, 55]; fine-tuning an existing language model to make use of the observed text and labels [7, 16, 20, 62]; and prompt engineering [8, 58]. The challenge of modeling the conditional distribution directly is that this limits the use of pre-trained models. There is little theoretical understanding of prompting or fine-tuning, which makes it difficult to predict the robustness of models on unseen data.
Similar to this paper, another line of work makes use of filtering-based controlled generation (Eq. 4) and focuses on training a discriminator \(p_{\hat{\theta}}(y\,|\,x)\). The discriminator is then used to modify the model activation [10, 30] or the decoding weights at the token level [10, 26, 30, 53] or simply through rejection sampling [47, 52]. This paper differs from existing work in that we identify a distribution shift problem inherent to prompting that has been overlooked in prior papers.
Toxicity Detection.Recent studies have shown that toxicity and social biases in training data are acquired by large pre-trained language models [3, 16, 28, 34, 40, 42, 59]. There has also been a wealth of work on detecting toxicity in text [2, 17, 56, 57]. This paper contributes to the existing literature by formalizing some of the challenges in the training and deployment of automatic toxicity evaluation.
Invariant Learning.This paper builds on a growing literature on invariant learning, which describes the problem of learning a representation that is generalizable across different distributions [1, 36, 41]. These methods have been applied in diverse settings such as natural science [21, 32, 36], causal estimation [43, 54], computer vision [1, 27], and NLP [48, 15, 49]. This paper complements existing work, as we identify controlled generation as a useful application area for invariant learning.
## 6 Experiments
We empirically investigate distribution shifts in controlled text generation and assess the effectiveness of invariance methods. This paper studies a filtering-based approach to controlled generation, where each method corresponds to a different classifier. Thus, the effectiveness of these methods is determined by the predictive performance of the classifier under distribution shifts. The study includes two settings: an idealized setting involving synthetic data where the distribution shift is known, and another with real world data where a distribution shift is induced but its exact form is unknown.
Training Data and Predictors.For both settings, we use training data from CivilComments [5], a
dataset of comments submitted to an online news platform. The comments are annotated for toxicity and other semantic features such as mention of identity attributes (e.g., race or religion). We compare empirical risk minimization (ERM, Eq. 5) to invariance-based approaches. In the idealized settings, we use one invariance method, V-REx (Eq. 12). In the real world setting, we additionally include MMD [29] and CORAL [46]. We fine-tune BERT [11] on a subset of CivilComments to optimize each objective. Dataset, training, and hyperparameter details are in App. B.
Metrics.To measure predictor performance, we use three classification metrics: accuracy, F1 score, and expected calibration error (ECE). We follow Wald et al. [49] in including ECE, as calibration across multiple environments can imply better out-of-distribution generalization. In Section 6.2, we report loss instead of accuracy, as we found accuracy to be similar across settings.
### Idealized Setting
In the idealized setting, we create a semi-synthetic corpus such that the training and deployment distributions of text differ. The training data contains a spurious correlation between label and text that does not hold in the deployment distribution. Crucially, we construct the spurious correlation so that we know its form and can control its strength. Within this idealized setting, we include two experiments that induce different spurious correlations: one involving a special token concatenated to each text sequence and the other based on manipulating the text's grammatical correctness. In both settings, the training data is resampled to balance the classes and true labels are flipped for 25% of examples so the spurious correlation has more signal.
Special Token.In the special token experiment, we begin by using real text and toxicity labels. Then, a special token is noisily sampled based on the toxicity label and concatenated to the initial text. Data is split in a way such that the strength of the relationship between the special token and output differs across environments. Specifically, let \(y\in\{-1,1\}\) be the toxicity label and define \(z\in\{-1,1\}\) to be the spurious feature of text, i.e., the special token. An example in each training environment is sampled as: \(x,y\sim p_{D}(x,y)\) and \(z=y\cdot s\), where \(s\sim\text{Rad}(\pi)\) is a random variable that is \(1\) with probability \(\pi\) and \(-1\) with probability \(1-\pi\). A special token indicating \(z\) is then prepended to each text sequence. Each environment is parameterized by the value of \(\pi\in[0,1]\), which controls the strength of the correlation between \(y\) and \(z\). We construct two equal-size training environments with \(\pi_{1}=0.9\) in the first environment and \(\pi_{2}=0.99\) in the second, resulting in \(\text{corr}(y,z)=0.72\) and \(\text{corr}(y,z)=0.88\), respectively. We evaluate on multiple test environments with different values of \(\pi\). Figure 1 plots test environment \(\text{corr}(y,z)\) against test loss and other metrics.
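A minimal sketch of constructing one such training environment is shown below; the special-token strings are placeholders, and the class balancing and 25% label flipping described above are omitted for brevity.

```python
import numpy as np

def make_special_token_env(texts, labels, pi, seed=0):
    """Build one environment: z = y * s with s ~ Rad(pi), then prepend a token
    indicating z to each text. `labels` are toxicity labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    y = np.asarray(labels)
    s = np.where(rng.random(len(y)) < pi, 1, -1)     # s ~ Rad(pi)
    z = y * s                                        # spurious feature
    tokens = np.where(z == 1, "<POS>", "<NEG>")      # placeholder special tokens
    new_texts = [f"{tok} {txt}" for tok, txt in zip(tokens, texts)]
    return new_texts, y, z

# two training environments as in the experiment:
# env1 = make_special_token_env(texts1, labels1, pi=0.9)
# env2 = make_special_token_env(texts2, labels2, pi=0.99)
```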
Grammar.In the other idealized experiment, we manipulate the grammatical correctness of text so it is spuriously correlated with toxicity. To induce a correlation between grammar and toxicity, we prompt GPT-3 to rewrite comments by inserting grammatical mistakes; more details on the generated dataset are in App. B.2. In the training dataset, toxic comments are rewritten to be less grammatically correct, while in the deployment dataset, the non-toxic comments are rewritten. We construct training data environments for the invariance-based approaches using grammatical correctness of the rewritten comments. Specifically, we compute the number of errors for each comment (as given by the open-source grammar checker LanguageTool). We then partition training environments based on whether each example's number of errors is above or below the median. As a baseline, we randomly assign environments and report the best hyperparameter. The results are in Table 1.

Figure 1: Invariant predictors are more robust when the relationship between a spurious feature and the label changes. The dotted vertical line is the correlation level in the training data (i.e., a setting with no distribution shift).
In these idealized settings, the invariance methods achieve better performance across evaluation metrics in the presence of distribution shifts. Additionally, we find that the best invariance regularizer weight depends on the deployment distribution. As shown in Fig. 1, when a significant shift in the distribution occurs, although all predictors become worse at generalizing, increasing the strength of the invariance regularizer leads to improved performance. When the distribution shift is not significant, the choice of invariance regularizer weight has less impact on the model performance. This is congruent with the findings in Dranker et al. [12].
### Real World Setting
We now move to a real world setting where the distribution shift is unknown. The training data for these experiments consists of a random subset of CivilComments data, while the deployment data consists of text generated by GPT-3. Unlike the idealized experiments, where training and deployment data were constructed based on a known spurious correlation, here we do not artificially create data based on a spurious correlation. Instead, the deployment data is constructed by generating outputs from varying prompts.
#### 6.2.1 Experiment Setup
We create test data by using GPT-3 to sample potentially toxic text from prompts. We use two datasets for prompts: one containing naturally occurring sentences from the web, and another constructed using a template. For both datasets, we obtain automated and human toxicity labels. The automated labels are from Perspective API, a widely used toxicity classifier. The human labels are from Surge AI annotators. Due to budget constraints, we randomly sample half the examples from each dataset for labeling and evaluation. Additional generation details, examples of generated text, and annotator instructions are in App. B.3.
RealToxicityPrompts.In the first evaluation dataset, we prompt GPT-3 using prompts from RealToxicityPrompts (RTP) [16]. RTP is a dataset with \(100\)K sentence-level prompts derived from Reddit outbound links. Each prompt is paired with a toxicity score summarizing the toxicity of the language in the prompt (as computed by Perspective API). In this study, we use \(50\) prompts from each quartile of toxicity score and generate \(5\) continuations from each prompt using GPT-3.
Personification.In the second evaluation dataset, we use a prompt template to generate potentially toxic continuations. Specifically, we use the following template: "In a heated conversation about [context], the [profession] made the following creative, but [modifier] statement," where \(25\) professions are randomly selected from the list of professions in Zhao et al. [60], context is selected from {relationships; politics; sports; religion}, and modifier is selected from {controversial; hateful, offensive, and aggressive}. We use each possible template combination to construct prompts and generate \(5\) outputs per prompt using GPT-3.
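A minimal sketch of enumerating these prompts is shown below; the profession list here is a placeholder, whereas the experiments sample 25 professions from Zhao et al. [60].

```python
from itertools import product

TEMPLATE = ("In a heated conversation about {context}, the {profession} made "
            "the following creative, but {modifier} statement")

contexts = ["relationships", "politics", "sports", "religion"]
modifiers = ["controversial", "hateful, offensive, and aggressive"]
professions = ["teacher", "nurse", "plumber"]  # placeholder; 25 are used in the experiments

prompts = [TEMPLATE.format(context=c, profession=p, modifier=m)
           for c, p, m in product(contexts, professions, modifiers)]
# each prompt is then used to draw 5 continuations from GPT-3
```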
Comparison of automated and human labels.We calculate the agreement between automatic and human toxicity labels. We find that for RTP, the agreement between Perspective API and human annotators, as measured by Cohen's Kappa, is 0.36, while it is 0.15 for the personification dataset.
\begin{table}
\begin{tabular}{l c|c c c} \hline \hline Env & \(\beta\) & Acc \(\uparrow\) & F1 \(\uparrow\) & ECE \(\downarrow\) \\ \hline ERM & – & 0.06 & 0.05 & 0.68 \\ Random & 100 & 0.08 & 0.05 & 0.63 \\ \hline Grammar & 10 & 0.09 & 0.10 & 0.63 \\ Grammar & 20 & 0.12 & 0.17 & 0.59 \\ Grammar & 50 & 0.12 & 0.10 & **0.51** \\ Grammar & 100 & **0.16** & **0.21** & **0.51** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Increasing the invariance regularizer weight improves model generalization when there is a significant shift in distribution. The table reports the out-of-distribution model performance for ERM and invariant predictors with different regularizer strengths.
Figure 2: Different personification prompts result in different distributions of text. The figure shows the deployment loss of ERM and the best invariant predictor for each test environment. The invariant predictor has a more stable performance across test environments.
This difference reinforces the notion that these two datasets contain different distributions of text.
If the human labels are more accurate than automatic ones, an increase in disagreement can be interpreted as a decrease in Perspective API's performance in predicting the correct toxicity label. Several factors could contribute to this difference. One possible reason is that the RTP dataset may align more closely with the deployment setting of Perspective API. Perspective API is specifically designed to evaluate text from online forums, and the RTP dataset contains prompts derived from Reddit outbound links. In contrast, the personification dataset is generated using a set of hand-curated prompts, and the generated text may not necessarily resemble the type of text commonly found in online forums.
#### 6.2.2 Evaluation
We now evaluate the effectiveness of invariance methods in mitigating unknown distribution shifts. Since the form of the spurious correlation is unknown, it is unclear how to effectively partition training data into environments. We consider partitioning based on metadata and using EviaN to create environments (Section 4). We consider two metadata features: comment created date and the comment's number of identity attribute mentions ("identity attribute sum"). For EviaN, we consider two different ways of corrupting the data. The first is word order scrambling; the second retains only the metadata. We split the data into two environments based on the values of the predictions. As a baseline, we also split the data into two random environments.
For the invariance regularizer strength, we consider \(\beta=1,5,10\) for V-REx, \(\beta=0.25,0.5,1\) for MMD, and \(\beta=0.5,1,5,10\) for CORAL. For each dataset, invariance method, and environment split, we consider two ways of selecting \(\beta\). The first is based on loss from leave-one-environment-out validation [19]. Specifically, only for selecting \(\beta\), we split the data into three environments by dividing the training data into terciles and holding out the middle tercile. The second is selecting hyperparameters based on the F1 score computed on validation samples drawn from the deployment distribution. This approach reveals oracle results that can only be achieved when the deployment distribution is known a priori; however, it aligns with the methodology used in existing invariance literature [19]. All evaluations are against human labels.
Different prompts induce different distributions of text.We use the personification dataset to illustrate that different prompts induce different distributions of text, even if the prompts differ by only a few phrases. Figure 2 shows the loss of ERM and an invariant predictor across the deployment distributions. The loss for ERM varies significantly across distributions, while the loss for the invariant predictor is more stable.
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & & \multicolumn{4}{c}{**RealToxicityPrompts**} & \multicolumn{2}{c}{**Personification**} \\ Model & Environment & \(\beta\) & Loss \(\downarrow\) & F1 \(\uparrow\) & ECE \(\downarrow\) & Loss \(\downarrow\) & F1 \(\uparrow\) & ECE \(\downarrow\) \\ \hline ERM & – & – & 0.64 (.01) & 0.54 (.02) & 0.10 (.01) & 0.99 (.06) & 0.16 (.02) & 0.31 (.01) \\ \hline \multirow{4}{*}{V-REx} & Random & 10 & 0.64 (.01) & 0.53 (.01) & 0.11 (.00) & 0.99 (.04) & 0.17 (.01) & 0.31 (.00) \\ & Identity attribute sum & 5 & 0.64 (.01) & 0.54 (.02) & 0.11 (.01) & 0.99 (.05) & 0.18 (.01) & 0.31 (.01) \\ & Created date & 5 & 0.65 (.01) & 0.53 (.03) & 0.11 (.00) & 1.02 (.03) & 0.17 (.01) & 0.32 (.00) \\ & EviAN – Scramble & 10 & 0.67 (.01) & 0.54 (.01) & 0.12 (.02) & 1.08 (.05) & 0.19 (.01) & 0.32 (.01) \\ & EviAN – Metadata & 1 & 0.63 (.01) & 0.57 (.03) & 0.09 (.00) & 1.01 (.05) & 0.16 (.02) & 0.31 (.01) \\ \hline \multirow{4}{*}{MMD} & Random & 0.25 & 0.65 (.01) & 0.55 (.01) & 0.11 (.01) & 1.04 (.06) & 0.17 (.01) & 0.32 (.01) \\ & Identity attribute sum & 0.5 & 0.65 (.01) & 0.55 (.02) & 0.11 (.01) & 0.92 (.02) & 0.18 (.01) & 0.30 (.00) \\ & Created date & 0.5 & 0.65 (.01) & 0.53 (.03) & 0.11 (.00) & 1.03 (.05) & 0.16 (.04) & 0.32 (.01) \\ & EviAN – Scramble & 0.25 & 0.67 (.01) & 0.55 (.02) & 0.12 (.01) & 1.05 (.03) & 0.17 (.02) & 0.32 (.00) \\ & EviAN – Metadata & 0.5 & 0.64 (.01) & 0.52 (.01) & 0.11 (.01) & 0.89 (.01) & 0.17 (.01) & 0.29 (.00) \\ \hline \multirow{4}{*}{CORAL} & Random & 0.5 & 0.65 (.02) & 0.53 (.05) & 0.11 (.01) & 1.04 (.06) & 0.16 (.03) & 0.32 (.01) \\ & Identity attribute sum & 1 & 0.66 (.01) & 0.56 (.01) & 0.12 (.01) & 0.98 (.04) & 0.19 (.02) & 0.31 (.01) \\ \cline{1-1} & Created date & 0.5 & 0.65 (.01) & 0.55 (.01) & 0.11 (.01) & 1.01 (.04) & 0.18 (.01) & 0.31 (.01) \\ \cline{1-1} & EviAN – Scramble & 10 & 0.67 (.01) & 0.53 (.01) & 0.13 (.01) & 1.02 (.06) & 0.17 (.02) & 0.31 (.01) \\ \cline{1-1} & EviAN – Metadata & 0.5 & 0.65 (.02) & 0.53 (.02) & 0.11 (.01) & 0.99 (.08) & 0.18 (.02) & 0.31 (.01) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of predictors on the GPT-3 prompted datasets using leave-one-environment-out validation to select \(\beta\). In this setting, none of the invariance methods studied improve significantly on ERM. We report the mean of five runs with different random seeds, with standard deviations in parentheses.
Analysis on leave-one-environment-out validation.Table 2 reports the performance of ERM and the invariant predictors trained with different algorithms and environment splits. The regularizer strength \(\beta\) is selected based on leave-one-environment-out validation. The performance of invariance methods varies depending on the environment split, dataset, and regularizer strength. For both datasets, we do not see significant improvement of invariance methods over ERM.
The lack of improvement in Table 2 is unsurprising since the invariant predictor is validated on a training environment. This validation process favors predictors that are likely to generalize well to the held-out training environment. However, in this setup, the training and deployment environments are significantly different, making it an especially challenging generalization task.
Analysis on oracle validation.We now consider the setting where we have access to samples from a subset of the deployment distribution (this sample differs from the one used for evaluation). Table 3 reports the performance of ERM and the invariant predictors using oracle validation.
As expected, random environment partitions do not lead to improved out-of-distribution generalization compared to ERM. This finding is consistent with the theory that invariance methods should only show improvement when the environment split is informed. For RTP, we do not observe a statistically significant improvement from the use of invariance methods. In contrast, for personification, the V-REx (EviaN - Metadata) method demonstrates a significant improvement over alternative baselines. This contrast in performance is in line with the fact that personification exhibits a more noticeable distribution shift compared to RTP.
The effectiveness of invariance methods in the real world setting depends on the environment split, invariance algorithm, and regularizer strength. When relying on the training data for model selection and hyperparameter tuning (without access to the deployment distribution), we do not find a significant improvement over ERM. However, when there is data from the deployment distribution that can guide the selection of hyperparameters, we find that invariance methods can improve out-of-distribution generation.
These findings highlight the promise and challenges of using invariance methods to address distribution shift in controlled generation. However, there is currently no turnkey solution for selecting an appropriate invariance method or set of hyperparameters. Future research on model selection is needed to improve the viability of invariance methods for real world distribution shifts.
\begin{table}
\begin{tabular}{l l l l l l|l l l l} \hline \hline & & \multicolumn{4}{c}{**RealToxicityPrompts**} & \multicolumn{4}{c}{**Personification**} \\ Model & Environment & \(\beta\) & Loss \(\downarrow\) & F1 \(\uparrow\) & ECE \(\downarrow\) & \(\beta\) & Loss \(\downarrow\) & F1 \(\uparrow\) & ECE \(\downarrow\) \\ \hline ERM & – & – & 0.65 (.02) & 0.53 (.03) & 0.12 (.01) & – & 1.02 (.06) & 0.14 (.03) & 0.32 (.01) \\ \hline \multirow{4}{*}{V-REx} & Random & 5 & 0.65 (.01) & 0.53 (.01) & 0.12 (.01) & 1 & 1.04 (.05) & 0.15 (.02) & 0.32 (.00) \\ & Identity attribute sum & 10 & 0.61 (.01) & 0.57 (.02) & 0.09 (.01) & 10 & 0.88 (.07) & 0.22 (.04) & 0.29 (.01) \\ & Created date & 1 & 0.65 (.01) & 0.53 (.04) & 0.12 (.01) & 1 & 1.07 (.04) & 0.15 (.03) & 0.33 (.01) \\ & EviN – Scramble & 5 & 0.66 (.02) & 0.53 (.02) & 0.12 (.01) & 10 & 1.11 (.05) & 0.17 (.02) & 0.32 (.01) \\ & EviN – Metadata & 5 & 0.62 (.01) & 0.56 (.02) & 0.09 (.01) & 10 & 0.69 (.04) & 0.18 (.11) & 0.21 (.02) \\ \hline \multirow{4}{*}{MMD} & Random & 0.25 & 0.65 (.01) & 0.54 (.01) & 0.13 (.01) & 0.25 & 1.07 (.06) & 0.15 (.02) & 0.33 (.01) \\ & Identity attribute sum & 0.5 & 0.65 (.01) & 0.54 (.01) & 0.12 (.01) & 1 & 0.89 (.02) & 0.16 (.02) & 0.29 (.00) \\ & Created date & 0.25 & 0.66 (.01) & 0.54 (.03) & 0.13 (.01) & 0.25 & 1.05 (.05) & 0.17 (.03) & 0.32 (.01) \\ & EviN – Scramble & 0.25 & 0.67 (.01) & 0.53 (.02) & 0.13 (.01) & 0.25 & 1.08 (.04) & 0.15 (.02) & 0.33 (.00) \\ & EviN – Metadata & 0.25 & 0.65 (.02) & 0.52 (.02) & 0.13 (.01) & 0.25 & 0.95 (.06) & 0.16 (.02) & 0.31 (.01) \\ \hline \multirow{4}{*}{CORAL} & Random & 5 & 0.66 (.02) & 0.53 (.01) & 0.13 (.01) & 5 & 1.05 (.08) & 0.15 (.02) & 0.32 (.01) \\ & Identity attribute sum & 1 & 0.66 (.01) & 0.54 (.01) & 0.13 (.01) & 1 & 1.01 (.04) & 0.17 (.02) & 0.32 (.01) \\ \cline{1-1} & Created date & 0.5 & 0.65 (.01) & 0.54 (.02) & 0.12 (.01) & 0.5 & 1.04 (.04) & 0.17 (.02) & 0.32 (.01) \\ \cline{1-1} & EviN – Scramble & 5 & 0.68 (.02) & 0.52 (.01) & 0.14 (.01) & 1 & 1.10 (.11) & 0.15 (.03) & 0.33 (.01) \\ \cline{1-1} & EviN – Metadata & 0.5 & 0.65 (.02) & 0.52 (.03) & 0.12 (.01) & 5 & 0.90 (.03) & 0.15 (.02) & 0.30 (.01) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of predictors on the GPT-3 prompted datasets using an oracle to select \(\beta\). The invariance regularizer strength is selected based on a validation set that is from the same distribution as the deployment set. EviaN – Metadata demonstrates a significant improvement over ERM in the personification dataset. We report the mean of five runs with different random seeds, with standard deviations in parentheses.
## 7 Limitations & Potential Risks
There are two main limitations to this work. First, we focus on the "filtering" approach to controlled generation. While this formulation makes the target distribution of generations explicit, rejection sampling can be computationally expensive in practice. A promising area of future research is the application of these invariance principles to the design of large language models. Second, achieving true invariance, i.e., generalizing to any arbitrary distribution of text, is a challenging open problem. The purpose of this paper is not to solve this problem. Rather, we illustrate that controlled generation is an important application area for invariance methods. An exciting area of future work is to use prompted language models to construct well-defined distribution shift benchmarks for domain generalization methods.
Controlled text generation has the potential to have large impacts on society, both positive and negative. One potential source of risk is misuse. Although we focus on the detection and removal of toxicity, the method we developed can also be applied to the generation of dangerous and toxic content. In addition, this paper does not address other biases (such as gender or social bias) that may already be present in language models. The use of a toxicity filter may compound the problem of decreased diversity in generated text if there is a correlation between social biases and toxicity.
## 8 Acknowledgements
We thank Tiffany Cai, Nino Scherrer, and the reviewers for their thoughtful comments and suggestions, which have greatly improved the paper. This work is supported by NSF grant IIS 2127869, ONR grants N00014-17-1-2131 and N00014-15-1-2209, the Simons Foundation, and Open Philanthropy.
Controlled generation refers to the problem of creating text that contains stylistic or semantic attributes of interest. Many approaches reduce this problem to training a model that predicts the desired attribute. For example, researchers deploying a large language model to generate non-toxic content use a toxicity classifier to filter generated text. In practice, the generated text to be classified is determined by user prompts, whose distributions vary widely. This paper shows that controlled generation performs poorly when the distribution of prompted text differs from the distribution on which the predictor was trained. To address this problem, we cast controlled generation under distribution shift as an invariant learning problem. The most effective predictor should maintain stable performance across multiple text environments. To solve this problem, we analyze the characteristics of distribution shift, propose natural solutions, and select natural environments |
2309.08341 | Road Boundary Estimation Using Sparse Automotive Radar Inputs | This paper presents a new approach to detecting road boundaries based on
sparse radar signals. We model the roadway using a homogeneous model and derive
its conditional predictive model under known radar motion. Using the
conditional predictive model and model radar points using a Dirichlet Process
Mixture Model (DPMM), we employ Mean Field Variational Inference (MFVI) to
derive an unconditional road boundary model distribution. In order to generate
initial candidate solutions for the MFVI, we develop a custom Random Sample and
Consensus (RANSAC) variant to propose unseen model instances as candidate road
boundaries. For each radar point cloud we alternate the MFVI and RANSAC
proposal steps until convergence to generate the best estimate of all candidate
models. We select the candidate model with the minimum lateral distance to the
radar on each side as the estimates of the left and right boundaries. We have
implemented the proposed algorithm in C++. We have tested the algorithm and it
has shown satisfactory results. More specifically, the mean lane boundary
estimation error is not more than 11.0 cm. | Aaron Kingery, Dezhen Song | 2023-09-15T11:52:58 | http://arxiv.org/abs/2309.08341v1 | # Road Boundary Estimation Using Sparse Automotive Radar Inputs
###### Abstract
This paper presents a new approach to detecting road boundaries based on sparse radar signals. We model the roadway using a homogeneous model and derive its conditional predictive model under known radar motion. Using the conditional predictive model and model radar points using a Dirichlet Process Mixture Model (DPMM), we employ Mean Field Variational Inference (MFVI) to derive an unconditional road boundary model distribution. In order to generate initial candidate solutions for the MFVI, we develop a custom Random Sample and Consensus (RANSAC) variant to propose unseen model instances as candidate road boundaries. For each radar point cloud we alternate the MFVI and RANSAC proposal steps until convergence to generate the best estimate of all candidate models. We select the candidate model with the minimum lateral distance to the radar on each side as the estimates of the left and right boundaries. We have implemented the proposed algorithm in C++. We have tested the algorithm and it has shown satisfactory results. More specifically, the mean lane boundary estimation error is not more than 11.0 cm.
## I Introduction
Detection of road boundaries is important for Advanced Driver Assistance Systems (ADAS) and autonomous driving. Significant efforts have been dedicated toward their robust and accurate detection, including proposals to integrate radar reflectors into roadway infrastructure to make road boundaries more visible to the sensor [1][2]. Vision and lidar approaches have been the popular sensor choices, including camera-radar fusion [3][4] and lidar-radar fusion [5], of which [6] provides a survey. However, vision and lidar are easily affected by severe weather conditions, leading to poor performance. On the other hand, radars are less sensitive to environmental conditions. Developing a radar-based approach complements existing vision and lidar-based approaches and will offer a backup solution for vehicles.
The signals of a typical automotive radar are quite sparse after target detection and hence are typically relegated to the detection of dynamic objects such as other vehicles, bicycles, and pedestrians. In this paper, we propose a new method to detect road boundaries based on sparse radar signals. By sparse signals, we refer to the filtered output of common automotive radars instead of the raw reflectivity images that are only available to radar developers. Fig. 1 illustrates the radar inputs and our problem.
We model the roadway using a homogeneous four-element parameterized arc model and derive its conditional predictive form under known radar motion. Using the conditional predictive model and model radar points using a Dirichlet Process Mixture Model (DPMM), we employ Mean Field Variational Inference (MFVI) to derive an unconditional road boundary model distribution. In order to generate initial candidate solutions for the MFVI, we develop a custom Random Sample and Consensus (RANSAC) variant to propose unseen model instances as candidate road boundaries. For each radar point cloud we alternate the MFVI and RANSAC proposal steps until convergence to generate the best estimate of all candidate models. We select the candidate model with the minimum lateral distance to the radar on each side as the estimates of the left and right boundaries.
We have implemented the proposed algorithm in C++ as a module in the Robot Operating System (ROS). We tested the algorithm on data collected from a Continental ARS430 automotive Doppler radar and it has shown satisfactory results. More specifically, the mean lane boundary estimation error is not more than 11.0 cm, which is reasonable when considering radar wavelength.
## II Related Work
In general, when a radar is used in robot/vehicle perception, there are, broadly, two forms of output to consider: radar reflectivity images and radar target detection point clouds.
Radar reflectivity images are analogous to a camera image but with cells instead of pixels and with cell dimensions typically including at least range and azimuth. For road boundary estimation in this case, Nikolova and Hero [7] model the boundaries in the polar coordinate space of the radar reflectivity image and identify them as the edges of continuous, homogeneous regions in the image with constant width and curvature. Kaliyaperumal et al. [8] propose a deformable model for the boundaries, a likelihood function to match the model with the edges in the image, and use the Metropolis-Hastings algorithm with simulated annealing to find the optimal match. Guo et al. [9] present the stripe Hough transform which is capable of detecting lines in the reflectivity image when there are orthogonal deviations from the boundary. Werber et al. [10] estimate straight line landmarks from the reflectivity image and associate them over time for vehicle localization.

Fig. 1: An example of the radar road boundary estimation problem. On the left, the scenario is shown from the camera perspective, with results from the radar frame projected onto the camera image. On the right, a top down orthographic perspective of the scene. Grey ellipses are radar target detections, and the blue and red curves are the left and right road boundaries, estimated from the radar, respectively. The grid has 1 meter spacing.
For most commercially available automotive radars, like the one used for this work, the reflectivity image is not an available output; instead, the image is downsampled into a point cloud of target detections generally representing points of peak reflectivity in the image. This radar point cloud data, while similar in format to lidar data, is comparatively sparse, noisy, and unstructured. As a consequence, lidar-based boundary detection approaches are generally not applicable to radar point clouds. A common technique to overcome these issues for radar point clouds is to temporally 'stack' the point clouds by transforming the points into a static coordinate frame using some known localization (e.g. GPS), merging them, and then estimating using this denser stacked point cloud. Lundquist et al. [11] present two such approaches, estimation from an occupancy grid map and estimation from a Quadratic Program (QP) with prior sorting of target detections into left and right sets and outlier rejection. Xu et al. [12] generate an occupancy grid from temporally stacked radar observations and combine edge detection and RANSAC in order to identify linear boundaries.
A significant weakness of the stacking-based approach is that it relies upon robust and accurate localization over the stacking window. When the localization is not accurate, especially when traversing turns and corners, the stacked radar point cloud often exhibits a distinct 'smearing' of the points which can degrade or destroy the estimation quality. For this reason, the algorithm we propose tracks the road boundaries over time, applying new observations to update the existing estimate in a Bayesian manner, as opposed to the stacking-based approach. Among such approaches, Lee et al. [13] provide an instantaneous estimate of the road curvature and Lundquist et al. [11] track points and quadratic curve segments along the roadway as extended objects via an Extended Kalman Filter (EKF), but neither approach estimates the road boundaries. Lundquist et al. [14], as a prior step towards radar road intensity mapping, perform a K-means regression clustering of cubic curves in highway scenarios. Compared with this work, our algorithm directly estimates the primary road boundaries themselves, allows a dynamic number of candidate tracks, and makes a probabilistic, as opposed to binary, assignment of target detections to tracks, giving the algorithm robustness in the presence of the obfuscating clutter common in non-highway scenarios.
## III Problem Definition
Consider that we have a vehicle equipped with automotive radar traveling along a roadway and we would like to estimate the location of the left and right road boundaries relative to the sensor at the time \(t\) of each measurement. Let us define \(\{\mathcal{D}_{t}\}\) as the dynamic radar ego coordinate system at time \(t\) that moves with the radar so that the \(X\)-axis is forward and the \(Y\)-axis is to the right.
### _Assumptions_
1. The radar is mounted such that its forward direction is parallel with the vehicle's longitudinal axis.
2. The radar is positioned such that it is located between the left and right road boundaries.
3. The primary left and right road boundaries within the field of view (i.e. ignoring discontinuities at intersections, driveways, etc.) can be approximated as a circular arc or line in the Cartesian coordinate space.
4. The motion of the radar between two subsequent time steps is known or estimated. For this purpose, we use the instantaneous radar ego-velocity estimation [15], however, other odometry modalities (e.g. inertial) should also be sufficient.
### _Sensor Model_
We will follow the sensor model in our previous work with these radars [15]. For completeness, we reiterate it here. Our sensor is a 77GHz automotive Doppler radar which periodically transmits a pulse and reports a set of the received reflections as estimated target detections. We use \(\mathbf{Z}_{t}=\{\mathbf{z}_{t,1},\ldots,\mathbf{z}_{t,n}\}\) to denote the set of target detections received as a datagram from a transmitted radar pulse. We consider each detection \(\mathbf{z}_{t,i}\) as a noisy observation of some ground truth measurement source \(\mathbf{s}_{t,i}\). Each target detection is reported in polar coordinates in the radar coordinate frame \(\{\mathcal{D}_{t}\}\) and consists of the measurements.
\[\mathbf{z}_{t,i}=\begin{bmatrix}r_{t,i}&\theta_{t,i}\end{bmatrix}^{\mathsf{ T}}=\mathbf{s}_{t,i}+\mathbf{v}_{t,i}, \tag{1}\]
where, \(i\in\{1,...,N\}\) indicates the target index, \(r_{t,i}\in[0,r_{\max}]\) is the \(i\)-th target range, \(\theta_{t,i}\in[\theta_{\min},\theta_{\max}]\) is the \(i\)-th target azimuth where \(\theta_{t,i}=0\) indicates the forward direction, \(\theta_{t,i}<0\) indicates a target to the left, and \(\theta_{t,i}>0\) indicates a target to the right, and \(\mathbf{v}_{t,i}\) is the observation noise such that \(\mathbf{v}_{t,i}\sim\text{Normal}(\mathbf{0},\boldsymbol{\Sigma}_{t,i})\) where \(\boldsymbol{\Sigma}_{t,i}=\begin{bmatrix}\sigma_{r,i}^{2}&0\\ 0&\sigma_{\theta,i}^{2}\end{bmatrix}.\) The Cartesian parameterization of the target position in \(\{\mathcal{D}_{t}\}\) is defined in the usual manner, \(\begin{bmatrix}x_{t,i}&y_{t,i}\end{bmatrix}^{\mathsf{T}}=\begin{bmatrix}r_{t,i }\cos(\theta_{t,i})&r_{t,i}\sin(\theta_{t,i})\end{bmatrix}^{\mathsf{T}}.\)
### _Road Boundary Model_
In the Cartesian coordinate space, roadways are generally constructed as a series of circular arcs connected by linear segments, which may be considered as circular arcs with infinite radius. We desire to use a single model which can represent both, and, for this reason, we make use of the quadratic representation of a conic with constant curvature,
\[\beta_{1}(x^{2}+y^{2})+\beta_{2}x+\beta_{3}y+\beta_{4}=0, \tag{2}\]
where \(\beta_{1}\neq 0\) represents a circle and \(\beta_{1}=0\) a line.
As the radar makes observations in polar coordinates, it is desirable to express the model in polar coordinates as well so we have,
\[\beta_{1}r^{2}+\beta_{2}r\cos(\theta)+\beta_{3}r\sin(\theta)+\beta_{4}=0. \tag{3}\]
From this we define the road boundary as the set of points
\[\{\mathbf{s}\mid\boldsymbol{\beta}^{\mathsf{T}}\boldsymbol{\phi}(\mathbf{s})=0, \mathbf{s}\in[0,r_{\max}]\times[\theta_{\min},\theta_{\max}]\} \tag{4}\]
where \(\mathbf{s}=\begin{bmatrix}r&\theta\end{bmatrix}^{\mathsf{T}}\) is a point expressed in polar coordinates, \(\boldsymbol{\phi}(\mathbf{s})=\begin{bmatrix}r^{2}&r\cos(\theta)&r\sin(\theta )&1\end{bmatrix}^{\mathsf{T}}\) are the model basis functions, and \(\boldsymbol{\beta}=\begin{bmatrix}\beta_{1}&\beta_{2}&\beta_{3}&\beta_{4} \end{bmatrix}^{\mathsf{T}}\) are the model coefficients. We note that the model is homogeneous, i.e. \(\boldsymbol{\beta}\equiv c\boldsymbol{\beta}\), for all \(c\in\mathbb{R}\).
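For concreteness, a minimal NumPy sketch of the basis functions \(\boldsymbol{\phi}\) and the algebraic residual \(\boldsymbol{\beta}^{\mathsf{T}}\boldsymbol{\phi}(\mathbf{s})\) is shown below; our implementation is in C++, so this Python sketch is purely illustrative.

```python
import numpy as np

def phi(r, theta):
    """Basis functions phi(s) = [r^2, r*cos(theta), r*sin(theta), 1]^T."""
    return np.array([r * r, r * np.cos(theta), r * np.sin(theta), 1.0])

def boundary_residual(beta, r, theta):
    """Algebraic residual beta^T phi(s); zero when the point lies on the boundary model."""
    return float(np.dot(beta, phi(r, theta)))

# example: a straight boundary 3 m to the right of the radar, y - 3 = 0
beta_line = np.array([0.0, 0.0, 1.0, -3.0])
print(boundary_residual(beta_line, r=5.0, theta=np.arcsin(3.0 / 5.0)))  # approximately 0
```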
### _Problem Definition_
Given a series of radar datagrams \(\mathbf{Z}_{0},\ldots,\mathbf{Z}_{t},\) we will estimate the left and right road boundaries at time \(t\), \(\boldsymbol{\beta}_{t,l}\) and \(\boldsymbol{\beta}_{t,r},\) as the coefficients of our road boundary model.
## IV Algorithm
Our algorithm operates in three primary phases: (i) model prediction from the known radar motion; (ii) model estimation, which alternates between updating candidate models from the radar observations and proposing new candidates; and (iii) algorithm termination, where we output the boundary estimates and prepare for subsequent radar observations. We provide a graphical overview of the flow of our algorithm in Fig. 2.
### _The Roadway as a Dirichlet Process Mixture Model_
As described in III-B, we model each target detection \(\mathbf{z}_{t,i}\) as the detection of some true source point \(\mathbf{s}_{t,i}\) with zero-mean Gaussian observation noise \(\mathbf{v}_{t,i}\). We model the radar points as being generated by a DPMM, where the points are classified into two groups: outliers whose source is not necessarily modeled and inliers which are generated by a candidate model. Therefore, we will have \(K+1\) models where model index \(k=0\) indicates the outlier model and \(k\in\{1,\ldots,K\}\) indicates a candidate model. Each observed target \(\mathbf{z}_{t,i}\) is associated with the latent variable \(c_{t,i}\sim\text{Categorical}(\mathbf{\pi}_{t})\) representing which model produced the target detection, where \(\boldsymbol{\pi}_{t}=\begin{bmatrix}\pi_{t,0}&\cdots&\pi_{t,K}\end{bmatrix} ^{\mathsf{T}}\sim\text{Dirichlet}(\boldsymbol{\alpha}_{t})\) are the mixture weights and \(\boldsymbol{\alpha}_{t}=\begin{bmatrix}\alpha_{t,0}&\cdots&\alpha_{t,K}\end{bmatrix} ^{\mathsf{T}}\) is the prior mixture concentration.
Ideally, one would represent the source point as its own latent variable described by a Spatial Distribution Model (SDM) on the source object. This is difficult in practice as such an SDM is heavily dependent upon the geometry of the scene and would need to account for occlusions, object shape, etc. Additionally, estimating the source point as a latent variable significantly complicates posterior inference unless the SDM is in the exponential family. Instead, we rely upon a Greedy Association Model (GAM) as in [16], and we will show that this produces a very straightforward algorithm for inference. The result is that we convert \(\mathbf{z}_{t,i}\) into a pseudo-observation of the model \(\boldsymbol{\beta}_{t,k}\) via the implicit shape function of our model as in [17]. Propagating the measurement uncertainty, we can say that \(h(\mathbf{z}_{t,i},\boldsymbol{\beta}_{t,k})\sim\text{Normal}(0,\sigma_{ik}^ {2})\) where \(\sigma_{ik}^{2}=\boldsymbol{\beta}_{t,k}^{\mathsf{T}}\boldsymbol{\Phi}(\mathbf{ z}_{t,i})\boldsymbol{\Sigma}_{t,i}\boldsymbol{\Phi}(\mathbf{z}_{t,i})^{ \mathsf{T}}\boldsymbol{\beta}_{t,k}\) with \(\boldsymbol{\Phi}(\mathbf{z}_{t,i})\) being the Jacobian of \(\boldsymbol{\phi}(\mathbf{z}_{t,i})\). We note that this GAM results in a biased estimation for curved shapes for which [16] identifies and prescribes a correction. However, in the case of road boundary estimation with radar, the measurement uncertainty is not typically large enough relative to the curvature for this bias to have a significant effect. Thus, we consider \(E[h(\mathbf{z}_{t,i},\boldsymbol{\beta}_{t,k})]=0\) to be a reasonable approximation in this case. Finally, we model the outlier targets simply as observations from a uniform SDM over the field of view, \(\mathbf{z}_{t,i}\mid c_{t,i}=0\sim\text{Uniform}([0,r_{\max}]\times[\theta_{ \min},\theta_{\max}])\).
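A NumPy sketch of the propagated variance \(\sigma_{ik}^{2}=\boldsymbol{\beta}_{t,k}^{\mathsf{T}}\boldsymbol{\Phi}(\mathbf{z}_{t,i})\boldsymbol{\Sigma}_{t,i}\boldsymbol{\Phi}(\mathbf{z}_{t,i})^{\mathsf{T}}\boldsymbol{\beta}_{t,k}\) is shown below, assuming the diagonal measurement covariance of Sec. III-B; it is illustrative only.

```python
import numpy as np

def phi_jacobian(r, theta):
    """Jacobian Phi(z) of phi with respect to the polar measurement (r, theta); shape (4, 2)."""
    return np.array([
        [2.0 * r,        0.0],
        [np.cos(theta), -r * np.sin(theta)],
        [np.sin(theta),  r * np.cos(theta)],
        [0.0,            0.0],
    ])

def pseudo_obs_variance(beta, r, theta, sigma_r, sigma_theta):
    """sigma_ik^2 = beta^T Phi(z) Sigma Phi(z)^T beta, the variance of h(z, beta)."""
    J = phi_jacobian(r, theta)
    Sigma = np.diag([sigma_r ** 2, sigma_theta ** 2])
    return float(beta @ J @ Sigma @ J.T @ beta)
```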
Due to the fact that our road boundary model is homogeneous, it is necessary to choose a scale for the coefficients. Setting the scale such that \(\boldsymbol{\beta}_{t,k}^{\mathsf{T}}\boldsymbol{\beta}_{t,k}=1\) is a natural choice. Consequently, the coefficients of each candidate model are distributed such that \(\boldsymbol{\beta}_{t,k}\sim\text{Bingham}(\mathbf{C}_{t,k}^{-1})\), which is the analogue of the normal distribution conditioned on the unit hypersphere [18]. The Bingham distribution has found related use in Quaternion Bingham Filters, e.g. [19, 20, 21], which provide more detail on the properties of the Bingham distribution. Note that the Bingham distribution is typically parameterized by the eigen decomposition \(\mathbf{C}_{t,k}^{-1}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\mathsf{T}}\); however, for our purposes it will be more natural to work with \(\mathbf{C}_{t,k}^{-1}\) directly. While ideally it should be the case that \(p(\boldsymbol{\beta}_{t,k})=p(c\boldsymbol{\beta}_{t,k})\) for all \(c\in\mathbb{R}\), for our distribution it is only true that \(p(\boldsymbol{\beta}_{t,k})=p(-\boldsymbol{\beta}_{t,k})\), i.e. the distribution is antipodally symmetric, but it will serve as a reasonable approximation in return for useful mathematical properties, namely that the Bingham distribution is in the exponential family, making the inference more tractable. Finally, it is the case that \(E[\boldsymbol{\beta}_{t,k}]=\mathbf{0}\); however, this trivial solution will not be useful for inference. Therefore, we will instead use the mode of the distribution, which is the eigenvector corresponding to the minimum eigenvalue of the eigenvalue problem \(\mathbf{C}_{t,k}^{-1}\boldsymbol{\beta}_{t,k}=\lambda\boldsymbol{\beta}_{t,k}\). Thus, with some abuse of notation, we will say that \(E[\boldsymbol{\beta}_{t,k}]\) is equal to this mode.
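A small sketch of extracting this mode from \(\mathbf{C}_{t,k}^{-1}\) is shown below.

```python
import numpy as np

def bingham_mode(C_inv):
    """Mode of Bingham(C^{-1}): the unit eigenvector of C^{-1} associated with the
    smallest eigenvalue (defined only up to sign, since p(beta) = p(-beta))."""
    eigvals, eigvecs = np.linalg.eigh(C_inv)  # symmetric matrix, eigenvalues in ascending order
    return eigvecs[:, 0]                      # unit-norm column for the minimum eigenvalue
```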
In summary, we consider the generative model of radar detections to be
* \(N_{t}\): Number of detections.
* \(K_{t}\): Number of candidate models.
* \(\boldsymbol{\alpha}_{t}\): prior concentration.
* \(\boldsymbol{\pi}_{t}\sim\text{Dirichlet}(\boldsymbol{\alpha}_{t})\).
* \(c_{t,i}\sim\text{Categorical}(\boldsymbol{\pi}_{t})\).
* \(\mathbf{z}_{t,i}\sim\text{Uniform}([0,r_{\max}]\times[\theta_{\min},\theta_{ \max}])\) if \(c_{t,i}=0\).
* \(\boldsymbol{\beta}_{t,k}^{\mathsf{T}}\boldsymbol{\phi}(\mathbf{z}_{t,i})\sim\text{Normal}(0,\sigma_{ik}^{2})\) if \(c_{t,i}\in\{1,\ldots,K_{t}\}\).
* \(\boldsymbol{\beta}_{t,k}\sim\text{Bingham}(\mathbf{C}_{t,k}^{-1})\).
### _Model Prediction_
Let us say that from time \(t-1\) to \(t\) the radar frame is rotated about its vertical axis by \(\Delta_{\psi}\) and its position is translated \(\begin{bmatrix}\Delta_{x}&\Delta_{y}\end{bmatrix}^{\mathsf{T}}\) then the transformation matrix for the radar coordinate frame is
\[\mathbf{T}_{t}=\begin{bmatrix}\cos\Delta_{\psi}&-\sin\Delta_{\psi}&\Delta_{x }\\ \sin\Delta_{\psi}&\cos\Delta_{\psi}&\Delta_{y}\\ 0&0&1\end{bmatrix}, \tag{5}\]
and a static point is predicted to be transformed in the radar coordinate frame such that
\[\begin{bmatrix}x_{t|t-1}\\ y_{t|t-1}\\ 1\end{bmatrix}=\mathbf{T}_{t}^{-1}\begin{bmatrix}x_{t-1}\\ y_{t-1}\\ 1\end{bmatrix}. \tag{6}\]
While the Bingham distribution is only closed under orthonormal transformations, the transition can instead be applied to the underlying normal distribution. Applying the transformation to given road boundary model coefficients \(\boldsymbol{\beta}_{t-1}\) and adding zero-mean process noise \(\mathbf{w}_{t}\sim\text{Normal}(\mathbf{0},\mathbf{Q}_{t})\) results in
\[\boldsymbol{\beta}_{t|t-1}=\mathbf{F}_{t}\boldsymbol{\beta}_{t-1}+\mathbf{w}_{t} \tag{7}\]
where
\[\mathbf{F}_{t}=\] \[\begin{bmatrix}1&0&0&0\\ 2(\Delta_{x}\cos\Delta_{\psi}+\Delta_{y}\sin\Delta_{\psi})&\cos\Delta_{\psi}& \sin\Delta_{\psi}&0\\ 2(\Delta_{y}\cos\Delta_{\psi}-\Delta_{x}\sin\Delta_{\psi})&-\sin\Delta_{\psi} &\cos\Delta_{\psi}&0\\ \Delta_{x}^{2}+\Delta_{y}^{2}&\Delta_{x}&\Delta_{y}&1\end{bmatrix}. \tag{8}\]
Thus the predicted road boundary model becomes
\[\boldsymbol{\beta}_{t|t-1}\sim\text{Bingham}((\mathbf{F}_{t}\mathbf{C}_{t-1, k}\mathbf{F}_{t}^{\mathsf{T}}+\mathbf{Q}_{t})^{-1}). \tag{9}\]
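A NumPy sketch of this prediction step, building \(\mathbf{F}_{t}\) from \((\Delta_{\psi},\Delta_{x},\Delta_{y})\) and propagating the Bingham parameter as in (9), is shown below for illustration; the actual implementation is in C++.

```python
import numpy as np

def prediction_matrix(dpsi, dx, dy):
    """F_t of Eq. (8): maps boundary coefficients under the known radar motion."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    return np.array([
        [1.0,                      0.0, 0.0, 0.0],
        [2.0 * (dx * c + dy * s),    c,   s, 0.0],
        [2.0 * (dy * c - dx * s),   -s,   c, 0.0],
        [dx * dx + dy * dy,         dx,  dy, 1.0],
    ])

def predict_bingham_parameter(C_prev, dpsi, dx, dy, Q):
    """Predicted Bingham parameter (F_t C_{t-1,k} F_t^T + Q_t)^{-1} of Eq. (9)."""
    F = prediction_matrix(dpsi, dx, dy)
    return np.linalg.inv(F @ C_prev @ F.T + Q)
```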
### _Variational Inference_
While Markov Chain Monte Carlo (MCMC) methods, especially Gibbs sampling, are often used to estimate the posterior of such models, in an online application, the uncertainty of the burn-in period and mixing quality makes their convergence unreliable. Instead, we will use Mean Field Variational Inference (MFVI) in order to estimate the posterior distribution of the DPMM, at a small cost to accuracy in return for much more reliable convergence [22].
For simplicity of notation, let us drop subscript \(t\) from the variables \(\mathbf{c}_{t}\), \(\boldsymbol{\pi}_{t}\), and \(\mathbf{B}_{t}\). The objective of MFVI then is to find an approximating joint distribution \(q\) of the true joint distribution \(p\) such that
\[q(\mathbf{c},\boldsymbol{\pi},\mathbf{B})\approx p(\mathbf{Z},\mathbf{c}, \boldsymbol{\pi},\mathbf{B}), \tag{10}\]
where \(q(\mathbf{c},\boldsymbol{\pi},\mathbf{B})=q(\mathbf{c})q(\boldsymbol{\pi})q( \mathbf{B})\) and \(p(\mathbf{Z},\mathbf{c},\boldsymbol{\pi},\mathbf{B})=p(\mathbf{Z}\mid \mathbf{c},\mathbf{B})p(\mathbf{c}\mid\boldsymbol{\pi})p(\mathbf{z})p( \mathbf{B})\). In practice, this is solved by minimizing the KL-Divergence between \(p\) and \(q\). For a given factor, the log of the optimal approximating distribution \(q^{*}\) is known to be proportional to the expected value of the log of the true joint distribution \(p\) over the other factors of the distribution. For example, \(\log q^{*}(\mathbf{c})\propto E_{\boldsymbol{\pi},\mathbf{B}}[\log p(\mathbf{ Z},\mathbf{c},\boldsymbol{\pi},\mathbf{B})]\). For this latent variable model, the result is an iterative procedure analogous to the Expectation-Maximization (EM) algorithm. In each iteration we will first compute an expectation step (E-Step), followed by a maximization step (M-Step). Iteration proceeds until the parameter estimation converges. A derivation of the E-Step and M-Step is provided in Appendices II and III respectively.
**E-Step:** We solve for \(q^{*}(\mathbf{c})\) which gives,
\[\gamma_{ik}=E_{\mathbf{c}}\left[[c_{i}=k]\right]=\frac{\rho_{ik}}{\sum_{j=0}^{K}\rho_{ij}} \tag{11}\]
where \([c_{i}=k]\) is the Iverson bracket which is equal to one when \(c_{i}=k\) and zero otherwise and
\[\rho_{ik}=E_{\boldsymbol{\pi}}[\pi_{k}]\begin{cases}\mathcal{U}(\mathbf{z}_{i };[0,r_{\max}]\times[\theta_{\min},\theta_{\max}])&\text{if }k=0\\ \mathcal{N}(\hat{\boldsymbol{\beta}}_{k}^{\mathsf{T}}\boldsymbol{\phi}( \mathbf{z}_{i});0,\sigma_{ik}^{2})&\text{if }k>0\end{cases} \tag{12}\]
with \(\hat{\boldsymbol{\beta}}_{k}=E[\boldsymbol{\beta}_{k}]\) (see end of Sec. IV-A) and \(\mathcal{U}\) and \(\mathcal{N}\) representing uniform and normal probability density functions respectively. Equation (11) is also equivalent to \(p(c_{i}=k\mid\mathbf{z}_{i})\).
**M-Step:** We are then able to solve for the other parameters, \(q^{*}(\boldsymbol{\pi})\) which gives
\[E_{\boldsymbol{\pi}}[\pi_{k}]=\frac{\alpha_{k}+\sum_{i=1}^{N}\gamma_{ik}}{\sum_ {j=0}^{K}\left(\alpha_{j}+\sum_{i=1}^{N}\gamma_{ij}\right)}, \tag{13}\]
and \(q^{*}(\mathbf{B})\) which gives
\[\boldsymbol{\beta}_{k}\sim\text{Bingham}\left(\mathbf{C}_{k}^{-1}+\sum_{i=1}^ {N}\gamma_{ik}\frac{\boldsymbol{\phi}(\mathbf{z}_{i})\boldsymbol{\phi}( \mathbf{z}_{i})^{\mathsf{T}}}{\sigma_{ik}^{2}}\right). \tag{14}\]
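A self-contained sketch of one E/M iteration over (11)-(14) is given below. The homogeneous basis \(\boldsymbol{\phi}\) (here a circle-like basis \([x^{2}+y^{2},x,y,1]^{\mathsf{T}}\)), the constant residual width `sigma`, the field-of-view bounds, and the use of the prior-based \(E[\boldsymbol{\pi}]\) in the E-Step are illustrative assumptions rather than values fixed by the text.

```python
import numpy as np

def phi(Z):
    # Assumed homogeneous basis (circle-like model): [x^2 + y^2, x, y, 1].
    x, y = Z[:, 0], Z[:, 1]
    return np.stack([x**2 + y**2, x, y, np.ones_like(x)], axis=1)      # (N, 4)

def bingham_mode(C_inv):
    # Mode of Bingham(C^{-1}): eigenvector of the smallest eigenvalue (Sec. IV-A).
    w, V = np.linalg.eigh(C_inv)
    return V[:, 0]

def mfvi_iteration(Z, alpha, C_inv, sigma=0.3, r_max=80.0, theta_span=np.pi):
    """One E/M iteration. Z: (N,2) detections; alpha: (K+1,) concentrations with
    index 0 the outlier class; C_inv: list of K (4,4) Bingham parameter matrices."""
    N, K = Z.shape[0], len(C_inv)
    Phi = phi(Z)
    beta_hat = np.array([bingham_mode(C) for C in C_inv])              # (K, 4)

    # E-Step, eqs. (11)-(12): responsibilities gamma_ik.
    E_pi = alpha / alpha.sum()
    rho = np.empty((N, K + 1))
    rho[:, 0] = E_pi[0] / (r_max * theta_span)                         # uniform outlier density
    for k in range(K):
        resid = Phi @ beta_hat[k]
        rho[:, k + 1] = E_pi[k + 1] * np.exp(-0.5 * (resid / sigma)**2) \
                        / (np.sqrt(2.0 * np.pi) * sigma)
    gamma = rho / rho.sum(axis=1, keepdims=True)

    # M-Step, eq. (13): updated mixture weights.
    counts = alpha + gamma.sum(axis=0)
    E_pi_new = counts / counts.sum()

    # M-Step, eq. (14): updated Bingham parameters of each candidate model.
    C_inv_new = [C_inv[k] + (Phi.T * (gamma[:, k + 1] / sigma**2)) @ Phi
                 for k in range(K)]
    return gamma, E_pi_new, C_inv_new
```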
### _Candidate Model Generation_
Because the MFVI converges towards local optima and not necessarily the global optimum, the quality of the results is heavily dependent upon the initial conditions. Therefore, it is necessary to generate high quality initial estimates of the candidate models to be included in the inference. In order to accomplish this, we take inspiration from the structure of instantaneous multi-model fitting approaches, especially Progressive-X [23], wherein a RANSAC model proposal step and a model optimization step are alternately applied. In our case, the model optimization is accomplished via the MFVI and the model proposal is accomplished by a custom RANSAC variant. A single iteration of our RANSAC variant consists of two steps: sampling and scoring.
**Sampling:** We instantiate a proposal model \(\boldsymbol{\beta}^{\prime}\) exactly from a sample of 3 detections, i.e. 4 coefficients minus 1 for homogeneity. The random sample is generated by the Reservoir Sampling algorithm [24], wherein each detection \(\mathbf{z}_{i}\) is included in the sample with probability proportional to \(\gamma_{i0}\), i.e. the probability that \(\mathbf{z}_{i}\) belongs to the outlier class. Let \(\mathbf{B}^{\prime}=\mathbf{B}\cup\boldsymbol{\beta}^{\prime}\). Additionally, let \(\boldsymbol{\alpha}^{\prime}=\begin{bmatrix}\boldsymbol{\alpha}^{\mathsf{T}}&3\end{bmatrix}^{\mathsf{T}}\); then \(\boldsymbol{\pi}^{\prime}=\frac{\boldsymbol{\alpha}^{\prime}}{\|\boldsymbol{\alpha}^{\prime}\|_{1}}\), where \(\|\cdot\|_{1}\) is the L1 norm.
Fig. 2: Algorithm Flow Chart.
**Scoring:** Given \(\mathbf{B}^{\prime}\) and \(\boldsymbol{\pi}^{\prime}\), we compute a single E-Step as described by equation (11) to obtain \(\gamma_{ik}^{\prime}\). We define the proposal score as the reduction in the expected number of points in the outlier class when the proposal is added,
\[\xi=\sum_{i=1}^{N}\left(\gamma_{i0}-\gamma_{i0}^{\prime}\right). \tag{15}\]
We maintain the best proposal \(\boldsymbol{\beta}^{*}\) which has the greatest score \(\xi^{*}\).
The confidence of the best proposal at iteration \(j\) is
\[1-\left(1-\left(\frac{\xi^{*}}{\sum_{i=1}^{N}\gamma_{i0}}\right)^{3}\right)^{ j}, \tag{16}\]
and we terminate the RANSAC algorithm when the confidence exceeds some threshold, e.g. 99%. If \(\xi^{*}\) is greater than some acceptance threshold, then \(\boldsymbol{\beta}^{*}\) is added to the set of candidate models. The acceptance threshold may be tuned depending on how conservative we would like to be in adding new candidate models: the higher the threshold, the less likely we are to accept a proposal. Generally, this threshold should be greater than the minimum sample size of 3, otherwise a proposal is nearly guaranteed to be accepted.
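A rough sketch of the proposal loop is shown below. Weighted sampling via numpy replaces the reservoir sampler, the scoring is a simplified stand-in for the single E-Step of (11), and the circle-like basis, `sigma`, and the two thresholds are illustrative assumptions, not values prescribed above.

```python
import numpy as np

def phi(Z):
    x, y = Z[:, 0], Z[:, 1]
    return np.stack([x**2 + y**2, x, y, np.ones_like(x)], axis=1)

def fit_exact(Z3):
    # Null space of the 3x4 system phi(z_j)^T beta = 0: the proposal up to scale.
    _, _, Vt = np.linalg.svd(phi(Z3))
    return Vt[-1] / np.linalg.norm(Vt[-1])

def propose(Z, gamma0, sigma=0.3, max_iter=500, conf_thresh=0.99, accept_thresh=5.0):
    """Z: (N,2) detections; gamma0: (N,) outlier responsibilities from the E-Step."""
    rng = np.random.default_rng()
    weights = gamma0 / gamma0.sum()
    best_beta, best_score = None, 0.0
    for j in range(1, max_iter + 1):
        idx = rng.choice(len(Z), size=3, replace=False, p=weights)     # weighted sample
        beta = fit_exact(Z[idx])
        # Simplified score: expected number of current outliers explained by beta.
        resid = phi(Z) @ beta
        score = np.sum(gamma0 * np.exp(-0.5 * (resid / sigma)**2))
        if score > best_score:
            best_beta, best_score = beta, score
        confidence = 1.0 - (1.0 - (best_score / gamma0.sum())**3) ** j  # eq. (16)
        if confidence > conf_thresh:
            break
    return (best_beta if best_score > accept_thresh else None), best_score
```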
### _Algorithm Termination_
Given that the view of the scene changes with time, we cannot simply use the posterior model concentration \(\boldsymbol{\alpha}_{t}\) as the prior for the next radar measurement at timestep \(t+1\). Instead, we let the next prior be a moving average of the expected number of points assigned to each model,
\[\alpha_{t,k}=(1-c)\alpha_{t-1,k}+c\sum_{i=1}^{N}\gamma_{ik}, \tag{17}\]
where \(c\in[0,1]\) controls how strongly the current measurement affects the concentration prior: \(c=1\) indicates that the current measurement completely determines the prior for the next iteration, and \(c=0\) indicates that we maintain the constant prior assigned in candidate generation.
Given \(\alpha_{t,k}\), we eliminate any models which no longer sufficiently contribute to the DPMM. Simply, if \(\alpha_{t,k}\) is below a chosen threshold, the candidate model \(k\) is removed from the DPMM, otherwise, the values of \(\alpha_{t,k}\), \(\mathbf{C}_{t,k}^{-1}\), and \(\boldsymbol{\beta}_{t,k}\) are used as a prior for the next radar measurement.
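In code, the update (17) and the pruning rule amount to a couple of lines; the smoothing factor and the pruning threshold below are illustrative values, not ones prescribed above.

```python
import numpy as np

def update_concentrations(alpha_prev, gamma, c=0.3, min_alpha=3.0):
    """Moving-average concentration update, eq. (17), followed by pruning.
    alpha_prev: (K+1,) previous concentrations; gamma: (N, K+1) responsibilities."""
    alpha_new = (1.0 - c) * alpha_prev + c * gamma.sum(axis=0)
    keep = alpha_new[1:] >= min_alpha      # candidate model k is removed otherwise
    return alpha_new, keep
```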
Lastly, we choose the candidates to be output as \(\boldsymbol{\beta}_{t,l}\) and \(\boldsymbol{\beta}_{t,r}\). Given we are attempting to recognize the primary road boundaries to the left and right, we expect them to intercept the radar y-axis. We separate candidates into left and right groups depending on the sign of this y-intercept and select one from each group with y-intercept nearest to the radar origin to be \(\boldsymbol{\beta}_{t,l}\) and \(\boldsymbol{\beta}_{t,r}\) respectively.
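A sketch of this selection follows, assuming the homogeneous circle-like parameterisation \(\boldsymbol{\beta}=[a,b,c,d]\) with \(a(x^{2}+y^{2})+bx+cy+d=0\) and that \(y\) is positive to the left of the radar; both assumptions are ours.

```python
import numpy as np

def y_intercepts(beta, eps=1e-12):
    # Crossings of the radar y-axis (x = 0): a*y^2 + c*y + d = 0.
    a, _, c, d = beta
    roots = np.roots([a, c, d]) if abs(a) > eps else np.array([-d / c])
    return roots[np.isreal(roots)].real

def select_left_right(candidates):
    left, right, best_l, best_r = None, None, np.inf, np.inf
    for beta in candidates:
        for y0 in y_intercepts(beta):
            if y0 > 0 and y0 < best_l:          # left group (y assumed positive left)
                left, best_l = beta, y0
            elif y0 < 0 and -y0 < best_r:       # right group
                right, best_r = beta, -y0
    return left, right
```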
## V Experiments
The proposed algorithm is developed as a Robot Operating System (ROS) module in C++. We test the algorithm on data collected from a Continental ARS430 automotive Doppler radar, and validate it against ground truth measurements of the boundary location, represented as discrete points along the boundary with approximately 1 meter spacing. The vehicle drives each trajectory at roughly the posted speed limit of the given roadway.
In order to quantify the results of our algorithm, at each time step we compute the Mean Absolute Error (MAE) of the estimated left and right road boundaries relative to their associated ground truth points. Let us consider a given estimated road boundary \(\boldsymbol{\beta}\) and an associated set of ground truth points \(\mathbf{S}=\{\mathbf{s}_{1},\dots,\mathbf{s}_{N}\}\). We find that the ground truth points are somewhat conservative, being approximately 20 cm closer to the interior of the roadway than the radar measured points. Therefore, in order to account for this and any GPS offset, we first calculate the mean error over the entire trajectory, which we consider to be the bias due to the ground truth and GPS offsets.
Fig. 4: Results from Trajectory 2. (a) Sample camera view. (b) Sample top-down orthographic view. (c) MAE vs time over the trajectory.
Fig. 5: Results from Trajectory 3. (a) Sample camera view. (b) Sample top-down orthographic view. (c) MAE vs time over the trajectory.
Fig. 3: Results from Trajectory 1. (a) Sample camera view and (b) sample top-down orthographic view of the scene, where grey ellipses are the radar target detections, green squares are ground truth boundary points, and the blue and red curves are the estimated left and right boundaries respectively. Grid has 1 meter spacing. (c) The MAE of the estimated road boundary models vs time, where the dashed line represents the time of the sample in (a) and (b).
The mean error of the estimated boundary is \(e=\frac{1}{N}\sum_{i=1}^{N}d(\mathbf{s}_{i},\boldsymbol{\beta})\), where \(d(\mathbf{s}_{i},\boldsymbol{\beta})\) is the signed geometric distance of the ground truth point \(\mathbf{s}_{i}\) from the boundary \(\boldsymbol{\beta}\). We then compute the mean error \(\overline{e}\) and error standard deviation \(\sigma_{e}\) over the entire trajectory, excluding any individual estimates where \(|e-\overline{e}|>3\sigma_{e}\), which we consider to be failure cases. The mean absolute error of each individual estimate is then \(e_{\text{MAE}}=\frac{1}{N}\sum_{i=1}^{N}|d(\mathbf{s}_{i},\boldsymbol{\beta})-\overline{e}|\). Additionally, we calculate the failure rate for the estimation of each of the left and right boundaries as the ratio of the number of timesteps where \(|e-\overline{e}|>3\sigma_{e}\) or no estimate is given to the total number of timesteps.
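The per-trajectory evaluation described above can be summarised as follows; the signed distances \(d(\mathbf{s}_{i},\boldsymbol{\beta})\) are assumed precomputed, and estimating \(\overline{e}\) and \(\sigma_{e}\) from all timesteps before the \(3\sigma\) screening is a simplification of the exclusion described in the text.

```python
import numpy as np

def trajectory_errors(signed_dists):
    """signed_dists: list over timesteps of (N_t,) arrays d(s_i, beta)."""
    e = np.array([d.mean() for d in signed_dists])          # per-timestep mean error
    e_bar, sigma_e = e.mean(), e.std()                       # trajectory bias and spread
    fail = np.abs(e - e_bar) > 3.0 * sigma_e                 # failure cases
    mae = np.array([np.mean(np.abs(d - e_bar)) for d in signed_dists])
    return mae[~fail], fail.mean()                           # bias-corrected MAEs, failure rate
```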
We demonstrate three different trajectories where we have ground truth data, for which summarized results are presented in Table I. Trajectory 1, demonstrated in Fig. 3, is along a straight, curbed roadway with minimal clutter near the boundary. Similarly, Trajectory 2, demonstrated in Fig. 4, is along a curbed roadway with minimal clutter near the boundary, but transitions between straight and curved segments. Trajectory 3, demonstrated in Fig. 5, presents a more challenging case along a straight, curbed roadway, featuring clutter on the left roadside, obfuscating the boundary, as well as a driveway intersecting the right boundary.
We consider Trajectory 1 to be the most ideal case for our algorithm, and, consequently, it produces the most stable estimation with no failure cases outside the first few timesteps as the estimation stabilizes. Trajectory 2 results in the lowest estimation accuracy, largely due to the road curvature transitions at \(t\approx 11\,\mathrm{s}\) and \(t\approx 21\,\mathrm{s}\) which temporarily violate Assumption 3. However, even in this case, the estimation is not considered to fail. For Trajectory 3, the mean MAE is roughly consistent with the ideal case, but with increased variance. Most generally, the estimation accuracy is typically better for the left boundary than the right boundary. This is due to the smaller angle of incidence to the left boundary, providing a greater number of detections. We demonstrate that the algorithm is capable of successful estimation under various challenging conditions, including when there is no physical boundary in Figs. 1 and 6.
Finally, we identify the common failure modes of the algorithm. The most common, demonstrated in Fig. 7(a), occurs when non-boundary detections with high leverage cause the estimation to deviate at long range. This failure mode occurs in Trajectory 2 at \(t\approx 22\,\mathrm{s}\). The other common failure mode, demonstrated in Fig. 7(b), occurs when the primary boundary deviates to join with the boundary of an intersecting road or driveway. This failure mode occurs in Trajectory 3 at \(t\approx 14\,\mathrm{s}\). Lastly, the boundary can be occluded by passing vehicles, which occurs once in Trajectory 2 at \(t\approx 15\,\mathrm{s}\) when a vehicle passes to the left.
## VI Conclusion
We presented a novel road boundary detection method that is based solely on a sparse radar point cloud. The method was built on a homogeneous boundary model, and we derived a probability distribution over road boundary models from the radar point cloud using variational inference. In order to generate initial candidate models, we developed a custom RANSAC variant to propose unseen model instances as candidate road boundaries. By alternating variational inference and RANSAC proposals until convergence, we generated the best estimate of all candidate models. We selected the candidate model with the minimum lateral distance to the radar on each side as the estimates of the left and right road boundaries. The algorithm has been implemented as a ROS module and tested on real radar data, with satisfactory results.
In the future, we will investigate global representations for road boundary mapping and develop a sensor-fusion approach to further improve robustness.
## Acknowledgement
The authors thank Di Wang, Shuangyu Xie, and Fengzhi Guo for their input and feedback.
Fig. 6: A sample of successful road boundary estimations under varying scenarios.
Fig. 7: A sample of common failure modes. (a) High leverage points from clutter can disrupt the boundary estimation. (b) Deviation of boundary at intersection or driveway. | この論文では、疎なレーダー点群に基づく道路境界検出のための新しいアプローチを示しています。道路を同次モデルを用いてモデル化し、既知のレーダーモーション下で条件付き予測モデルを導出します。条件付き予測モデルを用いて、Dirichlet Process Mixture Model (DPMM) でレーダー点をモデル化し、Mean Field Variational Inference (MFVI) を用いて無条件の道路境界モデル分布を導出します。MFVIのための初期候補解を生成するために、カスタム Random Sample and Consensus (RANSAC) の変形を用いて、未知のモデルインスタンスを候補道路境界として提案します。各レーダー点群に対して、MFVI と RANSAC 提案ステップを交互に行い、収束に至るまで候補モデルの生成を行います。候補モデルの中で、レーダーの両側における最小の横方向距離を持つものを左と右の境界として選定します。このアルゴリズムはROSモジュールとして実装され、実際のレーダーデータを用いて検証されました。 |
2309.07072 | The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in
Deep Learning | In this work, we assess the theoretical limitations of determining guaranteed
stability and accuracy of neural networks in classification tasks. We consider
classical distribution-agnostic framework and algorithms minimising empirical
risks and potentially subjected to some weights regularisation. We show that
there is a large family of tasks for which computing and verifying ideal stable
and accurate neural networks in the above settings is extremely challenging, if
at all possible, even when such ideal solutions exist within the given class of
neural architectures. | Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou | 2023-09-13T16:33:27 | http://arxiv.org/abs/2309.07072v1 | # The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
###### Abstract
In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider classical distribution-agnostic framework and algorithms minimising empirical risks and potentially subjected to some weights regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if at all possible, even when such ideal solutions exist within the given class of neural architectures.
Keywords: AI stability, AI verifiability, AI robustness, deep learning.
## Notation
\(\mathbb{R}\) denotes the field of real numbers, \(\mathbb{R}_{\geq 0}=\{x\in\mathbb{R}|\ x\geq 0\}\), and \(\mathbb{R}^{n}\) denotes the \(n\)-dimensional real vector space, \(\mathbb{N}\) denotes the set of natural numbers; \((x,y)=\sum_{k}x_{k}y_{k}\) is the inner product of \(x\) and \(y\), and \(\|x\|=\sqrt{(x,x)}\) is the standard Euclidean norm in \(\mathbb{R}^{n}\); \(\mathbb{B}_{n}\) denotes the unit ball in \(\mathbb{R}^{n}\) centered at the origin \(\mathbb{B}_{n}=\{x\in\mathbb{R}^{n}\ |\ \|x\|\leq 1\}\), \(\mathbb{B}_{n}(r,y)\) is the ball in \(\mathbb{R}^{n}\) centred at \(y\) with radius \(r\geq 0\): \(\mathbb{B}_{n}(r,y)=\{x\in\mathbb{R}^{n}\ |\ \|x-y\|\leq r\}\); \(\mathrm{Cb}(\ell,y)\) is the cube in \(\mathbb{R}^{n}\) centered at \(y\) with side-length \(\ell\geq 0\): \(\mathrm{Cb}(\ell,y)=\left\{x\in\mathbb{R}^{n}\ |\ \|x-y\|_{\infty}\leq\frac{\ell}{2}\right\}\); \(\mathbb{S}_{n-1}(r,y)\) is the sphere in \(\mathbb{R}^{n}\) centred at \(y\) with radius \(r\): \(\mathbb{S}_{n-1}(r,y)=\{x\in\mathbb{R}^{n}\ |\ \|x-y\|=r\}\); \(\mathrm{sign}(\cdot):\mathbb{R}\to\mathbb{R}_{\geq 0}\) denotes the function such that \(\mathrm{sign}(s)=1\) for all \(s\in\mathbb{R}_{\geq 0}\) and \(\mathrm{sign}(s)=0\) otherwise; \(\mathcal{K}_{\theta}\) is the class of real-valued functions defined on \(\mathbb{R}\) which are continuous, strictly monotone on \([\theta,\infty)\), and constant on \((-\infty,\theta)\); \(\mathbf{1}_{n}\) denotes the vector \((1,\ldots,1)\in\mathbb{R}^{n}\).
## 1 Introduction
Data-driven AI systems and neural networks in particular have shown tremendous successes across a wide range of applications, including automotive, healthcare, gaming, marketing, and more recently natural language processing. Fuelled by high and growing rates of adoption of the new technology across sectors, robustness and stability are vital characterisations of AI performance.
The importance of AI stability and robustness is exemplified by the discovery of adversarial perturbations [12] - imperceptible changes of input data leading to misclassifications. These perturbations can be universal [8] (i.e. triggering misclassifications for many inputs), limited to a single attribute [11], or masquerading as legitimate inputs [2]. Sometimes, such AI instabilities can be typical [14], [10]. Moreover, instabilities can also be induced by perturbations of the AI structure [13].
The issue of AI robustness is non-trivial and cannot be considered in isolation from other measures of AI performance: a model returning the same output regardless of the inputs is perfectly robust yet useless. A theoretical framework to approach the problem has recently been proposed in [1]. It has been shown in [1] that (i) there is an uncountably large family of distributions such that for an appropriately large data sample drawn from a distribution from this family there is a feed-forward neural network showing excellent performance on this sample, although (ii) this same network becomes inevitably unstable on some subset of the training and validation sets. Moreover, (iii) for the same distribution and the same data, there is a stable network possibly having a different architecture.
Here we show that the stability-accuracy issues have other unexplored dimensions and could be significantly more pronounced than previously thought. Our main result, Theorem 3.1, shows that there exist large families of well-behaved data distributions for which even networks achieving zero training and validation error may be highly unstable with respect to almost any small perturbation on nearly half of the training or validation data. Yet, for the same data samples and distributions, there exist stable networks _with the same architecture as the unstable network_ which also minimise the loss function. Strikingly, there exist infinitely many pairs of networks in which one network is stable and accurate and the other is also accurate but unstable, and whose weights and biases can be made arbitrarily close to each other. Even more interestingly, all of this happens and persists when the values of the weights and biases are made small.
This result reveals a fundamental issue at the heart of current data-driven approaches to learning driven by minimising empirical risk functions, even in the presence of weight regularisation, in distribution-agnostic settings. The issue is that such learning algorithms could be structurally incapable of distinguishing between stable and unstable solutions.
The rest of the paper is organised as follows. In Section 2 we introduce notation and problem setting. In Section 3 we state our main results along with discussion, interpretation, and comparison to the literature. Section 4 concludes the paper.
## 2 Preliminaries, assumptions, and problem settings
Following [1], by \(\mathcal{NN}_{\mathbf{N},L}\) we denote the class of neural networks with \(L\) layers and dimension \(\mathbf{N}=\{N_{L},N_{L-1},N_{L-2},\ldots,N_{1},N_{0}=n\}\), where \(n\) is the input dimension, and \(N_{L}=1\) is the dimension of the network's output. A neural network with dimension \((\mathbf{N},L)\) is a map
\[\phi=G^{L}\sigma G^{L-1}\sigma\cdots\cdots\sigma G^{1},\]
where \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is a coordinate-wise activation function, and \(G^{l}:\mathbb{R}^{N_{l-1}}\rightarrow\mathbb{R}^{N_{l}}\) is an affine map defined by \(G^{l}x=W^{l}x+b^{l}\), where \(W^{l}\in\mathbb{R}^{N_{l}\times N_{l-1}}\), \(b^{l}\in\mathbb{R}^{N_{l}}\) are the corresponding matrices of weights and biases. By \(\Theta(\phi)\) we denote the vector of all weights and biases of the network \(\phi\).
In general, the activation functions \(\sigma\) do not have to be the same for all components and all layers, although here we will assume (unless stated otherwise) that this is indeed the case. In what follows we will consider feed-forward networks with activation functions in their hidden layers computing mappings from the following broad class:
\[\sigma=g_{\theta},\ g_{\theta}\in\mathcal{K}_{\theta},\ \theta\in\mathbb{R}. \tag{1}\]
Popular functions such as ReLU are contained in this class (that is, the class of functions which are continuous, strictly monotone on \([\theta,\infty)\) and constant on \((-\infty,\theta)\)). The condition of strict monotonicity of \(g_{\theta}\) over \([\theta,\infty)\) can be reduced to strict monotonicity over some \([\theta,\theta_{1}]\), \(\theta_{1}>\theta\), with \(g_{\theta}\) being merely monotone on \([\theta_{1},\infty)\). This extension will not have any effect on the validity of the theoretical statements below, but will enable the inclusion of leaky ReLU activations (since activation functions satisfying (1) can then be constructed as a difference of a leaky ReLU function and its shifted/translated copy, and the results below therefore still follow) as well as "sigmoid"-like piecewise linear functions.
We will suppose that all data are drawn from some unknown probability distribution belonging to a family \(\mathcal{F}\), and each element \(\mathcal{D}\in\mathcal{F}\) of this family is supported on \([-1,1]^{n}\times\{0,1\}\). For any given \(\mathcal{D}\in\mathcal{F}\), we will assume that the training and testing algorithms have access to samples \((x^{j},\ell^{j})\), \(j=1,\ldots,s+r\), \(s,r\in\mathbb{N}\), independently drawn from \(\mathcal{D}\), and which can be partitioned into training
\[\mathcal{T}=\{(x^{1},\ell^{1}),\ldots,(x^{r},\ell^{r})\}\]
and validation/testing
\[\mathcal{V}=\{(x^{r+1},\ell^{r+1}),\ldots,(x^{r+s},\ell^{r+s})\}\]
(multi)-sets. Let \(M=r+s=|\mathcal{T}\cup\mathcal{V}|\) be the size of the joint training and validation (multi)-set.
Further, we impose a condition that the data distribution is sufficiently regular and does not possess hidden instabilities and undesirable accumulation points which could otherwise trivialise our statements and results. In particular, for \(\delta\in(0,2\sqrt{n}]\) we will only consider those distributions \(\mathcal{D}_{\delta}\in\mathcal{F}\) which satisfy:
If \((x,\ell_{x}),(y,\ell_{y})\sim\mathcal{D}_{\delta}\) with \(\ell_{x}\neq\ell_{y}\), then, with probability \(1\), \(\|x-y\|\geq\delta\). (2)
Finally, we introduce the family of loss functions
\[\mathcal{CF}_{\mathrm{loc}}= \{\mathcal{R}:\ \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}_{ \geq 0}\cup\{\infty\}\ |\ \mathcal{R}(v,w)=0\ \ \Longleftrightarrow\ \ v=w\} \tag{3}\]
which will be used to define the corresponding empirical loss functions for the model outputs \(h:\mathbb{R}^{n}\rightarrow\{0,1\}\) on samples \(\mathcal{S}\sim\mathcal{D}_{\delta}\) drawn from \(\mathcal{D}_{\delta}\)
\[\mathcal{L}(\mathcal{S},h)=\sum_{(x^{i},\ell^{i})\in\mathcal{S}}\mathcal{R}(h( x^{i}),\ell^{i}). \tag{4}\]
The subscript "loc" in (3) emphasises that the loss functions \(\mathcal{R}\) are evaluated on single data points and in this sense are "local". It provides an explicit connection with the classical literature involving empirical risk minimisation, allowing us to exploit the conventional interpretation of the generalisation error as a deviation of the empirical risk from the expected value of the loss over the distribution generating the data.
## 3 Main results
Having introduced all relevant notation, we are now ready to state the main result of the contribution.
Theorem 3.1 (Inevitability, typicality and undetectability of instability): _Consider the class of networks with architecture_
\[\mathbf{N}=(N_{L}=1,N_{L-1},\ldots,N_{1},N_{0}=n),\ \ L\geq 2,\ n\geq 2,\]
_where \(N_{1}\geq 2n\) and \(N_{2},\ldots,N_{L-1}\geq 1\), and activation functions \(g_{\theta}\) in layers \(1,\ldots,L-1\) satisfying conditions (1), and the \(\mathrm{sign}(\cdot)\) activation function in layer \(L\)._
_Let \(\varepsilon\in(0,\sqrt{n}-1)\) and fix \(0<\delta\leq\varepsilon/\sqrt{n}\). Then, there is an uncountably large family of distributions \(\mathcal{D}_{\delta}\in\mathcal{F}\) satisfying (2) such that for any \(\mathcal{D}_{\delta}\in\mathcal{F}\), any training and validation data \(\mathcal{T}\), \(\mathcal{V}\) drawn independently from \(\mathcal{D}_{\delta}\), and every \(\mathcal{R}\in\mathcal{CF}_{\mathrm{loc}}\), with probability 1:_
(i) _There exists a network which correctly classifies the training data_ \(\mathcal{T}\) _and generalises to the test data_ \(\mathcal{V}\)_, satisfying_ \[f\in\operatorname*{arg\,min}_{\varphi\in\mathcal{N}\mathcal{N}_{\mathbf{N},L}} \mathcal{L}(\mathcal{T}\cup\mathcal{V},\varphi)\] _with_ \(\mathcal{L}(\mathcal{T}\cup\mathcal{V},f)=0\)_._
_Yet, for any_ \(q\in(0,1/2)\)_, with probability greater than or equal to_ \[1-\exp(-2q^{2}M)\] _there exists a multi-set_ \(\mathcal{U}\subset\mathcal{T}\cup\mathcal{V}\) _of cardinality at least_ \(\lfloor(1/2-q)M\rfloor\) _on which_ \(f\) _is unstable in the sense that for any_ \((x,\ell)\in\mathcal{U}\) _and any_ \(\alpha\in(0,\varepsilon/2)\)_, there exists a perturbation_ \(\zeta\in\mathbb{R}^{n}\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\) _and_ \[|f(x)-f(x+\zeta)|=1.\] (5)
_Moreover, such destabilising perturbations are_ typical _in the sense that if vectors_ \(\zeta\) _are sampled from the equidistribution in_ \(\mathbb{B}_{n}(\alpha/\sqrt{n},0)\)_, then for_ \((x,\ell)\in\mathcal{U}\)_, the probability that (_5_) is satisfied is at least_ \[1-\frac{1}{2^{n}}.\] _Furthermore, there exist_ universal _destabilising perturbations, in the sense that a single perturbation_ \(\zeta\) _drawn from the equidistribution in_ \(\mathbb{B}_{n}(\alpha/\sqrt{n},0)\) _destabilises_ \(m\leq|\mathcal{U}|\) _points from the set_ \(\mathcal{U}\) _with probability at least_ \[1-\frac{m}{2^{n}}.\]
(ii) _At the same time, for the same distribution_ \(\mathcal{D}_{\delta}\) _there is a robust network with the same architecture as_ \(f\)_, satisfying_ \[\tilde{f}\in\operatorname*{arg\,min}_{\varphi\in\mathcal{N}\mathcal{N}_{\mathbf{N},L}}\mathcal{L}(\mathcal{T}\cup\mathcal{V},\varphi)\] _with_ \(\mathcal{L}(\mathcal{T}\cup\mathcal{V},\tilde{f})=0,\) _which is robust in the sense that for all_ \((x,\ell)\in\mathcal{T}\cup\mathcal{V}\) \[\tilde{f}(x)=\tilde{f}(x+\zeta)\] _for any_ \(\zeta\in\mathbb{R}^{n}\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\)_, even when_ \(|\mathcal{T}\cup\mathcal{V}|=\infty\)_. Moreover, there exist pairs of unstable and robust networks,_ \(f_{\lambda},\tilde{f}_{\lambda}\) _and_ \(f_{\Lambda},\tilde{f}_{\Lambda}\)_, satisfying the statements above such that the maximum absolute difference between their weights and biases is either arbitrarily small or arbitrarily large. That is, for any_ \(\lambda>0,\Lambda>0\)_:_ \[\|\Theta(f_{\lambda})-\Theta(\tilde{f}_{\lambda})\|_{\infty}<\lambda,\ \|\Theta(f_{\Lambda})-\Theta(\tilde{f}_{\Lambda})\|_{\infty}>\Lambda.\]
(iii) _However, for the above robust solution_ \(\tilde{f}\)_:_ (a) _there exists an uncountably large family of distributions_ \(\tilde{\mathcal{D}}_{\delta}\in\mathcal{F}\) _on which_ \(\tilde{f}\) _correctly classifies both the training and test data, yet fails in the same way as stated in (i)._ (b) _there exists an uncountably large family of distributions_ \(\hat{\mathcal{D}}_{\delta}\in\mathcal{F}\) _such that the map_ \(\tilde{f}\) _is robust on_ \(\mathcal{T}\cup\mathcal{V}\) _(with respect to perturbations_ \(\zeta\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\)_,_ \(\alpha\in(0,\varepsilon/2)\)_) with probability_ \[\left(1-\frac{1}{2^{n+1}}\right)^{Mk}\] _but is unstable to arbitrarily small perturbations on future samples with probability_ \(k/2^{n+1}\)_._
The proof of the theorem is provided in the Appendix.
### Interpretation of results
According to statement (i) of Theorem 3.1, not only are instabilities to be expected, but they can also be remarkably widespread: for sufficiently large data sets they may occur, with high probability, for nearly half of all data.
Statement (ii) of Theorem 3.1 confirms that a stable solution exists _within precisely the same class of network architectures_, although it is difficult to compute it by using only the loss functional \(\mathcal{L}\) as a measure of quality. This shows that the architecture is not necessarily the source of the instability. Moreover, a robust solution may be found in an arbitrarily small neighborhood of the specific non-robust one in the space of network weights and biases. As the construction in the proof shows, using networks with small Lipschitz constants can, counter-intuitively, make the problem worse.
The robust solution, in turn, can also be unstable, as follows from statement (iii), part (a). This is reminiscent of a "no free lunch" principle for robust and accurate learning, although with a subtle distinction. In fact, as part b) of the statement states, there are solutions which may appear to be certifiably robust (and one can indeed certify the model on the training and validation sets), although there is no guarantee whatsoever that the certificate remains valid for future samples. To minimise the risks, one needs to certify the model on data sets which are exponentially large in \(n\). This is particularly relevant for safety-critical settings, where the risk of failure must be calculated and bounded in advance.
Finally, we note that the instabilities considered in Theorem 3.1 become particularly pronounced for networks with sufficiently high input dimension \(n\) (see statement (iii) of the theorem). Moreover, statement (i) shows that the fraction of perturbations around unstable points \(x\) in the sample which alter the network's response approaches \(1\) as \(n\) grows. These high-dimensional effects may still be observed in networks with arbitrarily low input dimensions if such networks realise appropriate auxiliary space-filling mappings in relevant layers. The technical point that the statement of Theorem 3.1 holds with probability one is due to the fact that the proof constructs data distributions which assign probability zero to certain sets, so there may exist training samples with probability zero for which the construction does not apply.
### Discussion
#### 3.2.1 Instabilities and regularisation
The construction we used in the proof of Theorem 3.1 reveals that the instability discussed in statements (i) and (ii) of the theorem is inherent to the very definition of the binary classification problem and may not be addressed by regularisation approaches constraining norms of network's parameters and Lipschitz constants of non-threshold layers.
Indeed, consider just the first two layers of the network \(f\) constructed in the proof of the theorem, remove the sign\((\cdot)\) activation function, and introduce an
arbitrarily small positive factor \(\beta\) (cf. (13)):
\[\begin{split} f_{\text{reg}}(x)=&\sum_{i=1}^{n}g_{ \theta}(\theta)-g_{\theta}(\beta((x,e_{i})-1/\sqrt{n})+\theta)\\ &+\sum_{i=1}^{n}g_{\theta}(\theta)-g_{\theta}(\beta(-(x,e_{i})- 1/\sqrt{n})+\theta).\end{split} \tag{6}\]
If the functions \(g_{\theta}\) are Lipschitz then the Lipschitz constant of the function \(f_{\text{reg}}\) can be made arbitrarily small by setting \(\beta\) to some sufficiently small value. At the same time, the values of \(\text{sign}f_{\text{reg}}(x)\) and \(f(x)\) coincide. This implies that regardless of how well-behaved the function \(f_{\text{reg}}\) in (6) is, forced classification achieved either by the application of the sign function or, alternatively, through thresholding or softmax, brings instabilities.
In this respect, network regularisation by pruning, restricting norms of the network's weights, and forcing the network's Lipschitz constant to stay small do not always warrant robustness. Similarly, requesting that there is some non-zero margin separating the classes does not address or alleviate the problem either. The instability occurs due to the fact that the algorithm is required to produce a decision boundary, but is unaware that the data is placed directly on this boundary.
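To make the argument concrete, the following minimal numerical sketch instantiates (6) with \(g_{0}=\mathrm{ReLU}\in\mathcal{K}_{0}\) and a small factor \(\beta\); the point \(x\) and perturbation \(\zeta\) are chosen for illustration only. The resulting \(f_{\text{reg}}\) has an arbitrarily small Lipschitz constant, yet the thresholded output flips under an arbitrarily small perturbation of a point lying on the induced decision boundary.

```python
import numpy as np

def f_reg(x, beta=1e-3):
    # Construction (6) with g_0 = ReLU (a member of K_0, hence g_0(0) = 0).
    n = x.shape[-1]
    relu = lambda s: np.maximum(s, 0.0)
    return (-relu(beta * ( x - 1.0 / np.sqrt(n))).sum(axis=-1)
            - relu(beta * (-x - 1.0 / np.sqrt(n))).sum(axis=-1))

classify = lambda v: 1 if v >= 0 else 0        # forced classification via sign(.)

n = 10
x = np.full(n, 1.0 / np.sqrt(n))               # a point lying exactly on the boundary
zeta = np.zeros(n)
zeta[0] = 1e-9                                 # an arbitrarily small perturbation

print(classify(f_reg(x)), classify(f_reg(x + zeta)))    # 1 0: the label flips
```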
#### 3.2.2 Adversarial training
A potential way to overcome the instabilities formalised in statement (i) of Theorem 3.1 is to invoke a type of training capable of verifying that instabilities (5) do not occur. Adversarial training and data augmentation, whereby each data sample produces a set of points corresponding to perturbed data, is an example of an approach which can potentially address the problem. The approach is not without its own challenges, as one needs to ensure that all points in the sets \(\mathbb{B}_{n}(\alpha/\sqrt{n},x)\), \(\alpha\in(0,\varepsilon/2)\), are checked. The latter task can be computationally and numerically overwhelming for large \(n\).
#### 3.2.3 Dark data
The final and perhaps the most interesting point in relation to the problem of verifiability is statement (iii), which can be related to challenge of the "dark data" - the data which exists but to which we don't have access [9] or, more generally, the missing data and the data which we don't have [6]. As the theorem states, high-dimensional distributions could be a very real source of such dark data, potentially leading to instabilities or non-verifiability.
## 4 Conclusion
Deep learning networks and models have convincingly shown ample capabilities in many practical tasks. When properly engineered, these models stunningly outperform shallower architectures (see e.g. [7], [15] for examples and precise statements). Moreover, recent breakthroughs such as the emergence of ChatGPT show the exceptional power these models may bring. These models operate in
high-dimensional spaces and process and execute decisions on genuinely high-dimensional data.
At the same time, and despite these remarkable achievements, the application of these highly expressive and capable models requires special care and understanding of their fundamental limitations.
Our work, by building on [1], reveals a new set of limitations which are particularly inherent to high-dimensional data. These limitations constitute the presence of nested uncountably large families of exceptions on which even moderately-sized networks may and likely will fail. The results also show that it may be computationally hard to verify both robustness and accuracy of models within classical distribution-agnostic learning frameworks based solely on the notions of risk and empirical risk minimisation. All these call for the need to rethink standard distribution-agnostic learning frameworks and introduce more appropriate models of reality into the mathematical setting of statistical learning.
The results, by showing fundamental difficulties with guaranteeing simultaneous stability, accuracy, and verifiability, highlight the importance of mathematical theory and methods for the continuous correction of AI models [4], [5], [3].
At present, the results do not include networks with classical sigmoidal activation functions. Detailed analysis of these types of networks will be the topic of our future work.
#### 4.0.1 Acknowledgements
This work is supported by the UKRI, EPSRC [UKRI Turing AI Fellowship ARaISE EP/V025295/2 and UKRI Trustworthy Autonomous Systems Node in Verifiability EP/V026801/2 to I.Y.T., EP/V025295/2 to O.S., A.N.G., and Q.Z., EP/V046527/1 and EP/P020720/1 to D.J.H, EP/V046527/1 to A.B.].
| 本稿では、分類タスクにおけるニューラルネットワークの保証された安定性と正確性を決定することの理論的限界を評価します。古典的な分布非依存の枠組みと、経験リスクを最小化し、場合によっては重みの正則化を伴うアルゴリズムを検討します。上記の設定において、理想的に安定かつ正確なニューラルネットワークを計算し検証することが、たとえそのような理想解が与えられたニューラルアーキテクチャのクラス内に存在する場合であっても、極めて困難となるタスクの大きな族が存在することを示します。 |
2309.10268 | Lower Gravity Demonstratable Testbed for Space Robot Experiments | In developing mobile robots for exploration on the planetary surface, it is
crucial to evaluate the robot's performance, demonstrating the harsh
environment in which the robot will actually be deployed. Repeatable
experiments in a controlled testing environment that can reproduce various
terrain and gravitational conditions are essential. This paper presents the
development of a minimal and space-saving indoor testbed, which can simulate
steep slopes, uneven terrain, and lower gravity, employing a three-dimensional
target tracking mechanism (active xy and passive z) with a counterweight. | Kentaro Uno, Kazuki Takada, Keita Nagaoka, Takuya Kato, Arthur Candalot, Kazuya Yoshida | 2023-09-19T02:44:22 | http://arxiv.org/abs/2309.10268v2 | # Lower Gravity Demonstratable Testbed for Space Robot Experiments
###### Abstract
In developing mobile robots for exploration on the planetary surface, it is crucial to evaluate their performance, demonstrating the harsh environment in which the robot will actually be deployed. Repeated experiments in a controlled testing environment reproducing various terrain and gravitational conditions are essential. This paper presents the development of a minimal and space-saving indoor testbed, which can simulate steep slopes, uneven terrain, and lower gravity, employing a three-dimensional target tracking mechanism (active xy and passive z) with a counterweight.
## I Introduction
Space robots work under different gravity (Moon: 1/6 G, Mars: 3/8 G, and small bodies or in-orbit: micro G). Thus, it is essential to have an experimental setup that can demonstrate that the robot moves under such lower gravity. Gravity offloading techniques for such space robot experiments are generally divided into three types: 1) Air-floating: This method removes the effect of gravity on motion in a two-dimensional plane by levitating the object on the surface using compressed air and reducing friction to as close to zero as possible [1]. By changing the tilt angle of the surface plate, arbitrary gravity can be simulated. 2) Applying a controlled external force: This technique utilizes a robot arm [2] or a tether [3] attached to the target object to externally apply controlled forces and moments other than gravity, with torque and force sensor feedback, so that arbitrary gravitational forces and external moments can be simulated for the target. With this technique, any gravity can be mimicked with high accuracy, but the whole hardware system tends to be large. 3) Counterweighting: In this technique, the target object is hoisted by a tether to which a counterweight is attached at the other end. A vertical upward force is applied to the centroid of the object [4]. Changing the ratio of the counterweight to the object's weight can simulate an arbitrary gravity environment. The third method is less accurate than the second solution because the force is not actively controlled. However, the mechanical setup and control are more straightforward.
This paper focuses on developing an indoor testbed which can change the inclination angle of the field and reproduce various uneven terrains for space robotic experiments. The entire system is realized in a cost- and space-saving manner by means of the counterweighting solution for gravity compensation and the integration of the CoreXY mechanism [5] on the ceiling of the outer frame. This mechanism is used to actively track the moving robot's horizontal position, which keeps the counterweight cable (i.e., the direction of the gravity offload) vertical while the robot moves.
## II System Design and Integration
The developed test field consists of an outer frame, a field plate that can change its inclination along with the outer frame (see Fig. 1), and a three-dimensional (3D) target-tracking mechanism on the outer cage. The field plate frame can accommodate four wooden top panels, which have holes at 60 mm intervals to attach any object (maximum payload capacity: 200 kg) such as steps, handholds, rocks, or a sand tray, creating artificial or natural rocky/sandy terrain representing an in-orbit space station interior or a planetary surface. The field plate's inclination angle can be easily varied by turning the attached hand-cranked winch, which lifts the backside edge of the plate. In this testbench, the robot is hung by a cable pulled by a counterweight; the cable is fixed to the robot via a two degree-of-freedom (pitch and yaw) freely rotating gimbal, which avoids applying undesired moments to the robot.
The testbench has a 3D target tracking system to keep applying a vertical gravity-offloading force at the moving robot's center of mass. In this system, the \(z\)-position is passively adjusted by the vertically moving counterweight on the frictionless rail, and the \(xy\)-position is actively tracked using the CoreXY mechanism [5].
Fig. 1: Developed terrain-, inclination-, and gravity-adjustable testbed. The hoisted robot’s mass is partially (or fully) compensated by a counterweight. The cable of the counterweight is maintained to be vertical while the robot moves on the field using the CoreXY mechanism.
The CoreXY mechanism, which is typically used to move 3D printer nozzles, enables precise control of the tracker's \(xy\)-position by conveying belts driven by two stepping motors. The tracker displacement necessary to set the cable's tilt angle back to zero is derived as follows. Let \(l\) be the length of the cable from the robot fixture to the tracker, and \(\phi\) and \(\theta\) be the roll and pitch angles of the cable from the vertical state relative to the inertial frame, respectively. The cable's roll and pitch tilt angles are measured by two encoders attached to the upper gimbal through which the counterweight cable passes. The amount of movement in the \(x\)- and \(y\)-axes, \(\Delta x\) and \(\Delta y\), can be obtained as in Eq. (1).
\[\Delta x=-l\sin\theta,\ \Delta y=l\sin\phi \tag{1}\]
In the CoreXY schematic of Fig. 1, the two belt feeds driven by the left and right stepping motors, \(\Delta a\) and \(\Delta b\), can be obtained from \(\Delta x\) and \(\Delta y\) as computed in Eq. (2).
\[\Delta a=\Delta x-\Delta y,\ \Delta b=-\Delta x-\Delta y \tag{2}\]
Two timing belt feeds are determined using Eq. (1) and Eq. (2). The control input of the stepping motors is determined by PID control. The data is logged by ROS software.
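A minimal sketch of one tracking update combining Eqs. (1) and (2) follows; the proportional gain `kp` stands in for the PID controller, whose gains are not given here, and the function name is ours.

```python
import numpy as np

def corexy_tracking_step(l, phi_roll, theta_pitch, kp=1.0):
    """Cable tilt angles (from the gimbal encoders) to belt feeds, Eqs. (1)-(2)."""
    dx = -l * np.sin(theta_pitch)       # Eq. (1)
    dy =  l * np.sin(phi_roll)
    da = dx - dy                        # Eq. (2)
    db = -dx - dy
    return kp * da, kp * db             # commanded feeds for the two stepping motors
```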
## III Evaluation
In our developed test bench, the target tracking system keeps the counterweighting force precisely vertical as the robot moves, which is essential to prevent the gravity-cancellation force from applying unnecessary horizontal disturbances to the robot. An evaluation test was conducted to assess the performance of the system. For this test, a cart was attached instead of a robot, and a three-axis force sensor was inserted between the cable and the cart's top surface to measure the counterforce vector (see Fig. 2(a)). The cart was pushed almost 1 m linearly in the positive \(x\) and \(y\) directions, and the force vector and the cart velocity were recorded by the force sensor and the motion capture system, respectively. The compensation force was set to 20 N or 40 N, and the test was conducted five times for each setting. A representative result is shown in Fig. 2. Across the ten trials, the average velocity of the cart was 34 mm/s (min: 25 mm/s, max: 43 mm/s). Throughout all trials, while the target moves, the angle from the vertical of the force acting on the target was kept within \(\pm 1.5\) deg (Fig. 2(b)), resulting in horizontal force components of less than 0.5 N (Fig. 2(c)), i.e., 1.3% of the 40 N offloading force. Furthermore, a space exploration robot demonstration using the testbed is showcased in Fig. 3. In this demonstration, a four-limbed cliff-climbing robot, HubRobo [6] (mass: 3.0 kg), was deployed on a 45\({}^{\circ}\) inclined irregular slope. The counterweight mass was determined to be 3.5 kg (robot gimbal mass (1.0 kg) + 5/6 of the HubRobo mass), simulating Lunar gravity. In practice, a certain percentage of the calculated value was added to the counterweight to offset friction. Owing to the accurate target tracking, we confirmed that the force for gravity offloading was kept vertical (see the cable trajectory in Fig. 3) during the robot's locomotion.
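For reference, the counterweight sizing used in the demonstration follows from a simple force balance; the function below is our own sketch, and the friction margin is the unspecified ad-hoc correction mentioned above.

```python
def counterweight_mass(m_robot, m_gimbal, g_target, g_earth=9.81, friction_margin=0.0):
    """Counterweight that reduces the robot's effective weight to the target gravity."""
    m = m_gimbal + (1.0 - g_target / g_earth) * m_robot
    return m * (1.0 + friction_margin)

# Lunar demonstration: 1.0 kg gimbal + 5/6 of the 3.0 kg HubRobo.
print(counterweight_mass(3.0, 1.0, g_target=9.81 / 6.0))    # approximately 3.5 kg
```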
## IV Conclusion
This paper detailed the development of an experimental environment for space robots in which the terrain unevenness, slope inclination, and simulated gravity can be adjusted. In this testbed, a 3D target tracking system (active \(xy\) and passive \(z\) adjustment) was constructed to continuously apply a vertical gravity-offloading force to the moving target, and the performance of this system was confirmed to be satisfactory through experiments. Future work includes achieving higher accuracy of the simulated gravity by adding a \(z\)-axis linear actuator and a force sensor to the system, which would complete an active 3D control with force feedback.
| モバイルロボットを惑星表面での探索向けに開発するにあたっては、ロボットが実際に展開される厳しい環境を模擬してその性能を評価することが重要です。さまざまな地形条件と重力条件を再現できる、制御されたテスト環境での再現可能な実験が不可欠です。この論文では、カウンターウェイトを用いた3次元ターゲット追跡機構(能動的なxyと受動的なz)により、急な斜面、起伏のある地形、低重力を模擬できる、省スペースで最小構成の室内テストベッドの開発を報告します。 |
2309.07714 | Shared Telemanipulation with VR controllers in an anti slosh scenario | Telemanipulation has become a promising technology that combines human
intelligence with robotic capabilities to perform tasks remotely. However, it
faces several challenges such as insufficient transparency, low immersion, and
limited feedback to the human operator. Moreover, the high cost of haptic
interfaces is a major limitation for the application of telemanipulation in
various fields, including elder care, where our research is focused. To address
these challenges, this paper proposes the usage of nonlinear model predictive
control for telemanipulation using low-cost virtual reality controllers,
including multiple control goals in the objective function. The framework
utilizes models for human input prediction and taskrelated models of the robot
and the environment. The proposed framework is validated on an UR5e robot arm
in the scenario of handling liquid without spilling. Further extensions of the
framework such as pouring assistance and collision avoidance can easily be
included. | Max Grobbel, Balint Varga, Sören Hohmann | 2023-09-14T13:46:59 | http://arxiv.org/abs/2309.07714v1 | # Shared telemanipulation with VR controllers in an anti slosh scenario*
###### Abstract
Telemanipulation has become a promising technology that combines human intelligence with robotic capabilities to perform tasks remotely. However, it faces several challenges such as insufficient transparency, low immersion, and limited feedback to the human operator. Moreover, the high cost of haptic interfaces is a major limitation for the application of telemanipulation in various fields, including elder care, where our research is focused. To address these challenges, this paper proposes the usage of nonlinear model predictive control for telemanipulation using low-cost virtual reality controllers, including multiple control goals in the objective function. The framework utilizes models for human input prediction and task-related models of the robot and the environment. The proposed framework is validated on an UR5e robot arm in the scenario of handling liquid without spilling. Further extensions of the framework such as pouring assistance and collision avoidance can easily be included.
## I Introduction
Telemanipulation is an emerging field that aims to combine the skills of human operators with robotic systems to perform tasks in a variety of fields [1]. These fields include elder care, handling hazardous materials, and space exploration. In particular, telemanipulation has the potential to overcome a number of challenges, including inaccessibility in hazardous environments, lack of human resources, and the need for precision in certain applications.
However, the success of telemanipulation is hindered by several challenges [2]. The two main challenges are communication delays and the high cost of the haptic interfaces used in state-of-the-art applications. The reasons for such communication delays and dropouts are network latency, packet loss, or hardware failure, which can result in unpredictable and unstable behavior of the remote robot. Haptic interfaces are essential to provide feedback to the operator and enhance the intuitiveness of the control [3, 4]. Since such haptic interfaces are costly, the application of telemanipulation in various fields is limited by this cost factor.
To address these challenges, this paper proposes a novel framework for telemanipulation using virtual reality (VR) controllers and nonlinear model predictive control (NMPC). The framework aims to provide efficient models for predicting human inputs and enabling the efficient execution of the remote tasks even in the presence of communication issues. The use of VR controllers provides a low-cost alternative to traditional haptic interfaces while maintaining the operator's intuitive control of the remote robot. Additionally, the NMPC algorithm improves the overall stability and accuracy of the telemanipulation system.
The proposed framework is validated on an UR5e robot arm with a glass of water connected to the end effector. The framework provides an anti-slosh assistance for the handling of liquid containers, which is a challenging use case due to the sloshing dynamics of the liquid.
The remainder of this paper is structured as follows: In section II we give an overview of related literature. The description of utilized system models is given in section III. Our proposed Assistive Telemanipulation Framework is presented in section IV. In section V we show the realization of the framework on a real UR5e robot.
## II Related Work
In the following, we give a brief overview of related research in the fields of bilateral telemanipulation, model predictive control in the context of robotics and anti slosh control.
### _Bilateral Telemanipulation_
Literature in the field of telemanipulation mainly focuses on setups with haptic input devices, also known as bilateral telemanipulation [2], where the human operator also receives haptic feedback and becomes part of the control loop. The goal of those approaches is to support the human with a transparent telemanipulation system [3, 4] and high immersion [5, 6] utilizing the haptic feedback. A disadvantage of bilateral telemanipulation is the high cost of the input devices (e.g. [7]).
### _Robotics and Model Predictive Control_
Model predictive control (MPC) utilizes model knowledge and an objective function to calculate optimal trajectories that satisfy the system dynamics. One challenge with MPC is the real-time capability. With increasing calculation power of modern processors and efficient solvers for optimization problems, MPCs are applied more and more in robotic applications, e.g. in [8] frequencies of \(1\mathrm{kHz}\) are accomplished. Their robot model is given in joint space, such that the inverse kinematics are solved inherently by the optimization. In telemanipulation robotics, MPCs can be utilized for collision avoidance, though they often are combined with haptic input devices [9]. In [10] the implementation of a MPC for collision avoidance on a telemanipulated UR5e is presented.
To deal with the high calculation times of online optimization problems, approximate MPCs based on Neural Networks are suggested in [11].
### _Anti Slosh Control_
When transporting liquids in an open container, the sloshing dynamics of the liquid have to be considered. In [12], the modelling of the liquid either as a pendulum or as a mass-spring-damper system is compared in different scenarios. The examination of liquid in a hemispherical container with effective anti slosh control is presented in [13]. Their algorithm is based on a model of the liquid as a pendulum. The control architecture in [14] is also based on the modelling as a pendulum and implements input filters to generate slosh-free movements in telemanipulation. It is worth mentioning that this work does not rely on haptic input devices, but uses motion detection with cameras as the human input interface. Based on the pendulum model, [15] and [16] generate optimal trajectories which are used as references for the robot controller. The work of [17] is not based on a liquid model directly, but they enforce trajectories of the telemanipulated robot arm such that the container does not experience any lateral acceleration.
So far, no telemanipulation framework based on MPC for anti slosh control with VR controllers as input devices has been proposed in literature.
## III Models for Assistive Telemanipulation Framework
The assistive telemanipulation framework is based on two underlying system models, namely the model of the robot arm and the model of liquid in a container, which are derived in this section.
### _Model of Robot_
A generic robot arm with \(n_{q}\) rotational joints can be described through the states \([\mathbf{q}^{T},\dot{\mathbf{q}}^{T}]^{T}\), where \(\mathbf{q}\in\mathbb{R}^{n_{q}}\) denotes all joint angles and \(\dot{\mathbf{q}}\in\mathbb{R}^{n_{q}}\) the joint angular velocities. The relation between the position and orientation of the end effector in global cartesian coordinates \(\mathbf{x}\) and the robot joints \(\mathbf{q}\) is given through the forward kinematics \(F_{K}:\mathbb{R}^{n_{q}}\rightarrow\mathbb{R}^{7},\mathbf{q}\mapsto\mathbf{x}\), where \(\mathbf{x}\) consists of the cartesian coordinates of the end effector \(\mathbf{r}\) and the rotation described as a unit quaternion.
The dynamics of a robot are described through the set of second order differential equations of the form [18]
\[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{b}(\mathbf{q},\dot{\mathbf{q} })=\tau, \tag{1}\]
where \(\tau\) contains the applied torques and \(\ddot{\mathbf{q}}\) the acceleration in all joints, whereas \(\mathbf{M}(\mathbf{q})\) denotes the inertia tensor and \(\mathbf{b}(\mathbf{q},\dot{\mathbf{q}})\) all further relevant terms (dissipation, Coriolis effects, gravity).
Like [11] and [8], we assume the existence of low-level controllers such that the joint accelerations serve as control inputs \(\mathbf{u}=\ddot{\mathbf{q}},\mathbf{u}\in\mathbb{R}^{n_{q}}\) of the considered system. With this assumption, the equations of motion (1) are reduced to the linear dynamic system
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\mathbf{q}\\ \dot{\mathbf{q}}\end{bmatrix}=\mathbf{A}\begin{bmatrix}\mathbf{q}\\ \dot{\mathbf{q}}\end{bmatrix}+\mathbf{B}\mathbf{u} \tag{2}\]
with
\[\mathbf{A}=\begin{bmatrix}\mathbf{0}&\mathbb{I}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}\qquad\mathrm{and}\qquad\mathbf{B}=\begin{bmatrix} \mathbf{0}\\ \mathbb{I}\end{bmatrix},\]
where \(\mathbb{I}\in\mathbb{R}^{n_{q}\times n_{q}}\) denotes the identity matrix and \(\mathbf{0}\in\mathbb{R}^{n_{q}\times n_{q}}\).
For the evaluation, an UR5e robot with 6 degrees of freedom is utilized. Similar to [8], only the joints \(\{q_{1},q_{2},q_{3}\}\subset\{q_{0},q_{1},q_{2},q_{3},q_{4},q_{5}\}\) are actuated, thus only pure planar movement is considered in this work as shown in Fig. 1. The angles \(q_{1},q_{2}\) and \(q_{3}\) are depicted in positive orientation. The coordinate frames \(\{1\},\{2\}\) and \(\{3\}\) are connected to the robot links with length \(L_{1},L_{2}\) and \(L_{3}\), respectively. A glass \(\{4\}\) with water is connected to the end effector. The world frame \(\{0\}\) has the same origin as frame \(\{1\}\).
With \(n_{q}=3\), the system states reduce to \([\mathbf{q}^{T},\dot{\mathbf{q}}^{T}]^{T}=[q_{1},q_{2},q_{3},\dot{q}_{1},\dot{q}_{2},\dot{q}_{3}]^{T}\). With the lengths \(L_{1},L_{2}\) and \(L_{3}\) of the robot links, the forward kinematics for this planar configuration \(F_{K,2D}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3},\mathbf{q}\mapsto[x_{c},z_{c},\theta_{c}]^{T}\) are described by [18, p.137]
\[x_{c} =L_{1}\cos q_{1}+L_{2}\cos(q_{1}+q_{2}) \tag{3a}\] \[+L_{3}\cos(q_{1}+q_{2}+q_{3}),\] \[z_{c} =-(L_{1}\sin(q_{1})+L_{2}\sin(q_{1}+q_{2})\] (3b) \[+L_{3}\sin(q_{1}+q_{2}+q_{3})),\] \[\theta_{c} =q_{1}+q_{2}+q_{3}. \tag{3c}\]
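For concreteness, the planar forward kinematics (3) can be evaluated with a few lines of Python; this is a minimal sketch in which the function name is ours and the default link lengths are the UR5e values used later in Section V.

```python
import numpy as np

def forward_kinematics_2d(q, L=(0.425, 0.3922, 0.1)):
    """Planar forward kinematics (3a)-(3c): q = (q1, q2, q3) -> (x_c, z_c, theta_c)."""
    q1, q2, q3 = q
    L1, L2, L3 = L
    x_c = L1 * np.cos(q1) + L2 * np.cos(q1 + q2) + L3 * np.cos(q1 + q2 + q3)
    z_c = -(L1 * np.sin(q1) + L2 * np.sin(q1 + q2) + L3 * np.sin(q1 + q2 + q3))
    theta_c = q1 + q2 + q3
    return np.array([x_c, z_c, theta_c])
```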
### _Model of Liquid in Open Container_
The transportation of liquids in open containers is discussed in [13], which we adapt for our framework. Under the assumption that the surface of the liquid stays flat during movements, the liquid can be modeled as an oscillating pendulum with a moving base [13]. The revolution point A of the pendulum lies on the intersection of the surface of the liquid and the middle line of the container, and the deflection \(\beta\) is measured relative to the \(\mathbf{e}_{z}\) axis of the glass \(\{4\}\) (Fig. 2).
Fig. 1: Planar robot arm with three rotational joints \(q_{i}\) and a glass of water with the coordinate frame \(\{4\}\) connected to the end effector.
The surface is always perpendicular to the pendulum rod. Further, it is assumed that the cross-section of the container is circular with a constant diameter.
The equation of movement of the pendulum is derived using the Lagrange Formalism. The position of the pendulum mass in the global coordinate frame \(\{0\}\) is determined by
\[x_{m}=x_{c}+h\sin(\theta)-l\sin(\theta+\beta) \tag{4a}\] \[z_{m}=z_{c}+h\cos(\theta)-l\cos(\theta+\beta) \tag{4b}\]
with the filling level \(h\) of the liquid and the virtual pendulum length \(l\). The pendulum length is a function of the viscosity of the liquid, the diameter of the container, and the gravitational acceleration \(g\); it can be determined by measuring the natural frequency \(\omega\) of the liquid and using the relation (see e.g. [14])
\[\omega=\sqrt{\frac{g}{l}}. \tag{5}\]
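As a small illustration of (5), the virtual pendulum length can be estimated from a measured oscillation period of the liquid surface; the period used below is a placeholder, not a measured value.

```python
import numpy as np

g = 9.81                      # gravitational acceleration [m/s^2]
T = 0.28                      # oscillation period of the liquid surface [s] (placeholder)
omega = 2 * np.pi / T         # natural frequency of the liquid [rad/s]
l = g / omega**2              # virtual pendulum length from omega = sqrt(g / l)
print(f"virtual pendulum length l = {l:.3f} m")  # roughly 0.02 m, the order used in Section V
```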
Assuming that the movement of the container is enforced by the robot, the pendulum angle \(\beta\) remains the only degree of freedom of the liquid subsystem. Thus, it is used as the generalized coordinate for the Lagrange Formalism. Introducing the Rayleigh dissipation function [13]
\[\mathcal{R}=\frac{1}{2}d\dot{\beta}^{2} \tag{6}\]
with the damping coefficient \(d\), the dynamics of the planar pendulum in the container of the end effector is expressed as:
\[\begin{split}\ddot{\beta}&=f_{\beta}(\beta,\dot{ \beta},\ddot{x}_{c},\ddot{z}_{c},\theta_{c},\dot{\theta}_{c},\ddot{\theta}_{c}) \\ &=\frac{1}{l}\left(-(l-h\cos(\beta))\ddot{\theta}_{c}+h\sin(\beta )\dot{\theta}_{c}^{2}\right.\\ &+\cos(\theta_{c}+\beta)\ddot{x}_{c}-\sin(\theta_{c}+\left.\beta \right)(g+\ddot{z}_{c})-\frac{d}{ml}\dot{\beta}\right),\end{split} \tag{7}\]
where the function \(f_{\beta}\) describes the dependence of the pendulum's angular acceleration on the container's position, velocity, and acceleration. The dynamics of the liquid can thus be described through the nonlinear system
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\beta\\ \dot{\beta}\end{bmatrix}=\begin{bmatrix}\dot{\beta}\\ f_{\beta}(\beta,\dot{\beta},\ddot{x}_{c},\ddot{z}_{c},\theta_{c},\dot{\theta}_{c},\ddot{\theta}_{c})\end{bmatrix}. \tag{8}\]
The parameters \(d\) and \(m\) of the model depend on the properties of the liquid and the container and can be identified through experiments.
As long as the mass does not experience lateral acceleration in the local coordinate frame, the pendulum remains in the fixed point \(\beta=0\), see e.g. [17] for this assumption.
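The liquid model can be summarized in a short Python sketch of (7) and (8), e.g. for simulating the slosh angle along a given container trajectory; the function names are ours and the default parameters are the roughly identified values reported in Section V.

```python
import numpy as np

def f_beta(beta, dbeta, ddx_c, ddz_c, theta_c, dtheta_c, ddtheta_c,
           l=0.02, h=0.08, d=0.005, m=1.0, g=9.81):
    """Angular acceleration of the slosh pendulum, eq. (7)."""
    return (1.0 / l) * (
        -(l - h * np.cos(beta)) * ddtheta_c
        + h * np.sin(beta) * dtheta_c**2
        + np.cos(theta_c + beta) * ddx_c
        - np.sin(theta_c + beta) * (g + ddz_c)
        - (d / (m * l)) * dbeta
    )

def liquid_rhs(state, container_motion):
    """Right-hand side of the state-space form (8); state = [beta, dbeta],
    container_motion = (ddx_c, ddz_c, theta_c, dtheta_c, ddtheta_c)."""
    beta, dbeta = state
    return np.array([dbeta, f_beta(beta, dbeta, *container_motion)])
```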
## IV Assistive Telemanipulation Framework based on MPC
In this section, our assistive telemanipulation framework is introduced. The goal of the human operator is to move the liquid container without spilling it. For our Assistive Telemanipulation Framework, we define the two objectives
* \(\mathrm{O}_{1}\): tracking the given input and
* \(\mathrm{O}_{2}\): stabilizing the liquid.
The proposed framework consists of three components: 1) the two models from section III, 2) the user interface with input mapping and human movement prediction, and 3) a MPC that combines the two aforementioned goals of the controller into a single objective function.
### _User Interface: Input Mapping and Human Movement Prediction_
The user interface for controlling the UR5e robot arm's end effector position is implemented using a _Touch controller_ for the _Meta Quest 2_[19]. The operator can switch between an active and inactive mode using a button on the controller. In inactive mode, the desired position of the end effector
Fig. 3: Mapping of user input to the desired end effector position. A movement of the VR controller to the right and downwards moves the desired position of the end effector in the positive \(\mathbf{e}_{x}\) and negative \(\mathbf{e}_{z}\) direction.
Fig. 2: The liquid in a container is modelled as a pendulum.
is constantly set to the current position. When the mode switches to active, the current position of the end effector and the controller are saved as reference points. Relative movements of the controller are used as relative displacements of the desired end effector position, as shown in Fig. 3. Controller movements to the right and left are mapped to the \(\mathbf{e}_{x}\) axis, and movements up and down are mapped to the \(\mathbf{e}_{z}\) axis, which is called position-position control [20].
To implement an MPC, the human input has to be predicted over the prediction horizon \(h_{p}\). As suggested in [21], the challenges of predicting human input, which arise from the complexity and uncertainty of human behavior, can be overcome by using a simple prediction model; it has been shown that such a simple model can reach sufficiently good results. In our MPC, the future reference positions are predicted assuming constant velocities.
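The constant-velocity prediction of the operator input over the horizon can be written in a few lines; the variable names and numbers below are illustrative only.

```python
import numpy as np

def predict_reference(p_ref, v_ref, N, dt):
    """Extrapolate the desired pose [x_c, z_c, theta_c] over N steps of length dt,
    assuming a constant velocity."""
    return np.array([p_ref + (k + 1) * dt * v_ref for k in range(N)])

# example: latest desired pose and a finite-difference velocity estimate (placeholders)
p_ref = np.array([0.60, 0.20, 0.0])
v_ref = np.array([0.05, 0.00, 0.0])
refs = predict_reference(p_ref, v_ref, N=20, dt=1.0 / 30.0)
```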
### _MPC for Assistive Telemanipulation_
The system states from the previously derived state space models (2) and (8) can be combined to
\[\dot{\xi}=\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\mathbf{q}\\ \dot{\mathbf{q}}\\ \beta\\ \dot{\beta}\end{bmatrix}=\mathbf{f}_{\xi}(\xi,\mathbf{u}) \tag{9}\]
with dimension \(n_{\xi}=8\). Note that the first six states are given in the joint space while the dynamics of the slosh angle \(\beta\) are stated in the task space; thus the forward kinematics (3) and its derivatives with respect to time are required.
System (9) is discretized in time with the sampling period \(\Delta t\) using the forward Euler method, which is sufficient for this use case if the sampling period is small enough. Assuming zero-order hold on the input \(\mathbf{u}\), the discretized system results in
\[\xi_{k+1}=\xi_{k}+\mathbf{f}_{\xi,k}\cdot\Delta t. \tag{10}\]
For the control task \(\mathrm{O}_{1}\), we define the following tracking error function
\[\mathbf{e}=\begin{bmatrix}x_{c}\\ z_{c}\\ \theta_{c}\end{bmatrix}-\begin{bmatrix}x_{c}\\ z_{c}\\ \theta_{c}\end{bmatrix}_{ref}. \tag{11}\]
Since the system dynamics of the robot (1) only contain the joint angles and the tracking error is described in the task space, the forward kinematics (3) are implicitly used.
The two control objectives \(\mathrm{O}_{1}\) and \(\mathrm{O}_{2}\) can now be combined into the discrete finite horizon objective function
\[J=\sum_{k=0}^{N-1}||\mathbf{e}_{k+1}||_{\mathbf{Q_{1}}}^{2}+||\beta_{k+1}||_{ Q_{2}}^{2}+||\mathbf{u}_{k}||_{\mathbf{R}}^{2} \tag{12}\]
with no stage cost for \(\mathbf{e}\) and \(\beta\) at \(k=0\) and none for \(\mathbf{u}_{k}\) at \(k=N\). \(\mathbf{Q_{1}}=\mathrm{diag}(Q_{1,1},Q_{1,2},Q_{1,3})\) is a diagonal matrix containing the weights for the tracking error in \(x_{c},z_{c}\) and \(\theta_{c}\). The sloshing angle \(\beta\) is weighted with \(Q_{2}\), and stabilizing weights \(\mathbf{R}=\mathrm{diag}(R_{1},R_{2},R_{3})\) are added to prevent high accelerations of the joints, especially if the robot configuration is close to singularities.
Including system constraints and input constraints as well as starting states \(\xi_{0}\), we obtain the nonlinear optimization problem
\[\min_{\xi_{1\to N},\mathbf{u}_{0\to N-1}} J(\xi_{1\to N},\mathbf{u}_{0\to N-1})\] s.t. \[\xi_{k+1}=\xi_{k}+\mathbf{f}_{\xi,k}\cdot\Delta t\] \[\mathbf{q}_{\min}\leq\mathbf{q}_{\mathbf{k}}\leq\mathbf{q}_{ \max}\quad\forall k\in[1,N]\] \[\dot{\mathbf{q}}_{\min}\leq\dot{\mathbf{q}}_{\mathbf{k}}\leq\dot{ \mathbf{q}}_{\max}\quad\forall k\in[1,N]\] \[\mathbf{u}_{\min}\leq\mathbf{u}_{\mathbf{k}}\leq\mathbf{u}_{\max} \quad\forall k\in[0,N-1]\] \[\xi_{k=0}=\xi_{0}.\]
The inequality constraints on \(\mathbf{q}_{\mathbf{k}}\) for the robot joints are set according to the robot manual. Constraints on the joint velocities \(\dot{\mathbf{q}}_{\mathbf{k}}\) and acceleration \(\mathbf{u}_{\mathbf{k}}\) can be utilized to ensure slower robot movements for safety reasons. Note that through the usage of the forward kinematics, no inverse kinematics need to be solved after planning trajectories in the task space ([8], [10]). The optimization is solved with multiple shooting, thus the optimization variables include the input \(\mathbf{u}\) as well as the system states \(\xi\).
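Since the realization in Section V solves this problem with CasADi and ipopt, the following Python sketch indicates how the multiple-shooting NMPC above can be set up. It is a simplified illustration only: the horizon, weights, and bounds are placeholder values, and the slosh dynamics are replaced by a stand-in (a full implementation would insert (7) into the prediction model).

```python
import casadi as ca
import numpy as np

N, dt = 20, 1.0 / 30.0                        # horizon and sampling period (placeholders)
L1, L2, L3 = 0.425, 0.3922, 0.1
Q1 = ca.DM(np.diag([50.0, 50.0, 10.0]))       # tracking weights (placeholders)
Q2, R = 100.0, ca.DM(np.diag([0.1, 0.1, 0.1]))
q_lim, dq_lim, u_lim = 2 * np.pi, 1.0, 5.0    # symmetric bounds (placeholders)

def F_K_2D(q):
    """Planar forward kinematics (3) as a CasADi expression."""
    s = q[0] + q[1] + q[2]
    x = L1 * ca.cos(q[0]) + L2 * ca.cos(q[0] + q[1]) + L3 * ca.cos(s)
    z = -(L1 * ca.sin(q[0]) + L2 * ca.sin(q[0] + q[1]) + L3 * ca.sin(s))
    return ca.vertcat(x, z, s)

opti = ca.Opti()
xi = opti.variable(8, N + 1)     # multiple shooting: states [q, dq, beta, dbeta]
u = opti.variable(3, N)          # inputs u = ddq
xi0 = opti.parameter(8)          # initial state
e_ref = opti.parameter(3, N)     # predicted reference poses over the horizon

J = 0
for k in range(N):
    # stand-in for f_xi: double integrator for (q, dq); the beta dynamics (7) are omitted here
    f = ca.vertcat(xi[3:6, k], u[:, k], xi[7, k], 0)
    opti.subject_to(xi[:, k + 1] == xi[:, k] + dt * f)        # forward Euler, eq. (10)
    e = F_K_2D(xi[0:3, k + 1]) - e_ref[:, k]                  # tracking error, eq. (11)
    J += ca.mtimes([e.T, Q1, e]) + Q2 * xi[6, k + 1] ** 2 \
         + ca.mtimes([u[:, k].T, R, u[:, k]])                 # objective (12)
    opti.subject_to(opti.bounded(-q_lim, xi[0:3, k + 1], q_lim))
    opti.subject_to(opti.bounded(-dq_lim, xi[3:6, k + 1], dq_lim))
    opti.subject_to(opti.bounded(-u_lim, u[:, k], u_lim))

opti.subject_to(xi[:, 0] == xi0)
opti.minimize(J)
opti.solver("ipopt")
# at run time: opti.set_value(xi0, ...); opti.set_value(e_ref, ...); sol = opti.solve()
```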
In Fig. 4 the overall control architecture is depicted. The states \(\beta\) and \(\dot{\beta}\) are not observable in the current setup and are thus controlled in open loop.

Assuming that the underlying tracking controller runs at a high frequency with good tracking performance, and that the solution trajectories obtained from the optimization satisfy the system dynamics, the measurable states \(\mathbf{q}\) and \(\dot{\mathbf{q}}\) are only fed back if the difference between predicted and measured states exceeds the threshold \(\Delta_{max}\), which increases stability [22, chap. 3].
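This conditional re-initialization can be sketched as follows; the threshold value and function name are placeholders.

```python
import numpy as np

def mpc_initial_joint_state(q_dq_predicted, q_dq_measured, delta_max=0.05):
    """Feed the measured joint states (q, dq) back to the MPC only if the prediction
    error exceeds the threshold; beta and dbeta always stay open-loop."""
    if np.linalg.norm(q_dq_predicted - q_dq_measured) > delta_max:
        return q_dq_measured
    return q_dq_predicted
```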
### _Setup of the Assistive Telemanipulation Framework_
The two control objectives \(\mathrm{O}_{1}\) and \(\mathrm{O}_{2}\) are inherently contradictory, and therefore, the weights of the objective function need to be chosen such that a suitable compromise is achieved. The selection of the objective function weights is crucial and has a significant impact on the overall performance.
With our Assistive Telemanipulation Framework, the operator can either use the remote robot with pure tracking behavior by prioritizing \(\mathrm{O}_{1}\), or receive high support with stabilizing a liquid through a high prioritization of \(\mathrm{O}_{2}\).
In this regard, multiobjective optimization can be a useful tool for identifying optimal weights. However, incorporating human factors in the optimization process can be challenging, and the trade-off between performance and user satisfaction must be carefully considered.
## V Realization and Evaluation of Assistive Telemanipulation Framework
This section presents the real-world realization of the framework and the illustrative evaluation with a human operator.
### _Technical Setup of the System_
The implementation of the proposed method was carried out using ROS2, version Foxy [23]. For solving the optimization problem, the framework _CasADi_[24] with the solver _ipopt_ was used, running at an update rate of \(30\,\mathrm{Hz}\). The framework was executed on a host PC with an Intel i7 8700 processor. The ROS2 control trajectory tracking controller was used to control the motion of the UR5e robot. The lengths \(L_{1}=0.425\,\mathrm{m}\), \(L_{2}=0.3922\,\mathrm{m}\) and \(L_{3}=0.1\,\mathrm{m}\) were taken from the UR5e manual. The liquid model parameters, including the pendulum length \(l=0.02\), mass \(m=1\), damping coefficient \(d=0.005\) and pendulum height \(h=0.08\), were roughly identified through video analysis of the glass with water (compare also [14] and [25]).
### _Experiment and Results_
We conducted an experiment with two parametrizations \(\mathrm{P}_{1}\) and \(\mathrm{P}_{2}\) (Tab. I) of the objective function (12) to validate our Assistive Telemanipulation Framework and investigate the influence of the contrary control objectives \(\mathrm{O}_{1}\) and \(\mathrm{O}_{2}\). To ensure comparability, we used the same starting position and recorded one human input with a length of 10 seconds, that was used with both parameter sets.
Our qualitative results showed that the Assistive Telemanipulation Framework achieved a high tracking rate with sloshing motion when using \(\mathrm{P}_{1}\), while \(\mathrm{P}_{2}\) resulted in no sloshing but a higher delay. The quantitative results confirm this observation: with \(\mathrm{P}_{1}\), the robot follows the given input almost perfectly, although some deviation is induced by the velocity constraints in the MPC, as can be seen in Fig. 5. The data of \(\mathrm{P}_{2}\) shows a higher delay, and a deviation in \(\theta\) can be observed as the controller stabilizes the liquid.
We also plotted the calculated sloshing angle \(\beta\) in Fig. 5 to visualize the sloshing motion and the effectiveness of the Assistive Telemanipulation Framework in supporting the operator with stabilizing the liquid. With \(\mathrm{P}_{2}\), essentially no sloshing amplitude occurs, whereas \(\mathrm{P}_{1}\) induces high sloshing dynamics.
In addition, we measured the calculation time of the MPC algorithm over time, which can be seen in Fig. 6. With a mean of about \(t_{\mathrm{mean}}=21\,\mathrm{ms}\), the sample rate is sufficient for real-time application of the Assistive Telemanipulation Framework. According to [24], the calculation time can be further reduced by a factor of 10.
Fig. 4: Overview of the control architecture.
Fig. 5: Plots of the two parametrizations \(\mathrm{P}_{1}\) and \(\mathrm{P}_{2}\). Depicted are the reference and the actual trajectories in the task space as well as the calculated slosh angle \(\beta\).
Overall, our experiment demonstrates the trade-offs between the contrary control objectives \(\mathrm{O}_{1}\) and \(\mathrm{O}_{2}\) and the effective support of the human operator while stabilizing the liquid through our Assistive Telemanipulation Framework.
## VI Conclusion
In this paper, a novel framework for telemanipulation is proposed using non-linear model predictive control and a virtual reality input device. Our framework addresses challenges such as insufficient transparency, low immersion, and limited feedback to the human operator. Furthermore, we provide a model for human input prediction and a model for slosh dynamics control with a liquid container, increasing the accuracy of remote tasks. The experiments are conducted on an UR5e robot arm with a glass of water connected to the end effector. The results indicated the effectiveness of the proposed framework in controlling the remote robot. The combination of non-linear model predictive control and a virtual reality input device provided a more intuitive and efficient interface for the operator to control the remote robot.
In future research, we plan to extend the framework to three-dimensional environments and investigate additional assistance features such as collision avoidance and pouring assistance.
Teleoperation is a promising technology for performing tasks remotely by combining human intelligence with robot capabilities. However, it faces several challenges such as insufficient transparency, low immersion, and limited feedback to the human operator. Furthermore, the high cost of haptic interfaces restricts the application of teleoperation technology, in particular in the field of elderly care, which motivates our research. To address these challenges, this paper proposes applying non-linear model predictive control to telemanipulation using low-cost virtual reality controllers. The framework relies on a model for human input prediction and task-related models of the robot and the environment. The proposed framework is validated on a UR5e robot arm in the scenario of handling liquids without spilling. Extensions of the framework, such as pouring assistance and collision avoidance, can easily be included.
2306.17382 | On the semi-infinite Deligne--Lusztig varieties for $\mathrm{GSp}$ | We prove that the structure of Lusztig's semi-infinite Deligne--Lusztig
variety for $\mathrm{GSp}$ (and its inner form) is isomorphic to an affine
Deligne--Lusztig variety at infinite level, as a generalization of
Chan--Ivanov's result. Furthermore, we show that a component of some affine
Deligne--Lusztig variety for $\mathrm{GSp}$ can be written as a direct product
of a classical Deligne--Lusztig variety and an affine space, up to perfection.
We also study varieties $X_{r}$ defined by Chan and Ivanov, and we show that
$X_{r}$ at infinite level can be regarded as a component of semi-infinite
Deligne--Lusztig varieties even in the $\mathrm{GSp}$ case. This result
interprets previous studies on representations coming from $X_{r}$ as a
realization of Lusztig's conjecture. | Teppei Takamatsu | 2023-06-30T03:09:17 | http://arxiv.org/abs/2306.17382v1 | # On the semi-infinite Deligne-Lusztig varieties for \(\mathrm{GSp}\)
###### Abstract.
We prove that the structure of Lusztig's semi-infinite Deligne-Lusztig variety for \(\mathrm{GSp}\) (and its inner form) is isomorphic to an affine Deligne-Lusztig variety at infinite level, as a generalization of Chan-Ivanov's result. Furthermore, we show that a component of some affine Deligne-Lusztig variety for \(\mathrm{GSp}\) can be written as a direct product of a classical Deligne-Lusztig variety and an affine space, up to perfection. We also study varieties \(X_{r}\) defined by Chan and Ivanov, and we show that \(X_{r}\) at infinite level can be regarded as a component of semi-infinite Deligne-Lusztig varieties even in the \(\mathrm{GSp}\) case. This result interprets previous studies on representations coming from \(X_{r}\) as a realization of Lusztig's conjecture.
## 1. Introduction
Deligne-Lusztig varieties, introduced in the celebrated paper [1], are algebraic varieties over \(\overline{\mathbb{F}}_{q}\) whose \(\ell\)-adic cohomology realizes all representations of finite reductive groups. More precisely, a Deligne-Lusztig variety \(X\) is defined by a datum consisting of a reductive group \(G\) over \(\mathbb{F}_{q}\) and a maximal \(\mathbb{F}_{q}\)-torus \(T\subset G\). Then \(X\) admits actions of \(T(\mathbb{F}_{q})\) and \(G(\mathbb{F}_{q})\), and from these actions, we can construct a (virtual) representation of \(G(\mathbb{F}_{q})\) from a character of \(T(\mathbb{F}_{q})\). One of Deligne and Lusztig's main theorems asserts that any irreducible representation of \(G(\mathbb{F}_{q})\) appears in some virtual representation given in this way.
It is natural to expect their theory to be developed over \(p\)-adic fields, and this expectation has great significance from the perspective of the local Langlands correspondence. To this end, we can consider two ways of generalizing Deligne-Lusztig varieties. One of them is called a semi-infinite Deligne-Lusztig variety, in the sense of Feigin-Frenkel [10], which is a subset of a semi-infinite flag manifold defined by the Deligne-Lusztig condition. The other one is called an affine Deligne-Lusztig variety, which is defined by a similar condition in an affine flag variety ([12]). Lusztig proposed a method to realize the local Langlands correspondence in homology groups of semi-infinite Deligne-Lusztig varieties ([16]). This method is deepened in [15], [14], and [13]. See also [17], where \(p\)-adic Deligne-Lusztig varieties are defined as functors following the definition of semi-infinite Deligne-Lusztig varieties.
On the other hand, affine Deligne-Lusztig varieties are also studied from the perspective of the local Langlands correspondence by [17] and [17]. Moreover, since affine Deligne-Lusztig varieties are closely related to the reduction of integral models of Shimura varieties, their structure is well-studied by many people ([18], [19], [20], [21], [10], [12], [13], [14], [15], [16], [17]). See the survey papers [1] and [1] for recent developments.
In this paper, we study the relation between semi-infinite and affine Deligne-Lusztig varieties for \(\mathrm{GSp}\). In [1], Chan and Ivanov established such a relation for \(\mathrm{GL}_{n}\). More
precisely, they describe semi-infinite Deligne-Lusztig varieties as an inverse limit of affine Deligne-Lusztig varieties of higher levels (namely, "affine Deligne-Lusztig varieties of infinite level"), and give a geometric realization of the local Langlands and Jacquet-Langlands correspondence on them. In this paper, we will give an analogue of Chan-Ivanov's result for GSp. Before stating precise results, we list what we prove roughly:
1. Some semi-infinite Deligne-Lusztig variety for GSp can be described by the inverse limit of affine Deligne-Lusztig varieties of higher-level. More precisely, we have \[(\varprojlim_{r>m}\dot{X}^{m}_{w_{r}}(b))(\overline{k})\simeq X^{(U)}_{w}(b).\] Here, the element \(b\) is a representative of a basic \(\sigma\)-conjugacy class of \(\operatorname{GSp}_{2n}(\breve{K})\), and \(w_{r}\) (resp. \(w\)) is a certain representative of an affine Weyl group (resp. a Weyl group) of \(\operatorname{GSp}_{2n}\). Moreover, the varieties \(X^{m}_{w_{r}}(b)\) and \(\dot{X}^{m}_{w_{r}}(b)\) are affine Deligne-Lusztig varieties of level \(I^{m}\) defined by \(b\) and \(w_{r}\). See Subsection 2.2 for the precise definition.
2. The structure of some affine Deligne-Lusztig variety for GSp is described. More precisely, if \(r+k\geq m+1\), a connected component of \[X^{m}_{w_{r}}(b)(\overline{k})\] is isomorphic to a product of a classical Deligne-Lusztig variety and an affine space up to perfection. Note that such a description does not hold in general (cf. [11] and [12]).
3. The scheme \(X_{h}\), defined in [10] (see also [1]) is also related to the semi-infinite Deligne-Lusztig variety \(X^{(U)}_{w}(b)\). In [1], they studied L-packets using varieties \(X_{h}\). On the other hand, in [13], Lusztig expects that we can construct supercuspidal representations by using semi-infinite Deligne-Lusztig varieties \(X^{(U)}_{w}(b)\) in some sense. This result (3) ensures that we can regard studies of representations coming from \(X_{h}\) such as [1] as a realization of Lusztig's expectation.
We will explain our results in detail. Let \(K\) be a non-archimedean local field, and \(\breve{K}\) a completion of the maximal unramified extension of \(K\). Let \(V:=\breve{K}^{2n}\) be the vector space
with the symplectic form \(\Omega\) given by (4). We define \(w_{r}\in\operatorname{GSp}(V)\) by
\[w_{r}:=\left(\begin{array}{cccc|cccc}&&&&(-1)^{n+1}\varpi^{r+k}&&&\\ \varpi^{-r}&&&&&&&\\ &\ddots&&&&&&\\ &&\varpi^{-r}&&&&&\\ \hline&&&&&\varpi^{r+k}&&\\ &&&&&&\ddots&\\ &&&&&&&\varpi^{r+k}\\ &&&\varpi^{-r}&&&&\end{array}\right),\]
and a representative \(b\in\operatorname{GSp}(V)\) of a basic \(\sigma\)-conjugacy class with \(\lambda(b)=-\varpi^{k}\) for \(k=0,1\). We also put \(w:=w_{0}\). Here, \(\lambda\) is the similitude character of \(\operatorname{GSp}(V)\). The key method in [2] is parameterizing affine (resp. semi-infinite) Deligne-Lusztig varieties by a set \(V_{b}^{\operatorname{adm}}\). Instead of \(V_{b}^{\operatorname{adm}}\), we introduce the set \(V_{b}^{\operatorname{symp}}\), defined by
\[V_{b}^{\operatorname{symp}}:=\{v\in V\mid\langle v,F(v)\rangle=\cdots= \langle v,F^{n-1}(v)\rangle=0,\langle v,F^{n}(v)\rangle\neq 0\}.\]
The parameterization is given by the following theorem.
**Theorem 1.0.1** (See Theorem 2.3.6 for the precise statements).: _If \(r+k\geq m+1\), there are the following two \(J_{b}(K)\)-equivariant commutative diagrams with bijective horizontal arrows._
\[\{v\in V_{b}^{\operatorname{symp}}\mid\alpha:=\langle v,F^{n}(v)\rangle\in K ^{\times}\}\stackrel{{ g_{b,0}}}{{\rightsquigarrow}}X_{w}^{(U)}(b)\]
\[\{v\in V_{b}^{\operatorname{symp}}\mid\frac{\sigma(\alpha)}{\alpha}\equiv 1 \mod\mathfrak{p}^{m+1}\}/\dot{\sim}_{b,m,r}\stackrel{{ g_{b,r}}}{{\rightsquigarrow}}\dot{X}_{w_{r}}^{m}(b)( \overline{k})\]
\[V_{b}^{\operatorname{symp}}/\sim_{b,m,r}\stackrel{{ g_{b,r}}}{{\rightsquigarrow}}X_{w_{r}}^{m}(b)( \overline{k})\]
_Here, \(\sim_{b,m,r}\) and \(\dot{\sim}_{b,m,r}\) are some equivalence relations._
For the definition of objects on the right-hand sides, see Subsection 2.2. By studying the relation \(\sim_{b,m,r}\) in detail, we have a desired comparison theorem
\[(\varprojlim_{r>m}\dot{X}_{w_{r}}^{m}(b))(\overline{k})\simeq X_{w}^{(U)}(b).\]
The key part of Theorem 1.0.1 is the surjectivity of the horizontal arrows. Its proof is given by direct computation using elementary transformations in \(\operatorname{GSp}\), as in the proof of [2, Theorem 6.5]. However, since we work with \(\operatorname{GSp}\), the elementary transformations are more complicated, and the proof needs to be carried out with greater care. To obtain the surjectivity for the lower diagram, we need an infinite number of elementary transformations (whose infinite product converges). This is in contrast to the proof of [2, Theorem 6.5].
Next, we will consider the structure of connected components of affine Deligne-Lusztig varieties. In the following, we fix a Coxeter-type representative \(b\) of a basic \(\sigma\)-conjugacy class (see Subsection 4.1). By using the argument in Viehmann's paper [21], we obtain a description of a connected component of an affine Deligne-Lusztig variety in terms of \(V_{b}^{\mathrm{symp}}\). More precisely, we can define a subset \(\mathcal{L}_{b}^{\mathrm{symp}}\subset V_{b}^{\mathrm{symp}}\), and the corresponding subset \(X_{w_{r}}^{m}(b)_{\mathcal{L}}\) gives a component of \(X_{w_{r}}^{m}(b)\). Moreover, one can show that this component is contained in an affine Schubert cell (see Proposition 4.2.11). Since the structure of an affine Schubert cell is well known (see [10, Lemma 4.7] for example), we can then study the structure of this component in an explicit way. The main structure theorem is the following.
**Theorem 1.0.2** (see Theorem 4.5.2).: _Suppose that \(r+k\geq 1\). Then we have a decomposition of \(\overline{\mathbb{F}}_{q}\)-schemes_
\[X_{w_{r}}^{0}(b)_{\mathcal{L}}^{\mathrm{perf}}\simeq X_{\overline{w}}^{ \overline{B},\mathrm{perf}}\times\mathbb{A}^{\mathrm{perf}}.\]
_Here, \(X_{\overline{w}}^{\overline{B}}\) is a classical Deligne-Lusztig variety associated with some reductive group \(\overline{G}\) which depends on \(n\) and \(k\) (see Definition 4.5.1), \(\mathbb{A}\) is an affine space over \(\overline{\mathbb{F}}_{q}\), and \(\mathrm{perf}\) means the perfection._
We sketch the proof in the following. Since we know the defining equations of \(X_{w_{r}}^{0}(b)_{\mathcal{L}}\) in the affine Schubert cell, it is plausible that the proof can be done through explicit calculations. More precisely, to obtain a decomposition as in the theorem, it suffices to solve these equations over the function ring of a classical Deligne-Lusztig variety \(X_{\overline{w}}^{\overline{B}}\). In the simplest case, the equations and solutions are constructed as follows: consider the open subscheme \(\overline{\mathcal{L}}\subset\mathbb{A}_{x,w}\) defined by
\[xw^{q^{2}}-wx^{q^{2}}\neq 0. \tag{1}\]
We define the reduced locally closed subscheme \(\mathcal{L}_{1}\) of \(\mathbb{A}_{x,y,z,w}\) defined by (1) and the equation
\[xz^{q}-yw^{q}+zx^{q}-wy^{q}=0. \tag{2}\]
Actually, in the case where \(G=\mathrm{GSp}_{4}\), \(\overline{\mathcal{L}}\) is some inflation of classical Deligne-Lusztig variety \(X_{\overline{w}}^{\overline{B}}\), and \(\mathcal{L}_{1}\) is some inflation of a quotient of a component \(X_{w_{r}}^{0}(b)_{\mathcal{L}}\) of an affine Deligne-Lusztig variety. Therefore, in this case, an essential part of the proof is to solve the equation (2) over \(\overline{\mathcal{L}}\) up to perfection so that we have
\[\mathcal{L}_{1}^{\mathrm{perf}}\simeq\overline{\mathcal{L}}^{\mathrm{perf}} \times\mathbb{A}^{1,\mathrm{perf}}.\]
We put
\[y^{\prime} :=y,\] \[z^{\prime} :=x^{\frac{1}{q}}z-w^{\frac{1}{q}}y.\]
Then the left-hand side of (2) can be written as
\[z^{\prime q}+x^{q-\frac{1}{q}}z^{\prime}+(x^{q-\frac{1}{q}}w^{\frac{1}{q}}-w^{ q})y^{\prime}. \tag{3}\]
Since we have
\[((x^{q-\frac{1}{q}}w^{\frac{1}{q}}-w^{q})x^{\frac{1}{q}})^{q}=x^{q^{2}}w-w^{q^{ 2}}x,\]
the equation (3) can be solved over \(\overline{\mathcal{L}}^{\mathrm{perf}}\) with respect to \(y^{\prime}\). This kind of direct computation only works for \(\mathrm{GSp}_{4}\), and the general case is more subtle. See the proof of Lemma 4.3.5.
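For completeness, the identity used above can be verified directly:

\[\bigl((x^{q-\frac{1}{q}}w^{\frac{1}{q}}-w^{q})x^{\frac{1}{q}}\bigr)^{q}=\bigl(x^{q}w^{\frac{1}{q}}-w^{q}x^{\frac{1}{q}}\bigr)^{q}=x^{q^{2}}w-w^{q^{2}}x,\]

which is nowhere zero on \(\overline{\mathcal{L}}\) by (1). Hence the coefficient \(x^{q-\frac{1}{q}}w^{\frac{1}{q}}-w^{q}\) of \(y^{\prime}\) in (3) is invertible on \(\overline{\mathcal{L}}^{\mathrm{perf}}\), and (3) determines \(y^{\prime}\) uniquely in terms of \(x\), \(w\), and \(z^{\prime}\).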
The third application is the description of \(X_{h}\) appearing in [1]. We have the following theorem, which is an analogue of [11, Equation (7.1)].
**Theorem 1.0.3** (see Remark 4.6.6).: _The variety \(X_{h}\) and components \(\dot{X}_{w_{r}}^{m}(b)_{\mathcal{L}}\) of affine Deligne-Lusztig varieties are the same at infinite level, i.e., we have_
\[\varprojlim_{h}X_{h}\simeq\mathcal{L}_{b}^{\mathrm{symp},\mathrm{rat}}\simeq \varprojlim_{r>m}\dot{X}_{w_{r}}^{m}(b)_{\mathcal{L}}\subset\varprojlim_{r>m} \dot{X}_{w_{r}}^{m}(b)\simeq X_{w}^{(U)}(b).\]
This theorem ensures that \(X_{h}\) at infinite level can be regarded as a connected component of a semi-infinite Deligne-Lusztig variety. Therefore, it reinterprets the studies of the local Langlands correspondence using \(X_{h}\) as a realization of Lusztig's expectation in [16]. The proof is given by direct computation based on the parameterization given by Theorem 1.0.1 and Theorem 4.5.2. See Subsection 4.6 for more detail.
This paper is organized as follows. In Section 2, we prepare the setting and notation and state the main comparison theorem, Theorem 1.0.1. In Section 3, we prove Theorem 1.0.1 by computing elementary transformations. In Section 4, we describe a connected component of affine Deligne-Lusztig varieties by using \(\mathcal{L}_{b}^{\mathrm{symp}}\), which is motivated by the main comparison theorem and the result of [14]. Moreover, we prove Theorem 1.0.2 and Theorem 1.0.3 by studying the set \(\mathcal{L}_{b}^{\mathrm{symp}}\) in detail.
### Acknowledgements
The author is deeply grateful to Naoki Imai for his deep encouragement and helpful suggestions. The author also thanks Shou Yoshikawa for discussing the solutions of the equation (2). Moreover, the author wishes to express his gratitude to Charlotte Chan, Alexander B. Ivanov, Masao Oi, Yasuhiro Oki, and Ryosuke Shimada for helpful comments. The author was supported by JSPS KAKENHI Grant numbers JP19J22795 and JP22J00962.
## 2. Statement of the comparison theorem
### Basic Notation
Let \(K\) be a non-archimedean local field with residue characteristic \(p>0\), and \(\breve{K}\) the completion of the maximal unramified extension of \(K\). Let \(\mathcal{O}_{K}\) (resp. \(\mathcal{O}\)) be the ring of integers of \(K\) (resp. \(\breve{K}\)), \(\mathfrak{p}_{K}\) (resp. \(\mathfrak{p}\)) its maximal ideal, and \(k=\mathbb{F}_{q}\) (resp. \(\overline{k}\)) its residue field. We fix a uniformizer \(\varpi\) of \(K\). Let \(\mathrm{ord}\) be the normalized valuation of \(\breve{K}\), and \(\sigma\in\mathrm{Aut}(\breve{K}/K)\) the Frobenius morphism.
Let \(V_{K}:=K^{2n}\), \(V:=\breve{K}^{2n}\), and we denote the symplectic form on \(V_{K}\) associated with
\[\Omega:=\left(\begin{array}{ccccc}&&&&1\\ &&&-1&\\ &&\iddots&&\\ &1&&&\\ -1&&&&\end{array}\right),\qquad\Omega_{i,2n+1-i}=(-1)^{i+1},\quad\Omega_{i,j}=0\ (j\neq 2n+1-i), \tag{4}\]
by \(\langle\,,\rangle\). (In this paper, lines in matrices mean the central line unless otherwise noted).
Let \(G\) be \(\operatorname{GSp}_{2n}\) over \(K\) associated with the above symplectic form, i.e.
\[G:=\{g\in\operatorname{GL}_{2n}\mid\exists\lambda(g)\in\mathbb{G}_{m},\forall x,y\in V_{K},\langle gx,gy\rangle=\lambda(g)\langle x,y\rangle\}.\]
Then we have the exact sequence
\[1\xrightarrow{}\operatorname{Sp}_{2n}\xrightarrow{}\operatorname{GSp}_{2n} \xrightarrow{\lambda}\mathbb{G}_{m}\xrightarrow{}1\]
and \(\operatorname{Sp}_{2n}\) is a simply connected algebraic group. By [10, 2.a.2], the Kottwitz map \(\kappa_{G}\colon G(\breve{K})\to\pi_{1}(G)_{\Gamma}\) is written as \(\operatorname{ord}\circ\lambda\).
We take a basic \(\sigma\)-conjugacy class \([b]\) of \(G(\breve{K})\), and set \(k:=\kappa_{G}([b])\in\mathbb{Z}\). In the following, we suppose that \(k\in\{0,1\}\) (note that we may assume this condition after multiplying \([b]\) by a scalar).
We can choose a representative \(b\in G(\breve{K})\) of \([b]\) satisfying
\[\begin{split}&\lambda(b)=-\varpi^{k},\\ & b\in G(K).\end{split} \tag{5}\]
We assume (5) in the remainder of this paper. For the precise choice of \(b\), see Subsection 4.1. We write \(F\colon V\to V\) for the structure morphism of the isocrystal corresponding to \(b\), i.e. \(F=b\sigma\). We also frequently apply \(F\) to matrices \(M\in M_{2n}(\breve{K})\) as
\[F(M)=b\sigma(M).\]
Similarly, we also use the notation \(F^{-1}=\sigma^{-1}b^{-1}\) and apply it to elements in \(V\) or \(M_{2n}(\breve{K})\). We write \(J_{b}\) for the associated inner form of \(G\) over \(K\), i.e. \(J_{b}\) is the reductive group whose \(R\)-valued point is given by
\[J_{b}(R):=\{g\in G(R\otimes_{K}\breve{K})\mid g^{-1}b\sigma(g)=b\}\]
for a finite type algebra \(R\) over \(K\).
### Definition of semi-infinite Deligne-Lusztig varieties and Affine Deligne-Lusztig varieties
For \(r\in\mathbb{Z}_{\geq 0}\), define
\[w_{r}:=\left(\begin{array}{cccc|cccc}&&&&(-1)^{n+1}\varpi^{r+k}&&&\\ \varpi^{-r}&&&&&&&\\ &\ddots&&&&&&\\ &&\varpi^{-r}&&&&&\\ \hline&&&&&\varpi^{r+k}&&\\ &&&&&&\ddots&\\ &&&&&&&\varpi^{r+k}\\ &&&\varpi^{-r}&&&&\end{array}\right).\]
Remark that \(w_{r}\) represents the Coxeter element of the Weyl group of \(G\). For simplicity, we write \(w\) for \(w_{0}\).
First, we recall the definition of a semi-infinite Deligne-Lusztig variety, which is a direct analogue of a classical Deligne-Lusztig variety in [10]. Let \(B\) be the intersection of the standard
Borel subgroup (upper triangular matrices) of \(\operatorname{GL}_{2n}\) with \(\operatorname{GSp}_{2n}\), and \(U\) its unipotent radical. We define semi-infinite Deligne-Lusztig varieties for \(\operatorname{GSp}\) by
\[X_{w}^{(U)}(b) := \{g\in G(\breve{K})/U(\breve{K})\mid g^{-1}b\sigma(g)\in U(\breve {K})wU(\breve{K})\},\] \[X_{w}^{(B)}(b) := \{g\in G(\breve{K})/B(\breve{K})\mid g^{-1}b\sigma(g)\in B(\breve {K})wB(\breve{K})\}.\]
A priori, these sets do not have scheme structures.
Next, we define affine Deligne-Lusztig varieties of higher Iwahori level as in [1, Subsection 3.4]. For \(m\in\mathbb{Z}_{\geq 0}\), we define subgroups \(I_{\operatorname{GL}}^{m},\dot{I}_{\operatorname{GL}}^{m}\subset\operatorname{GL}_{2n}(\breve{K})\) by
\[I_{\operatorname{GL}}^{m}:=\left(\begin{array}{cccc}\mathcal{O}^{\times}&&&\mathfrak{p}^{m+1}\\ &\ddots&&\\ &&\ddots&\\ \mathfrak{p}^{m}&&&\mathcal{O}^{\times}\end{array}\right),\qquad\dot{I}_{\operatorname{GL}}^{m}:=\left(\begin{array}{cccc}1+\mathfrak{p}^{m+1}&&&\mathfrak{p}^{m+1}\\ &\ddots&&\\ &&\ddots&\\ \mathfrak{p}^{m}&&&1+\mathfrak{p}^{m+1}\end{array}\right),\]
i.e. the subgroups of matrices whose diagonal entries lie in \(\mathcal{O}^{\times}\) (resp. \(1+\mathfrak{p}^{m+1}\)), whose entries above the diagonal lie in \(\mathfrak{p}^{m+1}\), and whose entries below the diagonal lie in \(\mathfrak{p}^{m}\).
We put \(I^{m}:=I_{\operatorname{GL}}^{m}\cap G(\breve{K})\) and \(\dot{I}^{m}:=\dot{I}_{\operatorname{GL}}^{m}\cap G(\breve{K})\). Define affine Deligne-Lusztig varieties as
\[X_{w_{r}}^{m}(b)(\overline{k}) := \{gI^{m}\in G(\breve{K})/I^{m}\mid g^{-1}b\sigma(g)\in I^{m}w_{r} I^{m}\},\] \[\dot{X}_{w_{r}}^{m}(b)(\overline{k}) := \{g\dot{I}^{m}\in G(\breve{K})/\dot{I}^{m}\mid g^{-1}b\sigma(g)\in \dot{I}^{m}w_{r}\dot{I}^{m}\}.\]
Remark that the above \(w_{r},I^{m},\dot{I}^{m}\) satisfy the condition of [1, Theorem 4.9] if \(r\geq m\). Therefore, by [1, Corollary 4.10], if \(\operatorname{char}K>0\) (resp. \(\operatorname{char}K=0\)), the above \(X_{w_{r}}^{m}\) and \(\dot{X}_{w_{r}}^{m}\) can be regarded as schemes which are locally of finite type (resp. locally perfectly of finite type) over \(\overline{k}\), and they are locally closed subschemes of the affine flag varieties \(L(G)/L^{+}(I^{m})\) and \(L(G)/L^{+}(\dot{I}^{m})\).
### Comparison theorem
Let
\[V_{b}^{\operatorname{symp}}:=\{v\in V\mid\langle v,F(v)\rangle=\cdots=\langle v,F^{n-1}(v)\rangle=0,\langle v,F^{n}(v)\rangle\neq 0\}.\]
**Lemma 2.3.1**.: _If \(v\in V_{b}^{\operatorname{symp}}\), then \(v,F(v),\ldots,F^{2n-1}(v)\) form a basis of \(V\) over \(\breve{K}\)._
Proof.: Remark that for \(v_{1},v_{2}\in V,\) we have \(\langle F(v_{1}),F(v_{2})\rangle=\lambda(b)\sigma(\langle v_{1},v_{2}\rangle)\). Therefore, for \(v\in V_{b}^{\operatorname{symp}}\), we get
\[\langle F^{i}(v),F^{j}(v)\rangle\left\{\begin{array}{ll}=0&\quad\text{ when }|i-j|\leq n-1,\\ \neq 0&\quad\text{ when }|i-j|=n.\end{array}\right. \tag{6}\]
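Explicitly, iterating the above compatibility of \(F\) with the pairing gives, for \(0\leq i\leq j\),

\[\langle F^{i}(v),F^{j}(v)\rangle=\lambda(b)\,\sigma\bigl(\langle F^{i-1}(v),F^{j-1}(v)\rangle\bigr)=\cdots=\lambda(b)^{i}\,\sigma^{i}\bigl(\langle v,F^{j-i}(v)\rangle\bigr)\]

(using \(\lambda(b)=-\varpi^{k}\in K\)); the last pairing vanishes for \(1\leq j-i\leq n-1\) and is nonzero for \(j-i=n\) by the definition of \(V_{b}^{\operatorname{symp}}\), which gives (6).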
Suppose that \(\sum_{i=0}^{2n-1}c_{i}F^{i}(v)=0\) for \(c_{i}\in\breve{K}\). Applying \(\langle F^{n-1}(v),-\rangle\) to this equation, we get \(c_{2n-1}=0\) by (6).

Similarly, applying \(\langle F^{n-2}(v),-\rangle,\ldots,\langle F^{-n}(v),-\rangle\), we get \(c_{2n-2}=\cdots=c_{0}=0\), and this finishes the proof.
Next, we will define a morphism from \(V_{b}^{\operatorname{symp}}\) to \(\operatorname{GSp}_{2n}(\breve{K})\).
**Definition 2.3.2**.: We put
\[V_{k}:=\varpi^{k}(b\sigma)^{-1}=\varpi^{k}\sigma^{-1}b^{-1},\]
which is a \(K\)-linear morphism from \(V\) to \(V\). We also put \(\alpha_{v}:=\langle v,F^{n}(v)\rangle\in\breve{K}^{\times}\) for \(v\in V_{b}^{\mathrm{symp}}\). We fix an element \(v\in V_{b}^{\mathrm{symp}}\). We put
\[G_{1}(v):=(-1)^{n+1}\frac{\alpha_{v}}{\sigma^{-1}(\alpha_{v})}(V_{k}(v)-\frac{ \langle V_{k}(v),F^{n}(v)\rangle}{\alpha_{v}}v).\]
We also put
\[G_{i+1}(v):=\frac{\alpha_{v}}{\sigma^{-1}(\alpha_{v})}(V_{k}(G_{i}(v))-\frac{ \langle V_{k}(G_{i}(v)),F^{n}(v)\rangle}{\alpha_{v}}v)\]
for \(1\leq i\leq n-2\) inductively.
We define
\[g_{b,r}(v) := (v,\varpi^{r}F(v),\ldots,\varpi^{(n-2)r}F^{n-2}(v),\varpi^{(n-1)r }F^{n-1}(v),\] \[\varpi^{r}G_{1}(v),\varpi^{2r}G_{2}(v),\ldots,\varpi^{(n-1)r}G_{ n-1}(v),\varpi^{nr}F^{n}(v)).\]
By definition, we have \(g_{b,r}(v)\in\mathrm{GSp}_{2n}(\breve{K})\) with \(\lambda(g_{b,r}(v))=\varpi^{nr}\alpha_{v}\).
**Remark 2.3.3**.: There are other candidates of \(g_{b,r}\). We take \(v\in V_{b}^{\mathrm{symp}}\), and we put \(H_{n}(v):=F^{n}(v)\). Inductively, we define
\[H_{i}(v):=\varpi^{-k}\frac{\alpha_{v}}{\sigma(\alpha_{v})}(F(H_{i+1}(v))- \frac{\langle v,F(H_{i+1}(v))\rangle}{\alpha_{v}}F^{n}(v))\]
for \(1\leq i\leq n-1\). We put
\[h_{b}(v):= (v,\ldots,F^{n-1}(v),H_{1}(v),\ldots,H_{n}(v))\] \[\cdot\operatorname{diag}(1,\varpi^{\lceil\frac{-k}{2}\rceil}, \ldots,\varpi^{\lceil\frac{-k(i-1)}{2}\rceil},\ldots,\varpi^{\lceil\frac{-k(n -1)}{2}\rceil},\varpi^{\lceil\frac{-kn}{2}\rceil+\lfloor\frac{-k(n-1)}{2} \rfloor},\ldots,\varpi^{\lceil\frac{-kn}{2}\rceil}).\]
Then \(h_{b}(v)\in\mathrm{GSp}_{2n}(\breve{K})\) as well. We use this construction in Subsection 4.6.
**Lemma 2.3.4**.: _We have_
\[F(g_{b,r}(v))=g_{b,r}(v)w_{r}A_{b,r},\quad\text{with}\quad A_{b,r}=\left(\begin{array}{cccc|cccc}1&&&&b_{1}&\cdots&b_{n-1}&a_{n}\\ &\ddots&&&&&&a_{n-1}\\ &&\ddots&&&&&\vdots\\ &&&1&&&&a_{1}\\ \hline&&&&\frac{\sigma(\alpha_{v})}{\alpha_{v}}&&&\\ &&&&&\ddots&&\\ &&&&&&\ddots&\\ &&&&&&&\frac{\sigma(\alpha_{v})}{\alpha_{v}}\end{array}\right), \tag{7}\]
_where \(a_{i},b_{i}\) are an element of \(\breve{K}\) such that_
\[\mathrm{ord}\;a_{i}\,(resp.b_{i})\geq ir+\frac{k}{2}i.\]
_Note that \(A_{b,r}\in G(\breve{K})\), and so \(a_{i}=(-1)^{n+i}b_{i}\) for \(i=1,\ldots n-1\). In particular, we have_
\[A_{b,r}\in I^{m}\]
_if \(\lceil r+\frac{k}{2}\rceil(=r+k)\geq m+1\)._
_Proof_. We have
\[F(g_{b,r}(v)) = (F(v),\varpi^{r}F^{2}(v)\ldots,\varpi^{(n-2)r}F^{n-1}(v),\varpi^{ (n-1)r}F^{n}(v),\] \[\varpi^{r}F(G_{1}(v)),\varpi^{2r}F(G_{2}(v)),\ldots,\varpi^{(n-1) r}F(G_{n-1}(v)),\varpi^{nr}F^{n+1}(v))\]
and
\[g_{b,r}(v)w_{r} = (F(v),\varpi^{r}F^{2}(v),\ldots,\varpi^{(n-2)r}F^{n-1}(v),\varpi^ {(n-1)r}F^{n}(v),\] \[(-1)^{n+1}\varpi^{r+k}v,\varpi^{2r+k}G_{1}(v),\ldots,\varpi^{(n-1 )r+k}G_{n-2}(v),\varpi^{nr+k}G_{n-1}(v)).\]
Therefore, for the 1st, \(\ldots\), \(n\)-th column, the equality (7) is obvious.
Next, we compute the \((n+1),\ldots,(2n-1)\)-th column of \(A_{b,r}.\) By definition, we have
\[F(G_{1})(v)=(-1)^{n+1}\frac{\sigma(\alpha_{v})}{\alpha_{v}}(\varpi^{k}v-\frac {\sigma(\langle V_{k}(v),F^{n}(v)\rangle)}{\sigma(\alpha_{v})}F(v)). \tag{8}\]
Similarly, we have
\[F(G_{i+1}(v))=\frac{\sigma(\alpha_{v})}{\alpha_{v}}(\varpi^{k}G_{i}(v)-\frac {\sigma(\langle V_{k}(G_{i}(v)),F^{n}(v)\rangle)}{\sigma(\alpha_{v})}F(v)). \tag{9}\]
These formulas correspond to the \((n+1),\ldots,(2n-1)\)-th columns of the equality (7). Note that we have not estimated the order of \(b_{i}\) yet.
Finally, we compute the \(2n\)-th column of \(A_{b,r}\). We can write
\[F^{n+1}(v)=\sum_{i=0}^{n}c_{i}F^{i}(x)+\sum_{j=1}^{n-1}d_{j}G_{j}(v).\]
Applying \(\langle F^{i}(v),-\rangle\) (\(i=2,3,\ldots,n\)) to this equation, we have \(d_{1}=\cdots=d_{n-2}=c_{0}=0\).
Moreover, applying \(\langle F(v),-\rangle\), we have
\[d_{n-1}=\frac{\sigma(\alpha_{v})}{\alpha_{v}}\varpi^{k}.\]
Thus we have verified the form of \(2n\)-th column and \(a_{i}=\varpi^{ir}c_{n+1-i}\) (\(i=1,\ldots,n\))
Finally, we should estimate the order of \(a_{i}\). Now we have the equation
\[F^{n+1}(v)=\sum_{i=1}^{n}c_{i}F^{i}(v)+d_{n-1}G_{n-1}(v). \tag{10}\]
Note that \(G_{n-1}(v)\) can be written as a linear combination of \(v,V_{k}(v),\ldots,V_{k}^{n-1}(v)\) by definition. By applying \(F^{n-1}\) to the equation (10), we have

\[F^{2n}(v)=\sum_{i=1}^{n}\sigma^{n-1}(c_{i})F^{i+n-1}(v)+\sigma^{n-1}(d_{n-1})F^{n-1}(G_{n-1}(v)),\]
where \(F^{n-1}(G_{n-1}(v))\) can be written by a linear combination of \(F^{n-1}(v),\ldots,v\). Since the slope of \(b\) is \(\frac{k}{2}\), by [12, Lemma 5.2.4], we get
\[\operatorname{ord}c_{n+1-i}\geq\frac{k}{2}i.\]
Therefore, we have
\[\operatorname{ord}a_{i}=ir+\operatorname{ord}c_{n+1-i}\geq ir+\frac{k}{2}i.\]
**Remark 2.3.5**.: Let \(v\in V_{b}^{\operatorname{symp}}\). We define the non-commutative ring \(\mathcal{D}_{k}\) by
\[\mathcal{D}_{k}:=\mathcal{O}[F,V_{k}]/(FV_{k}=V_{k}F=\varpi^{k},aV_{k}=V_{k} \sigma(a),Fa=\sigma(a)F)_{a\in\mathcal{O}}.\]
By the proof of Lemma 2.3.4 (more precisely, equations (8) and (9))
\[\mathcal{L}:=\mathcal{O}v\oplus\cdots\oplus\mathcal{O}F^{n-1}(v)\oplus \mathcal{O}G_{1}(v)\oplus\cdots\oplus\mathcal{O}G_{n-1}(v)\oplus\mathcal{O}F^ {n}(v)\]
is contained in \(\mathcal{D}_{k}v\). Note that \(\mathcal{L}\) is self-dual up to constant. Since we have
\[\varpi^{k}\mathcal{L}\subset F\mathcal{L}\subset\mathcal{L}\]
by Lemma 2.3.4 again, we have
**Theorem 2.3.6**.: _Assume \(r+k\geq m+1\). Then there exist the following two \(J_{b}(K)\)-equivariant commutative diagrams with bijective horizontal arrows._
_Here, we put_
\[v_{1}\sim_{b,m,r}(resp.\dot{\sim}_{b,m,r})\,v_{2}\iff g_{b,r}(v_{1})I^{m}(resp.\dot{I}^{m})=g_{b,r}(v_{2})I^{m}(resp.\dot{I}^{m}).\]
_for \(v_{1},v_{2}\in V_{b}^{\operatorname{symp}}\)._
The proof of this theorem is given in the next section.
## 3. Proof of comparison theorem
### Proof for the semi-infinite case
In this subsection, we verify the existence of the first diagram in Theorem 2.3.6. By Lemma 2.3.4, the well-definedness and the \(J_{b}(K)\)-equivariance of the horizontal maps are obvious. Note that for any \(c\in\breve{K}^{\times}\), there exists a diagonal matrix \(A\in G(\breve{K})\) such that \(g_{b,r}(cv)=g_{b,r}(v)A\). Moreover, the injectivity of the maps is obvious too. In the rest of this subsection, we prove the surjectivity in the first diagram.
To begin with, we consider the bottom map of the first diagram. Take any element \(gB(\breve{K})\in X_{w}^{(B)}(b)\), then \(g^{-1}F(g)\in B(\breve{K})wB(\breve{K}).\) Replacing \(g\) with a suitable representative, we may assume \(g^{-1}F(g)\in wB(\breve{K}).\) Hence we have \(g=F(g)C\) with
\[C\in\left(\begin{array}{ccccc|ccccc}\breve{K}&\breve{K}^{\times}&\breve{K}& \cdots&\cdots&\breve{K}&\breve{K}&\cdots&\cdots&\breve{K}&\breve{K}&\breve{K} \\ \vdots&0&\breve{K}^{\times}&\ddots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\vdots\\ \vdots&\vdots&0&\ddots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \vdots&\vdots&\vdots&\ddots&\ddots&\breve{K}&\vdots&\vdots&\vdots&\vdots& \vdots\\ \vdots&\vdots&\vdots&\vdots&\ddots&\breve{K}^{\times}&\vdots&\vdots& \vdots&\vdots&\vdots&\breve{K}\\ \breve{K}&\vdots&\vdots&\vdots&\vdots&0&\vdots&\vdots&\vdots&\vdots&\vdots& \breve{K}^{\times}\\ \hline\breve{K}^{\times}&\vdots&\vdots&\vdots&\vdots&\vdots&\breve{K}& \vdots&\vdots&\vdots&\vdots&\vdots&0\\ 0&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\breve{K}^{\times}&\ddots&\vdots& \vdots&\vdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&0&\ddots&\ddots&\breve{K}& \vdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\breve{K}^{ \times}&\breve{K}&\vdots\\ 0&0&0&\cdots&\cdots&0&0&\cdots&\cdots&0&\breve{K}^{\times}&0\end{array} \right). \tag{11}\]
Note that for \(P\in G(\breve{K})\), we have
\[gP=F(g)CP=F(gP)(\sigma(P)^{-1}CP). \tag{12}\]
**Claim 3.1.1**.: _Suppose that there exists a \(P\in B(\breve{K})\) such that_
\[\sigma(P)^{-1}CP=C^{\prime}\quad\text{with}\]
\[C^{\prime}=\left(\begin{array}{ccccc|ccccc}*&*&&0&&*&\cdots&*&0\\ 0&&\ddots&&&&&*&\vdots\\ \vdots&&&*&&\mbox{\Large$\sf{0}$}&\vdots&0\\ 0&&&&*&*&*\\ \hline*&&&&*&&\mbox{\Large$\sf{0}$}&\\ \mbox{\Large$\sf{0}$}&&&&&\ddots&&\\ &&&&*&*\end{array}\right).\]
_Then the bottom map of the first diagram in Theorem 2.3.6 is surjective._
Proof.: We set the \((1,2),\ldots,(n-1,n),(n,2n)\)-th entries of \(C^{\prime}\) as \(u_{1},\ldots,u_{n-1},u_{n}\). Note that \(u_{1},\ldots,u_{n}\in\breve{K}^{\times}\) automatically. We can take a diagonal matrix \(Q=\operatorname{diag}(q_{1},\ldots,q_{2n})\in G(\breve{K})\) satisfying the following:
\[\begin{split}& q_{1}=1,\\ & q_{2}=u_{1}^{-1},\\ & q_{3}=(u_{2}\sigma(q_{2})^{-1})^{-1},\\ &\vdots\\ & q_{n}=(u_{n-1}\sigma(q_{n-1})^{-1})^{-1},\\ & q_{2n}=(u_{n}\sigma(q_{n})^{-1})^{-1}.\end{split} \tag{13}\]
Then by replacing \(P\) by \(PQ\), we may assume
that \(C^{\prime}\) has the same form as in Claim 3.1.1, with the \((1,2),\ldots,(n-1,n),(n,2n)\)-th entries equal to \(1\). Writing \(g^{\prime}:=gP=(g^{\prime}_{1},\ldots,g^{\prime}_{2n})\), one checks from \(g^{\prime}=F(g^{\prime})C^{\prime}\) that \(g^{\prime}_{1}\in V_{b}^{\operatorname{symp}}\) and \(g^{\prime}=g_{b,0}(g^{\prime}_{1})\), so \(gB(\breve{K})=g_{b,0}(g^{\prime}_{1})B(\breve{K})\) lies in the image of the bottom map.

It therefore remains to construct \(P\in B(\breve{K})\) satisfying the assumption of Claim 3.1.1. We do this by eliminating entries of \(C\) through \(\sigma\)-conjugation as in (12), using the following two types of \(P\):

* \(P=1_{2n}+ce_{i,j}\pm ce_{2n+1-j,2n+1-i}\) (for a suitable sign, with \(1\leq i<j\leq 2n\) and \(j\neq 2n+1-i\), so that \(P\in B(\breve{K})\)). The \(\sigma\)-conjugation by these \(P\) acts as adding \(c\) times the \(i\)-th column to the \(j\)-th column and \(\pm c\) times the \((2n+1-j)\)-th column to the
\((2n+1-i)\)-th column, followed by adding \(-\sigma(c)\) times the \(j\)-th row to the \(i\)-th row, and adding \(\mp\sigma(c)\) times the \((2n+1-i)\)-th row to the \((2n+1-j)\)-th row. We denote these transformations by \(\operatorname{col}_{i\to j}\), \(\operatorname{col}_{2n+1-j\to 2n+1-i}\), \(\operatorname{row}_{j\to i}\), and \(\operatorname{row}_{2n+1-i\to 2n+1-j}\). Moreover, by abuse of notation, we use any one of them to refer to all of the above transformations together.
* \(P=1_{2n}+ce_{i,2n+1-i}\) (here we set \(1\leq i\leq n\), so that \(P\in B(\breve{K})\)). The \(\sigma\)-conjugation by these \(P\) acts as adding \(c\) times the \(i\)-th column to the \((2n+1-i)\)-th column, followed by adding \(-\sigma(c)\) times the \((2n+1-i)\)-th row to the \(i\)-th row. As above, we call these transformations \(\operatorname{col}_{i\to 2n+1-i}\) and \(\operatorname{row}_{2n+1-i\to i}\), and we use one of them to represent the above \(\sigma\)-conjugation.
We will eliminate entries of \(C\) successively by the above \(P\). During these modifications, we should check that they do not restore already modified entries and that they preserve the form of \(C\) as in (11). In fact, however, we will use only
\[P\in B(\breve{K})\cap wB(\breve{K})w^{-1}=\left(\begin{array}{ccccc|ccccc} \breve{K}^{\times}&0&\cdots&0&\breve{K}&\cdots&\breve{K}&0\\ &\ddots&&&\\ &&\ddots&\breve{K}&\breve{K}&\breve{K}\\ &&&\ddots&\breve{K}&\breve{K}\\ &&&&\ddots&\breve{K}&0\\ &&&&\ddots&&\vdots\\ \mbox{\Large$\bigcirc$}&&&&\ddots&0\\ &&&&\breve{K}^{\times}\end{array}\right)\]
in the modification, so the form of \(C\) as in (11) is preserved successively. Therefore it suffices to check that each modification does not restore already modified entries.
In the following, we will denote the \((i,j)\)-th entry of matrices under transformations by \((i,j)\). First, we will modify the 3rd, \(\ldots\), \(n\)-th columns of above \(C\), as following. for the 3rd column : use \(\operatorname{col}_{2\to 3}\) to eliminate \((1,3)\).
for the 4th column : use \(\operatorname{col}_{2\to 4}\), \(\operatorname{col}_{3\to 4}\) to eliminate \((1,4)\), \((2,4)\).
\(\vdots\)
for the \(n\)-th column : use \(\operatorname{col}_{2\to n}\), \(\ldots\), \(\operatorname{col}_{n-1\to n}\) to eliminate \((1,n),\ldots,(n-2,n)\).
Here, for the \(i\)-th column, we should check that \(\operatorname{col}_{j\to i}\)\((j=2,\ldots,i-1)\) do not restore already modified entries. \(\operatorname{col}_{j\to i}\) is together with \(\operatorname{col}_{2n+1-i\to 2n+1-j}\), \(\operatorname{row}_{i\to j}\), and \(\operatorname{row}_{2n+1-j\to 2n+1-i}\). Since the \(j\)-th column is already modified, \(\operatorname{col}_{j\to i}\) affects only \((j-1,i)\). The transformations \(\operatorname{col}_{2n+1-i\to 2n+1-j}\), \(\operatorname{row}_{2n+1-j\to 2n+1-i}\) do not affect vanished entries obviously. By the form of the \(i\)-th row, \(\operatorname{row}_{i\to j}\) affects only entries on the 1st and the \(l\)-th column \((l\geq i+1)\), so it does not restore vanished entries too.
Next, we will eliminate the \(2n\)-th column except for the \((n,2n)\)-th entry by using \(\operatorname{col}_{2\to 2n}\), \(\ldots\), \(\operatorname{col}_{n\to 2n}\). Here, we use \(\operatorname{col}_{j\to 2n}\) for \(2\leq j\leq n\), which is together with \(\operatorname{col}_{1\to 2n+1-j}\), \(\operatorname{row}_{2n\to j}\), and \(\operatorname{row}_{2n+1-j\to 1}\). As before, \(\operatorname{col}_{j\to 2n}\) affects only \((j-1,2n)\). Clearly, \(\operatorname{col}_{1\to 2n+1-j}\) is irrelevant to vanished entries, and \(\operatorname{row}_{2n\to j}\) affects only \((j,2n-1)\). For \(\operatorname{row}_{2n+1-j\to 1}\), the non-vanishing entries of the \((2n+1-j)\)-th row lie on the \((2n-j)\)-th, \(\ldots\), \((2n-1)\)-th columns (resp. the 1st, \((n+1)\)-th, \(\ldots\), \((2n-1)\)-th columns) if \(j<n\) (resp. if \(j=n\)). In particular, they lie on the 1st, \((n+1)\)-th, \(\ldots\), \((2n-1)\)-th columns, so \(\operatorname{row}_{2n+1-j\to 1}\) does not affect vanished entries.
Now we have modified the \(3\)rd, \(\ldots\), \(n\)-th, \(2n\)-th columns. Remark that \((i,j)\)\((n+1\leq i\leq j\leq 2n-1)\) are already modified now. Indeed, we have \(C\in G(\breve{K})\), so we can use
\[\begin{cases}\langle\text{the $2n$-th column vector},\text{the $j$-th column vector}\rangle=0&\text{ when $i=n+1$},\\ \langle\text{the $(2n+2-i)$-th column vector},\text{the $j$-th column vector}\rangle=0&\text{ otherwise },\end{cases}\]
to show that the \((i,j)\)-th entries are \(0\).
Finally, we will eliminate the \((n-1)\times(n-1)\) submatrix of \(C\) lying on the \(2,\ldots,n\)-th rows and \(1,(n+1)\ldots,(2n-2)\)-th columns. It is enough to eliminate the upper left half triangular entries of this submatrix, because we have
\[\langle\text{$i$-th row vector},\text{$j$-th row vector}\rangle=0\]
for \((2\leq i,j\leq n)\) since \(G(\breve{K})\) is transpose-invariant.
We will modify the \(1\)st, \((n+1)\)-th, \(\ldots\), \((2n-2)\)-th column as following successively. For the \(1\)st column : use \(\operatorname{row}_{n+1\to 2}\), \(\ldots\), \(\operatorname{row}_{n+1\to n}\) to eliminate
\[(2,1),\ldots,(n,1).\]
For the \(n+1\)-th column : use \(\operatorname{row}_{n+2\to 2}\), \(\ldots\), \(\operatorname{row}_{n+2\to n-1}\) to eliminate
\[(2,n+1),\ldots,(n-1,n+1).\]
\[\vdots\]
For the \(2n-2\)-th column : use \(\operatorname{row}_{2n-1\to 2}\) to eliminate
\[(2,2n-2).\]
As before, we should check that these transformations do not affect vanished entries. For the \(1\)st column, \(\operatorname{row}_{n+1\to j}\) is together with \(\operatorname{row}_{2n+1-j\to n}\), \(\operatorname{col}_{j\to n+1}\), and \(\operatorname{col}_{n\to 2n+1-j}\) \((j=2,\ldots,n)\). Since the \((n+1)\)-th row is already modified, \(\operatorname{row}_{n+1\to j}\) affects only \((1,j)\). Similarly, \(\operatorname{row}_{2n+1-j\to n}\) affects only \((n,2n-j)\). Therefore, these operations do not affect vanished entries.
For the \((n+i)\)-th column \((i=1,\ldots,n-2)\), we use \(\operatorname{row}_{n+i+1\to j}\) \((2\leq j\leq n-i)\), which is together with \(\operatorname{row}_{2n+1-j\to n-i}\), \(\operatorname{col}_{j\to n+i+1}\), and \(\operatorname{col}_{n-i\to 2n+1-j}\) (resp. \(\operatorname{col}_{j\to n+i+1}\)) if \(2\leq j<n-i\) (resp. \(j=n-i\)). Since the \((2n+1-j)\)-th row vanishes except for the \((2n+1-j,2n-j)\)-th entry, \(\operatorname{row}_{2n+1-j\to n-i}\) affects only the \((2n-j)\)-th column. Since \(2n-j>n+i\) if \(j<n-i\), the transformation \(\operatorname{row}_{2n+1-j\to n-i}\) does not affect the \((n+i)\)-th column. Clearly, \(\operatorname{col}_{j\to n+i+1}\) does not affect vanished entries. Moreover, \(\operatorname{col}_{n-i\to 2n+1-j}\) does not affect vanished entries since \(2n>2n+1-j>n+i+1\) if \(j<n-i\).
This completes the procedure, i.e. we have verified that there exists \(P\) satisfying the assumption of Claim 3.1.1.
Next, we consider the upper map of the first diagram in Theorem 2.3.6. Take any element \(gU(\breve{K})\in X_{w}^{(U)}(b),\) then the same arguments show that there exists \(v\in V_{b}^{\text{symp}}\) such that \(g_{b,0}(v)U(\breve{K})=gU(\breve{K}).\) Thus we get
\[g_{b,0}(v)^{-1}F(g_{b,0}(v))\in U(\breve{K})wU(\breve{K}).\]
Hence we get \(\lambda(g_{b,0}(v)^{-1}F(g_{b,0}(v)))=\lambda(w)\), and by Lemma 2.3.4, we have \(\sigma(\alpha)=\alpha\), i.e. \(\alpha\in K^{\times}\). This finishes the proof for the first diagram.
### Proof for the affine case
In this subsection, we prove the second diagram in Theorem 2.3.6. As before, to begin with, we will prove the surjectivity of the bottom map. Take any element \(gI^{m}\in X_{w_{r}}^{m}(b)(\breve{k}),\) then we get \(g^{-1}F(g)\in I^{m}w_{r}I^{m},\) and we can choose a representative so that we have \(g^{-1}F(g)\in w_{r}I^{m}.\) Therefore, we have \(g=F(g)C\) with
(14)
**Claim 3.2.1**.: _Suppose that there exists \(P\in I^{m}\) such that_
\[\sigma(P)^{-1}CP=C^{\prime}\quad\text{with}\]
\[C^{\prime}\in\left(\begin{array}{ccccc}*&\varpi^{r}\mathcal{O}^{\times}&0&* &\cdots&*&0\\ 0&&\ddots&&*&\vdots\\ \vdots&&&\varpi^{r}\mathcal{O}^{\times}&\mbox{\Large$\bigcirc$}&\vdots&0\\ 0&&&&*&\varpi^{r}\mathcal{O}^{\times}\\ \hline*&&&&\mbox{\Large$\bigcirc$}&\\ &&\mbox{\Large$\bigcirc$}&&\ddots&&\\ &&&&*&\end{array}\right).\]
_Then the bottom map of the second diagram in Theorem 2.3.6 is surjective._
Proof.: It can be proved by the same argument as in Claim 3.1.1. We write the \((1,2),\ldots,(n-1,n),(n,2n)\)-th entries of \(C^{\prime}\) as \(\varpi^{r}u_{1},\ldots,\varpi^{r}u_{n-1},\varpi^{r}u_{n}\), where \(u_{i}\in\mathcal{O}^{\times}\). Then we can take a diagonal matrix \(Q\in I^{m}\) satisfying the equations (13). After replacing \(P\) with \(PQ\), we can show that \(g^{\prime}:=gP=(g^{\prime}_{1},\ldots,g^{\prime}_{2n})\) satisfies \(g^{\prime}=g_{b,r}(g^{\prime}_{1})\) as in the proof of Claim 3.1.1.
We will construct \(P\) satisfying the assumption in Claim 3.2.1 as a product of the following kinds of matrices in \(I^{m}.\)
* \(P=1_{2n}+ce_{i,j}+(-1)^{j-i+1}ce_{2n+1-j,2n+1-i}\) (\(1\leq i<j\leq 2n,j\neq 2n+1-i\), and \(c\in\mathfrak{p}^{m}\)).
* \(P=1_{2n}+ce_{i,j}+(-1)^{j-i+1}ce_{2n+1-j,2n+1-i}\) (\(1\leq j<i\leq 2n,j\neq 2n+1-i\), and \(c\in\mathfrak{p}^{m+1}\)).
Each \(P\) as above causes a \(\sigma\)-conjugation which consists of two simultaneous column addition transformations followed by two simultaneous row addition transformations (see the example below for \(2n=4\)). As before, we write \(\mathrm{col}_{i\to j}\), \(\mathrm{col}_{2n+1-j\to 2n+1-i}\), \(\mathrm{row}_{j\to i}\), and \(\mathrm{row}_{2n+1-i\to 2n+1-j}\) for these elementary transformations and \(\sigma\)-conjugation.
* \(P=1_{2n}+ce_{i,2n+1-i}\) (\(1\leq i\leq n\), and \(c\in\mathfrak{p}^{m}\)) type.
* \(P=1_{2n}+ce_{i,2n+1-i}\) (\(n+1\leq i\leq 2n\), and \(c\in\mathfrak{p}^{m+1}\)) type.
Each \(P\) as above causes a \(\sigma\)-conjugation which consists of a column addition transformation followed by a row addition transformation, and we write \(\mathrm{col}_{i\to 2n+1-i}\) and \(\mathrm{row}_{2n+1-i\to i}\) for these elementary operations and \(\sigma\)-conjugation.
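To illustrate the first kind in the smallest case \(2n=4\): taking \((i,j)=(1,2)\) and \(c\in\mathfrak{p}^{m}\) gives \(P=1_{4}+c(e_{1,2}+e_{3,4})\), so that (using \((e_{1,2}+e_{3,4})^{2}=0\))
\[\sigma(P)^{-1}CP=\big(1_{4}-\sigma(c)(e_{1,2}+e_{3,4})\big)\,C\,\big(1_{4}+c(e_{1,2}+e_{3,4})\big);\]
the right multiplication adds \(c\) times the 1st (resp. 3rd) column to the 2nd (resp. 4th) column, and the left multiplication subtracts \(\sigma(c)\) times the 2nd (resp. 4th) row from the 1st (resp. 3rd) row. In the notation above, these are exactly \(\operatorname{col}_{1\to 2}\), \(\operatorname{col}_{3\to 4}\), \(\operatorname{row}_{2\to 1}\), and \(\operatorname{row}_{4\to 3}\).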
We will eliminate the entries of \(C\) by \(\sigma\)-conjugation by the above matrices \(P\). We divide the elimination procedure into two steps.
**Step1 Eliminate lower-left entries of \(C\).**
In this step, we modify the following entries.
\[(n+2,1),(n+3,1),\ldots,(2n,1), \tag{15}\]
\[(n+3,n+1),\ldots,(2n,n+1),(n+4,n+2),\ldots,(2n,n+2),\ldots,(2n,2n-2), \tag{16}\]
\[(2n,2),\ldots,(n+2,2),(2n-1,3),\ldots,(n+2,3),\ldots,(n+2,n), \tag{17}\]
\[(2n,3),(2n,4),\ldots,(2n,n),\ldots,(n+3,n). \tag{18}\]
For the above entries \((i,j)\), we put
\[O_{i,j}(C):=\begin{cases}\mathrm{ord}(i,j)-(-r-k)&\text{ if }j\in\{1,n+1, \ldots,2n-1\},\\ \mathrm{ord}(i,j)-r&\text{ if }j\in\{2,\ldots,n,2n\}.\end{cases}\]
We also put
\[\gamma(C):=\min_{(i,j)\in(15),(16),(17),(18)}O_{i,j}(C).\]
We will modify the above entries so that we raise \(\gamma\). We will construct such a modification as the composition of \(\sigma\)-conjugation corresponding to the elementary transformation. In this procedure, we need to check the following.
* (X) Each \(\sigma\)-conjugation preserves the form as in (14).
* (Y) Each \(\sigma\)-conjugation does not restore already modified entries mod \(\mathfrak{p}^{\gamma_{i,j}(C)+1}\).
* (Z) After each \(\sigma\)-conjugation \(C\mapsto\sigma(P)^{-1}CP\), we have \(\gamma(\sigma(P)^{-1}CP)\geq\gamma(C)\).
Here, we put
\[\gamma_{i,j}(C)=\begin{cases}\gamma(C)+(-r-k)&\text{ if }j\in\{1,n+1,\ldots,2n-1\}, \\ \gamma(C)+r&\text{ if }j\in\{2,\ldots,n,2n\}.\end{cases}\]
Then one can show that the composition of these transformations increases \(\gamma\). Since we only use the elementary matrices \(P\in I^{m}\cap w_{r}I^{m}w_{r}^{-1}\), the condition (X) is automatic. Therefore, we only check (Y) and (Z) in the following.
First, we will modify the entries as in (15) as follows.
use \(\operatorname{row}_{n+1\to n+2}\) to eliminate \((n+2,1)\),
use \(\operatorname{row}_{n+1\to n+3}\) to eliminate \((n+3,1)\),
\(\vdots\)
use \(\operatorname{row}_{n+1\to 2n}\) to eliminate \((2n,1)\).
Here, to eliminate the \((i,1)\)-th entry (\(i=n+2,\ldots 2n\)), we use \(\operatorname{row}_{n+1\to i}\), which is together with \(\operatorname{row}_{2n+1-i\to n}\), \(\operatorname{col}_{i\to n+1}\), and \(\operatorname{col}_{n\to 2n+1-i}\). For \(n+2\leq i\leq 2n-1\), these transformations clearly satisfy (Y) and (Z). Here, to prove (Z), we use \(r-(-r-k)\geq 0\). Moreover, \(\operatorname{row}_{n+1\to 2n}\), \(\operatorname{row}_{1\to n}\) and \(\operatorname{col}_{2n\to n+1}\) clearly satisfy (Y) and (Z). On the other hand, \(\operatorname{col}_{n\to 1}\) affects \((n+2,1),\ldots,(2n,1)\), but since \(r-(-r-k)>0\) by the assumption, it satisfies (Y), and (Z).
Next, we will modify the entries as in (16) as follows.
\[\text{ for the $(n+1)$-th column}\colon\text{ use $\operatorname{row}_{n+2\to n+3}$, $\ldots$, $ \operatorname{row}_{n+2\to 2n}$ to eliminate $(n+3,n+1),\ldots,(2n,n+1)$,}\]
for the \((n+2)\)-th column \(\colon\text{ use $\operatorname{row}_{n+3\to n+4}$, $\ldots$, $ \operatorname{row}_{n+3\to 2n}$ to eliminate $(n+4,n+2),\ldots(2n,n+2)$,}\)
\(\vdots\)
for the \((2n-2)\)-th column \(\colon\text{ use $\operatorname{row}_{2n-1\to 2n}$ to eliminate $(2n,2n-2)$.}\)
Here, to eliminate the \((i,j)\)-th entry (\(j=n+1,\ldots,2n-2,i=j+2,\ldots,2n\)), we use \(\operatorname{row}_{j+1\to i}\), which is together with \(\operatorname{row}_{2n+1-i\to 2n-j}\), \(\operatorname{col}_{i\to j+1}\), and \(\operatorname{col}_{2n-j\to 2n+1-i}\). Clearly, \(\operatorname{row}_{2n+1-i\to 2n-j}\) satisfies (Y), and (Z). Moreover, \(\operatorname{col}_{2n-j\to 2n+1-i}\) clearly satisfies (Y), and (Z) except for \(i=2n\). Even for \(i=2n\), \(\operatorname{col}_{2n-j\to 1}\) satisfies (Y), and (Z) since \(r-(-r-k)>0\) as in the modification of (15). Since the \((j+1)\)-th row is not modified yet, \(\operatorname{col}_{i\to j+1}\) also satisfies (Y) and (Z).
Thirdly, we will modify the entries as in (17) as follows.
use \(\operatorname{col}_{2n-1\to 2}\), \(\ldots\operatorname{col}_{n+1\to 2}\) to eliminate \((2n,2),\ldots,(n+2,2)\),
use \(\operatorname{col}_{2n-2\to 3}\), \(\ldots\operatorname{col}_{n+1\to 3}\) to eliminate \((2n-1,3),\ldots(n+2,3)\),
\(\vdots\)
use \(\operatorname{col}_{n+1\to n}\) to eliminate \((n+2,n)\).
Here, to eliminate the \((i,j)\)-th entry (\(j=2,\ldots,n,i=2n+2-j\ldots,n+2\)), we use \(\operatorname{col}_{i-1\to j}\), which is together with \(\operatorname{col}_{2n+1-j\to 2n+2-i}\), \(\operatorname{row}_{j\to i-1}\), and \(\operatorname{row}_{2n+2-i\to 2n+1-j}\) if \(i\neq 2n+2-j\) (resp. if \(i=2n+2-j\)). Since entries on the \((i-1)\)-th column have orders \(\geq-r-k+1\) except for \((i,i-1)\), \(\operatorname{col}_{i-1\to j}\) satisfies (Y) and (Z). On the other hand, since
if \(i\neq 2n+2-j\), \(\operatorname{col}_{2n+1-j\to 2n+2-i}\) satisfies (Y) and (Z). Moreover, since \(2r+k\geq 1\) by the assumption, \(\operatorname{row}_{j\to i-1}\) does not affect the \((s,t)\)-th entries modulo \(\mathfrak{p}^{\gamma_{s,t}(C)+1}\), even for \((s,t)=(i-1,j+1)\) (resp. \((s,t)=(i-1,2n)\)) if \(j\neq n\) (resp. if \(j=n\)). Thus \(\operatorname{row}_{j\to i-1}\) satisfies (Y) and (Z). Similarly, we can show that \(\operatorname{row}_{2n+2-i\to 2n+1-j}\) affects only \((2n+2-j,2n+3-i)\) (resp. \((2n+2-j,2n)\)) if \(2n+2-i\neq n\) (resp. if \(2n+2-i=n\)).
Since
\[\langle\text{the $(n+i)$-th row},\text{the $(n+j)$-th row}\rangle=0\]
for \(1\leq i,j\leq n\), all of the entries \((s,t)\) in (18) have orders \(\geq\gamma_{s,t}(C)+1\). Finally, we have constructed \(P_{1}\in I^{m}\) such that \(\gamma(\sigma(P_{1})^{-1}CP_{1})>\gamma(C)\). By repeating this construction for \(C_{i}:=\sigma(P_{i})^{-1}CP_{i}\) (\(i=1,\ldots\)), we can construct a sequence \(P_{i}\in I^{m}\) (\(i=1,\ldots\)). Since \(P_{i}\) is a product of elementary matrices as above, we have
\[P_{i+1}\in(1+\mathfrak{p}^{\gamma(C_{i})}M_{2n}(\mathcal{O}))\cap\operatorname{GSp}(V)\]
for \(i\geq 0\), where we put \(C_{0}:=C\). Therefore, we can show that the product
\[P_{\infty}:=\prod_{i=1}^{\infty}P_{i}\]
converges in \(I^{m}\). Then the entries in (15), (16), (17), (18) of \(\sigma(P_{\infty})^{-1}CP_{\infty}\) are \(0\), as desired.
**Step2 Eliminate other entries of \(C\).**
Here, we will eliminate other entries so that we find \(C^{\prime}\) as in Claim 3.2.1. We will eliminate the following entries in the following order.
\[(2n,2n),(2n-1,2n),\ldots,(n+1,2n), \tag{19}\]
\[(2n-1,2n-1),\ldots,(n+1,2n-1),(2n-1,2n-2),\ldots,(n+1,n+1), \tag{20}\]
\[(n,1),\ldots,(2,1), \tag{21}\]
\[(2,n+1),\ldots,(n-1,n+1),(2,n+2),\ldots,(n-2,n+2),\ldots,(2,2n-2). \tag{22}\]
To eliminate (19), we use \(\operatorname{col}_{2n-1\to 2n}\),..., \(\operatorname{col}_{n+1\to 2n}\), \(\operatorname{col}_{1\to 2n}\) to eliminate the \((n+1,2n)\),..., \((2n,2n)\)-th entries. Here, we use that \(r-(-r-k)>0\) to guarantee the existence of such elementary matrices in \(I^{m}\). The transformation \(\operatorname{col}_{1\to 2n}\) (resp. \(\operatorname{col}_{n+i\to 2n}\)) is together with \(\operatorname{row}_{2n\to 1}\) (resp. \(\operatorname{col}_{1\to n+1-i}\), \(\operatorname{row}_{2n\to n+i}\), and \(\operatorname{row}_{n+1-i\to 1}\)), which preserve the form as in (14). Moreover, they clearly do not restore vanished entries. Note that, after eliminating (19), the \((n+1,2),\ldots,(n+1,2n)\)-th entries vanish since
\[\langle\text{the $i$-th column},\text{the $2n$-th column}\rangle=0\]
for \(i=2,\ldots,n\).
To eliminate (20), we use \(\operatorname{col}_{1\to j}\) to eliminate the \((n+1,j)\)-th entry, and \(\operatorname{col}_{i-1\to j}\) to eliminate the \((i,j)\)-th entry for \(i\geq n+2\). The transformation \(\operatorname{col}_{1\to j}\) is together with \(\operatorname{col}_{2n+1-j\to 2n}\), \(\operatorname{row}_{j\to 1}\), and \(\operatorname{row}_{2n\to 2n+1-j}\). They do not affect vanished entries since the \((i,2n+1-j)\)-th entry is already vanished for \(i\geq n+2\). On the other hand, \(\operatorname{col}_{i-1\to j}\) is together with \(\operatorname{col}_{2n+1-j\to 2n+2-i}\), \(\operatorname{row}_{j\to i-1}\), and \(\operatorname{row}_{2n+2-i\to 2n+1-j}\). Clearly, \(\operatorname{col}_{2n+1-j\to 2n+2-i}\) and \(\operatorname{row}_{2n+2-i\to 2n+1-j}\) do not restore already vanished entries. Moreover, \(\operatorname{row}_{j\to i-1}\) does
not restore already vanished entries too, since the \((j,j),\ldots,(j,2n)\)-th entries are already vanished.
To eliminate (21), we use \(\operatorname{row}_{n+1\to i}\) to eliminate the \((i,1)\)-th entry. They are together with transformations \(\operatorname{row}_{2n+1-i\to n}\), \(\operatorname{col}_{i\to n+1}\), and \(\operatorname{col}_{n\to 2n+1-i}\). They clearly do not restore the already vanished entries since the \((n+1,2),\ldots,(n+1,2n)\)-th entries vanished.
To eliminate (22), we use \(\operatorname{row}_{j+1\to i}\) to eliminate the \((i,j)\)-th entry. They are together with transformations \(\operatorname{row}_{2n+1-i\to 2n-j}\), \(\operatorname{col}_{i\to j+1}\), and \(\operatorname{col}_{2n-j\to 2n+1-i}\). The transformation \(\operatorname{row}_{2n+1-i\to 2n-j}\) affects only the \((2n-j,2n-i)\)-th entry, which is not vanished yet since \(2n-i\geq j+1\) if \(i+j+1\neq 2n+1\). The transformation \(\operatorname{col}_{i\to j+1}\) affects only the \((1,j+1),\ldots,(i-1,j+1)\)-th entry, which are not vanished yet. Similarly, the transformation \(\operatorname{col}_{2n-j\to 2n+1-i}\) affects only \((1,2n+1-i),\ldots,(n,2n+1-i)\), which are not vanished yet since \(2n+1-i\geq j+2\) as above.
It is clear that after eliminating the entries (15),..., (22), we obtain a matrix of the desired form, i.e., we have constructed \(P\) satisfying the assumption in Claim 3.2.1. This finishes the proof of the surjectivity of the bottom map of the second diagram.
For the upper map of the second diagram, the proof is similar to that for the first diagram in Subsection 3.1.
### The relation \(\sim_{b,m,r}\) and \(\dot{\sim}_{b,m,r}\)
Recall that we write \(\alpha_{v}\) for \(\langle v,F^{n}(v)\rangle\).
**Proposition 3.3.1**.: _Assume \(r+k\geq m+1\). Let \(x,y\in V_{b}^{\operatorname{symp}}\). The following are equivalent._
1. \(x\sim_{b,m,r}y\)_._
2. \(x\in g_{b,r}(y)\,{}^{t}\big(\ \mathcal{O}^{\times}\ \ \mathfrak{p}^{m}\ \ \cdots\ \ \mathfrak{p}^{m}\ \big|\ \mathfrak{p}^{m}\ \ \cdots\ \ \mathfrak{p}^{m}\ \big)\)_._
_Moreover, if we assume_
\[\sigma(\alpha_{x})/\alpha_{x}\equiv 1,\sigma(\alpha_{y})/\alpha_{y}\equiv 1\mod \mathfrak{p}^{m+1}\]
_and replacing \(\mathcal{O}^{\times}\) with \(1+\mathfrak{p}^{m+1}\), the same statement for \(\dot{\sim}_{b,m,r}\) is true._
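Concretely, writing \(w_{1},\ldots,w_{2n}\) for the columns of \(g_{b,r}(y)\) (so \(w_{1}=y\)), the second condition says that \(x=a_{1}w_{1}+a_{2}w_{2}+\cdots+a_{2n}w_{2n}\) with \(a_{1}\in\mathcal{O}^{\times}\) and \(a_{2},\ldots,a_{2n}\in\mathfrak{p}^{m}\); for the variant \(\dot{\sim}_{b,m,r}\), one requires \(a_{1}\in 1+\mathfrak{p}^{m+1}\) instead.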
Proof.: First, by Lemma 2.3.4, we can show the following (in this part, we do not need the assumption \(r+k\geq m+1\)).
* We have (23) \[\varpi^{r}F(g_{b,r}(y))=g_{b,r}(y)A,\] where \[A=\left(\begin{array}{ccccccccc}&&&&(-1)^{n+1}\frac{\sigma(\alpha_{y})}{ \alpha_{y}}\varpi^{2r+k}&&&&\\ 1&&&&b_{1}&b_{2}&\cdots&b_{n-1}&a_{n}\\ &\ddots&&&&\vdots\\ &&1&&&&a_{2}\\ \hline&&&&\frac{\sigma(\alpha_{y})}{\alpha_{y}}\varpi^{2r+k}&&&&\\ &&&&\ddots&&\\ &&&&\ddots&&\\ &&&&&&&\frac{\sigma(\alpha_{y})}{\alpha_{y}}\varpi^{2r+k}\\ &&1&&&&a_{1}\\ \hline\end{array}\right).\]
Here, we use the same notation as in Lemma 2.3.4.
* We have (24) \[\varpi^{r}V_{k}(g_{b,r}(y))=g_{b,r}(y)B,\] where \[B =\operatorname{diag}(1,\ldots,1,\frac{\sigma^{-1}(\alpha_{y})}{ \alpha_{y}},\ldots,\frac{\sigma^{-1}(\alpha_{y})}{\alpha_{y}})\] \[\times\left(\begin{array}{cccc|cccc}b^{\prime}_{1}&\varpi^{2r+k} &&b^{\prime}_{2}&\cdots&b^{\prime}_{n-1}&a^{\prime}_{n}\\ &&\ddots&&\vdots&\\ &&&\varpi^{2r+k}&&&&\vdots\\ &&&&&a^{\prime}_{1}&\varpi^{2r+k}\\ \hline(-1)^{n+1}&&&&\\ &&1&&\\ &&&&\ddots&&\\ &&&&&\ddots&\\ &&&&&1\end{array}\right).\] Here, as above, \[a^{\prime}_{i},b^{\prime}_{j}\in\breve{K}\] satisfy \[\operatorname{ord}a^{\prime}_{i}\geq ir+\tfrac{k}{2}i\] and \[\operatorname{ord}b^{\prime}_{j}\geq jr+\tfrac{k}{2}j.\]
Note that (i) \(\Rightarrow\) (ii) is trivial. Therefore, we will show (ii) \(\Rightarrow\) (i). We need to show the inclusion
\[g_{b,r}(x)\in g_{b,r}(y)I^{m}. \tag{25}\]
First, we will show the 1st, \(\ldots\), \(n\)-th columns of (25). Clearly, the 1st column of (25) is nothing but the assumption (ii). Applying \(\varpi^{r}F\) to the assumption (ii) and using (23), we have
\[\varpi^{r}F(x)\in g_{b,r}(y)\,{}^{t}\big(\ \mathfrak{p}^{m+1}\ \ \mathcal{O}^{\times}\ \ \mathfrak{p}^{m}\ \ \cdots\ \ \mathfrak{p}^{m}\ \big|\ \mathfrak{p}^{m}\ \ \cdots\ \ \mathfrak{p}^{m}\ \big),\]
which gives the 2nd column of (25). Repeating this argument with (23), we obtain the \(i\)-th column of (25) for \(2\leq i\leq n\).
Finally, we will verify the \((n+1)\)-th, \(\ldots\)\((2n-1)\)-th column of (25). Note that, by \(F^{-1}(23)\) for \(r=0\), we have
\[\begin{split}& G_{1}(v)=(-1)^{n+1}\frac{\alpha_{v}}{\sigma^{-1}( \alpha_{v})}\varpi^{k}F^{-1}(v)+c_{1}v,\\ & G_{i}(v)=\frac{\alpha_{v}}{\sigma^{-1}(\alpha_{v})}(\varpi^{k }F^{-1}(G_{i-1}(v)))+c_{i}v\quad(2\leq i\leq n-1),\end{split} \tag{26}\]
where \(c_{j}\in\breve{K}\) (\(1\leq j\leq n-1\)) depend on \(v\) and satisfy
\[\operatorname{ord}c_{j}\geq\frac{k}{2}j. \tag{27}\]
In the following, we put \(v:=x\). By applying \(\varpi^{r}V_{k}\) and using (24) to the assumption (ii), and using \(\lceil r+\frac{k}{2}\rceil=r+k\geq m+1\), we have
\[\varpi^{r}V_{k}(x)\in g_{b,r}(y)\,{}^{t}\big(\ \mathfrak{p}^{m+1}\ \ \cdots\ \ \mathfrak{p}^{m+1}\ \big|\ \mathcal{O}^{\times}\ \ \mathfrak{p}^{m}\ \ \cdots\ \ \mathfrak{p}^{m}\ \big), \tag{28}\]
which implies the \((n+1)\)-th column of (25). Applying \(\varpi^{r}V_{k}\) and (24) to (28) and using (27) repeatedly, we can show the \(i\)-th column of (25) for \(n+2\leq i\leq 2n-1\), and it finishes the proof. The assertion for \(\dot{\sim}_{b,m,r}\) can be shown in the same way.
**Remark 3.3.2**.: Suppose that \(r+k\geq m+1\). By Proposition 3.3.1, for \(x,y\in V_{b}^{\rm symp}\), we have
\[x\sim_{b,m,r+1}y\Rightarrow x\sim_{b,m,r}y.\]
(The same statement for \(\dot{\sim}_{b,m,r}\) holds true.) Therefore, by Theorem 2.3.6, we have morphisms of sets
\[\begin{split}& X_{w_{r+1}}^{m}(b)(\overline{k})\to X_{w_{r}}^{m}(b)( \overline{k}),\\ &\dot{X}_{w_{r+1}}^{m}(b)(\overline{k})\to\dot{X}_{w_{r}}^{m}(b)( \overline{k}).\end{split} \tag{29}\]
In Remark 3.3.3, we will show that these morphisms come from morphisms of schemes. Moreover, we have an isomorphism of sets
\[\big(\varprojlim_{r>m}\dot{X}_{w_{r}}^{m}(b)\big)(\overline{k})\simeq X_{w}^{(U)}(b)\]
by Theorem 2.3.6. Indeed, for any \((\overline{v_{r,m}})_{r,m}\) in the left-hand side (where \(v_{r,m}\in V_{b}^{\rm symp}\)), the \(v_{r,m}\) converge to an element of \(V_{b}^{\rm symp}\), as follows. Fix \((r_{0},m_{0})\). Let \(M\) be the minimum of the orders of the entries of \(g_{b,r_{0}}(v_{r_{0},m_{0}})\). Then, for any \((r_{1},m_{1})\) and \((r_{2},m_{2})\) with \(r_{i}>r_{j}\) and \(m_{i}>m_{j}\) for \(i>j\), we have
\[v_{r_{2},m_{2}}\in g_{b,r_{1}}(v_{r_{1},m_{1}})\,{}^{t}\big(\ 1+\mathfrak{p}^{m_{1}+1}\ \ \mathfrak{p}^{m_{1}}\ \ \cdots\ \ \mathfrak{p}^{m_{1}}\ \big|\ \mathfrak{p}^{m_{1}}\ \ \cdots\ \ \mathfrak{p}^{m_{1}}\ \big),\]
which shows the asserted convergence.
**Remark 3.3.3**.: Actually, we can prove that maps (29) is induced by the transition map between perfect schemes,
\[X^{m}_{w_{r+1}}(b)^{\operatorname{perf}} \to X^{m}_{w_{r}}(b)^{\operatorname{perf}},\] \[\dot{X}^{m}_{w_{r+1}}(b)^{\operatorname{perf}} \to\dot{X}^{m}_{w_{r}}(b)^{\operatorname{perf}}. \tag{30}\]
To this end, we should introduce a functorial variant of Theorem 2.3.6 and Proposition 3.3.1. For perfect algebra \(R\) over \(\overline{k}\), we put \(K_{R}=\mathbb{W}(R)[\frac{1}{\varpi}]\), and \(V_{R}:=\mathbb{W}(R)[\frac{1}{\varpi}]^{2n}\) with the symplectic form associated with \(\Omega\) as in (4). Here, we put
\[\mathbb{W}(R):=\begin{cases}R\widehat{\otimes}_{\overline{k}}\mathcal{O}& \text{ if }\operatorname{char}K=p,\\ W(R)\otimes_{W(\overline{k})}\mathcal{O}&\text{ if }\operatorname{char}K=0. \end{cases}\]
Then we can define
\[V^{\operatorname{symp}}_{b,R}:=\left\{v\in V_{R}\left|\langle v,F(v)\rangle =\cdots=\langle v,F^{n-1}(v)\rangle=0,\langle v,F^{n}(v)\rangle\in\mathbb{W}( R)\frac{1}{\varpi}\right\}.\]
Also, we can define \(g_{b,r}\) as before, and we can prove the analogue of Theorem 2.3.6 in the following sense; We define \(X^{m}_{r}(R)\) and \(\dot{X}^{m}_{r}(R)\) by
\[X^{m}_{r}(R) :=\{g\in G(K_{R})/I^{m}\mid g^{-1}b\sigma(g)\in I^{m}w_{r}I^{m}\},\] \[\dot{X}^{m}_{r}(R) :=\{g\in G(K_{R})/\dot{I}^{m}\mid g^{-1}b\sigma(g)\in\dot{I}^{m}w_ {r}\dot{I}^{m}\}. \tag{31}\]
Then we can show that \(g_{b,r}\) defines surjections \(V^{\operatorname{symp}}_{b,R}\twoheadrightarrow X^{m}_{r}(R)\) and \(V^{\operatorname{symp}}_{b,R}\twoheadrightarrow\dot{X}^{m}_{r}(R)\) if \(r+k\geq m+1\). Moreover, we can show the analogue of Proposition 3.3.1, so that we have transition maps
\[X^{m}_{r+1}(R) \to X^{m}_{r}(R),\] \[\dot{X}^{m}_{r+1}(R) \to\dot{X}^{m}_{r}(R). \tag{32}\]
Finally, we will define the transition map (30). For simplicity, we only define the former one. Let \(R\) be any perfect algebra over \(\overline{k}\) as above. For any \(a\in X^{m}_{w_{r+1}}(b)(R)\), by [16, Lemma 1.3.7] and [16, proof of Lemma 1.3], there exists an étale cover \(R^{\prime}\) such that \(a|_{\operatorname{Spec}R^{\prime}}\in X^{m}_{r+1}(R^{\prime})\subset X^{m}_{w_{r+1}}(b)(R^{\prime})\). Then we can define \(\iota(a|_{\operatorname{Spec}R^{\prime}})\in X^{m}_{r}(R^{\prime})\) as the image under (32). By étale descent, we can show that \(\iota(a|_{\operatorname{Spec}R^{\prime}})\) descends to an element of \(X^{m}_{w_{r}}(b)(R)\). Since this construction is functorial, this defines a morphism \(X^{m}_{w_{r+1}}(b)\to X^{m}_{w_{r}}(b)\) as desired.
## 4. Description of connected components
In this section, we describe the connected components of semi-infinite (resp. affine) Deligne-Lusztig varieties by following the method of [14] (see also [14]).
### Representatives of \(b\)
In the following, we define two kinds of representatives of \(b\) in order to describe the connected components of affine Deligne-Lusztig varieties. We put
\[b_{0}:=\left(\begin{array}{cccc|cccc}&&&(-1)^{n+1}&&\\ 1&&&&\\ &\ddots&&&&\\ &&1&&\\ \hline&&&&1&\\ &&&&&&\ddots&\\ &&1&&&&1\\ \end{array}\right),\]
and
\[A_{k}:=\operatorname{diag}(1,\varpi^{k},\ldots,1,\varpi^{k}).\]
We put \(b:=b_{0}A_{k}\), and we call this representative the Coxeter type representative of \(b\) with \(\kappa(b)=k\). Note that \(\lambda(b)=-\varpi^{k}\) holds as before.
On the other hand, we put
\[b_{\mathrm{sp}}:=\begin{cases}\operatorname{diag}(1,\ldots,1\mid-1,\ldots,-1)&\text{if $k=0$,}\\[4pt] \left(\begin{array}{ccc}\begin{pmatrix}0&\varpi\\ 1&0\end{pmatrix}&&\\ &\ddots&\\ &&\begin{pmatrix}0&\varpi\\ 1&0\end{pmatrix}\end{array}\right)&\text{if $k=1$,}\end{cases}\]
and we call this representative the special type representative of \(b\) with \(\kappa(b)=k\).
Let \(\mathcal{A}^{\mathrm{red}}\) be the apartment of the reduced building of \(\operatorname{GSp}_{2n}\) over \(\breve{K}\) which corresponds to the maximal split torus consisting of diagonal matrices in \(\operatorname{GSp}_{2n}(\breve{K})\). We can show that \(b\) acts on \(\mathcal{A}^{\mathrm{red}}\) with a unique fixed point \(x\). More precisely, \(x\) is given by
\[\left\{\begin{array}{ll}\frac{1}{2}k\alpha_{2}^{\vee}+\frac{1}{2}k\alpha_{3}^{\vee}+\cdots+\frac{m-1}{2}k\alpha_{n-2}^{\vee}+\frac{m-1}{2}k\alpha_{n-1}^{\vee}+\frac{m}{2}k\beta^{\vee}&\text{if $n$ is even,}\\ \frac{-1}{4}k(\alpha_{1}^{\vee}+\alpha_{3}^{\vee}+\cdots+\alpha_{n-2}^{\vee}+\beta^{\vee})&\text{if $n$ is odd.}\end{array}\right.\]
Here, we put \(m:=n/2\) if \(n\) is even. Moreover, \(\alpha_{1}^{\vee},\ldots,\alpha_{n-1}^{\vee},\beta^{\vee}\) are the usual simple coroots, given by
\[\alpha_{i}^{\vee}:=\operatorname{diag}(1,\ldots,1,t,t^{-1},1,\ldots,1,t,t^{-1},1,\ldots,1),\qquad\beta^{\vee}:=\operatorname{diag}(1,\ldots,1,t,t^{-1},1,\ldots,1),\]
where the entries \(t,t^{-1}\) of \(\alpha_{i}^{\vee}\) sit in the \(i\)-th, \((i+1)\)-th and \((2n-i)\)-th, \((2n+1-i)\)-th places, and those of \(\beta^{\vee}\) sit in the \(n\)-th and \((n+1)\)-th places.
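For instance, for \(2n=4\) these are \(\alpha_{1}^{\vee}(t)=\operatorname{diag}(t,t^{-1},t,t^{-1})\) and \(\beta^{\vee}(t)=\operatorname{diag}(1,t,t^{-1},1)\).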
We denote the corresponding compact open subgroup of \(\operatorname{GSp}_{2n}(\breve{K})\) by \(G_{x,0}\).
\[\left(\begin{array}{ccc|ccc}\mathcal{O}&\mathfrak{p}&\mathcal{O}&\mathcal{O}\\ \mathcal{O}&\mathcal{O}&\mathfrak{p}^{-1}&\mathcal{O}\\ \hline\mathfrak{p}&\mathfrak{p}&\mathcal{O}&\mathfrak{p}\\ \mathcal{O}&\mathfrak{p}&\mathcal{O}&\mathcal{O}\end{array}\right),\left( \begin{array}{ccc|ccc}\mathcal{O}&\mathfrak{p}&\mathcal{O}&\mathfrak{p}& \mathcal{O}&\mathfrak{p}\\ \mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}\\ \mathcal{O}&\mathfrak{p}&\mathcal{O}&\mathfrak{p}&\mathcal{O}&\mathfrak{p}\\ \mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}\\ \mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}&\mathcal{O}\end{array}\right)\]
The shape of \(G_{x,0}\) when \(k=1\), for \(2n=4,6\)
### Linear algebraic description of connected components
In this subsection, we study the \(J_{b}(K)\)-action on \(V_{b}^{\mathrm{symp}}\).
**Definition 4.2.1**.: We put
\[\mathcal{L}:=\begin{cases}\mathcal{O}e_{1}+\cdots+\mathcal{O}e_{n}+\mathfrak{p}^{k}e_{n+1}+\mathcal{O}e_{n+2}+\mathfrak{p}^{k}e_{n+3}+\cdots+\mathfrak{p}^{k}e_{2n-1}+\mathcal{O}e_{2n}&\text{if $n$ is even},\\ \mathcal{O}e_{1}+\cdots+\mathcal{O}e_{2n}&\text{if $n$ is odd},\end{cases}\]
where \(e_{1},\ldots,e_{2n}\) is the standard basis of \(V\). We define
\[\mathcal{L}_{b}^{\mathrm{symp}}:=\{v\in V_{b}^{\mathrm{symp}}\cap\mathcal{L} \mid\langle v,F^{n}(v)\rangle\in\varpi^{\lfloor\frac{kn}{2}\rfloor}\mathcal{O }^{\times}\}.\]
We also define \(\mathcal{L}_{b_{\mathrm{sp}}}^{\mathrm{symp,rat}}\) in the same way.
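For instance, unwinding this definition for \(2n=4\) and \(k=1\), we have \(\mathcal{L}=\mathcal{O}e_{1}+\mathcal{O}e_{2}+\mathfrak{p}e_{3}+\mathcal{O}e_{4}\), and \(\mathcal{L}_{b}^{\mathrm{symp}}\) consists of the \(v\in V_{b}^{\mathrm{symp}}\cap\mathcal{L}\) with \(\langle v,F^{2}(v)\rangle\in\varpi\mathcal{O}^{\times}\) (since \(\lfloor\frac{kn}{2}\rfloor=1\)).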
**Proposition 4.2.2**.:
1. _We have the decomposition_ \[V_{b}^{\mathrm{symp}}=\bigsqcup_{j\in J_{b}(K)/J_{b,0}}j\mathcal{L}_{b}^{ \mathrm{symp}}.\]
2. _We have the decomposition_ \[V_{b_{\mathrm{sp}}}^{\mathrm{symp}}=\bigsqcup_{j\in J_{b_{\mathrm{sp}}}(K)/J_ {b_{\mathrm{sp}},0}}j\mathcal{L}_{b_{\mathrm{sp}}}^{\mathrm{symp}}.\]
3. _There exists an element_ \(g\in G_{x,0}\) _such that_ \[g^{-1}b_{\mathrm{sp}}\sigma(g)=b.\]
_Here, we put \(J_{b,0}:=J_{b}(K)\cap G_{x,0}\) (resp. \(J_{b_{\mathrm{sp}},0}:=J_{b_{\mathrm{sp}}}(K)\cap G_{x,0}\))._
To prove this proposition, we need to define the reduced version of \(g_{b,r}\).
**Definition 4.2.3**.: For any \(v\in V\), we put
\[g_{b}^{\mathrm{red}}(v):=g_{b,0}(v)\cdot\mathrm{diag}(1,\varpi^{\lceil\frac{- k}{2}\rceil},\ldots,\varpi^{\lceil\frac{-k(i-1)}{2}\rceil},\ldots,\varpi^{ \lceil\frac{-k(n-1)}{2}\rceil},\varpi^{\lceil\frac{-kn}{2}\rceil+\lfloor\frac {k(n-1)}{2}\rfloor},\ldots,\varpi^{\lceil\frac{-kn}{2}\rceil}),\] \[g_{b_{\mathrm{sp}}}^{\mathrm{red}}(v):=g_{b_{\mathrm{sp}},0}(v) \cdot\mathrm{diag}(1,\varpi^{\lceil\frac{-k}{2}\rceil},\ldots,\varpi^{ \lceil\frac{-k(i-1)}{2}\rceil},\ldots,\varpi^{\lceil\frac{-k(n-1)}{2}\rceil},\varpi^{\lceil\frac{-kn}{2}\rceil+\lfloor\frac{k(n-1)}{2}\rfloor},\ldots, \varpi^{\lceil\frac{-kn}{2}\rceil}).\]
Then we can confirm that \(g_{b}^{\rm red}(v)\in G_{x,0}\) (resp. \(g_{b_{\rm sp}}^{\rm red}(v)\in G_{x,0}\)) for any \(v\in\mathcal{L}_{b}^{\rm symp}\) (resp. \(v\in\mathcal{L}_{b_{\rm sp}}^{\rm symp}\)).
Proof.: First, we will give a short remark. Let \(v\in\mathcal{L}_{b}^{\rm symp}\) and \(j\in J_{b}(K)\). We suppose that \(jv\in\mathcal{L}_{b}^{\rm symp}\). Then we have
\[g_{b}^{\rm red}(jv)=jg_{b}^{\rm red}(v)\in G_{x,0}\]
and thus we have \(j\in G_{x,0}.\) Therefore, to prove (1) of this proposition, we need to show that, for any \(v\in V_{b}^{\rm symp}\), there exists \(j\in J_{b}(K)\) such that \(jv\in\mathcal{L}_{b}^{\rm symp}\). The same argument for \(b_{\rm sp}\) works.
Next, we deduce (1) and (3) from (2). Suppose that (2) holds true. Let \(g\in\operatorname{GSp}(\breve{K})\) be an element such that
\[g^{-1}b_{\rm sp}\sigma(g)=b.\]
Take \(v\in\mathcal{L}_{b}^{\rm symp}\). Then we have \(gv\in V_{b_{\rm sp}}^{\rm symp}\). By (2), there exists \(j\in J_{b_{\rm sp}}(K)\) such that
\[jg(v)\in\mathcal{L}_{b_{\rm sp}}^{\rm symp}.\]
Then we have
\[g_{b_{\rm sp}}^{\rm red}(jgv)=jg_{b_{\rm sp}}^{\rm red}(gv)=jgg_{b}^{\rm red} (v)\in G_{x,0}.\]
Now we have \(jg\in G_{x,0}\) and \((jg)^{-1}b_{\rm sp}\sigma(jg)=b\). This finishes the proof of (3).
By using (3) and (2), we can easily show (1).
Therefore, it suffices to show (2). In the following, we denote \(b_{\rm sp}\sigma\) by \(F\). First, we will consider the case where \(n\) is even. We put \(n:=2m.\) For any \(v\in V_{b_{\rm sp}}^{\rm symp}\), by the \(\operatorname{GL}_{n}\)-case ([1, Lemma 6.11]), there exists a \(j_{0}\in J_{b_{\rm sp},0}^{(\operatorname{GL}_{n})}\) such that
\[j_{0}p_{0}(v)\in\mathcal{L}_{b_{\rm sp},0}^{\rm adm}.\]
Here, we consider the decomposition \(V=V_{0}\oplus V_{1}\), where \(V_{0}\) is the subspace spanned by \(e_{1},\ldots,e_{n}\), and \(V_{1}\) is the subspace spanned by \(e_{n+1},\ldots,e_{2n}\). We denote the projection from \(V\) to \(V_{i}\) by \(p_{i}\). Moreover, \(b_{\rm sp,i}\) is the restriction \(b_{\rm sp}|_{V_{i}}\), and we denote the corresponding inner forms by \(J_{b_{\rm sp,i}}^{(\operatorname{GL}_{n})}\). For the definition of \(\mathcal{L}_{b_{\rm sp},0}^{\rm adm}\), see [1, Definition 6.10]. Then we can show that
\[j:=\left(\begin{array}{c|c}j_{0}&\\ \hline&\omega^{t}j_{0}^{-1}\omega\end{array}\right)\]
is contained in \(J_{b_{\rm sp}}(K)\). Here, we put
\[\omega:=\left(\begin{array}{ccccc}&&&1\\ &&&-1&\\ &&\iddots&&\\ &1&&\\ -1&&\end{array}\right),\]
which is the \(n\times n\) matrix. By multiplying \(j\) by some power of \(\operatorname{diag}(1,\ldots,1,\varpi,\ldots,\varpi)\) from the left, we have the following.
* \(j\in J_{b_{\rm sp}}(K)\).
* \(p_{0}(jv)\in\mathcal{L}_{b_{\rm sp},0}^{\rm adm}\).
* \(\langle jv,F^{n}(jv)\rangle\in\varpi^{m}\mathcal{O}^{\times}\).
By Lemma 4.2.6, it suffices to show the following claim.
**Claim 4.2.4**.: _Suppose \(n\) is even as above. Let \(v,\widetilde{v}\in V_{b_{\mathrm{sp}}}^{\mathrm{symp}}\) be elements satisfying the following._
1. \(p_{0}(v)=p_{0}(\widetilde{v})\)_._
2. \(\langle v,(b_{\mathrm{sp}}\sigma)^{n}(v)\rangle=\langle\widetilde{v},(b_{ \mathrm{sp}}\sigma)^{n}\widetilde{v}\rangle\)_._
_Then there exists \(j\in J_{b}(K)\) such that \(jv=\widetilde{v}\)._
Here we give the proof of Claim 4.2.4 (this is essentially the same as in the proof of [25, Lemma 5.1]). We put \(v_{0}:=p_{0}(v)=p_{0}(\widetilde{v})\), \(v_{1}:=p_{1}(v)\), and \(\widetilde{v}_{1}:=p_{1}(\widetilde{v})\). Then by the assumption, we have
\[\langle v_{0}+\widetilde{v}_{1}-v_{1},(b_{\mathrm{sp}}\sigma)^{i} (v_{0}+\widetilde{v}_{1}-v_{1})\rangle =\langle v_{0},(b_{\mathrm{sp}}\sigma)^{i}(\widetilde{v}_{1}-v_{ 1})\rangle+\langle\widetilde{v}_{1}-v_{1},(b_{\mathrm{sp}}\sigma)^{i}(v_{0})\rangle\] \[=\langle\widetilde{v},(b_{\mathrm{sp}}\sigma)^{i}(\widetilde{v}) \rangle-\langle v,(b_{\mathrm{sp}}\sigma)^{i}(v)\rangle=0\]
for \(i=1,\ldots,n\). Therefore, for any \(\phi\in\mathcal{D}_{k}\), we have
\[\langle v_{0}+\widetilde{v}_{1}-v_{1},\phi(v_{0}+\widetilde{v}_{1}-v_{1}) \rangle=0. \tag{33}\]
Note that, by the dimensional reason, \(\phi(v_{0}+\widetilde{v}_{1}-v_{1})\) can be spanned by \((b_{\mathrm{sp}}\sigma)^{i}(v_{0}+\widetilde{v}_{1}-v_{1})\) for \(-n+1\leq i\leq n\). For any \(\mathcal{D}_{k}\supset\mathrm{Ann}\,v_{0}\ni A\) and any \(\phi\in\mathcal{D}_{k}\), we have
\[\langle v_{0},\phi A(\widetilde{v}_{1}-v_{1})\rangle=0.\]
Since \(v_{0}\in\mathcal{L}_{b_{\mathrm{sp}},0}^{\mathrm{adm}}\), we have
\[A(\widetilde{v}_{1}-v_{1})=0.\]
Now we define \(j\in\mathrm{GL}(V)\) by
* \(j(v_{0}):=v_{0}+\widetilde{v}_{1}-v_{1}\),
* \(j(\phi(v_{0})):=\phi(j(v_{0}))\),
* \(j|_{V_{1}}:=\mathrm{id}\),
for any \(\phi\in\mathcal{D}_{k}\). By the above consideration, the above \(j\) is well-defined. By definition, \(j\) commutes with \(b_{\mathrm{sp}}\sigma\). Moreover, for any \(\phi,\phi^{\prime}\in\mathcal{D}_{k}\) and any \(w_{1},w_{1}^{\prime}\in V_{1}\), we have
\[\langle j(\phi(v_{0})+w_{1}),j(\phi^{\prime}(v_{0})+w_{1}^{ \prime})\rangle\] \[= \langle\phi(v_{0}+\widetilde{v}_{1}-v_{1})+w_{1},\phi^{\prime}(v _{0}+\widetilde{v}_{1}-v_{1})+w_{1}^{\prime}\rangle\] \[= \langle\phi(v_{0}),w_{1}^{\prime}\rangle+\langle w_{1},\phi^{ \prime}(v_{0})\rangle\] \[= \langle\phi(v_{0})+w_{1},\phi^{\prime}(v_{0})+w_{1}^{\prime}\rangle.\]
Here, in the second equality, we use the equality (33). Therefore, we have \(j\in J_{b_{\mathrm{sp}}}(K)\). It finishes the proof of Claim 4.2.4, and the proof of Proposition 4.2.2 in the case where \(n\) is even.
Next, we will consider the case where \(n\) is odd. If \(k=0\), then we can decompose the isocrystal \(V\) as \(V_{0}\oplus V_{1}\) as in the case where \(n\) is even, and we can proceed with the same proof. Therefore, we may assume that \(k=1\). We put \(n:=2n^{\prime}+1\). In this case, we can decompose the isocrystal \(V\) as
\[V=V_{0}\oplus V_{\frac{1}{2}}\oplus V_{1},\]
where \(V_{0}\) is spanned by \(e_{1},\ldots,e_{n-1}\), \(V_{\frac{1}{2}}\) is spanned by \(e_{n},e_{n+1}\), and \(V_{1}\) is spanned by \(e_{n+2},\ldots,e_{2n}\). As in the even case, we can find \(j_{0}\in J_{b_{\mathrm{sp},0}}^{(\mathrm{GL}_{2m})}\) such that
\[j_{0}p_{0}(v)\in\mathcal{L}_{b_{\mathrm{sp},0}}^{\mathrm{adm}}.\]
Then we can show that
\[j:=\left(\begin{array}{c|c|c}j_{0}&&&\\ \hline&1&&\\ &&1&\\ \hline&&&\omega^{t}j_{0}^{-1}\omega\end{array}\right)\]
is contained in \(J_{b_{\mathrm{sp}}}(K)\).
Moreover, by multiplying \(j\) from the left by some power of
\[\left(\begin{array}{ccccccccc}1&&&&&&&\\ &\ddots&&&&\\ &&1&&&&&\\ &&&-\varpi&&&&\\ &&1&&&&&\\ &&&&\varpi&&\\ &&&&\ddots&\\ &&&&\varpi\end{array}\right),\]
we have the following.
* \(j\in J_{b_{\mathrm{sp}}}(K)\).
* \(p_{0}(jv)\in\mathcal{L}_{b_{\mathrm{sp},0}}^{\mathrm{adm}}\).
* \(\langle jv,F^{n}(jv)\rangle\in\varpi^{m}\mathcal{O}^{\times}\).
By [20, Proposition 4.3], we may further assume the following;
\[A\mathcal{D}_{k}jv_{\frac{1}{2}}=\mathcal{D}_{k}Ajv_{\frac{1}{2}},\]
where \(A\in\mathcal{D}_{k}\) is a generator of \(\mathrm{Ann}(jv_{0})\) as in [20, Lemma 2.6]. By Lemma 4.2.6, it suffices to show the following claim.
**Claim 4.2.5**.: _Let \(v,\widetilde{v}\in V_{b_{\mathrm{sp}}}^{\mathrm{symp}}\) be elements satisfying the following._
1. \(p_{0}(v)=p_{0}(\widetilde{v})\)_._
2. \(p_{\frac{1}{2}}(v)=p_{\frac{1}{2}}(\widetilde{v})\)_._
_Then there exists \(j\in J_{b}(K)\) such that \(jv=\widetilde{v}\)_
This claim can be proved in the same way as in Claim 4.2.4.
To complete the proof, we need the following lemma.
**Lemma 4.2.6**.: _We put \(F:=b_{\mathrm{sp}}\sigma\) as above._
1. _Suppose that_ \(n\) _is even, or_ \(n\) _is odd and_ \(k=0\)_. Let_ \(v_{0}\in\mathcal{L}_{b_{\mathrm{sp},0}}^{\mathrm{adm}}\) _and_ \(c\in\varpi^{m}\mathcal{O}^{\times}\)_. Then there exists_ \(v\in\mathcal{L}_{b_{\mathrm{sp}}}^{\mathrm{symp}}\) _such that_ \(p_{0}(v)=v_{0}\) _and_ \[\langle v,F^{n}(v)\rangle=c.\]
2. _Suppose that_ \(n\) _is odd and_ \(k=1\)_. Let_ \(v_{0}\in\mathcal{L}^{\mathrm{adm}}_{b_{\mathrm{sp},0}}\) _and_ \(v_{\frac{1}{2}}\in\mathcal{L}^{\mathrm{adm}}_{b_{\mathrm{sp},\frac{1}{2}}}\)_. We further suppose that_ \[A\mathcal{D}v_{\frac{1}{2}}=\mathcal{D}Av_{\frac{1}{2}},\] _where we use the same notation as in_ _[_25_, Proposition 4.3 (2)]__. Then there exists_ \(v\in\mathcal{L}^{\mathrm{symp}}_{b_{\mathrm{sp}}}\) _such that_ \(p_{0}(v)=v_{0}\) _and_ \(p_{\frac{1}{2}}(v)=v_{\frac{1}{2}}\)_._
Proof.: The proof is essentially done by [25, Proposition 5.3]. First, we will prove the case (1). For simplicity, we suppose that \(k=1\) (the case where \(k=0\) can be proved by a similar argument). We put \(n^{\prime}:=\frac{n}{2}\). Let \(v_{1}\in V_{1}\) be an arbitrary element. We put
\[\xi_{i}:=\langle v_{1},F^{i}(v_{0})\rangle\]
for \(i=1,\ldots,n\). Then we have
\[(\xi_{1},\ldots,\xi_{n})\operatorname{diag}(1,1,\varpi^{-1}, \varpi^{-1},\ldots,\varpi^{-(n^{\prime}-1)},\varpi^{-(n^{\prime}-1)})(\sigma(g ^{\mathrm{red}}_{b_{\mathrm{sp},0}}(v_{0})))^{-1}\] \[=(a_{2n-1},-\varpi a_{2n},\ldots,a_{n+1},-\varpi a_{n+2}),\]
where \(g^{\mathrm{red}}_{b_{\mathrm{sp},0}}\) is defined as in [1, Subsection 6.3], and we put
\[v=^{t}(a_{1},\ldots,a_{2n}).\]
Therefore, the vector \(v_{1}\) is uniquely determined by \(\xi_{1},\ldots,\xi_{n}\). We define \(q_{ij}\in\breve{K}\) by
\[F^{-i}(v_{0})=\sum_{j=1}^{n}q_{ij}F^{j}(v_{0})\]
for \(i=1,\ldots,n\). We will find an element \(v\) of the form \(v_{0}+v_{1}\). We note that
\[\langle v,F^{i}(v)\rangle=\langle v_{0}+v_{1},F^{i}(v_{0}+v_{1})\rangle=\xi_{ i}+(-1)^{i}\varpi^{i}\sigma^{i}(\langle F^{-i}(v_{0}),v_{1}\rangle)=\xi_{i}- \sum_{j=1}^{n}(-1)^{i}\varpi^{i}\sigma^{i}(q_{ij})\sigma^{i}(\xi_{j}).\]
Therefore, to give an element \(v\in\mathcal{L}^{\mathrm{symp}}_{b_{\mathrm{sp}}}\) satisfying the condition in (1), it suffices to give \((\xi_{1},\ldots\xi_{n})\) satisfying the following.
* \[\operatorname{ord}\xi_{1}\geq 1,\operatorname{ord}\xi_{2}\geq 1,\operatorname{ ord}\xi_{3}\geq 2,\operatorname{ord}\xi_{4}\geq 2,\ldots,\operatorname{ord}\xi_{n-1 }\geq n^{\prime},\operatorname{ord}\xi_{n}\geq n^{\prime}.\]
* \[\xi_{i}-\sum_{j=1}^{n}(-1)^{i}\varpi^{i}\sigma^{i}(q_{ij})\sigma^{i}(\xi_{j}) \begin{cases}=0&(i=1,\ldots,n-1),\\ =c&(i=n).\end{cases}\]
Here, we note that we have \(\operatorname{ord}q_{ij}\geq\frac{i+j}{2}\) by [1, Lemma 5.2.4]. We can find \((\xi_{1},\ldots\xi_{n})\) as above by solving the equation by the same argument as in [25, Lemma 5.4].
Next, we will prove (2). We put \(n^{\prime}:=\frac{n-1}{2}\). We note that it suffices to find \(v\in V^{\mathrm{symp}}_{b_{\mathrm{sp}}}\cap\mathcal{L}\) such that \(p_{0}(v)=v_{0}\) and \(p_{\frac{1}{2}}(v)=v_{\frac{1}{2}}\), since we have \(v\in\mathcal{L}^{\mathrm{symp}}_{b_{\mathrm{sp}}}\) in this case by [25, Lemma 4.3] and Remark 2.3.5. We put
\[\xi_{i}:=\langle v_{1},F^{i}(v_{0})\rangle.\]
We also put
\[h_{i}:=\langle v_{\frac{1}{2}},F^{i}(v_{\frac{1}{2}})\rangle\]
for \(i=1,\ldots n-1\). As before, we can put
\[F^{-i}(v_{0})=\sum_{j=1}^{n-1}q_{ij}F^{j}(v_{0})\]
with \(\operatorname{ord}q_{ij}\geq\frac{i+j}{2}\). If we put \(v=v_{0}+v_{\frac{1}{2}}+v_{1}\), then we have
\[\langle v,F^{i}(v)\rangle=\xi_{i}-\sum_{j=1}^{n-1}(-1)^{i}\varpi^{i}\sigma^{i} (q_{ij})\sigma^{i}(\xi_{j})+h_{i}.\]
Therefore, to give a desired element \(v\), it suffices to give \((\xi_{1},\ldots,\xi_{n-1})\) satisfying the following.
* \[\operatorname{ord}\xi_{1}\geq 0,\operatorname{ord}\xi_{2}\geq 1,\operatorname{ ord}\xi_{3}\geq 1,\operatorname{ord}\xi_{4}\geq 2,\ldots,\operatorname{ord}\xi_{n- 2}\geq n^{\prime}-1,\operatorname{ord}\xi_{n-1}\geq n^{\prime}.\]
* \[\xi_{i}-\sum_{j=1}^{n-1}\varpi^{i}\sigma^{i}(q_{ij})\sigma^{i}(\xi_{j})=-h_{i}.\]
Since we have \(\operatorname{ord}h_{i}\geq\frac{i}{2}\), we can solve this equation by the same argument as in [20, Lemma 5.4]. It finishes the proof.
In the following, we suppose that \(b=b_{\operatorname{sp}}\) is the special representative (by Proposition 4.2.2 (3), it is not essential). By Proposition 4.2.2, we can describe a connected component of affine Deligne-Lusztig varieties.
**Definition 4.2.7**.: For any \(j\in J_{b}(K)/J_{b}(\mathcal{O}_{K})\), we put
\[X_{w_{r}}^{m}(b)_{j(\mathcal{L})}:=g_{b,r}(j(\mathcal{L}_{b}^{\operatorname{ symp}})/\sim_{b,m,r}),\]
which is a subset of \(X_{w_{r}}^{m}(b)(\overline{k})\). Here, we use the same notation as in Theorem 2.3.6. We will equip this subset with a scheme structure in Proposition 4.2.8. We also define
\[\dot{X}_{w_{r}}^{m}(b)_{j(\mathcal{L})}:=g_{b,r}(j(\mathcal{L}_{b}^{\operatorname{symp,rat}})/\dot{\sim}_{b,m,r}),\]
which is a subset of \(\dot{X}_{w_{r}}^{m}(b)(\overline{k})\).
The following is an analogue of [1, Proposition 6.12].
**Proposition 4.2.8**.: _Assume that \(r+k\geq 1\). For any \(j\in J_{b}(K)\), the set \(X_{w_{r}}^{0}(b)_{j(\mathcal{L})}\) is an open and closed subset of \(X_{w_{r}}^{0}(b)(\overline{k})\). Therefore, we can equip \(X_{w_{r}}^{0}(b)_{j(\mathcal{L})}\) with the scheme structure such that_
\[X_{w_{r}}^{0}(b)=\bigsqcup_{j\in J_{b}(K)/J_{b}(\mathcal{O}_{K})}X_{w_{r}}^{0}(b)_{j(\mathcal{L})}\]
_is a scheme-theoretic disjoint union._
Proof.: We will show that there exists a constant \(C\) which depends only on \(n,k,r\) such that
\[X_{w_{r}}^{0}(b)_{\mathcal{L}}=\{(\mathcal{M}_{i})_{i=0}^{2n-1}\in X_{w_{r}}^{ 0}(b)(\overline{k})\mid\mathcal{M}_{0}\subset\mathcal{L}\text{ and }\mathcal{M}_{0}^{\vee}=C^{-1}\mathcal{M}_{0}\}. \tag{34}\]
More precisely, \(C\) is defined by the following equation;
\[C:=\varpi^{\operatorname{ord}\lambda(g_{b,r}(v))-\operatorname{ord}\lambda(g_{b}^{ \operatorname{red}}(v))},\]
for some (any) \(v\in V_{b}^{\operatorname{symp}}\). We note that this definition of \(C\) is equivalent to saying that
\[C=\varpi^{\operatorname{ord}\lambda(g_{b,r}(v))}=\varpi^{\lfloor\frac{n}{2} \rfloor},\]
for some (any) \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\). We also note that, here, we regard the element in
\[X_{w_{r}}^{0}(b)(\overline{k})\subset G(\breve{K})/I\]
as a lattice chain which is self-dual up to constant, via the correspondence
\[gI\mapsto g(\mathcal{L}_{0},\mathcal{L}_{1},\ldots,\mathcal{L}_{2n-1}),\]
where \(\mathcal{L}_{0}\supset\cdots\supset\mathcal{L}_{2n-1}\) is a standard lattice chain, i.e. we put
\[\mathcal{L}_{i}=\bigoplus_{1\leq j\leq 2n-i}\mathcal{O}e_{j}\oplus\bigoplus_{2 n-i+1\leq j\leq 2n}\mathfrak{p}e_{j}.\]
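For instance, for \(2n=4\) this standard chain reads
\[\mathcal{L}_{0}=\mathcal{O}e_{1}\oplus\mathcal{O}e_{2}\oplus\mathcal{O}e_{3}\oplus\mathcal{O}e_{4}\supset\mathcal{L}_{1}=\mathcal{O}e_{1}\oplus\mathcal{O}e_{2}\oplus\mathcal{O}e_{3}\oplus\mathfrak{p}e_{4}\supset\mathcal{L}_{2}=\mathcal{O}e_{1}\oplus\mathcal{O}e_{2}\oplus\mathfrak{p}e_{3}\oplus\mathfrak{p}e_{4}\supset\mathcal{L}_{3}=\mathcal{O}e_{1}\oplus\mathfrak{p}e_{2}\oplus\mathfrak{p}e_{3}\oplus\mathfrak{p}e_{4}.\]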
We note that \(\mathcal{L}_{0}\) is possibly different from \(\mathcal{L}\) in contrast to [11, Proposition 6.12]. Now we will start the proof of (34). First, we take an element \(g_{b,r}(v)I\) (\(v\in\mathcal{L}_{b}^{\operatorname{symp}}\)) from the left-hand side. Then the corresponding lattice \(\mathcal{M}_{0}\) is generated by \(w_{1},\ldots,w_{2n}\), where \(w_{i}\) is the \(i\)-th column vector of \(g_{b,r}(v)\). Therefore, by definition of \(g_{b,r}\) and \(\mathcal{L}_{b}^{\operatorname{symp}}\), we have
\[\mathcal{M}_{0}\subset\mathcal{L}.\]
Moreover, by definition of \(C\) and the fact that \(\operatorname{ord}(\lambda(g_{b}^{\operatorname{red}}(v))=0\) for \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\), we have \(\mathcal{M}_{0}^{\vee}=C^{-1}\mathcal{M}_{0}\).
On the other hand, we take an element \((\mathcal{M}_{i})_{i=0}^{2n-1}\) from the right-hand side of (34). By Theorem 2.3.6, there exits \(v\in V_{b}^{\operatorname{symp}}\) such that \(\mathcal{M}_{i}=g_{b,r}(v)\mathcal{L}_{i}\). Since \((g_{b,r}(v)\mathcal{L}_{0})^{\vee}=\lambda(g_{b,r}(v))^{-1}g_{b,r}(v) \mathcal{L}_{0}\), we have
\[\operatorname{ord}(\lambda(g_{b,r}(v)))=\operatorname{ord}(C)=\operatorname{ord}(\lambda(g_{b,r}(v)))-\operatorname{ord}(\lambda(g_{b}^{\operatorname{red}}(v))),\]
so that \(\operatorname{ord}(\lambda(g_{b}^{\operatorname{red}}(v)))=0\).
Therefore, we have \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\), and it finishes the proof of (34).
Now we can prove the desired statement by the same argument as in the proof of [11, Proposition 6.12].
We define
\[n_{0}:=\begin{cases}1&\text{if }k=0,\\ 2&\text{if }k=1.\end{cases}\]
We also put \(n^{\prime}:=\lfloor n/n_{0}\rfloor\) as before.
We will study the structure of affine Deligne-Lusztig varieties in this case as an analogue of [11, Theorem 6.17].
First, we give a short remark.
**Remark 4.2.9**.:
1. Suppose that \(n\) is even and \(k=1\). For any \(v=^{t}(v_{1},\ldots,v_{2n})\in\mathcal{L}_{b}^{\operatorname{symp}}\), we put \[v_{i,0}:=v_{i}\mod\mathfrak{p}\]
for \(1\leq i\leq 2n\). Note that, we have \(v_{n+1,0}=\cdots=v_{2n-1,0}=0.\) We define the matrix \(A=(a_{i,j})_{1\leq i,j\leq n}\in M_{n}(\overline{k})\) by
\[a_{i,j}:=\text{the image of }\begin{cases}b_{2i-1,2j-1}&(\text{if }1\leq i\leq n^{\prime}\text{ and }1\leq j\leq n^{\prime}),\\ b_{2i-1,2j}&(\text{if }1\leq i\leq n^{\prime}\text{ and }n^{\prime}+1\leq j\leq 2n^{\prime}),\\ b_{2i,2j-1}&(\text{if }n^{\prime}+1\leq i\leq 2n^{\prime}\text{ and }1\leq j\leq n^{\prime}),\\ b_{2i,2j}&(\text{if }n^{\prime}+1\leq i\leq 2n^{\prime}\text{ and }n^{\prime}+1\leq j\leq 2n^{\prime}),\end{cases}\]
i.e., the submatrix corresponding to the 1st, 3rd,..., \((n-1)\)-th, \((n+2)\)-th,..., \((2n)\)-th rows and columns. Here, we put \(g_{b}^{\text{red}}(v)=(b_{i,j})_{1\leq i,j\leq 2n}\). Then we have \(A\in\operatorname{GSp}_{n}(\overline{\mathbb{F}}_{q})\) with respect to the symplectic form associated with
\[\overline{\Omega}:=\left(\begin{array}{ccccc}&&&1\\ &\text{\Large$\mathbb{0}$}&&\text{\Large$\mathbb{.}$}&\\ &&&1&&\\ \hline&&&-1&&\\ &\text{\Large$\mathbb{.}$}&&\\ -1&&&\text{\Large$\mathbb{0}$}&\\ \end{array}\right).\]
By definition of \(g_{b}^{\text{red}}\), we have the following.
* For \(1\leq j\leq n^{\prime}\), we have \[(a_{1j},\dots,a_{nj})=(v_{1,0}^{q^{2j}},v_{3,0}^{q^{2j}},\dots,v_{n-1,0}^{q^{ 2j}},v_{n+2,0}^{q^{2j}},v_{n+4,0}^{q^{2j}},\dots,v_{2n,0}^{q^{2j}}).\]
* For \(n^{\prime}+1\leq j\leq 2n^{\prime}\), the vector \((a_{1j},\dots,a_{nj})\) is a \(\overline{k}\)-linear combination of \[(v_{1,0}^{q^{-2k}},v_{3,0}^{q^{-2k}},\dots,v_{n-1,0}^{q^{-2k}},v_{n+2,0}^{q^{ -2k}},v_{n+4,0}^{q^{-2k}},\dots,v_{2n,0}^{q^{-2k}})_{1\leq k\leq j-n^{\prime}}\] and \[(a_{11},\dots,a_{n1}).\] Therefore, by [2, Subsection 6.3], we can show that \[(v_{1,0},v_{3,0}\dots,v_{n-1,0},v_{n+2,0},v_{n+4,0}\dots,v_{2n,0})\] are linearly independent over \(\mathbb{F}_{q^{2}}.\)
2. Suppose that \(n\) is odd and \(k=1\). For any \(v=^{t}(v_{1},\dots,v_{2n})\in\mathcal{L}_{b}^{\text{symp}}\), we define the matrices \(A=(a_{i,j})_{1\leq i,j\leq n},A^{\prime}=(a^{\prime}_{i,j})_{1\leq i,j\leq n}\in M_{n}(\overline{k})\) by \[a_{i,j} := \text{the image of }b_{2i-1,2j-1},\] \[a^{\prime}_{i,j} := \text{the image of }b_{2i,2j}.\] Here, we put \(g_{b}^{\text{red}}(v)=(b_{i,j})_{1\leq i,j\leq 2n}\). Then we have \[{}^{t}\!A^{\prime}HA=\overline{\lambda(v)}H,\] where we put \[H:=\left(\begin{array}{ccc}&&1\\ &\iddots&\\ 1&&\end{array}\right).\]
Since we have \(A\in\operatorname{GL}_{n}(\overline{k})\), we can show that \[(v_{1,0},v_{3,0},\dots,v_{2n-1,0})\] are linearly independent over \(\mathbb{F}_{q^{2}}\) by the same argument as in (1).
3. Suppose that \(k=0\). We can show that the matrix \(A\in M_{2n}(\overline{k})\) defined as the image of \(g_{b}^{\operatorname{red}}(v)\) for \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\) is contained in \(\operatorname{GSp}_{2n}(\overline{\mathbb{F}}_{q})\) with respect to the symplectic form associated with \(\Omega\). Therefore, by the same argument as in (1) again, \[(v_{1,0},v_{2,0},\dots,v_{2n,0})\] are linearly independent over \(\mathbb{F}_{q}\) for any \(v=^{t}(v_{1},\dots,v_{2n})\in\mathcal{L}_{b}^{\operatorname{symp}}\).
**Definition 4.2.10**.:
1. Suppose that \(k=1\). We define the permutation \(\tau:\{1,\dots,2n\}\to\{1,\dots,2n\}\) as follows. \[\tau(i)=\begin{cases}\lceil\dfrac{i}{2}\rceil&\text{if $1\leq i\leq n$ and $\lfloor\dfrac{i}{2}\rfloor$ is even,}\\ n+\lceil\dfrac{i}{2}\rceil&\text{if $1\leq i\leq n$ and $\lfloor\dfrac{i}{2}\rfloor$ is odd,}\\ 2n+1-\tau(2n+1-i)&\text{if $n+1\leq i\leq 2n$.}\end{cases}\] We also define the function \(\phi_{r}\colon\{1,\dots,2n\}\to K\) by \[\phi_{r}(i)=\begin{cases}(-1)^{\tau(i)-i}\varpi^{(2\lceil\frac{i}{4}\rceil-1 )r+(\lceil\frac{i}{4}\rceil-1)}&\text{if $1\leq i\leq n$ and $i$ is even,}\\ (-1)^{\tau(i)-i}\varpi^{(2\lceil\frac{i-1}{4}\rceil)r+\lceil\frac{i-1}{4} \rceil}&\text{if $1\leq i\leq n$ and $i$ is odd,}\\ \varpi^{nr+n^{\prime}-\operatorname{ord}\phi_{r}(2n+1-i)}&\text{if $n+1\leq i\leq 2n$.} \end{cases}\] Let \(E_{i,j}\in\operatorname{GL}(V)\) be the matrix whose entries are \(0\) except that the \((i,j)\)-th entry is \(1\). We put \(x_{r}:=\sum_{i=1}^{2n}\phi_{r}(i)E_{i,\tau(i)}\), which is an element of \(\operatorname{GSp}(V)\) by definition. Note that, the order of each non-zero entry of \(x_{r}\) is the same as the order of the entry lying in the same place of \(g_{b,r}(v)\) for \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\). We also put \[\varphi_{r}(i):=\operatorname{ord}\phi_{r}(i).\]
2. Suppose that \(k=0\). We define \[x_{r}:=\operatorname{diag}(1,\varpi^{r},\dots,\varpi^{r(n-1)},\varpi^{r},\dots,\varpi^{rn})\in\operatorname{GSp}(V).\] As in (1), we define \(\tau\) and \(\phi_{r}\) by the formula \(x_{r}=\sum_{i=1}^{2n}\phi_{r}(i)E_{i,\tau(i)}.\) Moreover, we put \[\varphi_{r}(i):=\operatorname{ord}\phi_{r}(i).\]
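For instance, unwinding the formulas in (1) for \(n=4\) and \(k=1\) gives
\[(\tau(1),\ldots,\tau(8))=(1,5,6,2,7,3,4,8),\]
so \(x_{r}\) is the monomial matrix whose non-zero entries sit at the positions \((i,\tau(i))\) with coefficients \(\phi_{r}(i)\). Similarly, taking \(n=2\) in (2) we simply get \(x_{r}=\operatorname{diag}(1,\varpi^{r},\varpi^{r},\varpi^{2r})\).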
The following is an analogue of [1, Proposition 6.15].
**Proposition 4.2.11**.: _Suppose that \(r+k\geq m+1\). Then \(X_{w_{r}}^{m}(b)_{\mathcal{L}}\) is contained in the higher Schubert cell \(Ix_{r}I/I^{m}\subset\operatorname{GSp}(V)/I^{m}\)._
Proof.: We only prove the case where \(n\) is even and \(k=1\) (other cases can be proved in the same way). We may assume that \(m=0\). Take an element \(g_{b,r}(v)I\in X_{w_{r}}^{0}(b)_{\mathcal{L}}\), where \(v\in\mathcal{L}_{b}^{\operatorname{symp}}\). We will show that \(g_{b,r}(v)I\subset Ix_{r}I\). First, we will modify \(g_{b,r}(v)=(g_{i,j})_{1\leq i,j\leq 2n}\) by multiplying by elements of \(I\) from the left. We will modify the \(\tau(1),\dots,\tau(n)\)-th columns in this order. More precisely, for the \(\tau(j)\)-th column (\(1\leq j\leq n\)), we will use the row elementary transformations \(\operatorname{row}_{j\to i}\) to eliminate the \((i,\tau(j))\)-th entry of \(g_{b,r}(v)\) where
\[2n+1-j\geq i>j.\]
Here, the order of vanishing is as follows:
\[(2,\tau(1)),\dots,(2n,\tau(1)),(3,\tau(2)),\dots,(2n-1,\tau(2)),\dots.\]
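For instance, unwinding the indices for \(n=2\) (so \(\tau(1)=1\) and \(\tau(2)=3\)), the entries are eliminated in the order
\[(2,1),\,(3,1),\,(4,1),\,(3,3).\]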
A priori, it is non-trivial that we can proceed with such modifications. The main problem is the order of entries: since we should use elementary transformations corresponding to elements in \(I\), we should know the order of the \((j,\tau(j))\)-th entries to execute the above modification in each step. Such a problem is settled in the next paragraph. Once we can overcome such a problem, the above modification is realized by multiplication from the left by elements in \(I\), and one can show that the above modifications do not affect already modified entries: \(\operatorname{row}_{j\to i}\) itself does not affect already vanished entries since the \((j,\tau(l))\)-th entry for \(1\leq l\leq i-1\) has already vanished, and the counterpart \(\operatorname{row}_{2n+1-i\to 2n+1-j}\) clearly does not affect already vanished entries since the elimination of the \((2n+1-j,\tau(j))\)-th entry takes place in the last step dealing with the \(\tau(i)\)-th column. Note that, if we have eliminated the above entries for \(j=1,\dots,\tau(l)\), then the \((i,j)\)-entries with
\[i>\max(2n-l,\tau^{-1}(j)) \tag{35}\]
automatically vanish since our modifications are made in \(\operatorname{GSp}(\breve{K})\). In particular, after such a modification, the \((i,j)\)-th entries for \(i>\tau^{-1}(j)\) and \(j=\tau(1),\dots,\tau(l)\) have vanished.
In the following, we will verify that such modifications work. By the definition of \(g_{b,r}(v)\), the order of the \((i,\tau(i))\)-th entry of \(g_{b,r}(v)\) equals \(\varphi_{r}(i)\), and the transformation modifying the \(\tau(i)\)-th column clearly preserves the order of the \((i+l,\tau(i+l))\)-th entry for odd \(l\). If the order of the \((i+2,\tau(i+2))\)-th entry coincides with \(\varphi_{r}(i+2)\) after modifying the \(\tau(i)\)-th column, then we can proceed with the elimination. (Here, we suppose that \(i\leq n-2\). Otherwise, we can proceed with the elimination without doing anything. In particular, this process is already done when \(n=2\).) We suppose that \(i=2i^{\prime}+1\) is odd (the even case can be proved in the same way). Consider the matrix \(\bar{g}=(\bar{g}_{s,t})\in M_{n}(\overline{k})\) given by
\[\bar{g}_{s,t}=\left\{\begin{array}{ll}\text{the image of $\phi_{r}(t)^{-1}g_{2s-1, \tau(t)}$,}&\text{if $1\leq s\leq n^{\prime}$,}\\ \text{the image of $\phi_{r}(t)^{-1}g_{2s,\tau(t)}$}&\text{if $n^{\prime}+1\leq s \leq n$.}\end{array}\right.\]
Then by the same argument as in [1, Lemma 6.16], we can show that the determinant of the upper left \((s\times s)\)-minor is non-zero. For \(1\leq i^{\prime},j^{\prime}\leq n\), the row elementary transformation \(\operatorname{row}_{2i^{\prime}-1\to 2j^{\prime}-1}\) on \(g_{b,r}(v)\) corresponds to the row elementary transformation \(\operatorname{row}_{i^{\prime}\to j^{\prime}}\) on \(\bar{g}\), and such transformations preserve the determinant of the upper left \((i\times i)\)-minor of \(\overline{g}\) for \(1\leq i\leq n\). Similarly, one can show that all the row elementary transformations in our process do not affect the upper left \((i\times i)\)-minor of \(\overline{g}\) for \(1\leq i\leq n\). Therefore, after modifying the \(\tau(i)\)-th column, the upper left \((\frac{i+3}{2}\times\frac{i+3}{2})\) submatrix becomes an upper triangular matrix whose determinant is non-zero, so the \((\frac{i+3}{2},\frac{i+3}{2})\)-th entry is non-zero. This means that the order of the \((i+2,\tau(i+2))\)-th entry is the same as \(\varphi_{r}(i+2)\), and we can proceed with the modification.
Now we have successfully modified \(\tau(1),\ldots,\tau(n)\)th columns. Since the resulting matrix \(H=(H_{i,j})\) is an element in \(\operatorname{GSp}(V)\) (see (35)), we have
\[H_{i,j}=0\quad\text{ for }i>\tau^{-1}(j),1\leq i,j\leq 2n. \tag{36}\]
We also have
\[H\in G_{x,0}\cdot\operatorname{diag}(1,1,\ldots,\varpi^{n},\varpi^{n})\cdot \operatorname{diag}(1,\varpi^{r},\ldots,\varpi^{(n-1)r},\varpi^{r},\ldots, \varpi^{nr}) \tag{37}\]
since so is \(g_{b,r}(v)\) and the above row transformations preserve the right-hand-side.
Finally, we will modify \(H\) to show \(H\in Ix_{r}I\). Basically, we use column elementary transformations to modify each row (more precisely, the 1st, \(\ldots\), \(n\)-th rows) (note that the problem of the order of the \((j,\tau(j))\)-th entries as in the last modification does not occur since we already know the conditions (36) and (37), and our modification preserves the determinant of matrices). However, before that, we should treat exceptional entries (located in the 1st, \(\ldots\), \(n\)-th rows) which cannot be eliminated by column elementary transformations. Exceptional entries are the \((4l+3,2l)\)-th entries (\(1\leq 4l+3\leq n\)). Note that there are no exceptional entries when \(n=2\). To eliminate these entries, we can use the row elementary transformations \(\operatorname{row}_{4l+4\to 4l+3}\) since the order of the \((4l+3,2l)\)-th entry is greater than the order of the \((4l+4,2l)\)-th entry. Note that such transformations do not affect the other \((4l^{\prime}+3,2l^{\prime})\)-th entries, and preserve the conditions (36) and (37). Let \(H^{\prime}\) be the resulting matrix. We will modify the \(1,\ldots,n\)-th rows of \(H^{\prime}\) in this order. In the \(i\)-th row, if \(i\not\equiv 3\mod 4\) (resp. \(i\equiv 3\mod 4\)), first we will eliminate the \((i,j)\)-th entries for
\[j \in\{1,\ldots,2n\}\setminus\{\tau(1),\ldots,\tau(i),2n+1-\tau(1), \ldots 2n+1-\tau(i)\}\] \[(\text{resp.}\,j \in\{1,\ldots,2n\}\setminus\{\tau(1),\ldots,\tau(i),2n+1-\tau(1), \ldots 2n+1-\tau(i),\frac{i-3}{2}\}).\]
by using \(\operatorname{col}_{\tau(i)\to j}\). Obviously, \(\operatorname{col}_{\tau(i)\to j}\) itself only affects the \((i,j)\)-th entry. On the other hand, the counterpart \(\operatorname{col}_{2n+1-j\to 2n+1-\tau(i)}\) does not affect already modified entries, since \((i,2n+1-\tau(i))\) is not modified yet.
After that, we modify the \((i,2n+1-\tau(i))\)-th entry by using \(\operatorname{col}_{\tau(i)\to 2n+1-\tau(i)}\), which only affects the \((i,2n+1-\tau(i))\)-th entry. Finally, by multiplying by diagonal matrices in \(I\) from the right (or left), we have modified them to \(x_{r}\). It finishes the proof of \(g_{b,r}(v)\in Ix_{r}I\).
### Structure of \(\mathcal{L}_{b}^{\text{symp}}\), \(n:\) even case
In this subsection, we suppose that \(n\) is even.
**Definition 4.3.1**.: We put
\[\overline{V}:=\begin{cases}\mathcal{L}/\varpi\mathcal{L}&\text{ if }k=0,\\ \mathcal{L}/F(\mathcal{L})&\text{ if }k=1.\end{cases}\]
There exists the natural symplectic form \(\langle,\rangle\) on \(\overline{V}\) (cf. Remark 4.2.9). Here, \(F\colon\overline{V}\to\overline{V}\) is the Frobenius morphism over \(\mathbb{F}_{q^{2}}\) (i.e. the \(q^{2}\)-th power map). Note that, the image of
\[e_{i}\begin{cases}(i\equiv 1\mod n_{0})&\text{(if }1\leq i\leq n)\\ (i\equiv 0\mod n_{0})&\text{(if }n+1\leq i\leq 2n)\end{cases}\]
form a basis of \(\overline{V}\). In the following, we regard \(\overline{V}\) as \(\overline{\mathbb{F}}_{q}^{\oplus n^{\prime}}\) via the above basis.
Moreover, we define \(\overline{w}\in\operatorname{GSp}(\overline{V})\) by
\[\overline{w}=\left(\begin{array}{cccc|cccc}1&&\text{\text{\Large$\mathsf{0}$} }&&-1&&\\ &\ddots&&&&&\text{\text{\Large$\mathsf{0}$}}&&\\ &&1&&&&&\\ \hline&&&&&&&1&&\\ &&&&&&&\ddots&\\ &\text{\Large$\mathsf{0}$}&&&&&\text{\Large$\mathsf{0}$}&&1\\ &&&1&&&&\end{array}\right).\]
We also denote the upper-half Borel subgroup of \(\operatorname{GSp}(\overline{V})\) by \(\overline{B}\), and its unipotent radical by \(\overline{U}\). Note that, \(\overline{B}\) and \(\overline{U}\) are \(\overline{\sigma}\)-stable, where \(\overline{\sigma}\) is the Frobenius morphism over \(\mathbb{F}_{q^{n_{0}}}\).
We put
\[\overline{\mathcal{L}_{b}^{\operatorname{symp}}}:=\{v\in\overline{V}\mid \langle v,\overline{\sigma}^{i}(v)\rangle=0\,(1\leq i\leq n^{\prime}-1), \langle v,\overline{\sigma}^{n^{\prime}}(v)\rangle\neq 0\}.\]
**Lemma 4.3.2**.: _We define the equivalence relation \(\sim\) on \(\overline{\mathcal{L}_{b}^{\operatorname{symp}}}\) by_
\[v\sim v^{\prime}\Leftrightarrow v\in\overline{\mathbb{F}}_{q}^{\times}\cdot v ^{\prime}.\]
_Then we have an isomorphism of schemes over \(\overline{\mathbb{F}}_{q}\)_
\[\overline{\mathcal{L}_{b}^{\operatorname{symp}}}/\sim\,\simeq X_{\overline{ w}}^{(\overline{B})}. \tag{38}\]
_Here, we equip the left-hand side of (38) with the scheme structure by the locally closed embedding_
\[\overline{\mathcal{L}_{b}^{\operatorname{symp}}}/\sim\,\hookrightarrow\, \mathbb{P}(\overline{V}),\]
_and the right-hand side of (38) is the classical Deligne-Lusztig variety for \(\operatorname{GSp}_{2n^{\prime}}\) over \(\mathbb{F}_{q^{n_{0}}}\). Note that, \(\overline{\mathcal{L}_{b}^{\operatorname{symp}}}/\sim\) is naturally identified with the subset of \(\overline{\mathcal{L}_{b}^{\operatorname{symp}}}\) consisting of \((v_{1},\ldots,v_{2n})\in\overline{\mathcal{L}_{b}^{\operatorname{symp}}}\) with \(v_{1}=1\). We denote this subset by \(\mathbb{P}(\overline{\mathcal{L}_{b}^{\operatorname{symp}}})\)._
Proof.: We only prove the case where \(k=1\) (the case where \(k=0\) can be proved in the same way). We define the map \(\overline{g}\colon\overline{\mathcal{L}_{b}^{\operatorname{symp}}}\to \operatorname{GSp}(\overline{V})\) as follows. We put
\[\overline{G}_{1}(v):=-\frac{\alpha_{v}}{\overline{\sigma}^{-1}(\alpha_{v})}( \overline{\sigma}^{-1}(v)-\frac{\langle\overline{\sigma}^{-1}(v),\overline{ \sigma}^{n^{\prime}}(v)\rangle}{\alpha_{v}}v),\]
where we put \(\alpha_{v}:=\langle v,\overline{\sigma}^{n^{\prime}}(v)\rangle\). We also put
\[\overline{G}_{i+1}(v):=\frac{\alpha_{v}}{\overline{\sigma}^{-1}(\alpha_{v})}(\overline{\sigma}^{-1}(\overline{G}_{i}(v))-\frac{\langle\overline{\sigma}^{-1}(\overline{G}_{i}(v)),\overline{\sigma}^{n^{\prime}}(v)\rangle}{\alpha_{v}}v)\ \ (i=1,\ldots,n^{\prime}-2).\]
Then for any \(v\in\overline{\mathcal{L}_{b}^{\operatorname{symp}}}\), we can define \(\overline{g}(v)\) by
\[\overline{g}(v):=(v,\overline{\sigma}(v),\ldots,\overline{\sigma}^{n^{\prime} -2}(v),\overline{\sigma}^{n^{\prime}-1}(v),\overline{G}_{1}(v),\overline{G}_{ 2}(v),\ldots,\overline{G}_{n^{\prime}-1}(v),\overline{\sigma}^{n^{\prime}}(v)).\]
By the same proof as in Theorem 2.3.6 (1), \(\overline{g}(v)\) induces the desired isomorphism. By definition of \(\overline{g}\), this is an isomorphism of schemes.
**Definition 4.3.3**.:
1. Suppose that \(k=0\). We put \[\mathcal{L}_{h}:=(\mathcal{O}/\mathfrak{p}^{h})^{\oplus 2n},\] which is a quotient of \(\mathcal{L}\). We also put \[\mathcal{L}_{b,h}^{\operatorname{symp}}:=\left\{v=(\overline{v_{1}},\ldots,\overline{v_{2n}})\in\mathcal{L}_{h}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod\mathfrak{p}^{h}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\in(\mathcal{O}/\mathfrak{p}^{h})^{\times}\end{array}\right.\right\}.\]
2. Suppose that \(k=1\). For any \(h\geq 1\), we put \[\mathcal{L}_{h}:=(\mathcal{O}/\mathfrak{p}^{h})^{\oplus n}\oplus\mathfrak{p}/ \mathfrak{p}^{h+1}\oplus\mathcal{O}/\mathfrak{p}^{h}\oplus\cdots\oplus \mathfrak{p}/\mathfrak{p}^{h+1}\oplus\mathcal{O}/\mathfrak{p}^{h},\] which is a quotient of \(\mathcal{L}\). We also put \[\mathcal{L}_{b,h}^{\operatorname{symp}}:=\left\{v=(\overline{v_{1}},\ldots \overline{v_{2n}})\in\mathcal{L}_{h}\left|\begin{array}{l}\langle v,F^{i}(v )\rangle=0\mod\mathfrak{p}^{h+[\frac{i}{2}]}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\in\varpi^{n^{\prime}}(\mathcal{O}/\mathfrak{p}^{h+ n^{\prime}})^{\times}\end{array}\right.\right\}.\]
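As an illustration, spelling out the case \(n=2\), \(h=1\) of (2), we have
\[\mathcal{L}_{1}=(\mathcal{O}/\mathfrak{p})^{\oplus 2}\oplus\mathfrak{p}/\mathfrak{p}^{2}\oplus\mathcal{O}/\mathfrak{p},\]
and \(\mathcal{L}_{b,1}^{\operatorname{symp}}\) consists of those \(v\) with \(\langle v,F(v)\rangle=0\mod\mathfrak{p}\) and \(\langle v,F^{2}(v)\rangle\in\varpi(\mathcal{O}/\mathfrak{p}^{2})^{\times}\).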
**Remark 4.3.4**.: The above set \(\mathcal{L}_{b,h}^{\operatorname{symp}}\) admits a natural scheme structure of finite type over \(\overline{k}\) (resp. perfectly of finite type over \(\overline{k}\)) if \(\operatorname{char}K>0\) (resp. \(\operatorname{char}K=0\)). Indeed, such a structure for \(\mathcal{L}_{h}\) is given in the same way as the definition of \(L_{[r,s)}\) in [1, p.1815]. Moreover, since the symplectic form \(\langle v,F^{i}(v)\rangle\) defines a function on a scheme \(\mathcal{L}_{h}\), we can regard \(\mathcal{L}_{b,h}^{\operatorname{symp}}\) as a locally closed subscheme of \(\mathcal{L}_{h}\) naturally. We put
\[\mathcal{L}_{h}^{\prime}:=\ker(\mathcal{L}_{h}\to\overline{V}).\]
Then \(\mathcal{L}_{h}^{\prime}\) also admits an affine space (resp. \(\operatorname{perfection}\) of an affine space) structure. We can identify \(\overline{V}\times_{\overline{k}}\mathcal{L}_{h}^{\prime}\) (resp. \(\overline{V}^{\operatorname{perf}}\times_{\overline{k}_{q}}\mathcal{L}_{h}^{\prime}\), where \(\operatorname{perf}\) is the perfection) with \(\mathcal{L}_{h}\) by the natural map
\[(\overline{v},w)\mapsto[\overline{v}]+w.\]
Here, [-] is a lift defined by the same way as in Definition 4.5.4.(3).
**Lemma 4.3.5**.:
1. _For any_ \(h\geq 1\)_, the natural morphism_ \[\mathcal{L}_{b,h}^{\mathrm{symp}}\to\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\] _is a surjection._
2. _Consider the embedding_ \[\mathcal{L}_{b,h}^{\mathrm{symp}}\times_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}}\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}\subset\mathcal{L}_{h}^{\prime}\times_{\overline{\mathbb{F}}_{q}}\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\times_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}}\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}},\] _where the right-hand side is an affine space_ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}}\) _over_ \(\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}\) _(resp. the perfection of an affine space_ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}}^{\mathrm{perf}}\) _over_ \(\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}\)_) if_ \(\mathrm{char}\,K>0\) _(resp._ \(\mathrm{char}\,K=0\)_). Then this embedding is given by the intersection of hyperplane sections of_ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}}\) _(resp._ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}}^{\mathrm{perf}}\)_). In particular,_ \(\mathcal{L}_{b,h}^{\mathrm{symp}}\times_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}}\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}\) _is isomorphic to an affine space (resp. the perfection of an affine space) over_ \(\overline{\mathcal{L}_{b}^{\mathrm{symp}}}^{\mathrm{perf}}\)_._
Proof.: We only prove the case where \(k=1\) (the case where \(k=0\) can be proved in the same way). It suffices to show (2). For simplicity, we suppose that \(h=1\). The general case can be shown similarly, by considering the projections \(\mathcal{L}_{h}\to\mathcal{L}_{h-1}\) inductively (see the proof of Proposition 4.6.2 for the related argument). We define the coordinates \(x_{1,0},\ldots,x_{2n,0}\) of \(\mathcal{L}_{b,1}^{\mathrm{symp}}\) as
\[x_{i,0}\text{ is the image in }\overline{\mathbb{F}}_{q}\text{ of }\left\{\begin{aligned} &\varpi^{-1}\times\text{ $i$-th component}&&\text{if $i\geq n+1$ and $i$ is odd},\\ & i\text{-th component}&&\text{otherwise}.\end{aligned}\right.\]
Then \(\mathcal{L}_{b,1}^{\mathrm{symp}}\subset\mathbb{A}_{x_{1,0},x_{2,0},\ldots,x_{2n-1,0},x_{2n,0}}\) is defined by the following.
\(\bullet\)
(39) \[\begin{split}&\langle v,F^{2i-1}(v)\rangle\\ &=\varpi^{i}\left(\sum_{1\leq j\leq n,j:\text{ odd}}(x_{j,0}x_{2n-j,0}^{q^{2i-1}}+x_{2n-j,0}x_{j,0}^{q^{2i-1}})-\sum_{1\leq j\leq n,j:\text{ even}}(x_{j,0}x_{2n-j+2,0}^{q^{2i-1}}+x_{2n-j+2,0}x_{j,0}^{q^{2i-1}}) \right)\\ &=0\end{split}\] for \(i=1,\ldots,n^{\prime}.\)
\(\bullet\)
(40) \[\begin{split}&\langle v,F^{2i}(v)\rangle\\ &=\varpi^{i}\left(\sum_{1\leq j\leq n,j:\text{ odd}}(x_{j,0}x_{2n+1-j,0}^{q^{2i}}-x_{2n+1-j,0}x_{j,0}^{q^{2i}})\right)\\ &=0\end{split}\] for \(i=1,\ldots,n^{\prime}-1.\)
\[\begin{array}{l}\langle v,F^{n}(v)\rangle\\ =\varpi^{n^{\prime}}\left(\sum_{1\leq j\leq n,j:\;\text{odd}}(x_{j,0}x_{2n+1-j,0}^{q^{n}}-x_{2n+1-j,0}x_{j,0}^{q^{n}})\right)\\ \neq 0\end{array} \tag{41}\]
On the other hand, \(\overline{\mathcal{L}_{b}^{\text{symp}}}\subset\mathbb{A}_{x_{1,0},x_{3,0},\ldots,x_{n-1,0},x_{n+2,0},\ldots,x_{2n,0}}\) is defined by equations (40) and (41). Therefore, we should solve the equations (39). We define \(v^{\prime},v^{\prime\prime}\in W:=\overline{\mathbb{F}}_{q}^{\oplus 2n}\) by
\[v^{\prime} := \sum_{i=1}^{n^{\prime}}x_{2i-1,0}e_{2i-1}+\sum_{i=n^{\prime}+1}^ {n}x_{2i,0}e_{2i},\] \[v^{\prime\prime} := \sum_{i=1}^{n^{\prime}}x_{2i,0}e_{2i}+\sum_{i=n^{\prime}+1}^{n}x_ {2i-1,0}e_{2i-1}.\]
We also define the morphism \(E\colon W\to W\) by
\[\overline{\sigma}\circ\left(\begin{array}{ccc}\left(\begin{array}{cc}0&1 \\ 1&0\end{array}\right)&&\\ &&\ddots&&\\ &&&\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\end{array}\right),\]
where \(\overline{\sigma}\) is the \(q\)-th power map. We define the coordinates \(\eta_{1},\ldots,\eta_{n}\) by
\[\eta_{i}:=\langle v^{\prime\prime},E^{-(2i-1)}(v^{\prime})\rangle.\]
By definition of \(\overline{\mathcal{L}_{b}^{\text{symp}}}\) (cf. Definition 4.3.1), \(\eta_{1},\ldots,\eta_{n}\) give a linear coordinate transformation of \(x_{2,0},\ldots,x_{n,0},x_{n+1,0},\ldots,x_{2n-1,0}\) over \(\mathcal{O}_{\overline{\mathcal{L}_{b}^{\text{symp}}}}^{\text{perf}}\). We define the matrix \(Q=(q_{i,j})_{1\leq i,j\leq n}\in\operatorname{GL}_{n}(\mathcal{O}_{\overline{ \mathcal{L}_{b}^{\text{symp}}}}^{\text{perf}})\) by
\[{}^{t}\!(E(v^{\prime}),E^{3}(v^{\prime}),\ldots,E^{2n-1}(v^{\prime}))=Q\,{}^{t}\!(E^{-1}(v^{\prime}),E^{-3}(v^{\prime}),\ldots,E^{-(2n-1)}(v^{\prime})).\]
Note that, to define \(Q\), we need to take the perfection. We will show that there exists
\[u=\left(\begin{array}{cccc}1&&&\text{\Large$0$}\\ *&\ddots&&\\ \vdots&\ddots&\ddots&\\ *&\cdots&*&1\end{array}\right)\in\operatorname{GL}_{n}(\mathcal{O}_{\overline{\mathcal{L}_{b}^{\text{symp}}}}^{\text{perf}}),\]
such that \(Q^{\prime}:=uQ\) is a matrix whose \((i,j)\)-entries \(q^{\prime}_{i,j}\) (\(i>n+1-j,j\geq n^{\prime}\)) are \(0\). Since
\[(v,F(v),\ldots,F^{2n-1}(v))\in\operatorname{GL}_{2n}(\mathcal{O})\]
for any geometric point of \(\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\), we have \(q_{1,n}\in\mathcal{O}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}}^{\mathrm{perf }\times}\). Therefore, by using the row transformation \(u^{(1)}=(u^{(1)}_{i,j})\), we can eliminate \(q_{2,n},\ldots,q_{n,n}\). We denote the resulting matrix by \(Q^{(1)}=(q^{(1)}_{i,j})\). We have
\[u^{(1)}_{21}E^{1}(v^{\prime})+E^{3}(v^{\prime})=\sum_{j=1}^{n-1}q^{(1)}_{2,j}E ^{-(2j-1)}(v^{\prime}).\]
For the same reason as above, \(q^{(1)}_{2,n-1}\in\mathcal{O}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}}}^{\mathrm{perf},\times}\). Repeating these arguments, we obtain the desired transformation \(u=u^{(n-1)}\cdots u^{(1)}\). The equations (39) are equivalent to
\[\eta^{q^{2i-1}}_{i}+\sum_{j=1}^{n}q_{i,j}\eta_{j}=0\]
for \(1\leq i\leq n^{\prime}\) (cf. the arguments in Lemma 4.2.6). By using the \(n^{\prime}\times n\)-upper half part \(u^{\mathrm{up}}\) of \(u\), we can rewrite the equations as
\[(u^{\mathrm{up}}\,^{t}\!(\eta^{q}_{1},\ldots,\eta^{q^{2n^{\prime}-1}}_{n^{ \prime}}))_{i}+\sum_{j=1}^{n+1-i}q^{\prime}_{i,j}\eta_{j}=0,\]
where \(q^{\prime}_{i,n+1-i}\in\mathcal{O}_{\overline{\mathcal{L}_{b}^{\mathrm{symp}}} }^{\times}\) for \(1\leq i\leq n^{\prime}\). We note that the first term of the left-hand side only contains polynomials of \(\eta_{1},\ldots\eta_{i}\). Therefore, we can solve the \(i\)-th equation with respect to \(\eta_{n+1-i}\) for \(i=1,\ldots,n^{\prime}\). It finishes the proof.
### Structure of \(\mathcal{L}_{b}^{\mathrm{symp}}\), \(n:\) odd case
In this subsection, we suppose that \(n\) is odd.
**Definition 4.4.1**.: We put
\[\overline{V}:=\begin{cases}\mathcal{L}/\varpi\mathcal{L}&\text{if $k=0$},\\ \mathcal{L}/F(\mathcal{L})&\text{if $k=1$}.\end{cases}\]
as before. Then the image of
\[e_{i}\ (i\equiv 1\mod n_{0},1\leq i\leq 2n)\]
form a basis of \(\overline{V}\).
1. If \(k=0\), then we can equip \(\overline{V}\) with the natural symplectic form \(\langle,\rangle\) associated with \(\Omega\). By the same way as in Definition 4.3.1, we define \[\overline{w}:=\left(\begin{array}{ccccc}1&&\mbox{\Large$\bigcirc$}&1&&\\ &\ddots&&\mbox{\Large$\bigcirc$}&\\ &&1&&\\ \hline&&&&1&&\\ &&&&\ddots&\\ \mbox{\Large$\bigcirc$}&&\mbox{\Large$\bigcirc$}&1\\ &&&&1\\ \end{array}\right)\in\mathrm{GSp}(\overline{V}).\]
We denote the upper-half Borel subgroup of \(\mathrm{GSp}(\overline{V})\) by \(\overline{B}\), and its unipotent radical by \(\overline{U}\). We denote the Frobenius morphism over \(\mathbb{F}_{q}\) by \(\overline{\sigma}\). Then \(\overline{B}\) and \(\overline{U}\) are \(\overline{\sigma}\)-stable. We also put
\[\overline{\mathcal{L}_{b}^{\mathrm{symp}}}:=\{v\in\overline{V}\mid\langle v, \overline{\sigma}^{i}(v)\rangle=0\ (1\leq i\leq n^{\prime}-1),\langle v,\overline{\sigma}^{n^{\prime}}(v) \rangle\neq 0\}\]
as before.
2. If \(k=1\), we equip \(\overline{V}\) with a inner product \((,)\) associated with \[H:=\left(\begin{array}{ccc}&&1\\ &\iddots&\\ 1&&\end{array}\right).\] Since \(H\) is a Hermitian matrix, we can consider the algebraic group \(\mathrm{GU}_{n}\) over \(\mathbb{F}_{q}\), whose \(R\)-valued point is given by \[\mathrm{GU}_{n}(R):=\{(g,\lambda)\in\mathrm{GL}_{n}\times\mathbb{G}_{m}(R \otimes_{\mathbb{F}_{q}}\mathbb{F}_{q^{2}})\mid^{t}c(g)Hg=\lambda H\}\] for any \(\mathbb{F}_{q}\)-algebra \(R\) (note that \(\lambda\in\mathbb{G}_{m}(R)\) naturally). Here, \(c(g)\) is the conjugation of \(g\) induced by the non-trivial element of the Galois group \(\mathbb{F}_{q^{2}}/\mathbb{F}_{q}\). For \(\mathbb{F}_{q^{2}}\)-algebra \(R\), we have \[\mathrm{GU}_{n}(R) = \{(g_{1},g_{2},\lambda)\in\mathrm{GL}_{n}\times\mathrm{GL}_{n} \times\mathbb{G}_{m}(R)\mid^{t}(g_{2},g_{1})H(g_{1},g_{2})=(\lambda,\lambda)H\}\] \[\simeq \{(g,\lambda)\in\mathrm{GL}_{n}\times\mathbb{G}_{m}(R)\},\] i.e. we have \(\mathrm{GU}_{n,\mathbb{F}_{q^{2}}}\simeq\mathrm{GL}_{n,\mathbb{F}_{q^{2}}} \times\mathbb{G}_{m,\mathbb{F}_{q^{2}}}\). The last isomorphism is given by \((g_{1},g_{2},\lambda)\mapsto(g_{1},\lambda)\). We define \[\overline{w}=(\left(\begin{array}{ccccc}&&&1\\ 1&&\mbox{\Large$\bigcirc$}&&\\ &\ddots&&\mbox{\Large$\bigcirc$}&\\ &&&1&\\ \hline&&&1&\\ &&&\ddots&\\ \mbox{\Large$\bigcirc$}&&&&&1\end{array}\right),1)\in\mathrm{GU}(\overline{ \mathbb{F}}_{q}),\] where the grids are lined between the \(\lceil\frac{n}{2}\rceil\)-th row (resp. column) and the \(\lceil\frac{n}{2}\rceil+1\)-th row (resp. column). We use the same rule of grids in this subsection. We also denote the first component of \(\overline{w}\) by \(\overline{w}_{1}\). Note that \(\overline{w}\) is \(\sigma_{\mathrm{GU}_{n}}\)-Coxeter element in the sense of [13, Subsection 2.8]. Here, \(\sigma_{\mathrm{GU}_{n}}\) is the Frobenius morphism associated with \(\mathrm{GU}_{n}\). We can also define the upper-half Borel subgroup of \(\mathrm{GU}_{n}\) by \(\overline{B}\), and its unipotent radical by \(\overline{U}\), which is \(\sigma_{\mathrm{GU}_{n}}\)-invariant We also denote the \(q\)-th power map by \(\overline{\sigma}\). We put
\[\overline{\mathcal{L}_{b}^{\mathrm{symp}}}:=\{v\in\overline{V}\mid(v, \overline{\sigma}^{i}(v))=0\ (1\leq i<n,\ i\colon\mathrm{odd}),(v,\overline{\sigma}^{n}(v))\neq 0\}.\]
**Lemma 4.4.2**.: _We define the equivalence relation \(\sim\) on \(\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\) by_
\[v\sim v^{\prime}\Leftrightarrow v\in\overline{\mathbb{F}}_{q}^{\times}\cdot v ^{\prime}.\]
_Then we have an isomorphism of schemes over \(\overline{\mathbb{F}}_{q}\)_
\[\overline{\mathcal{L}_{b}^{\mathrm{symp}}}/\sim\,\simeq X_{\overline{w}}^{( \overline{B})}.\]
_Here, \(X_{\overline{w}}^{(\overline{B})}\) is the classical Deligne-Lusztig variety for \(\mathrm{GSp}_{2n}\) (resp. \(\mathrm{GU}_{n}\)) if \(k=0\) (resp. \(k=1\)). Note that, we don't consider \(\mathbb{F}_{q}\)-structure for \(X_{\overline{w}}^{(\overline{B})}\). Here, the scheme structure is defined as in Lemma 4.3.2. We put \(\mathbb{P}(\overline{\mathcal{L}_{b}^{\mathrm{symp}}})\) in the same way as in Lemma 4.3.2._
Proof.: The case where \(k=0\) follows from Lemma 4.3.2. We prove the case where \(k=1\). As in the proof of Lemma 4.3.2, for any \(v\in\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\), we put
\[\overline{G}_{1}(v):=\frac{\alpha_{v}}{\overline{\sigma}^{-1}(\alpha_{v})} \overline{\sigma}^{-1}(v),\]
where we put \(\alpha_{v}:=(v,\overline{\sigma}^{n}(v))\). We also put
\[\overline{G}_{i+1}(v):=\begin{cases}\frac{\alpha_{v}}{\overline{ \sigma}^{-1}(\alpha_{v})}\overline{\sigma}^{-1}(\overline{G}_{i}(v))&\text{if $i$ is even},\\ \frac{\alpha_{v}}{\overline{\sigma}^{-1}(\alpha_{v})}\left(\overline{ \sigma}^{-1}(\overline{G}_{i}(v))-\frac{(\overline{\sigma}^{-1}(\overline{G}_ {i}(v)),\overline{\sigma}^{n}(v))}{\alpha_{v}}v\right)&\text{if $i$ is odd},\end{cases}\]
for \(1\leq i\leq n-2\). We put
\[\overline{g}_{1}(v) = (v,\overline{\sigma}^{2}(v),\ldots,\overline{\sigma}^{n-1}(v), \overline{G}_{2}(v),\ldots,\overline{G}_{n-1}(v)),\] \[\overline{g}_{2}(v) = (\overline{\sigma}(v),\overline{\sigma}^{3}(v),\ldots,\overline {\sigma}^{n-2}(v),\overline{G}_{1}(v),\overline{G}_{3}(v),\ldots,\overline{G }_{n-2}(v),\overline{\sigma}^{n}(v)).\]
Then we have \(\overline{g}_{2}(v)H\overline{g}_{1}(v)=\alpha H\). Therefore,
\[\overline{g}(v):=(\overline{g}_{1}(v),\alpha)=(\overline{g}_{1}(v),\overline{ g}_{2}(v),\alpha)\in\mathrm{GU}_{n}(\overline{\mathbb{F}}_{q}).\]
We have
\[\overline{g}_{1}(v)\overline{w} = (\overline{\sigma}^{2}(v),\ldots,\overline{\sigma}^{n-1}(v),v, \overline{G}_{2}(v),\ldots,\overline{G}_{n-1}(v)),\] \[\sigma_{\mathrm{GU}_{n}}(\overline{g}_{2}(v)) = (\overline{\sigma}^{2}(v),\ldots,\overline{\sigma}^{n-1}(v), \overline{\sigma}(\overline{G}_{1}(v)),\overline{\sigma}(\overline{G}_{3}(v)),\ldots,\overline{\sigma}(\overline{G}_{n-2}(v)),\overline{\sigma}^{n+1}(v)).\]
Since
\[\sigma_{\mathrm{GU}_{n}}(\overline{g}_{1}(v),\overline{g}_{2}(v),\alpha)=( \overline{\sigma}(\overline{g}_{2}(v)),\overline{\sigma}(\overline{g}_{1}(v) ),\overline{\sigma}(\alpha)),\]
we can show that
\[\sigma_{\mathrm{GU}_{n}}(\overline{g}(v))=\overline{g}(v)\overline{w}C(v)\]
such that
\[C(v)=\left(\begin{pmatrix}1&&&&*\\ &\ddots&&\text{\Large$0$}&\\ &&1&&\\ &&&1&0\\ \hline&&&&\frac{\overline{\sigma}(\alpha)}{\alpha}\\ &\text{\Large$0$}&&&\ddots\\ &&&&\frac{\overline{\sigma}(\alpha)}{\alpha}\end{pmatrix},\ \frac{\overline{\sigma}(\alpha)}{\alpha}\right).\]
Therefore, we have a morphism
\[\overline{\mathcal{L}_{b}^{\mathrm{symp}}}\to X_{\overline{w}}^{(\overline{B})};v \mapsto\overline{g}(v)\overline{B}.\]
Clearly,
\[\overline{\mathcal{L}_{b}^{\mathrm{symp}}}/\sim\to X_{\overline{w}}^{( \overline{B})}\]
is injective. Therefore, it suffices to show the surjectivity. Let \(\overline{g}\overline{B}\in X_{\overline{w}}^{(\overline{B})}\). We may assume that \(\overline{g}^{-1}\sigma_{\mathrm{GU}_{n}}(\overline{g})\in\overline{w}\overline{B}\). We put \(\overline{g}=(\overline{g}_{1},\overline{g}_{2},\lambda)\). We also put \(\overline{g}_{1}^{-1}\overline{\sigma}(\overline{g}_{2})=C\), where \(C\in\overline{w}_{1}B_{1}\). Here, \(B_{1}\) is the upper-half Borel subgroup of \(\mathrm{GL}_{n}\). By a similar argument to the proof of Claim 3.1.1, it suffices to show that there exists \(P\in B_{1}(\overline{\mathbb{F}}_{q})\) such that
\[P^{-1}C\overline{\sigma}(P_{\lambda}^{\prime})=C^{\prime},\]
with
\[C^{\prime}:=\left(\begin{array}{cccc|cccc}*&&\mbox{\Large$\bigcirc$}&&*&&&& 0\\ &\ddots&&&&*\\ &&*&&&&*\\ \hline&&&&*&&*\\ &&&&*&&*\\ \hline&&&&*&&\\ &&&&*&0\\ \mbox{\Large$\bigcirc$}&&&&*\end{array}\right).\]
Here, for any \(\lambda\in\overline{\mathbb{F}}_{q}^{\times}\), \(P_{\lambda}^{\prime}\) is the matrix defined by
\[{}^{t}\!P_{\lambda}^{\prime}HP=\lambda H.\]
To this end, we use the following kinds of elementary matrices
\[P=1_{n}+ce_{i,j}.\]
For such a \(P\), we have \(P_{1}^{\prime}=1_{n}-ce_{n+1-j,n+1-i}\). Therefore, the conjugation \(*\to P^{-1}*\sigma(P_{1}^{\prime})\) acts as adding \(-c\) times the \(j\)-th row to the \(i\)-th row, followed by adding \(-\overline{\sigma}(c)\) times the \((n+1-j)\)-th column to the \((n+1-i)\)-th column. We will eliminate entries of \(C\) in three steps. In the following, we use the same notation as in Section 3.
First, we eliminate the following entries:
\[(n-1,n),(2,2),(n-2,n-1),\ldots,(n^{\prime}+2,n^{\prime}+3),(n^{\prime},n^{\prime}),\] \[(n-2,n),(2,3),\ldots,(n^{\prime}+2,n^{\prime}+4),(n^{\prime}-1,n^{\prime})\] \[\vdots\] \[(n^{\prime}+2,n),(2,n^{\prime})\]
Here, for the \(i\)-th line, we will eliminate \((n+1-(i+j),n+1-j),(j+1,j+i)\) (\(1\leq j\leq n^{\prime}-i\)). To eliminate \((n+1-(i+j),n+1-j)\), we use \(\mathrm{row}_{n+1-j\to n+1-(i+j)}\), which is together with \(\mathrm{col}_{j\to i+j}\). Here, \(\mathrm{col}_{j\to i+j}\) does not affect already vanished entries since \((j+1,i+j)\) has not vanished yet. To eliminate \((j+1,j+i)\), we use \(\mathrm{row}_{j+i+1\to j+1}\), which is together with \(\mathrm{col}_{n-(i+j)\to n-j}\). Here, \(\mathrm{col}_{n-(i+j)\to n-j}\) does not affect already vanished entries since \((n-(i+j),n-j)\) is not vanished yet.
In the next step, we eliminate the following entries:
\[(n^{\prime}+1,n^{\prime}+1),(n^{\prime}+1,n^{\prime}+2),(n^{\prime},n^{\prime}+2),(n^{\prime},n^{\prime}+3),\ldots,(3,2n^{\prime}),(2,2n^{\prime}),\] \[(n^{\prime},n^{\prime}+1),(n^{\prime}+1,n^{\prime}+3),(n^{\prime}-1,n^{\prime}+2),(n^{\prime},n^{\prime}+4),\ldots,(4,2n^{\prime}),(2,2n^{\prime}-1),\] \[\vdots\] \[(2,n^{\prime}+1)\]
Here, for the \(i\)-th line, we eliminate \((n^{\prime}+2-i,n^{\prime}+1)\), \((n^{\prime}+2-j,n^{\prime}+i+j)\), and \((n^{\prime}+2-(i+j),n^{\prime}+1+j)\), \((1\leq i\leq n^{\prime}-1,1\leq j\leq n^{\prime}-i)\). To eliminate \((n^{\prime}+2-i,n^{\prime}+1)\), we use \(\operatorname{col}_{n^{\prime}+1-i\to n^{\prime}+1}\), which is together with \(\operatorname{row}_{n^{\prime}+1+i\to n^{\prime}+1}\). Note that \(\operatorname{row}_{n^{\prime}+1+i\to n^{\prime}+1}\) affects only \((n^{\prime}+1,n^{\prime}+1+i)\), which is not vanished yet. To eliminate \((n^{\prime}+2-j,n^{\prime}+i+j)\), we use \(\operatorname{col}_{n^{\prime}+1-j\to n^{\prime}+i+j}\), which is together with \(\operatorname{row}_{n^{\prime}+1+j\to n^{\prime}+2-(i+j)}\). Note that \(\operatorname{row}_{n^{\prime}+1+j\to n^{\prime}+2-(i+j)}\) affects only \((n^{\prime}+2-(i+j),n^{\prime}+1+j)\), which is not vanished yet. To eliminate \((n^{\prime}+2-(i+j),n^{\prime}+1+j)\), we use \(\operatorname{col}_{n^{\prime}+1-(i+j)\to n^{\prime}+1+j}\), which is together with \(\operatorname{row}_{n^{\prime}+1+i+j\to n^{\prime}+1+j}\). Here, \(\operatorname{row}_{n^{\prime}+1+i+j\to n^{\prime}+1+j}\) affects only \((n^{\prime}+2-(j+1),n^{\prime}+i+(j+1))\), which is not vanished yet.
Thirdly, we will eliminate the following entries:
\[(1,n^{\prime}+2),\ldots,(1,n)\]
To eliminate \((1,i)\)\((n^{\prime}+2\leq i\leq n)\), we use \(\operatorname{row}_{i\to 1}\), which is together with \(\operatorname{col}_{n+1-i\to n}\). Here, \(\operatorname{col}_{n+1-i\to n}\) affects only \((n+2-i,n)\), which does not need to be eliminated since \(2\leq n+2-i\leq n^{\prime}+1\). It finishes the proof.
**Definition 4.4.3**.:
1. Suppose that \(k=0\). We define \(\mathcal{L}_{h},\mathcal{L}_{b,h}^{\operatorname{symp}}\) by the same way as in Definition 4.3.3 (1).
2. Suppose that \(k=1\). We put \[\mathcal{L}_{h}:=(\mathcal{O}/\mathfrak{p}^{h})^{\oplus 2n},\] which is a quotient of \(\mathcal{L}\). We also put \[\mathcal{L}_{b,h}^{\operatorname{symp}}:=\left\{v=(\overline{v_{1}},\ldots,\overline{v_{2n}})\in\mathcal{L}_{h}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod\mathfrak{p}^{h+\lfloor\frac{i}{2}\rfloor}\\ \langle v,F^{n}(v)\rangle\in\varpi^{n^{\prime}}(\mathcal{O}/\mathfrak{p}^{h+n^{\prime}})^{\times}\end{array}\right.\right\}.\]
**Lemma 4.4.4**.:
1. _For any_ \(h\geq 1\)_, the natural morphism_ \[\mathcal{L}_{b,h}^{\operatorname{symp}}\to\overline{\mathcal{L}_{b}^{ \operatorname{symp}}}\] _is a surjection._
2. _As in Lemma_ 4.3.5_, consider the natural embedding_ \[\mathcal{L}_{b,h}^{\operatorname{symp}}\times_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}}\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}\subset\mathbb{A}_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}}\ (\operatorname{resp.}\ \mathbb{A}_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}}^{\operatorname{perf}}),\] _if_ \(\operatorname{char}K>0\) _(resp._ \(\operatorname{char}K=0\)_). Then this embedding is given by the intersection of hyperplane sections of_ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}}\) _(resp._ \(\mathbb{A}_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}}^{\operatorname{perf}}\)_). In particular,_ \(\mathcal{L}_{b,h}^{\operatorname{symp}}\times_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}}\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}\) _is isomorphic to an affine space (resp. perfection of an affine space) over_ \(\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{perf}}\)_._
Proof.: It follows from the same proof as in Lemma 4.3.5
### Structure of affine Deligne-Lusztig varieties
In this section, we prove the structure theorem for affine Deligne-Lusztig varieties.
**Definition 4.5.1**.: We define a reductive group \(\overline{G}\) as follows:1
Footnote 1: The field of definition is not essential. Indeed, in the second case, the corresponding Deligne–Lusztig variety is the same as the Deligne–Lusztig varieties for the Weil restriction of \(\overline{G}\) (see Remark 4.5.3)
\[\overline{G}:=\begin{cases}\operatorname{GSp}_{2n}\text{ over }\mathbb{F}_{q}& \text{ if }n\text{ is even and }k=0,\\ \operatorname{GSp}_{n}\text{ over }\mathbb{F}_{q^{2}}&\text{ if }n\text{ is even and }k=1,\\ \operatorname{GSp}_{2n}\text{ over }\mathbb{F}_{q}&\text{ if }n\text{ is odd and }k=0,\\ \operatorname{GU}_{n}\text{ over }\mathbb{F}_{q}&\text{ if }n\text{ is odd and }k=1.\end{cases}\]
We also define \(\overline{w},\overline{U}\) as in Definition 4.3.1 and Definition 4.4.1.
In this section, we prove the following theorem.
**Theorem 4.5.2**.: _Suppose that \(r+k\geq 1\). Let \(b\in\operatorname{GSp}(V)\) be the special representative or the Coxeter representative with \(\kappa(b)=k\). Then we have a decomposition of \(\overline{\mathbb{F}}_{q}\)-schemes_
\[X^{0}_{w_{r}}(b)^{\operatorname{perf}}_{\mathcal{L}}\simeq X^{\overline{B}, \operatorname{perf}}_{\overline{w}}\times\mathbb{A}^{\operatorname{perf}}.\]
_Here, \(\mathbb{A}\) is an affine space over \(\overline{\mathbb{F}}_{q}\) and \(\operatorname{perf}\) means the perfection._
**Remark 4.5.3**.: When \(n\) is even and \(k=1\), the Deligne-Lusztig variety \(X^{\overline{B}}_{\overline{w}}\) is isomorphic to the Deligne-Lusztig variety of \(\overline{G}_{0}:=\operatorname{Res}_{\mathbb{F}_{q^{2}}/\mathbb{F}_{q}} \operatorname{GSp}_{n,\mathbb{F}_{q^{2}}}\) with respect to some \(\sigma_{\overline{G}_{0}}\)-Coxeter element \(\overline{w}_{0}\in\overline{G}_{0}(\overline{\mathbb{F}}_{q})=\operatorname{ GSp}(\overline{V})\times\operatorname{GSp}(\overline{V})\) which corresponds to \(\overline{w}\times 1\).
In the rest of this section, we assume that \(b\) is the special representative with \(\kappa(b)=k\) (by Proposition 4.2.2.(3), the Coxeter representative case is reduced to this case). Since \(X^{m}_{w_{r}}(b)_{\mathcal{L}}\) is contained in an affine Schubert cell, we can calculate the structure of \(X^{m}_{w_{r}}(b)_{\mathcal{L}}\) directly, by using the coordinates of affine Schubert cells ([12, Lemma 4.7]). We want to study such coordinates by using the results on the structure of \(\mathcal{L}^{\operatorname{symp}}_{b}\). However, since representatives of such coordinates have \(\varpi\)-adic expansions of finite length, our \(\mathcal{L}^{\operatorname{symp}}_{b}\) is not suitable (note that, each entry of \(v\in\mathcal{L}^{\operatorname{symp}}_{b}\) has a \(\varpi\)-adic expansion of infinite length). Therefore, first, we will define a finite-length analogue of \(\mathcal{L}^{\operatorname{symp}}_{b}\) in the following.
**Definition 4.5.4**.:
1. We define \[\alpha(i),\beta_{m,r}(i),\gamma_{m,r}(i)\colon\{1,\dots,2n\}\to\mathbb{Z}\] by the following: \[(\alpha(1),\dots,\alpha(n),|\alpha(n+1),\dots,\alpha(2n))\] \[= \begin{cases}(0,\dots,0,|1,0,\dots,1,0)&\text{ if }n\text{ is even and }k=1,\\ (0,\dots,0,|0,\dots,0)&\text{ otherwise.}\end{cases}\] \[(\beta_{m,r}(1),\dots,\beta_{m,r}(n),|\beta_{m,r}(n+1),\dots,\beta_{m,r}(2n))\]
\[=\begin{cases}(m+r,\ldots,m+r,|m+r,\ldots,m+r)&\text{ if }k=0,\\ (m+r+1,m+r,\ldots,m+r+1,m+r,|m+r+1,\ldots,m+r+1)&\text{ if }n\text{ is even and }k=1,\\ (m+r+1,m+r,\ldots,m+r+1,m+r)&\text{ if }n\text{ is odd and }k=1,\end{cases}\] \[(\gamma_{m,r}(1),\ldots,\gamma_{m,r}(n),|\gamma_{m,r}(n+1),\ldots,\gamma_{m,r}(2n))\] \[= (m+\varphi_{r}(1),\ldots,m+\varphi_{r}(2n)).\]
2. We define \(\mathcal{L}^{\prime}_{b,m,r}\) as the projection of \[\mathcal{L}^{\text{symp}}_{b}\cap^{t}(1,\mathfrak{p}^{\alpha(2)},\ldots, \mathfrak{p}^{\alpha(2n)})\] via \[\,{}^{t}\!(1,\mathfrak{p}^{\alpha(2)},\ldots,\mathfrak{p}^{\alpha(2n)}) \twoheadrightarrow^{t}(1,\mathfrak{p}^{\alpha(2)}/\mathfrak{p}^{\beta_{m,r}( 2)},\ldots,\mathfrak{p}^{\alpha(2n)}/\mathfrak{p}^{\beta_{m,r}(2n)}).\] We define \(\mathcal{L}_{b,m,r}\) by the inverse image of \(\mathcal{L}^{\prime}_{b,m,r}\) via the projection \[\,{}^{t}\!(1,\mathfrak{p}^{\alpha(2)}/\mathfrak{p}^{\gamma_{m,r}(2)},\ldots, \mathfrak{p}^{\alpha(2n)}/\mathfrak{p}^{\gamma_{m,r}(2n)})\twoheadrightarrow^{t }(1,\mathfrak{p}^{\alpha(2)}/\mathfrak{p}^{\beta_{m,r}(2)},\ldots,\mathfrak{p} ^{\alpha(2n)}/\mathfrak{p}^{\beta_{m,r}(2n)}).\] Note that \(\gamma_{m,r}(i)\geq\beta_{m,r}(i)\) for \(2\leq i\leq 2n\).
3. We define \([\mathcal{L}_{b,m,r}]\subset V\) by \[[\mathcal{L}_{b,m,r}]:=\{{}^{t}\!([v_{1}],\ldots,[v_{2n}])\in\mathcal{L}|{}^{ t}\!(v_{1},\ldots,v_{2n})\in\mathcal{L}_{b,m,r}\}.\] Here, we define the map \[[-]\colon\mathfrak{p}^{i}/\mathfrak{p}^{j}\to\mathcal{O}\] for \(j>i\geq 0\) by \[[a]:=\sum_{l=i}^{j-1}[a_{l}]\varpi^{l}.\] Here, \(a_{l}\in\overline{k}\) is defined by the formula \[\widetilde{a}:=\sum_{l\geq i}[a_{l}]\varpi^{l},\] where \(\widetilde{a}\in\mathfrak{p}^{i}\) is any lift of \(a\), and \([a_{l}]\in W(\overline{k})\subset K\) is the Teichmuller lift of \(a_{l}\).
4. Let \(\mathcal{L}^{b,m,r}_{\text{coord}}\subset[\mathcal{L}_{b,m,r}]\) be the subset consisting of \(v\in[\mathcal{L}_{b,m,r}]\) such that (42) \[\langle v,F^{i}(v)\rangle\in\mathfrak{p}^{(n-i)r+m+\lfloor\frac{nk}{2}\rfloor}\] hold for \(1\leq i\leq n-1\).
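As small illustrations of the notation above: for \(n=2\), \(k=1\) and \(m=0\), the formulas in (1) give
\[\alpha=(0,0,1,0),\qquad\beta_{0,r}=(r+1,r,r+1,r+1);\]
and for the lift in (3), if \(a\in\mathfrak{p}^{i}/\mathfrak{p}^{i+2}\) has \(\varpi\)-adic digits \(a_{i},a_{i+1}\in\overline{k}\), then \([a]=[a_{i}]\varpi^{i}+[a_{i+1}]\varpi^{i+1}\in\mathcal{O}\).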
**Remark 4.5.5**.:
1. The set \(\mathcal{L}^{b,m,r}_{\text{coord}}\) has a natural \(\overline{k}\)-scheme structure for the same reason as in Remark 4.3.4.
2. When \(n=1\), then we have \[\beta_{m,r}(2)=\gamma_{m,r}(2)=m+r.\] In this case, we have \(\mathcal{L}^{\prime}_{b,m,r}=\mathcal{L}_{b,m,r}\).
3. The equation (42) is automatic when \(n=1\) or \(n=2\). Therefore, in this case, we have \[\mathcal{L}_{\mathrm{coord}}^{b,m,r}=[\mathcal{L}_{b,m,r}].\] Combining with the remark (1), we have \[\mathcal{L}_{\mathrm{coord}}^{b,m,r}=[\mathcal{L}_{b,m,r}^{\prime}]\] when \(n=1\). This is no other than "\(U^{\prime\prime}\) used in the proof of [1, Theorem 6.17].
**Definition 4.5.6**.: We define the map
\[h_{1},\ldots,h_{2n}\colon\mathcal{L}_{\mathrm{coord}}^{b,m,r}\to V\]
as follows: Let \(v\in\mathcal{L}_{\mathrm{coord}}^{b,m,r}\). First, we put
\[h_{1}(v) :=v,\] \[h_{2n}(v) :=F^{n}(v).\]
Next, we inductively define
\[h_{n+1-i}(v):= F^{n-i}(v)-\frac{\langle h_{2n}(v),F^{n-i}(v)\rangle}{\langle h _{2n}(v),h_{1}(v)\rangle}h_{1}(v)-\frac{\langle h_{1}(v),F^{n-i}(v)\rangle}{ \langle h_{1}(v),h_{2n}(v)\rangle}h_{2n}(v)\] \[-\sum_{n+2-i\leq j\leq n}\left(\frac{\langle h_{2n+1-j}(v),F^{n-i }(v)\rangle}{\langle h_{2n+1-j}(v),h_{j}(v)\rangle}h_{j}(v)+\frac{\langle h_{ j}(v),F^{n-i}(v)\rangle}{\langle h_{j}(v),h_{2n+1-j}(v)\rangle}h_{2n+1-j}(v) \right),\]
\[h_{n+i}(v):= V_{k}^{i}(v)-\frac{\langle h_{2n}(v),V_{k}^{i}(v)\rangle}{\langle h _{2n}(v),h_{1}(v)\rangle}h_{1}(v)-\frac{\langle h_{1}(v),V_{k}^{i}(v)\rangle}{ \langle h_{1}(v),h_{2n}(v)\rangle}h_{2n}(v)\] \[-\sum_{n+2-i\leq j\leq n}\left(\frac{\langle h_{2n+1-j}(v),V_{k}^ {i}(v)\rangle}{\langle h_{2n+1-j}(v),h_{j}(v)\rangle}h_{j}(v)+\frac{\langle h_ {j}(v),V_{k}^{i}(v)\rangle}{\langle h_{j}(v),h_{2n+1-j}(v)\rangle}h_{2n+1-j}( v)\right),\]
for \(1\leq i\leq n-1\). Note that, we have
\[\langle h_{i}(v),h_{j}(v)\rangle=0 \tag{43}\]
when \(i+j\neq 2n+1\).
The next lemma is an analogue of Lemma 2.3.4.
**Lemma 4.5.7**.:
1. _For_ \(v\in\mathcal{L}_{\mathrm{coord}}^{b,m,r}\)_, we have_ \[\mathrm{ord}\langle h_{i}(v),h_{2n+1-i}(v)\rangle=\mathrm{ord}\langle v,F^{n}(v)\rangle=\lfloor\frac{kn}{2}\rfloor.\] _We put_ \[h_{b,r}(v) :=\big(h_{1}(v),\varpi^{r}h_{2}(v),\ldots,\varpi^{(n-1)r}h_{n}(v),\] \[(-1)^{n+1}\varpi^{r}\frac{\langle h_{1}(v),h_{2n}(v)\rangle}{\langle h_{n}(v),h_{n+1}(v)\rangle}h_{n+1}(v),(-1)^{n+2}\varpi^{2r}\frac{\langle h_{1}(v),h_{2n}(v)\rangle}{\langle h_{n-1}(v),h_{n+2}(v)\rangle}h_{n+2}(v),\ldots,\varpi^{nr}h_{2n}(v)\big).\]
2. _We have_ \[F(h_{b,r}(v))=h_{b,r}(v)w_{r}A_{b,r}\] _with_ \[A_{b,r}\in I^{m}.\]
Proof.: Note that, any \(v\in[\mathcal{L}_{b,m,r}]\) can be written as \(v=v_{0}+\varpi^{r+m}u\), where \(v_{0}\in\mathcal{L}_{b}^{\mathrm{symp}}\) and
\[u\in^{t}(0,\mathfrak{p}^{\beta_{m,r}(2)-m-r},\ldots,\mathfrak{p}^{\beta_{m,r} (2n)-m-r})\subset^{t}(\mathfrak{p}^{\beta_{m,r}(1)-m-r},\mathfrak{p}^{\beta_{ m,r}(2)-m-r},\ldots,\mathfrak{p}^{\beta_{m,r}(2n)-m-r}).\]
We have
\[\langle v,F^{n}(v)\rangle=\langle v_{0},F^{n}(v_{0})\rangle+\varpi^{r+m}(\langle v_{0},F^{n}(u)\rangle+\langle u,F^{n}(v_{0})\rangle)+\varpi^{2(r+m)}(\langle u,F^{n}(u)\rangle).\]
Since \(v_{0}\in\mathcal{L}\), we have
\[\mathrm{ord}\langle v_{0},F^{n}(u)\rangle\left(\mathrm{resp.}\,\langle u,F^{ n}(v_{0})\rangle\text{ and }\langle u,F^{n}(u)\rangle\right)\geq\lfloor\frac{kn}{2}\rfloor+k\]
by the direct calculation of orders. Note that, we have \(r\geq 1\) in the case where \(k=0\). Therefore, we can conclude that
\[\mathrm{ord}\langle v,F^{n}(v)\rangle=\lfloor\frac{kn}{2}\rfloor.\]
To compute \(\mathrm{ord}\langle h_{i}(v),h_{2n+1-i}(v)\rangle\), it is useful to introduce the reduced version of \(h_{i}\) as follows. We put
\[h_{i}^{\mathrm{red}}(v)=\begin{cases}\varpi^{-\lfloor\frac{k(i-1)}{2}\rfloor}h_{i}(v)&\text{ if }1\leq i\leq n,\\ \varpi^{-\lfloor\frac{kn}{2}\rfloor+\lfloor\frac{k(2n-i)}{2}\rfloor}h_{i}(v)&\text{ if }n+1\leq i\leq 2n.\end{cases}\]
It is enough to show that
\[\mathrm{ord}\langle h_{i}^{\mathrm{red}}(v),h_{2n+1-i}^{\mathrm{red}}(v)\rangle=0. \tag{44}\] Indeed, for \(1\leq i\leq n\) the normalizing factors in the definitions of \(h_{i}^{\mathrm{red}}(v)\) and \(h_{2n+1-i}^{\mathrm{red}}(v)\) multiply to \(\varpi^{-\lfloor\frac{kn}{2}\rfloor}\), so (44) is equivalent to the claim in (1).
For \(1\leq i\leq 2n\), we define the subset \(M_{i}\subset\mathcal{L}\) by the formula
\[\begin{cases}M_{i}=\varpi^{m+r}F^{\overline{i-1}+1}\mathcal{L}&\text{ if }1\leq i\leq n,\\ M_{i}=\varpi^{m+r}F^{\overline{n-2n-i}}\mathcal{L}&\text{ if }n+1\leq i\leq 2n.\end{cases}\]
Here, for any integer \(j\in\mathbb{Z}\), \(\overline{j}\in\{0,1\}\) denotes its residue modulo \(2\). By a direct computation using the definition of \(h_{i}^{\mathrm{red}}\) and (43), we can show
\[\begin{cases}h_{i}^{\mathrm{red}}(v)-\varpi^{-\lfloor\frac{i}{2}\rfloor}F^{i} (v)\in M_{i}&\text{ if }1\leq i\leq n,\\ h_{i}^{\mathrm{red}}(v)-\varpi^{-\lfloor\frac{n}{2}\rfloor}F^{n}(v)\in M_{i}& \text{ if }i=2n.\end{cases}\]
By the definition of \(M_{i}\) and (43), these imply (44). It finishes the proof of the assertion (1).
Next, we will prove the assertion (2). This calculation is rather redundant, so we will only describe a sketch. We only consider the case where \(k=1\) and \(n\) is even. Since we only need to estimate the order of entries, we will proceed with the calculation ignoring \(\mathcal{O}^{\times}\)-multiplications. We define \(\widetilde{h}_{b,r}(v)\) by
\[\widetilde{h}_{b,r}(v):=(\widetilde{h}_{1},\ldots,\widetilde{h}_{2n}), \tag{45}\]
where
\[\widetilde{h}_{i}:=\begin{cases}F^{i-1}(v)&\text{ if }1\leq i\leq n,\\ V_{k}^{i-n}(v)&\text{ if }n+1\leq i\leq 2n-1,\\ F^{n}(v)&\text{ if }i=2n.\end{cases} \tag{46}\]
We also put
\[\begin{split}\widetilde{h}_{b,r}^{\text{red}}(v)=(\widetilde{h}_ {1}^{\text{red}},\ldots,\widetilde{h}_{2n}^{\text{red}})=&\widetilde{h}_ {b,r}(v)\cdot\text{diag}(1,1,\varpi^{-1},\varpi^{-1},\ldots,\varpi^{-(n^{ \prime}-1)},\varpi^{-(n^{\prime}-1)},|\\ &\varpi^{-1},\varpi^{-1},\ldots,\varpi^{-n^{\prime}},\varpi^{-n^{ \prime}}).\end{split} \tag{47}\]
We define the function \(s\colon\mathbb{Z}^{2}\to\{0,1\}\) by: \(s(i,j)\) is \(1\) (resp. \(0\)) if \(i\) and \(j\) are odd (resp. otherwise). First, for \(1\leq i\leq n-1\), we can show that
\[\begin{split} h_{n+1-i}^{\text{red}}&\in \widetilde{h}_{n+1-i}^{\text{red}}+\mathfrak{p}^{(n-i)r+m+\frac{n}{2}-\lfloor \frac{i}{2}\rfloor}\widetilde{h}_{1}+\mathfrak{p}^{ir+m+\frac{n}{2}-\lfloor \frac{n-i}{2}\rfloor}\widetilde{h}_{2n}\\ &+\sum_{j=1}^{i-1}\mathfrak{p}^{(i-j)r+m+\frac{n}{2}-\lfloor\frac {n-i+j}{2}\rfloor-\overline{n-j}+s(j,n-i)}\widetilde{h}_{n-j+1}\\ &+\sum_{j=1}^{i-1}\mathfrak{p}^{(n-i+j)r+m+\frac{n}{2}-\lfloor \frac{i-j}{2}\rfloor}\widetilde{h}_{n+j}.\end{split} \tag{48}\]
and
\[\begin{split} h_{n+i}^{\text{red}}(v)&\in \widetilde{h}_{n+i}^{\text{red}}+\mathcal{O}v+\mathfrak{p}^{(n-i)r+m+\frac{n}{ 2}-\lceil\frac{i}{2}\rceil}\widetilde{h}_{2n}^{\text{red}}\\ &+\sum_{j=1}^{i-1}\mathfrak{p}^{(n-i)r+m+\frac{n}{2}-\lceil\frac {i}{2}\rceil}\widetilde{h}_{n-j+1}^{\text{red}}\\ &+\sum_{j=1}^{i-1}\mathfrak{p}^{s(n-j,i+1)}\widetilde{h}_{n+j}^{ \text{red}}.\end{split} \tag{49}\]
Indeed, we can show these inclusions by induction on \(i\), using the definition of \(h_{i}^{\text{red}}\) and the equations (42). We define the matrix \(H\in\operatorname{GL}_{2n}(\breve{K})\) by
\[h_{b,r}(v)=\widetilde{h}_{b,r}(v)H.\]
Since we have
\[w_{r}^{-1}h_{b,r}(v)^{-1}F(h_{b,r}(v))=w_{r}^{-1}H^{-1}w_{r}w_{r}^{-1} \widetilde{h}_{b,r}(v)^{-1}F(\widetilde{h}_{b,r}(v))\sigma(H),\]
it suffices to show the following.
* \(\widetilde{h}_{b,r}(v)^{-1}F(\widetilde{h}_{b,r}(v))\in I_{\text{GL}}^{m}\),
* \(H\in I_{\text{GL}}^{m}\cap w_{r}I_{\text{GL}}^{m}w_{r}^{-1}\).
First, we prove the former inclusion. By the direct computation similar to Lemma 2.3.4, we have the following.
\[\widetilde{h}_{b,r}(v)^{-1}F(\widetilde{h}_{b,r}(v))\in\left(\begin{array}{ c|ccc}1&&&&&a_{1}\\ &\ddots&&0&&\vdots\\ &&1&&&a_{n}\\ \hline&&&1&&a_{n+1}\\ &\hbox to 0.0pt{$\vbox{\hrule height 0.0pt width 100 \vrule width 0.0pt height 6.0pt\kern 6.0pt\vrule width 0.0pt} \hrule height 0.0pt width 100%}$}&&\\ &&\hbox to 0.0pt{$\vbox{\hrule height 0.0pt width 100%}$}&&\\ &&\hbox to 0.0pt{$\vbox{\hrule height 0.0pt width 100%}$}&&\\ &&\hbox to 0.0pt{$\vbox{\hrule height 0.0pt width 100%}$}&&\\ &&\hbox to 0.0pt{$\vbox{\hrule height 0.0pt width 100%}$}&&\\ \end{array}\right).\]
Here, the diagonal entries are \(1\) except for the \(2n\)-th row. Moreover, we have the following.
1. For \(1\leq i\leq n\), we have \[\operatorname{ord}\delta_{i}\geq rn+\frac{k}{2}(n-i+1).\]
2. For \(n+1\leq i\leq 2n-1\). we have \[\operatorname{ord}\delta_{i}\geq(2n-i)r+(n-\frac{i}{2})k.\]
3. We have \(\operatorname{ord}a_{2n}=0\).
In particular, we have the former inclusion.
Next, we show the latter inclusion. By using (48) and (49), we can show that
\[H\in\left(\begin{array}{cccc|cccc}1&*&\cdots&*&*&\cdots&*&0\\ 0&1&&&&&0&0\\ \vdots&&\ddots&&&\text{\Large$0$}&&\vdots\\ 0&&&1&0&&&0\\ \hline 0&&&0&1&&&0\\ \vdots&&\text{\Large$0$}&&&\ddots&&\vdots\\ 0&0&&&&&1&0\\ 0&*&\cdots&*&*&\cdots&*&1\end{array}\right).\]
Here, since we only need to focus on the order of entries, we ignore multiplications by units. By unwinding equations (48) and (49), we also have estimates for the order of entries of \(H\), and we have the desired inclusion. It finishes the proof.
**Lemma 4.5.8**.: _Let \(v\in\mathcal{L}_{\operatorname{coord}}^{b,m,r}\) and \(u\in\mathcal{L}_{b}^{\operatorname{symp}}\). We assume that we have_
\[v\in g_{b,r}(u)^{t}(\mathcal{O}^{\times},\mathfrak{p}^{m},\ldots,\mathfrak{p} ^{m}).\]
_Then we have_
\[h_{b,r}(v)I^{m}=g_{b,r}(u)I^{m}.\]
Proof.: As before, we only consider the case where \(k=1\) and \(n\) is even. By the proof of Lemma 2.3.4, we have \(h_{b,r}(v)I^{m}=\widetilde{h}_{b,r}(v)HI^{m}=\widetilde{h}_{b,r}(v)I^{m}\). In the same argument as in the
proof of Proposition 3.3.1, we have \(\widetilde{h}_{b,r}(v)I^{m}=\widetilde{h}_{b,r}(u)I^{m}.\) Here, we define \(\widetilde{h}_{b,r}\) on \(\mathcal{L}_{b}^{\mathrm{symp}}\) by the same formula as in (45) and (46). Moreover, if we define \(H^{\prime}\in\mathrm{GL}(V)\) by
\[h_{b,r}(u)=\widetilde{h}_{b,r}(u)H^{\prime},\]
then we have \(H^{\prime}\in I^{m}\). Now we have
\[\widetilde{h}_{b,r}(u)I^{m}=h_{b,r}(u)I^{m},\]
and it finishes the proof.
**Proposition 4.5.9**.:
1. _The map_ \(h_{b,r}\) _defines a morphism_ \[\mathcal{L}_{\mathrm{coord}}^{b,0,r}\to X_{w_{r}}^{0}(b)_{\mathcal{L}}\] _of schemes over_ \(\overline{\mathbb{F}}_{q}\)_._
2. _The above morphism is an isomorphism._
Proof.: From Lemma 2.3.4, it is clear that assertion (1) holds true for \(\overline{k}\)-valued points. Moreover, by Proposition 4.2.11, \(X_{w_{r}}^{m}(b)_{\mathcal{L}}\) is a locally closed subscheme of a higher affine Schubert cell \(Ix_{r}I/I^{m}\), which is described by [1, Lemma 4.7]. Then the assertion (1) follows directly from the above description. However, there seems to be a subtlety in the statement of [1, Lemma 4.7] (see Remark 4.5.10). Therefore, we introduce the correct statement in the following. We only treat the case where \(k=1\) and \(n\) is even since other cases can be proved similarly. Furthermore, we will reduce the argument to the \(\mathrm{GL}\) case for simplicity. To clarify when we need \(m=0\), we use the general \(m\) in the following argument.
We let \(g^{0}(i,j)\) be the minimum order of the \((i,j)\)-th entry of the right-hand side of (37), i.e.
\[G_{x,0}\cdot\mathrm{diag}(1,1,\ldots,\varpi^{n},\varpi^{n})\cdot\mathrm{diag} (1,\varpi^{r},\ldots,\varpi^{(n-1)r},\varpi^{r},\ldots,\varpi^{nr}).\]
We use the notation \(\tau\) defined in Definition 4.2.10. For \(1\leq i\neq j\leq 2n\), we put
\[g(i,j)=\begin{cases}0&\text{ if }i>j,\\ 1&\text{ if }i<j.\end{cases}\]
For such \((i,j)\), we also put
\[h(i,j):=\begin{cases}g^{0}(i,\tau(i))+1-g^{0}(j,\tau(j))&\text{ if }\tau(i)< \tau(j),\\ g^{0}(i,\tau(i))-g^{0}(j,\tau(j))&\text{ if }\tau(i)>\tau(j).\end{cases}\]
We define the set \(I\) by
\[I:=\{\alpha_{i,j}\mid g(i,j)<h(i,j)\},\]
where \(\alpha_{i,j}\) is the root of \(\mathrm{GL}_{2n}\) corresponding to the \((i,j)\)-th entry. We have
\[\begin{split}\psi:X_{w_{r}}^{m}(b)_{\mathcal{L}}\subset Ix_{r}I/I^{m}&\subset I_{\mathrm{GL}}x_{r}I_{\mathrm{GL}}/I_{\mathrm{GL}}^{m}\\ &\simeq\prod_{\alpha_{i,1}\in I}L_{[g(i,1),h(i,1))}U_{\alpha_{i,1}}\times\prod_{\alpha_{i,j}\in I,j\neq 1}L_{[g(i,j),h(i,j))}U_{\alpha_{i,j}}\times I_{\mathrm{GL}}/I_{\mathrm{GL}}^{m}.\end{split} \tag{50}\]
\[\left(\begin{array}{cccc|cccc}1&0&0&0&0&0&0&0\\ \mathcal{O}/\mathfrak{p}^{r}&1&0&0&0&0&0&0&0\\ \mathcal{O}/\mathfrak{p}^{1+2r}&\mathcal{O}/\mathfrak{p}^{1+r}&1&\mathfrak{p}/ \mathfrak{p}^{1+r}&0&0&0&0\\ \mathcal{O}/\mathfrak{p}^{r}&\mathcal{O}/\mathfrak{p}&0&1&0&0&0&0\\ \hline\mathcal{O}/\mathfrak{p}^{2+3r}&\mathcal{O}/\mathfrak{p}^{2+2r}& \mathcal{O}/\mathfrak{p}^{1+r}&\mathcal{O}/\mathfrak{p}^{2+2r}&1&\mathfrak{ p}/\mathfrak{p}^{1+r}&0&0\\ \mathcal{O}/\mathfrak{p}^{1+2r}&\mathcal{O}/\mathfrak{p}^{r+2}&\mathcal{O}/ \mathfrak{p}&\mathcal{O}/\mathfrak{p}^{1r}&0&1&0&0\\ \mathcal{O}/\mathfrak{p}^{2+3r}&\mathcal{O}/\mathfrak{p}^{3+2r}&\mathcal{O}/ \mathfrak{p}^{2+r}&\mathcal{O}/\mathfrak{p}^{2+2r}&\mathcal{O}/\mathfrak{p} &\mathcal{O}/\mathfrak{p}^{1+r}&1&0\\ \mathcal{O}/\mathfrak{p}^{2+4r}&\mathcal{O}/\mathfrak{p}^{2+3r}&\mathcal{O}/ \mathfrak{p}^{1+2r}&\mathcal{O}/\mathfrak{p}^{2+3r}&\mathcal{O}/\mathfrak{p}^{ r}&\mathcal{O}/\mathfrak{p}^{1+2r}&\mathcal{O}/\mathfrak{p}^{r}&1\end{array}\right).\]
The shape of \((\prod_{\alpha_{i,j}\in I}L_{[g(i,j),h(i,j))}U_{\alpha_{i,j}})\) for \(\mathrm{GSp}_{8}\).
Here, \(L_{[g(i,j),h(i,j))}U_{\alpha_{i,j}}\) is the \(\overline{\mathbb{F}}_{q}\)-perfect scheme defined in [12, p.1815]. The last isomorphism \(f\) is given by
\[f^{-1}((u_{i1}),(u_{ij}),g)=\prod_{\alpha_{i,1}\in I}[u_{i1}]\cdot\prod_{\alpha_{i,j}\in I,j\neq 1}[u_{ij}]\cdot x_{r}\cdot gI_{\mathrm{GL}}^{m}, \tag{51}\]
where the first product is with respect to any order, and the second product is taken with respect to the lexicographical order for \((j,i)\). Moreover, \([\cdot]\) means the Teichmuller lift (more precisely, the operation applying \([\cdot]\) in Definition 4.5.4 to each entry).
Since we only consider the perfect scheme structure on \(\mathcal{L}_{coord}\), it suffices to show that the map \(\psi\circ h_{b,r}\) defines a map of schemes over \(\overline{\mathbb{F}}_{q}\). By the proof of Lemma 2.3.4, we have
\[h_{b,r}(v)I_{\mathrm{GL}}^{m}=\widetilde{h}_{b,r}(v)I_{\mathrm{GL}}^{m}.\]
Therefore, it suffices to show that \(f\circ\widetilde{h}_{b,r}\) defines a morphism of schemes. Each factor of \(f(\widetilde{h}_{b,r}(v))\) can be computed as follows: for any \(\widetilde{h}_{b,r}(v)\) \((v\in\mathcal{L}_{\mathrm{coord}})\) there exist column-elementary transformations in \(I\) whose composition \(g\in I\) satisfies that
\[\widetilde{h}_{b,r}(v)g^{-1}\in f(\prod_{\alpha_{i,j}\in I}L_{[g(i,j),h(i,j)}U _{\alpha_{i,j}}).\]
Such elementary transformations are performed by treating the 1st, \(\ldots\), \((2n)\)-th rows in this order. Note that, under such transformations of \(\widetilde{h}_{b,r}(v)\), the order of the \((i,\tau(i))\)-th entry is preserved by the same argument as in the proof of Proposition 4.2.11. Since each coefficient of the \(\varpi\)-adic expansion of \(\widetilde{h}_{b,r}(v)g^{-1}\) is an algebraic function on \(\mathcal{L}_{\mathrm{coord}}\), we have the assertion (1).
Next, we will prove the assertion (2). By the projection \(p\) to \(\prod_{\alpha_{i,1}\in I}L_{[g(i,1),h(i,1))}U_{\alpha_{i,1}}\) followed by \(\psi\), we have the following diagram:
By the definition of \(h_{b,r}\) and the argument in the proof of the assertion (1), \(p\circ h_{b,r}\) is just a projection (here, we use the information on the order of the products in (51)). Note that, for \(2\leq i\leq 2n\), the functions \(\alpha\) and \(\gamma_{m,r}\) defined in Definition 4.5.4 satisfy
\[g(i,1)\leq\alpha(i)\,\text{ and }\,\gamma_{m,r}(i)=\gamma_{0,r}(i)=h(i,1). \tag{52}\]
Therefore the map \(p\circ h_{b,r}\) is a natural closed immersion. Now, it suffices to show that \(h_{b,r}\) is surjective. To this end, by Definition 4.2.7 and Lemma 4.5.8, it suffices to show that for any \(u\in\mathcal{L}_{b}^{\text{\rm symp}}\), there exists an element \(v\in\mathcal{L}_{\text{\rm coord}}^{b,0,r}\) such that
\[v\in g_{b,r}(u)^{t}\!(\mathcal{O}^{\times},\mathcal{O},\dots,\mathcal{O}). \tag{53}\]
Let \(a_{i}\) be a projection of \(g_{b,r}(u)I\) to \(L_{[g(i,1),h(i,1))}U_{\alpha_{i,1}}\). We put \(v=^{t}(1,[a_{2}],\dots,[a_{2n}])\). Here, \([-]\) means the canonical lift as in Definition 4.5.4 (3). Then by (52), we have
\[v\in^{t}(1,[\mathfrak{p}^{\alpha(2)}/\mathfrak{p}^{\gamma_{0,r}(2)}],\dots,[ \mathfrak{p}^{\alpha(2n)}/\mathfrak{p}^{\gamma_{0,r}(2n)}]).\]
Moreover, (53) holds true since \(u\in\mathcal{L}_{b}^{\text{\rm symp}}\). Now it suffices to show that \(v\in\mathcal{L}_{\text{\rm coord}}^{b,0,r}\). By the equation (53), clearly we have \(v\in[\mathcal{L}_{b,0,r}]\). It suffices to show that the equation (42) holds true for \(m=0\). This part can be shown by direct computation.
**Remark 4.5.10**.: The statement of [1, Lemma 4.7] seems to be subtle. In [1, Lemma 4.7], they construct an isomorphism
\[\prod_{\alpha\in S}L_{[f_{I}(\alpha),f(\alpha))}U_{\alpha}\times I/I_{f}\simeq IxI /I_{f},\]
where \(f\) is a concave function such that the associated subgroup \(I_{f}\) is normal in \(I\), \(f_{I}\) is the concave function of the Iwahori subgroup \(I\), and the set \(S\) is a certain set of roots (we omit the definition here). First, \(f(\alpha)\) in the index of \(L\) is unsuitable since it should depend on \(x\) (see [1, (6.19)] for the correct formula for \(\operatorname{GL}\)). Indeed, if \(I_{f}=I\), the left-hand side is \(1\) though the right-hand side is non-trivial.
Moreover, they define an isomorphism by
\[((a_{\alpha}),i)\mapsto\prod_{\alpha}\widetilde{a}_{\alpha}xI^{m}, \tag{54}\]
where \(\widetilde{a}_{\alpha}\) is an arbitrary lift of \(a_{\alpha}\). However, in general, this morphism depends on the choice of lifts, and we need to fix a choice of lifts. For example, consider
\[g=\left(\begin{array}{ccc}1&0&0\\ \varpi&\varpi&0\\ \varpi&\varpi&\varpi^{2}\end{array}\right)I_{\operatorname{GL}_{3}}=\left( \begin{array}{ccc}1&0&0\\ 0&\varpi&0\\ 0&\varpi&\varpi^{2}\end{array}\right)I_{\operatorname{GL}_{3}}\in I_{ \operatorname{GL}_{3}}\operatorname{diag}(1,\varpi,\varpi^{2})I_{ \operatorname{GL}_{3}}/I_{\operatorname{GL}_{3}}.\]
In this case, the correct isomorphism is given by
\[L_{[0,1)}U_{\alpha_{2,1}}\times L_{[0,2)}U_{\alpha_{3,1}}\times(\text{other terms})\simeq I_{\operatorname{GL}_{3}}\operatorname{diag}(1,\varpi,\varpi^{2})I_{ \operatorname{GL}_{3}}/I_{\operatorname{GL}_{3}}.\]
However, as above, if we do not fix the choice of lifts, the projection of \(g\) to \(L_{[0,2)}U_{\alpha_{3,1}}\) is not well-defined. Therefore, to fix the isomorphism, we need to fix a lift. Once the statement is modified as above, the proof can be done in the same way as in the proof of [1, Lemma 4.7]. Note that we use the canonical lift \([-]\) (Definition 4.5.4) in our paper. Since \([0]=0\), the projection of \(g\) to \(L_{[0,2)}U_{\alpha_{3,1}}\) via our choice of isomorphism is \(0\).
**Remark 4.5.11**.: The structure of \(X^{m}_{w_{r}}(b)\) for \(m>0\) is more subtle. Indeed, the argument in Proposition 4.5.9 (2) does not work for \(m>0\). This is due to the fact that \(X^{m}_{w_{r}}(b)_{\mathcal{L}}\nsubseteq Ix_{r}I^{m}\) for \(m>0\).
Now we can prove Theorem 4.5.2.
_Proof of Theorem 4.5.2_. For simplicity, we consider the case where \(\operatorname{char}K=0\). Let
\[\mathcal{L}^{b,0,r}_{\mathrm{coord}}\to\overline{\mathcal{L}^{\mathrm{symp} \mathrm{perf}}_{b}}\]
be the natural projection. Note that, this map factors through \(\mathbb{P}(\overline{\mathcal{L}^{\mathrm{symp}\mathrm{perf}}_{b}})\subset \overline{\mathcal{L}^{\mathrm{symp}}_{b}}\subset\overline{V}\), which is isomorphic to \(X^{\overline{B},\mathrm{perf}}_{\overline{w}}\) by Lemma 4.3.2 and Lemma 4.4.2. We want to show that this natural projection induces
\[\mathcal{L}^{b,0,r}_{\mathrm{coord}}\simeq X^{\overline{B},\mathrm{perf}}_{ \overline{w}}\times\mathbb{A}^{\mathrm{perf}}.\]
First, we consider the case where \(n=1\) or \(n=2\) for simplicity. In this case, the embedding \(\mathcal{L}^{b,0,r}_{\mathrm{coord}}\subset[\mathcal{L}_{b,0,r}]\) is equality (Remark 4.5.5). By the definition of \([\mathcal{L}_{b,0,r}]\), it suffices to show that the projection \(\mathcal{L}^{\prime}_{b,0,r}\to\overline{\mathcal{L}^{\mathrm{symp}\mathrm{ perf}}_{b}}\) induces the decomposition
\[\mathcal{L}^{\prime}_{b,0,r}\simeq X^{\overline{B},\mathrm{perf}}_{\overline{ w}}\times\mathbb{A}^{\mathrm{perf}}.\]
This follows from Lemmas 4.3.5 and 4.4.4. In the general case, in addition to the arguments of Lemmas 4.3.5 and 4.4.4, the further equations (43) need to be solved. However, these equations can be solved by the same method as in Lemmas 4.3.5 and 4.4.4. This finishes the proof.
### Family of finite type varieties \(X_{h}\)
In this section, we take \(b\) as a special representative or a Coxeter representative.
**Definition 4.6.1**.:
1. Suppose that \(n\) is even and \(k=1\). We put \[\mathcal{L}^{(h)}:=\mathfrak{p}^{h}e_{1}\oplus\mathfrak{p}^{h-1}e_{2}\oplus \cdots\oplus\mathfrak{p}^{h-1}e_{n}\oplus\mathfrak{p}^{h}e_{n+1}\oplus \mathfrak{p}^{h}e_{n+2}\oplus\cdots\oplus\mathfrak{p}^{h}e_{2n}.\] Moreover, we define the closed subscheme \(X_{h}\) of \(\mathcal{L}/\mathcal{L}^{(h)}\) by \[X_{h}:=\left\{v=(\overline{v_{1}},\ldots\overline{v_{2n}})\in\mathcal{L}/ \mathcal{L}^{(h)}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod \mathfrak{p}^{h+\lfloor\frac{i}{2}\rfloor}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\in\varpi^{n^{\prime}}(\mathcal{O}_{K}/\mathfrak{p} ^{h+n^{\prime}})^{\times}\end{array}\right.\right\}.\]
2. Suppose that \(n\) is odd and \(k=1\). We put \[\mathcal{L}^{(h)}:=\mathfrak{p}^{h}e_{1}\oplus\mathfrak{p}^{h-1}e_{2}\oplus \cdots\oplus\mathfrak{p}^{h}e_{2n-1}\oplus\mathfrak{p}^{h-1}e_{2n}\] if \(k=1\). We also put \[X_{h}:=\left\{v=(\overline{v_{1}},\ldots\overline{v_{2n}})\in\mathcal{L}/ \mathcal{L}^{(h)}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod \mathfrak{p}^{h+\lfloor\frac{i}{2}\rfloor}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\in\varpi^{n^{\prime}}(\mathcal{O}_{K}/\mathfrak{p} ^{h+n^{\prime}})^{\times}\end{array}\right.\right\}.\]
3. Suppose that \(k=0\). We put \[\mathcal{L}^{(h)}:=\mathfrak{p}^{h}e_{1}\oplus\cdots\oplus\mathfrak{p}^{h}e_{2 n}.\] We also put \[X_{h}:=\left\{v=(\overline{v_{1}},\ldots\overline{v_{2n}})\in\mathcal{L}/ \mathcal{L}^{(h)}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod \mathfrak{p}^{h}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\in(\mathcal{O}_{K}/\mathfrak{p}^{h})^{\times}\end{array} \right.\right\}.\]
**Proposition 4.6.2**.: _The natural map \(\mathcal{L}_{b}^{\mathrm{symp,rat}}\to X_{h}\) is surjective._
Proof.: We may assume that \(b\) is a special representative. It suffices to show that
\[X_{h+1}\to X_{h}\]
is surjective. For simplicity, we consider the case where \(n\) is even and \(k=1\). We also put
\[X_{h}^{+}:=\left\{v=(\overline{v_{1}},\ldots\overline{v_{2n}})\in\mathcal{L}_ {h}\left|\begin{array}{l}\langle v,F^{i}(v)\rangle=0\mod\mathfrak{p}^{h+ \lceil\frac{i}{2}\rceil}\quad(1\leq i\leq n-1)\\ \langle v,F^{n}(v)\rangle\mod\mathfrak{p}^{h+n^{\prime}}\in(\mathcal{O}_{K}/ \mathfrak{p}^{h+n^{\prime}})^{\times}\end{array}\right.\right\}.\]
For simplicity, we prove the surjectivity of \(X_{2}\to X_{1}.\) The surjectivity of \(X_{1}^{+}\to X_{1}\) follows from the same argument as in the proof of Lemma 4.3.5. More precisely, we can show that
\[X_{1}^{+}\times_{X_{1}}X_{1}^{\mathrm{perf}}\simeq\mathbb{A}^{\mathrm{perf}} \times X_{1}^{\mathrm{perf}}.\]
Therefore, we will show the surjectivity of \(X_{2}\to X_{1}^{+}.\) We also put
\[x_{i,j}\in\overline{\mathbb{F}}_{q}\text{ is the image of }p_{j}\text{ of } \left\{\begin{aligned} &\varpi^{-1}\times\text{ $i$-th component}&\text{if $n+1\geq i$ and $i$ is odd,}\\ &\text{$i$-th component}&\text{otherwise.}\end{aligned}\right.\]
Here, \(p_{j}\colon\mathcal{O}_{\bar{K}}\to\overline{\mathbb{F}}_{q}\) (\(j\geq 0\)) is the projection to the coefficient of \(\varpi^{j}\) in the \(\varpi\)-adic expansion (i.e. \(x=\sum_{j\geq 0}[p_{j}(x)]\varpi^{j}\)). Then
\[X_{2}\subset\mathbb{A}_{x_{1,0},x_{2,0},\ldots,x_{2n-1,0},x_{2n,0},x_{1,1},x_ {3,1},\ldots,x_{n-1,1},x_{n+2,1},x_{n+4,1},\ldots,x_{2n,1}}\]
is defined by the following equations.
* (55) \[\begin{split}& p_{i}(\langle v,F^{2i-1}(v)\rangle)\\ &=\left(\sum_{1\leq j\leq n,j\colon\text{odd}}(x_{j,0}x_{2n-j,0}^{q^{2i-1}}+x_{2n-j,0}x_{j,0}^{q^{2i-1}})-\sum_{1\leq j\leq n,j\colon\text{even}}(x_{j,0}x_{2n-j+2,0}^{q^{2i-1}}+x_{2n-j+2,0}x_{j,0}^{q^{2i-1}})\right)\\ &=0\end{split}\] for \(i=1,\ldots,n^{\prime}\).
* (56) \[\begin{split}& p_{i}(\langle v,F^{2i}(v)\rangle)\\ &=\left(\sum_{1\leq j\leq n,j\colon\text{odd}}(x_{j,0}x_{2n+1-j,0}^{q^{2i}}-x_{2n+1-j,0}x_{j,0}^{q^{2i}})\right)\\ &=0\end{split}\] for \(i=1,\ldots,n^{\prime}-1\).
* \[p_{i+1}(\langle v,F^{2i}(v)\rangle)\] (57) \[=\left(\sum_{1\leq j\leq n,j:\text{ odd}}(x_{j,1}x_{2n+1-j,0}^{q^{2i }}-x_{2n+1-j,1}x_{j,0}^{q^{2i}}+x_{j,0}x_{2n+1-j,1}^{q^{2i}}-x_{2n+1-j,0}x_{j,1} ^{q^{2i}})\right)\] \[+P(x_{1,0},\ldots,x_{2n,0})=0\] for \(i=1,\ldots,n^{\prime}-1\). Here, \(P\) is a certain polynomial.
* \[p_{n^{\prime}}(\langle v,F^{n}(v)\rangle)\in\mathbb{F}_{q}^{\times}\] (58) \[p_{n^{\prime}+1}(\langle v,F^{n}(v)\rangle)\in\mathbb{F}_{q},\] which is equivalent to (59) \[Q^{q}-Q=0,\] where (60) \[Q :=\left(\sum_{1\leq j\leq n,j:\text{ odd}}(x_{j,1}x_{2n+1-j,0}^{q ^{n}}-x_{2n+1-j,1}x_{j,0}^{q^{n}}+x_{j,0}x_{2n+1-j,1}^{q^{n}}-x_{2n+1-j,0}x_{j,1}^{q^{n}})\right)\] \[+P(x_{1,0},\ldots,x_{2n,0})\]
with some polynomial \(P\). Note that \(X_{1}^{+}\) is defined by equations (55), (56), and (58). Therefore, we should solve the equations (57) and (59) with respect to
\[x_{1,1},x_{3,1},\ldots,x_{n-1,1},x_{n+2,1},\ldots,x_{2n,1}.\]
We define \(v^{\prime},v^{\prime\prime}\in W:=\overline{\mathbb{F}}_{q}^{\oplus 2n}\) by
\[v^{\prime} := \sum_{i=1}^{n^{\prime}}x_{2i-1,0}e_{2i-1}+\sum_{i=n^{\prime}+1}^{ n}x_{2i,0}e_{2i},\] \[v^{\prime\prime} := \sum_{i=1}^{n^{\prime}}x_{2i-1,1}e_{2i-1}+\sum_{i=n^{\prime}+1}^{ n}x_{2i,1}e_{2i}.\]
We define the coordinates \(\eta_{0},\ldots\eta_{n-1}\) by
\[\eta_{i}:=\langle v^{\prime\prime},\overline{\sigma}^{-2i}(v^{\prime})\rangle,\]
which is a linear coordinate transformation of
\[x_{1,1},x_{3,1},\ldots,x_{n-1,1},x_{n+2,1},\ldots,x_{2n,1}\]
over \(\mathcal{O}_{\overline{\mathcal{L}}_{b}^{\text{symp}}}^{\text{perf}}\). We also define a matrix \(Q=(q_{i,j})_{1\leq i,j\leq n}\in\operatorname{GL}_{n}(\mathcal{O}_{\overline{\mathcal{L}}_{b}^{\text{symp}}}^{\text{perf}})\) whose entries \(q_{i,j}\) are perfect algebraic functions of \(x_{1,0},\ldots,x_{2n,0}\).
By the same argument as in the proof of Lemma 4.3.5, there exists
\[u=\left(\begin{array}{cccc|ccc}1&&\mbox{\Large$\mathbb{0}$}&&\\ &\ddots&&\mbox{\Large$\mathbb{0}$}&\\ \mbox{\Large$\ast$}&&1&\\ \hline&&1&\\ &\mbox{\Large$\mathbb{0}$}&&\ddots&\\ &&\mbox{\Large$\mathbb{0}$}&&1\end{array}\right)\in\operatorname{GL}_{n}( \mathcal{O}_{\overline{\mathcal{L}_{b}^{\operatorname{symp}}}^{\operatorname{ perf}}}^{\operatorname{perf}}),\]
such that \(Q^{\prime}:=uQ\) is a matrix whose \((i,j)\)-entries \(q^{\prime}_{i,j}\) (\(i>n+1-j,j\geq n^{\prime}\)) are \(0\). The equations (57) and (59) are equivalent to the following equations.
* \[\eta_{i}^{q^{2i}}+\sum_{j=1}^{n}q_{i,j}\eta_{j-1}+P=0\] for \(i=1,\ldots,n^{\prime}-1\). Here, \(P\) is a certain polynomial of \(x_{1,0},\ldots,x_{2n,0}\).
* \[(\eta_{n^{\prime}}^{q^{n}}+\sum_{j=1}^{n}q_{n^{\prime}j}\eta_{j-1}+P)^{q}\] \[-(\eta_{n^{\prime}}^{q^{n}}+\sum_{j=1}^{n}q_{n^{\prime}j}\eta_{j-1}+P)=0,\] where \(P\) is a certain polynomial of \(x_{1,0},\ldots,x_{2n,0}\).
By using the \(n^{\prime}\times n\)-upper half part \(u^{\operatorname{up}}\) of \(u\), we can organize equations as
* \[(u^{\operatorname{up}}\,^{t}\!(\eta_{1}^{q^{2}},\ldots,\eta_{n^{\prime}}^{q^{2 n^{\prime}}}))_{i}+\sum_{j=1}^{n+1-i}q^{\prime}_{i,j}\eta_{j-1}+P=0\] for \(i=1,\ldots,n^{\prime}-1\).
* \[(u^{\operatorname{up}}\,^{t}\!(\eta_{1}^{q^{2}},\ldots,\eta_{n^{\prime}}^{q^{2 n^{\prime}}}))_{n^{\prime}}+\sum_{j=1}^{n+1-n^{\prime}}q^{\prime}_{n^{\prime},j} \eta_{j-1}+P)^{q}\] \[-(u^{\operatorname{up}}\,^{t}\!(\eta_{1}^{q^{2}},\ldots,\eta_{n^{\prime}}^{ q^{2n^{\prime}}}))_{n^{\prime}}+\sum_{j=1}^{n+1-n^{\prime}}q^{\prime}_{n^{\prime},j} \eta_{j-1}+P)=0.\]
The first equations can be solved with respect to \(\eta_{n-i}\). On the other hand, the second equation can be solved with respect to \(\eta_{n^{\prime}}\). This finishes the proof of surjectivity.
In the following, we assume that \(b\) is a Coxeter representative.
**Definition 4.6.3**.: We define \(\mathbb{G}\) to be the smooth affine group scheme over \(\mathbb{F}_{q}\) such that
\[\mathbb{G}(\overline{\mathbb{F}}_{q})=G_{x,0},\qquad\mathbb{G}(\mathbb{F}_{q})=G _{x,0}^{F_{b}}.\]
Here, we put
\[F_{b}\colon\operatorname{GSp}_{2n}(\check{K})\to\operatorname{GSp}_{2n}( \check{K});g\mapsto b\sigma(g)b^{-1}.\]
For \(h\in\mathbb{Z}_{\geq 1}\), we define \(\mathbb{G}_{h}\) to be the smooth affine group scheme over \(\mathbb{F}_{q}\) such that
\[\mathbb{G}_{h}(\overline{\mathbb{F}}_{q})=G_{x,0}/G_{x,(h-1)+},\qquad\mathbb{G }_{h}(\mathbb{F}_{q})=G_{x,0}^{F_{b}}/G_{x,(h-1)+}^{F_{b}}.\]
Let \(\mathbb{U}\subset\mathbb{G}\) (resp. \(\mathbb{U}^{-}\subset\mathbb{G}\)) be the smooth subgroup scheme whose \(\overline{\mathbb{F}}_{q}\)-points are upper (resp. lower) triangular unipotent matrices of \(G_{x,0}\). We denote the corresponding subgroups of \(\mathbb{G}_{h}\) by \(\mathbb{U}_{h}\) and \(\mathbb{U}_{h}^{-}\).
The following is an analogue of [1, Proposition 7.12].
**Proposition 4.6.4**.:
1. _The subgroup_ \(\mathbb{U}_{h}^{-}\cap F_{b}(\mathbb{U}_{h})\subset\mathbb{G}_{h}\) _consists of matrices of the following form._
2. _The subgroup_ \(\mathbb{U}_{h}^{-}\cap F_{b}^{-1}(\mathbb{U}_{h}^{-})\subset\mathbb{G}_{h}\) _consists of matrices of the following form._
3. _We have an isomorphism_ \[X_{h}(\overline{\mathbb{F}}_{q}) \simeq\{g\in\mathbb{G}_{h}(\overline{\mathbb{F}}_{q})|g^{-1}F_{b} (g)\in\mathbb{U}_{h}^{-}\cap F_{b}(\mathbb{U}_{h})\}\] \[\simeq\{g\in\mathbb{G}_{h}(\overline{\mathbb{F}}_{q})|g^{-1}F_{b} (g)\in\mathbb{U}_{h}^{-}\}/(\mathbb{U}_{h}^{-}\cap F_{b}^{-1}(\mathbb{U}_{h}^ {-})).\]
Proof.: (1) and (2) follow from direct computation. Therefore, we will prove (3). The second isomorphism follows from Lemma 4.6.5, so it remains to prove the first isomorphism. We will show that
\[\lambda\colon X_{h}(\overline{\mathbb{F}}_{q})\to\{g\in\mathbb{G}_{h}(\overline{ \mathbb{F}}_{q})|g^{-1}F_{b}(g)\in\mathbb{U}_{h}^{-}\cap F_{b}(\mathbb{U}_{h}) \};\overline{v}\to\overline{g_{b}^{\prime}(v)}\]
gives the desired isomorphism. Here, \(g_{b}^{\prime}(v)\) is defined in Remark 2.3.3. Since any element of \(X_{h}\) can be lifted to \(\mathcal{L}_{b}^{\mathrm{symp,rat}}\) by Proposition 4.6.2, we can show that \(\lambda\) is well-defined by the same argument as in the proof of Lemma 2.3.4. Moreover, \(\lambda\) is clearly injective. Therefore, it suffices to show the surjectivity of \(\lambda\). We take an element \(g=(g_{1},\ldots,g_{2n})\) in the right-hand side. By (1) and the assumption
\[g^{-1}F_{b}(g)\in\mathbb{U}_{h}^{-}\cap F_{b}(\mathbb{U}_{h}),\]
we have
\[(F(g_{1}),\ldots,F(g_{2n}))b^{-1}\] \[= (g_{1},\varpi g_{2},\ldots,t_{i}g_{i},\ldots,t_{n-1}g_{n-1},t_{n }g_{2n},\] \[t_{n+1}((-1)^{n+1}g_{1}+\sum_{i=2,\ldots,n,2n}*g_{i}),t_{n+2}(g_ {n+1}+*g_{2n}),\ldots,t_{2n}(g_{2n-1}+*g_{2n}))b^{-1}\]
in \(\mathbb{G}_{h}(\overline{\mathbb{F}}_{q})\). Here, we put
\[t_{i}=\begin{cases}1&\text{ if $i$ is odd},\\ \varpi&\text{ if $i$ is even}.\end{cases}\]
By this equation, we can show that \(g_{1}\in X_{h}\) and \(g=\lambda(g_{1})\).
**Lemma 4.6.5**.: _The morphism_
\[(\mathbb{U}_{h}^{-}\cap F_{b}^{-1}(\mathbb{U}_{h}^{-}))\times(\mathbb{U}_{h}^ {-}\cap F_{b}(\mathbb{U}_{h}))\to\mathbb{U}_{h}^{-};(x,g)\mapsto x^{-1}gF_{b} (x)\]
_is an isomorphism._
Proof.: As in [1, Lemma 7.13], we prove this lemma by direct computation. First, we prove that
\[(U^{-}\cap F_{b}^{-1}(U^{-}))\times(U^{-}\cap F_{b}(U))\to U^{-};(x,g)\mapsto x ^{-1}gF_{b}(x) \tag{61}\]
is an isomorphism. Here, \(U\subset\mathrm{GSp}_{2n}(\breve{K})\) (resp. \(U^{-}\subset\mathrm{GSp}_{2n}(\breve{K})\)) denotes the subgroup consisting of upper (resp. lower) triangular unipotent matrices.
Take an element \(A\in U^{-}\). We want to show that there exists a unique pair
\[(x,g)\in(U^{-}\cap F_{b}^{-1}(U^{-}))\times(U^{-}\cap F_{b}(U))\]
such that
\[xA=gF_{b}(x). \tag{62}\]
Let \(E_{i,j}\in\mathrm{M}_{2n\times 2n}(\breve{K})\) be the matrix whose \((s,t)\)-component is \(\delta_{i,s}\delta_{j,t}\), where \(\delta\) is the Kronecker delta. We put
\[g=1+\sum_{i,j}c_{i,j}E_{i,j},\]
\[x=1+\sum_{i,j}x_{i,j}E_{i,j},\] \[A=1+\sum_{i,j}a_{i,j}E_{i,j}.\]
Then \(c_{2,1},\ldots,c_{n,1},c_{2n,1}\) (resp. \(x_{i,j}\) (\(1\leq i\leq n,1\leq j\leq i-1\)) and \(x_{i,j}\) (\(n+2\leq i\leq 2n,1\leq j\leq 2n+1-i\))) determine \(g\) (resp. \(x\)), since they are elements of \(\mathrm{GSp}\). We will compare the \((i,j)\)-th entries of (62) for \(n+1\leq i\leq 2n,1\leq j\leq 2n+1-i\) and for \(n+1\leq i\leq 2n,n+1\leq j\leq i-1\). Note that the equalities for all of the entries above are equivalent to the equality (62), under the assumption that both sides are elements of \(\mathbb{U}^{-}(\overline{\mathbb{F}}_{q})\). We put
\[x^{\prime}_{i,j}:=\frac{t_{i}}{t_{j}}\sigma(x_{i,j}),\]
where \(t_{i}\) is as in the proof of Proposition 4.6.4. We have
\[F_{b}(x)=\left(\begin{array}{ccccc|ccccc}1&&&&&&&&\\ 0&&1&&&&&&\\ 0&&x^{\prime}_{2,1}&\ddots&&&&\\ \vdots&&\vdots&\ddots&\ddots&&&&\\ 0&&x^{\prime}_{n-1,1}&\cdots&x^{\prime}_{n-1,n-2}&1&&&&\\ \hline-x^{\prime}_{n+2,n+1}&x^{\prime}_{n+2,1}&\cdots&x^{\prime}_{n+2,n-2}&x^{ \prime}_{n+2,n-1}&1&&&&\\ -x^{\prime}_{n+3,n+1}&x^{\prime}_{n+3,1}&\cdots&x^{\prime}_{n+3,n-2}&x^{\prime }_{n+3,n-1}&x^{\prime}_{n+3,n+2}&\ddots&&\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\ddots&&\\ -x^{\prime}_{2n,n+1}&x^{\prime}_{2n,1}&\cdots&x^{\prime}_{2n,n-2}&x^{\prime}_{ 2n,n-1}&x^{\prime}_{2n,n+2}&\cdots&x^{\prime}_{2n,2n-1}&1\\ 0&&x^{\prime}_{n,1}&\cdots&x^{\prime}_{n,n-2}&x^{\prime}_{n,n-1}&0&\cdots&0&0&1 \end{array}\right).\]
First, we compare the \((n+1)\)-th column of (62). We have
\[-x^{\prime}_{n+2,n+1} =a_{n+1,1}, \tag{63}\] \[x^{\prime}_{n+2,1} =a_{n+1,2},\] \[\vdots\] \[x^{\prime}_{n+2,n-1} =a_{n+1,n}.\]
which determines the \((n+2)\)-th row of \(x\) uniquely. Next, we compare the \((n+1+j)\)-th row of (62) (\(1\leq j\leq n-2\)). We will show inductively that the \((n+2+j)\)-th row of \(x\) is uniquely determined. Looking at the \((n+1+j,1)\)-th entry, we have
\[-x^{\prime}_{n+2+j,n+1}=x_{n+1+j,1}+\sum_{k=2}^{n-1}x_{n+1+j,k}a_{k,1}+\sum_{k= n+1}^{n+j}x_{n+1+j,k}a_{k,1}+a_{n+1+j,1}. \tag{64}\]
Seeing the \((n+1+j,l)\)-th entry \((2\leq l\leq n-j)\), we have
\[x_{n+2+j,l-1}^{\prime}=x_{n+1+j,l}+\sum_{k=l+1}^{n-1}x_{n+1+j,k}a_{k,l}+\sum_{k=n +1}^{n+j}x_{n+1+j,k}a_{k,l}+a_{n+1+j,l}. \tag{65}\]
Seeing the \((n+1+j,l)\)-th entry \((n+1\leq l\leq n+j)\), we have
\[x_{n+2+j,l+1}^{\prime}=x_{n+1+j,l}+\sum_{k=n+1}^{n+j}x_{n+1+j,k}a_{k,l}+a_{n+1 +j,l}. \tag{66}\]
Inductively, these equations determine \(x_{n+2+j,l}\) for \(l=1,\ldots,n-j-1,n+1,\ldots,n+1+j\) uniquely. The equality
\[\langle n+1+j\text{-th row of }gF_{b}(x),n+1+l\text{-th row of }gF_{b}(x)\rangle=0 \tag{67}\]
for \(l=0,\ldots,j-1\) uniquely determines \(x_{n+2+j,l}\) for \(l=n-j,\ldots,n-1\). Therefore, the \((n+2+j)\)-th row of \(x\) is uniquely determined. Finally, we consider the \(2n\)-th row of (62). Looking at the \((2n,1)\)-th entry and the \((2n,n+j)\)-th entries (\(1\leq j\leq n\)), we have
\[\begin{split}& c_{2n,1}=x_{2n,1}+\sum_{k=2}^{n-1}x_{2n,k}a_{k,1}+ \sum_{k=n+1}^{2n-1}x_{2n,k}a_{k,1}+a_{2n,1},\\ &(-1)^{n+1-j}c_{n+1-j,1}=x_{2n,n+j}+\sum_{k=n+j+1}^{2n-1}x_{2n,k} a_{k,n+j}+a_{2n,n+j},\end{split} \tag{68}\]
which determines \(c_{2,1},\ldots,c_{n,1},c_{2n,1}\) uniquely. By construction, these \(x_{i,j}\) and \(c_{i,j}\) determine the elements \(x\) and \(g\) uniquely. Therefore, we have the isomorphism (61).
By (63),...,(68), \(A\in G_{x,0}\) corresponds to \(x\in G_{x,0}\) and \(g\in G_{x,0}\). Therefore, we have
\[(\mathbb{U}^{-}\cap F_{b}^{-1}(\mathbb{U}^{-}))\times(\mathbb{U}^{-}\cap F_{b }(\mathbb{U}))\to\mathbb{U}^{-};(x,g)\mapsto x^{-1}gF_{b}(x). \tag{69}\]
Moreover, by estimating the orders in (63),...,(68), the isomorphism (69) induces the desired isomorphism.
**Remark 4.6.6**.: The variety \(X_{h}\) here is isomorphic to the variety \(X_{r}\) defined in [1, Subsection 6.1] by Proposition 4.6.4. Chan-Oi studied the Deligne-Lusztig induction by these varieties. On the other hand, our \(X_{h}\) satisfies that
\[\varprojlim_{h}X_{h}\simeq\mathcal{L}_{b}^{\text{symp},\text{rat}}\simeq\varprojlim_{r>m}\dot{X}_{w_{r}}^{m}(b)_{\mathcal{L}}\subset\varprojlim_{r>m}\dot{X}_{w_{r}}^{m}(b)\simeq X_{w}^{(U)}(b),\]
i.e., the left-hand side can be regarded as a component of \(X_{w}^{(U)}(b)\). In this sense, we can say that Proposition 4.6.4 translates [1]'s results into the realization of Lusztig's expectation in [10].
| ```
Lusztigの半無限Deligne-Lusztig多様体の構造が、GSp(およびその内部形式)に対して、無限級数レベルの affinedDeligne-Lusztig多様体と同型であることが証明されました。これはChan-Ivanovの結果の一般化である。さらに、いくつかのaffinedDeligne-Lusztig多様体の成分は、古典的なDeligne-Lusztig多様体とAffine空間の直接積であることが示されました。これは、精度まで、これらの多様体を研究しました。ChanとIvanovによって定義された多様体X_rについて、無限級数レベルにおけるX_rは、GSpのケースでも半無限Deligne-Lusztig多様体の成分として解釈できます。この結果により、X_rからの表現に関する以前の研究をLusztigの予想の実現であると解釈することができます。
``` |
2301.03350 | mRpostman: An IMAP Client for R | Internet Message Access Protocol (IMAP) clients are a common feature in
several programming languages. Despite having some packages for electronic
messages retrieval, the R language, until recently, lacked a broader solution,
capable of coping with different IMAP servers and providing a wide spectrum of
features. mRpostman covers most of the IMAP 4rev1 functionalities by
implementing tools for message searching, selective fetching of message
attributes, mailbox management, attachment extraction, and several other IMAP
features that can be executed in virtually any mail provider. By doing so, it
enables users to perform data analysis based on e-mail content. The goal of
this article is to showcase the toolkit provided with the mRpostman package, to
describe its key features and provide some application examples. | Allan V. C. Quadros | 2022-12-11T07:39:59 | http://arxiv.org/abs/2301.03350v1 | # mRpostman: An IMAP Client for R
###### Abstract
Internet Message Access Protocol (IMAP) clients are a common feature in several programming languages. Despite having some packages for electronic messages retrieval, the R language, until recently, lacked a broader solution, capable of coping with different IMAP servers and providing a wide spectrum of features. mRpostman covers most of the IMAP 4rev1 functionalities by implementing tools for message searching, selective fetching of message attributes, mailbox management, attachment extraction, and several other IMAP features that can be executed in virtually any mail provider. By doing so, it enables users to perform data analysis based on e-mail content. The goal of this article is to showcase the toolkit provided with the mRpostman package, to describe its key features and provide some application examples.
keywords: IMAP, e-mail, R
## 1 Motivation and significance
The acknowledgement of the R programming language[1] as having remarkable statistical capabilities is much due to the excellence brought by its statistical and data analysis packages. This reputation also stands on the capabilities of a myriad of utility packages, which extends the use of the language by facilitating the integration of the steps involved in data collection, analysis, and communication. With that in mind, and considering the amount of data transmitted daily through e-mail, mRpostman was conceived to fill the absence of an Internet Message Access Protocol (IMAP) client in the R statistical environment; therefore, providing an appropriate toolkit for electronic messages retrieval, and paving the way for e-mail data analysis in R.
The Comprehensive R Archive Network (CRAN) has at least seven packages for sending emails (Table 1). Whereas some of these packages aim to provide a plain Simple Mail Transfer Protocol (SMTP) client for R (e.g. sendmailR and emayili), others focus on more sophisticated implementations, using Application Programming Interfaces (APIs), or providing seamless integration between SMTP and other R features such as rmarkdown[2]. However, despite the surplus of available clients in R, the SMTP protocol is not suitable for receiving e-mails. It only allows clients to communicate with servers to deliver their messages.
For the purpose of message retrieval, there are the Post Office Protocol 3 (POP3) and the Internet Message Access Protocol (IMAP). In comparison with IMAP, POP3 is a very limited protocol, working as a simple interface for clients to download e-mails from servers. IMAP, on the other hand, is a much more complex protocol, and can be considered as the evolution of POP3, with a very different and broader set of functionalities. In contrast to POP3, all the messages are kept on the IMAP server and not locally. This means that a user can access the same mail account using parallel connections from different clients[3]. Besides the mail folders structure and management, the capacity of issuing sophisticated search queries also contribute to the level of complexity of the IMAP protocol.
In this article, we present a brief view of the main functionalities of the package and its applications.
## 2 Software description
mRpostman is conceived to be an easy-to-use session-based IMAP client for R. The package implements intuitive methods for executing the majority of the IMAP commands described in the Request for Comments 35011, such as mailbox management and the selective search and fetching of message attributes. The package also implements complementary functions for decoding quoted-printable and base 64 content, following the MIME specification2.
Footnote 1: The RFC 3501[12] is a formal document from the Internet Engineering Task Force (IETF) specifying standards for the IMAP, Version 4rev1 (IMAP4rev1).
Footnote 2: The RFC 2047[13] specifies rules for encoding and decoding non-ASCII characters in electronic messages.
All these methods and functions play an important role in facilitating e-mail data analysis. We shall not overlook the amount of data analyses daily performed on e-mail content. The package has proved to be very useful as an additional feature in this workflow by, for instance, enabling the possibility of automating the attachments retrieval step. Also, by fetching other message contents, users are able to apply statistical techniques for analysing the frequency of e-mails with regard to some message aspect, running sentiment analysis on e-mail content, etc.

| package | protocol | mail providers | search queries | message fetch | attachment extraction | mailbox management | active development |
|---|---|---|---|---|---|---|---|
| sendmailR[4] | SMTP | - | - | - | - | - | - |
| mailR[5] | SMTP | - | - | - | - | - | - |
| mail[6] | SMTP | - | - | - | - | - | - |
| blatr[7] | SMTP | - | - | - | - | - | - |
| gmailr[8] | SMTP/IMAP | Gmail | no | limited | limited | no | yes |
| blastula[9] | SMTP | - | - | - | - | - | - |
| emayili[10] | SMTP | - | - | - | - | - | - |
| edeR[11] | IMAP | Gmail | no | limited | no | no | no |
| **mRpostman** | IMAP | all | yes | yes | yes | yes | yes |

Table 1: Comparison of the current available CRAN packages for e-mail communication. The following attributes are evaluated: protocol - the supported protocol (SMTP or IMAP); mail providers - if the IMAP protocol is supported, which mail providers are supported by the package; Features - which type of IMAP features are available in the package; active development - if the package is currently under active development. If the package does not provide IMAP support, the remaining fields do not apply.
Since mRpostman works as a session-based IMAP client, one can think of the provided methods following a natural order in which the steps shall be organised in the event of an IMAP session (Fig. 1). For instance, if the goal is to search messages within a specific period of time and/or containing a specific word, first we need to configure the connection to the IMAP server; then, choose a mail folder where the search is to be performed; and execute the single criteria (left) or the custom multi-criteria search (right). If the user intends to fetch the matched message(s) or its parts, additional fetch steps can be chained to the described schema.
mRpostman is flexible in the sense that the aforementioned steps can be used either under the tidy framework, with pipes[14], or via the conventional base R approach.
Fig. 1: Basic schema for fetching the full content of a message or its parts after a search query.
## 3 Software architecture
The software was designed following the object-oriented framework from the R6 package[15]. A class called ImapCon is implemented to retain and organize the necessary IMAP connection parameters. All the methods that derive from this class will serve one of the two following purposes: to issue a request toward the IMAP server (request methods) or re-configure an existing IMAP connection (reset methods).
In order to execute IMAP commands, this package makes extensive use of the curl[16] R package3. All mRpostman's request methods are built on top of the so-called curl handles. Under the hood, a curl handle consists of a C pointer variable that gathers the necessary parameters to execute a request to the server. As a matter of fact, the handle itself does not issue any command, but is used as a parameter inside a curl's fetch function. This last object is the one that actually triggers the request to the server, ranging from mail folder selection to search queries, or message fetch requests.
Footnote 3: The curl package is a binding for the libcurl[17] C library.
The object-oriented framework combined with the use of one curl handle per session enables mRpostman to elegantly run as a session based IMAP client, without demanding a connection reconfiguration between commands. For example, if a mail folder is selected on the current session, all requests using the same connection token will be performed on the selected folder, unless the user re-selects a different one.
### Software functionalities
#### 3.1.1 Configuring an IMAP connection
As we demonstrated in Fig. 1, the first step for using mRpostman is to configure an IMAP connection. It consists of creating a connection token object of class ImapCon that will retain all the relevant information to issue requests toward the server.
configure_imap is the function used to configure and create a new IMAP connection. The mandatory arguments are three character strings: url, username, and password for plain authentication; or url, username, and xoauth2_bearer for OAuth2.0 authentication4.
Footnote 4: Please refer to the _“IMAP OAuth2.0 authentication in mRpostman”_ vignette in [18].
The following example illustrates how to configure a connection to a Microsoft Exchange IMAP 4 server; more specifically, to an Office 365 Outlook account using plain authentication.
library("mRpostman")
con <- configure_imap(url = "imaps://outlook.office365.com", username = "[email protected]", password = rstudioapi::askForPassword())
We opted for using an Outlook Office 365 account as an example in order to highlight the difference between mRpostman and the other two CRAN packages which, although also capable of receiving e-mails, are restricted to Gmail accounts and fewer IMAP functionalities. Although mRpostman is able to theoretically connect to any mail provider5, the Outlook Office 365 service is broadly used by universities and companies. This enriches the range of data analyses applications of this package, thus justifying our choice.
Footnote 5: Besides Outlook Office 365, the package has been already successfully tested with Gmail, Yahoo, Yandex, AOL, and Hotmail accounts.
In a hypothetical situation where the user needs to simultaneously connect to more than one e-mail account (in different providers or not) in the same R session, it can be easily attained by creating and configuring multiple connection tokens, such as con1, con2, and so on.
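As a minimal sketch of such a setup (the servers and account names below are only placeholders), one could keep two independent tokens:

```
con1 <- configure_imap(url = "imaps://outlook.office365.com",
                       username = "[email protected]",
                       password = rstudioapi::askForPassword())
con2 <- configure_imap(url = "imaps://imap.gmail.com",
                       username = "[email protected]",
                       password = rstudioapi::askForPassword())
# each token keeps its own session state, e.g. its currently selected folder
con1$select_folder(name = "INBOX")
con2$select_folder(name = "INBOX")
```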
#### 3.1.2 Selecting a mail folder
Mailboxes are structured as folders in the IMAP protocol. This allows us to replicate many of the operations done in a local folder such as creating, renaming or deleting folders. As messages are kept inside the mail folders, users need to select one of them whenever they intend to execute a search, fetch or other message-related operation, as presented in Fig. 1.
In this sense, the select_folder method is one of the key features of this package. It selects a mail folder for the current IMAP section. The mandatory argument is a character string containing the name of the folder to be selected.
Supposing that we want to select the "INBOX" folder and considering that we are going to use the same connection object (con) that has been previously created, the command would be:
con$select_folder(name = "INBOX")
Further details on other important mailbox management features are provided in [18].
#### 3.1.3 Message search
The IMAP protocol is designed to allow the execution of single or multi-criteria queries on the mailboxes. This package implements a vast range of
IMAP search commands, which consist of a critical feature for performing data analysis on email content.
As of its version 1.0.0, mRpostman has five types of single-criterion search methods implemented: by date, string, flag, size, and span of time (WITHIN extension)6. The custom-search, on the other hand, enables the execution of multi-criteria queries by allowing the combination of two or more types of search. However, in this article, we will focus on the single-criterion search-by-string type.
Footnote 6: The WITHIN extension is not supported by all IMAP servers. A call to the list_server_capabilities method will present all the IMAP extensions supported by the mail provider[18].
The search_string method searches messages that contain a specific string or expression. One or more specific sections of a message, such as the TEXT section or the TO header field, for example, must be specified.
In the following code snippet, we search for messages from senders whose mail domain is "@ksu.edu".
ids <- con$search_string(expr = "@ksu.edu", where = "FROM")
The resulting object is a vector containing the matched unique ids (UID) or the message sequence numbers7 such as presented below:
Footnote 7: More details on the message identification methodology deployed by the IMAP protocol are provided in [19; 12; 18].
[1] 60 145 147 159 332 333 336 338 341 428
Further details on the other single-search methods and the custom-search method available in this package are provided in [18].
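Although the custom-search method is the package's native way of combining criteria, a rough equivalent can be sketched by intersecting the results of two single-criterion searches in base R; this is not the custom-search interface itself, and the dates and search string below are only illustrative:

```
ids_period <- con$search_period(since_date_char = "01-Nov-2020",
                                before_date_char = "01-Dec-2020")
ids_string <- con$search_string(expr = "@ksu.edu", where = "FROM")
ids_both   <- intersect(ids_period, ids_string)  # ids matching both criteria
```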
#### 3.1.4 Message fetch
After executing a search query, users may be interested in fetching the full content or some part of the messages indicated in the search results. In this regard, mRpostman implements six types of fetch features:
fetch_body Fetches the message body (message's full content), or a specified MIME level, which can refer to the text or the attachments if there are any.
fetch_header Fetches the message header, which comprises all the components of the HEADER section of a message. Besides the traditional ones (from, to, cc, subject), it may include several more fields.
fetch_metadata Fetches the message metadata, which consists of some message's attributes such as the internal date, and the envelope (from, to, cc, and subject fields).
fetch_text Fetches the message text section, which can comprise attachment MIME levels if applicable.
Each of these methods can be seamlessly integrated into a previous search operation so that the returned ids are used as input for the fetch method.
Above all, these methods consist of a powerful source of information for performing data analysis on e-mail content. Here, we mimic the extraction of the TEXT portion of a message. Although there is a fetch_text method, the recommended approach is to use fetch_body(..., mime_level = 1L) because the former may collect attachment parts along with the message text.
out <- ids %>% con$fetch_body(mime_level = 1L)
Once the messages are fetched, the text can be cleaned and decoded with the clean_msg_text helper function. A subsequent call to the writeLines base R function produces a clean printing of the fetched text:
cleaned_text <- clean_msg_text(msg_list = out)
writeLines(cleaned_text[[1]])
Receipt Number: XXXXXXX Customer: Vieira de Castro Quadros, Allan Kansas State University Current Date: 04/15/2020 Description Amount ------------------------------------------------ HOUSING & DINING $30.00 User Number: XXXXXXXXX Total $30.00 Payments Received Amount ------------------------------------------------
07 CREDT CARD PAYMENTS $30.00 Visa XXXXXXXXX8437 Authorization # XXXXXX Total $30.00
Thanks you for the payment.
Besides other applications, the exported function clean_msg_text can be used to decode hexadecimal and base 64 characters in the text and other parts of the message. In some locales such as French, German or Portuguese speaking countries, message parts may contain non-ASCII characters. SMTP servers, then, encode it using the RFC 2047 specifications when sending the e-mail. In these cases, clean_msg_text is capable of correctly decoding the non-ASCII characters.
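As a small illustration — assuming the decode_mime_header() helper used in Appendix A implements this RFC 2047 decoding — an encoded word can be decoded directly:

```
decode_mime_header(string = "=?UTF-8?B?w4l0w6k=?=")
# expected result: the decoded word "Été"
```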
#### 3.1.5 Attachment extraction
In its aim to be an IMAP client for R, mRpostman provides methods that enable users to list and download message payloads. This feature can be particularly critical for automating the analysis of attachment data files, for instance.
Attachments can be downloaded using two different approaches in this package: extending the fetch_text/body operation by adding an attachment extraction step at the end of the workflow with get_attachments; or directly fetching attachment parts via the fetch_attachments method. In this article, we focus on the first type of attachment methods, adding a step to our previous workflow.
The get_attachments method extracts attachment files from the fetched messages and saves these files to the disk. In the following code excerpt, we extract attachments in a unique pipeline that gathers fetching and search steps.
con$search_string(expr = "@ksu.edu", where = "FROM") %>% con$fetch_text() %>% con$get_attachments()
During the execution, the software locally saves the extracted attachments into sub-folders inside the user's working directory. These sub-folders are named following the messages' ids. The attachments are placed into their respective messages' sub-folders as demonstrated in Fig. 2. Note that the parent levels are named after the informed username and the selected mail folder.
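For the second approach mentioned above, a hedged sketch — assuming fetch_attachments() accepts the ids returned by a search in the same way as the other fetch methods — would be:

```
ids <- con$search_string(expr = "@ksu.edu", where = "FROM")
ids %>% con$fetch_attachments()
```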
For more information on the other attachment-related methods, the reader should refer to the documentation in [18].
## 4 Illustrative Examples
To demonstrate the capabilities of the proposed software, we explore two use cases of this package in support of data analysis tasks: a simple study of the frequency of e-mails grouped by senders; and a sentiment analysis run on a set of e-mails received during a period. The R scripts needed for reproducing these examples are provided in the appendixes. Although the results cannot be exactly reproduced once it reflects the author's mailbox contents, they can be easily adapted to the reader's context.
### Frequency analysis of e-mail data
In the first example, we run a simple analysis of the e-mail frequency with regard to senders. This can be especially useful in professional fields, such as marketing and customer service offices. A period of analysis was defined, and a search-by-date is performed using the search_period method. Then, senders' information for the returned ids are fetched via fetch_metadata, using the ENVELOPE attribute. After some basic manipulation with regular expressions, the data is ready to be plotted as shown in Fig. 3.
Figure 2: Local directory tree for the extracted attachment files
The same kind of analysis can be replicated for the messages' subjects with only a few modifications in the regular expressions code chunks. Considering that some companies/users deal with subject-standardized e-mails, this approach can be useful to analyze the frequency of e-mails with regard to different categories of subjects.
### Sentiment analysis on e-mail data
For the sentiment analysis example, we also define a period of analysis and run a search_period query. Then, we retrieve the text part of the messages by fetching the first MIME level with fetch_body(..., mime_level = 1L). The texts go through a first cleaning step with a call to the clean_msg_text function. After further cleaning procedures, we use a lexicon[20] via the syuzhet package[21] to evaluate the sentiment of each e-mail. The output below is a subset of the resulting data frame. The last two columns indicate, respectively, the counts of negative and positive words for each message. The other columns provide counts related to detailed emotions, which are not necessarily positive nor negative.
Figure 3: An example of e-mail frequency analysis grouped by sender
(first rows of email_sentiment_df: one row per message, with counts for the NRC emotion categories followed by the negative and positive columns)
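A simple follow-up on this data frame (using the email_sentiment_df object built in Appendix B) is to reduce each message to a net polarity score:

```
polarity <- email_sentiment_df$positive - email_sentiment_df$negative
summary(polarity)
# messages with the most negative net tone first
head(email_sentiment_df[order(polarity), c("negative", "positive")])
```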
## 5 Impact
As we have demonstrated, mRpostman clearly fills an existing gap by providing a broad, complete, and, at the same time, easy-to-use IMAP client for the R language. The package has consolidated itself as an important tool for collecting massive e-mail content, thus contributing to data analysis tasks in R.
Although all sorts of users have been taking advantage of this package, we are inclined to think that its use has been prevailing among companies. We have received a considerable amount of feedback from enterprise users who deploy mRpostman as an additional feature for automatically producing daily reports based on attachment data files. Besides this, there are important applications for marketing and post-sales departments, for example. They can also deploy this package to collect e-mail data for analyzing e-mail frequency or performing sentiment analysis, as we have demonstrated in Section 4.
## 6 Conclusions
mRpostman aims to provide an easy-to-use IMAP client for R. Its design allows the efficient, elegant, and intuitive execution of several IMAP commands on a wide range of mail providers. Consequently, users cannot only manage their mailboxes but also conduct e-mail data analysis from inside R. Finally, because IMAP is such a complex protocol, this package is in constant development, which means that new features are to be implemented in future versions.
## 7 Conflict of Interest
No conflict of interest exists: We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
## Acknowledgements
The author would like to acknowledge the Department of Statistics at Kansas State University (K-State) for the assistantship provided for his doctorate studies. He wants to especially thank Dr. Christopher Vahl and Dr.
Michael Higgins for the academic support. The author also acknowledges the academic guidance of Dr. George von Borries at the University of Brasilia (UnB). The contents of this article are the responsibility of the author and do not reflect the views of K-State or UnB.
## Appendix A Code for example 1
library(mRpostman)

con <- configure_imap(
  url = "imaps://outlook.office365.com",
  username = "[email protected]",
  password = rstudioapi::askForPassword()
)

con$select_folder(name = "INBOX")

meta_res <- con$search_period(since_date_char = "01-Nov-2020",
                              before_date_char = "01-Dec-2020") %>%
  con$fetch_metadata(attribute = "ENVELOPE")

# cleaning
# step 1: keep the portion of the ENVELOPE starting at the sender address block
clean_meta <- lapply(meta_res, function(x){
  regmatches(x, regexpr(pattern = "\\(\\((.*\"(.*?)\"\\)\\)", x, perl = TRUE))
})

# step 2: drop the closing parentheses and trailing fields (e.g. Ccs)
senders1 <- lapply(clean_meta, function(x){
  gsub(")) NIL.*$|)).*$|))$", "", x)
})

# step 3: drop leading parentheses/quotes
senders1 <- lapply(senders1, function(x){
  gsub("\\(\\(\\(|^\"+", "", x)
})

# splitting into name and e-mail
name <- c()
email <- c()
for (i in seq_along(senders1)) {
  out <- unlist(strsplit(senders1[[i]], " NIL "))
  name <- c(name, out[1])
  email <- c(email, gsub(" ", "@", out[2]))
}

df <- data.frame(name, email)
df$name <- decode_mime_header(string = as.character(df$name))

df2 <- as.data.frame(table(df$email))
colnames(df2) <- c("email", "count")
df2 <- df2[order(-df2[, 2]), ][1:5, ]
df2$name <- unique(df$name[df$email %in% df2$email])

par(mar = c(5, 13, 4, 1) + .1)
pal_cols <- c('#3B4992FF', '#EE0000FF', '#008B45FF', '#631879FF', '#008280FF')
barplot(rev(df2$count), main = "E-mail Frequency (by sender)",
        xlab = "count", names.arg = rev(df2$email), las = 1,
        col = pal_cols, horiz = TRUE)
mysubtitle <- "Period: 01-Nov to 01-Dec-2020"
legend(x = "bottomright", legend = df2$name, fill = rev(pal_cols),
       bty = "n", y.intersp = 1)
mtext(side = 3, line = 0.3, at = -0.07, adj = 0, cex = 0.9, mysubtitle)
## Appendix B Code for example 2
library(mRpostman)

con <- configure_imap(
  url = "imaps://outlook.office365.com",
  username = "[email protected]",
  password = rstudioapi::askForPassword(),
  timeout_ms = 20000
)

con$select_folder("INBOX")

ids <- con$search_period(since_date_char = "10-Oct-2020",
                         before_date_char = "20-Dec-2020")

fetch_res2 <- ids %>% con$fetch_body(mime_level = 1L)

cleaned_text_list <- clean_msg_text(msg_list = fetch_res2)
cleaned_text_list[[4]]

# further cleaning: keep only lower-case alphabetic words
for (i in seq_along(cleaned_text_list)) {
  clean_text <- gsub("\n", "", cleaned_text_list[[i]])
  clean_text <- unlist(strsplit(clean_text, " "))
  words <- clean_text[!grepl("\\n|\\d|_|http|www|nbsp|@|(?<=[[:lower:]])(?=[[:upper:]])",
                             clean_text, perl = TRUE)]
  words <- tolower(gsub("\n", "", words))
  words <- gsub('[^a-zA-Z[:blank:]]', "", words)
  cleaned_text_list[[i]] <- paste(words, collapse = " ")
}

cleaned_text_df <- do.call('rbind', cleaned_text_list)

library(syuzhet)
email_sentiment_df <- get_nrc_sentiment(cleaned_text_df)
rownames(email_sentiment_df) <- rownames(cleaned_text_df)
head(email_sentiment_df, 10)
| Internet Message Access Protocol (IMAP)クライアントは、いくつかのプログラミング言語において、一般的な機能です。電子メッセージの取得用のパッケージは存在しますが、R言語は、最近まで、幅広い解決策を欠き、異なるIMAPサーバーに対応できず、さまざまな機能を提供していませんでした。mRpostmanは、メッセージの検索、メッセージ属性の選択的取得、メールボックスの管理、添付ファイルの抽出、およびその他のIMAP機能を、 virtually any mail providerで実行可能なツールを通じて、IMAP 4rev1の多くの機能をカバーしています。これにより、ユーザーは、メールコンテンツに基づいたデータ分析を行うことができます。この論文の目的は、mRpostmanパッケージのツールを紹介することで、その主要な機能を説明し、いくつかのアプリケーション例を提供することです。 |
2305.19827 | Bose Gas Modeling of the Schwarzschild Black Hole Thermodynamics | Black holes violate the third law of thermodynamics, and this gives rise to
difficulties with the microscopic description of the entropy of black holes.
Recently, it has been shown that the microscopic description of the
Schwarzschild black hole thermodynamics in $D = 4$ spacetime dimensions is
provided by the analytical continuation of the entropy of Bose gas with
non-relativistic one particle energy to d =-4 negative spatial dimension. In
this paper, we show that the D=5 and D=6 Schwarzschild black holes
thermodynamics can be modeled by the d-dimensional Bose gas, d=1,2,3..., with
the one particle energy $\varepsilon(k)=k^\alpha$ under conditions
$\alpha=-d/3$ and $\alpha=-d/4$, respectively. In these cases the free energy
of the Bose gas has divergences and we introduce a cut-off and perform the
minimal renormalizations. We also perform renormalizations using analytical
regularization and prove that the minimal cut-off renormalization gives the
same answer as the analytical regularization by the Riemann zeta-function. | I. Ya. Aref'eva, I. V. Volovich | 2023-05-31T13:08:34 | http://arxiv.org/abs/2305.19827v1 | # Bose Gas Modeling of the Schwarzschild Black Hole Thermodynamics
###### Abstract
Black holes violate the third law of thermodynamics, and this gives rise to difficulties with the microscopic description of the entropy of black holes. Recently, it has been shown that the microscopic description of the Schwarzschild black hole thermodynamics in \(D=4\) spacetime dimensions is provided by the analytical continuation of the entropy of Bose gas with non-relativistic one particle energy to \(d=-4\) negative spatial dimension. In this paper, we show that the \(D=5\) and \(D=6\) Schwarzschild black holes thermodynamics can be modeled by the d-dimensional Bose gas, \(d=1,2,3...\), with the one particle energy \(\varepsilon(k)=k^{\alpha}\) under conditions \(\alpha=-d/3\) and \(\alpha=-d/4\), respectively. In these cases the free energy of the Bose gas has divergences and we introduce a cut-off and perform the minimal renormalizations. We also perform renormalizations using analytical regularization and prove that the minimal cut-off renormalization gives the same answer as the analytical regularization by the Riemann zeta-function.
## 1 Introduction
The problem with the microscopic origin of the Bekenstein-Hawking entropy [1; 2] for the Schwarzschild black holes is that black holes do not satisfy the third law of thermodynamics in its standard formulation. Therefore, such exotic thermodynamic behaviour of black holes cannot be obtained by using ordinary quantum statistical mechanics models, which obey the third law; see the discussion and references in [3].
In [3] we have shown that the entropy of the \(D=4\) Schwarzschild black hole
\[S_{BH}=\frac{\beta^{2}}{16\pi},\qquad\beta=\frac{1}{T}\,, \tag{1}\]
where \(T\) is the temperature, corresponds to the Bose gas in \(d=-\,4\)_negative_ spatial dimensions. This conclusion is obtained by using properties of the Riemann zeta function. The entropy of the Bose gas in \(d\)-dimensional space is proportional to
\[S_{BG}\sim\left(\frac{d}{2}+1\right)\zeta\left(\frac{d}{2}+1\right)\beta^{- \frac{d}{2}}\,, \tag{2}\]
where \(\zeta\) is the Riemann zeta function. The expression (2) admits the analytical continuation for complex \(d\), in particular for \(d=-4\) we have
\[S_{BG}\sim\beta^{2}, \tag{3}\]
therefore, we get the entropy of the \(D=4\) Schwarzschild black hole. Note that the proportionality factor is a positive number and there are no divergences in this calculation.
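Indeed, using \(\zeta(-1)=-1/12\), the factor displayed in (2) at \(d=-4\) equals
\[\left(\frac{d}{2}+1\right)\zeta\left(\frac{d}{2}+1\right)\Big|_{d=-4}=(-1)\cdot\zeta(-1)=\frac{1}{12}>0.\]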
In this paper we show that some higher-dimensional black holes can be described using the Bose gas in positive dimensions. However, in these cases there are divergences that should be renormalized. We consider the \(d\)-dimensional Bose gas with the kinetic term \(k^{\alpha}\); in this case the free energy \(F_{BG}\) is proportional to
\[F_{BG}\sim I(-\frac{d}{\alpha})\,\beta^{-1-d/\alpha}, \tag{4}\]
where
\[I(s)=\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\,\frac{dx}{x^{1+s}}. \tag{5}\]
Of particular interest to us is the case with \(d/\alpha=2-D\), since in this case we get
\[F_{BG}\sim I(D-2)\,\beta^{D-3}, \tag{6}\]
which coincides with the Schwarzschild black hole dependence of the free energy on the inverse temperature \(\beta\), \(F_{BH}\sim\beta^{D-3}\). However, the integral \(I(s)\) diverges for \(s\geq 0\), and the formula (4) has no immediate meaning. To cure the formula (4) we introduce a regularization in (5) and then perform renormalizations. We consider two possible regularizations of the integral in (5): cut-off regularization and analytical regularization [4]. In both cases we perform minimal subtractions and define \(I_{ren}\) and \(\mathcal{I}_{ren}\) in the first and second cases, respectively. We prove that both regularizations give the same answer, which amounts to the validity of the identity (10) presented in Sect. 5. In particular, the \(D=5\) and \(D=6\) black hole spacetime dimensions correspond to the Bose gas model with \(d/\alpha=-3\) and \(d/\alpha=-4\), respectively.
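For instance, for a three-dimensional gas (\(d=3\)) these two cases correspond to the one-particle energies \(\varepsilon(k)\propto k^{-1}\) and \(\varepsilon(k)\propto k^{-3/4}\), respectively.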
The paper is organized as follows. In Sect. 2 the Bose gas model with a non-standard kinetic term is presented and two possible schemes of free energy renormalization are mentioned. In Sect. 3 the cut-off regularization is introduced and the minimal renormalization is performed. In Sect. 4 the analytical regularization is introduced and its minimal renormalization is presented. In Sect. 5 the equivalence of the cut-off minimal renormalization and the minimal analytical renormalization is proved. In Sect. 6 a few explicit examples are presented, and we conclude in Sect. 7 with a discussion of the obtained results.
Setup
We consider the Bose gas with kinetic term \(\lambda(\vec{k},\vec{k})^{\alpha/2}\). In d-dimensional case the free energy is [5; 6]
\[F_{BG}=\frac{\Omega_{d-1}}{\beta}\left(\frac{L}{2\pi}\right)^{d}\int_{0}^{\infty }\ln\left(1-e^{-\beta\,\lambda\,k^{\alpha}}\right)\,k^{d-1}dk, \tag{1}\]
where \(\Omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)\) and \(\beta,\lambda,\alpha,L\) are positive constants, \(d=1,2,3,...\). By changing the variable
\[k=\left(\frac{x}{\beta\lambda}\right)^{1/\alpha}, \tag{2}\]
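Explicitly, under this substitution the measure transforms as
\[k^{d-1}\,dk=\frac{1}{\alpha}\,(\beta\lambda)^{-d/\alpha}\,x^{\frac{d}{\alpha}-1}\,dx;\]
hence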
we get
\[F_{BG}=\frac{\Omega_{d-1}}{\alpha\beta}\left(\frac{L}{2\pi}\right)^{d}\left( \frac{1}{\beta\lambda}\right)^{d/\alpha}I(-\frac{d}{\alpha}), \tag{3}\]
where
\[I(s)=\,\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\,\frac{dx}{x^{1+s}}. \tag{4}\]
For \(d=1,2,3,...\) and \(\alpha>0\) the integral in (4) converges and
\[I(s)=-\Gamma(-s)\zeta(1-s),\quad\mathfrak{R}s<-1. \tag{5}\]
However, as has been mentioned in Introduction, the integral in (4) diverges for \(s\geq 0\). To give a meaning for this formula for \(s\geq 0\) we introduce regularizations. We consider two regularizations: cut-off regularization and analytical regularization. We performed minimal subtractions and define \(I_{ren}\) and \(\mathcal{I}_{ren}\) in the first and second cases, respectively. Below we schematically describe both of them.
* Cut-off regularizations. In this case we start from \[I(s,a)\equiv\,\int_{a}^{\infty}\ln\left(1-e^{-x}\right)\frac{dx}{x^{1+s}},\,a >0.\] (6) We find a singular part of the asymptotics of the integral \(I(s,a)\) as \(a\to 0\) in the form \[S(s,a)=\sum_{i\geq 0}A_{i}\frac{\log a}{a^{i}}+\sum_{i\geq 1}C_{i}\frac{1}{a^{i}}.\] (7) Then we subtract this singular part \(S(s,a)\) \[I_{ren}(s,a)=I(s,a)-S(s,a),\] (8) and finally remove the regularisation \[I_{ren}(s)=\lim_{a\to 0}I_{ren}(s,a).\] (9)
* Analytical regularization. In this case we start from the following representation \[I(s)=\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\frac{dx}{x^{1+s}}=-\,\Gamma(-s) \,\zeta(-s+1),\quad\mathfrak{R}s<0\] (10) However, the right-hand side of (10) is well defined for all \(s\neq 0\) and \(s\neq n\), here \(n\in\mathbb{Z}_{+}\) and we denote it by \(\mathcal{I}(s)\), \[\mathcal{I}(s)=-\,\Gamma(-s)\,\zeta(-s+1).\] (11) The function \(\mathcal{I}(s)\) given by (11) is a meromorphic function for \(s\in\mathbb{C}\). It has poles at \(s=n>0\) and a double pole at \(n=0\). We define \(\mathcal{I}_{ren}(n)\) as \[\mathcal{I}_{ren}(n) \equiv \lim_{\epsilon\to 0}\left[-\Gamma(-n+\epsilon)\zeta(1-n+ \epsilon)-\text{Pole Part}\left[(-\Gamma(-n+\epsilon)\zeta(1-n+\epsilon) ]\right]\right]\] (12) \[\text{at point}\quad n=1,2,3,...\] and \[\mathcal{I}_{ren}(0) \equiv \lim_{\epsilon\to 0}\left[-\Gamma(\epsilon)\zeta(1+ \epsilon)-\text{Double Pole Part}\left[(-\Gamma(\epsilon)\zeta(1+\epsilon) \right]\right]\] (13) \[\mathcal{I}_{ren}(s) \equiv \mathcal{I},\quad s>0,s\neq\mathbb{Z}_{+}\,.\] (14)
* In what follows we prove that \[\mathcal{I}_{ren}(n) = I_{ren}(n),\] (15) \[\mathcal{I}(s) = I_{ren}(s),\quad s\neq n\] (16) The detail definitions of \(I_{ren}(n)\) and \(\mathcal{I}_{ren}(n)\) will be given in Sect.3 and Sect.4, respectively. In Sect. 5 we show the equivalence of these two forms of renormalizations, i.e. validity of (15) and (16).
Cut-off renormalization
In this section we present the explicit form of the renormalized version of (6) after the minimal renormalization. We distinguish two cases: integer and non-integer \(s\geq 0\).
* For \(s=n\), \(n=0,1,2,...\), the following proposition holds.
**Proposition 1.**_The renormalized version of (6) after minimal renormalizations defined by (9) is given by_
\[I_{ren}(n) = \int_{0}^{1}\frac{1}{x^{n+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}\,dx \tag{3.1}\] \[- \frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n}+\int_{1}^{\infty}\frac{1}{x^{n+1}}\ln\Big{(}1-e^{-x}\Big{)}dx,\quad n>0;\] \[I_{ren}(0) = \int_{0}^{1}\frac{1}{x}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}\,dx+\int_{1}^{\infty}\frac{1}{x}\ln\Big{(}1-e^{-x}\Big{)}dx. \tag{3.2}\]
* For \(s\neq 0\), \(s\neq n\in\mathbb{Z}_{+}\), the following proposition holds.
**Proposition 1\({}^{\prime}\).**_The renormalized version of (6) after minimal renormalizations is_
\[I_{ren}(s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}dx \tag{3.3}\] \[- \frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{c_{k}}{k-s}+\int_{1}^{\infty}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx\,,\] \[n(s) = \text{Entier}[s],\,\text{i.e. the integer part of }\,s. \tag{3.4}\]
**Remark.** The formula (3.2) can be considered as a generalization of the Chebyshev formula for the zeta-function, see [7; 8].
To prove these propositions we present \(I(s,a)\) given by (6) as
\[I(s,a)=I(s,a,1)+I(s,1,\infty), \tag{3.5}\]
where
\[I(s,a,1) = \int_{a}^{1}\ln\Big{(}1-e^{-x}\Big{)}\,\frac{dx}{x^{1+s}},\qquad a<1 \tag{3.6}\] \[I(s,1,\infty) = \int_{1}^{\infty}\ln\Big{(}1-e^{-x}\Big{)}\,\frac{dx}{x^{1+s}}. \tag{3.7}\]
We expand the integrand in (3.6) in a power series near \(x=0\). We have
\[\ln\Big{(}1-e^{-x}\Big{)}=\log(x)+\sum_{k=1}^{\infty}c_{k}x^{k}, \tag{3.8}\]
where the coefficients \(c_{k}\) are related to the Bernoulli numbers \(B_{k}\), see Appendix A,
\[c_{k}=\frac{1}{k\,k!}\,B_{k} \tag{3.9}\]
and we have
\[\frac{1}{x^{1+s}}\ln\left(1-e^{-x}\right)=\frac{1}{x^{1+s}}\log(x)+\sum_{k=1}^{ n(s)}c_{k}x^{k-1-s}+\sum_{k=n(s)+1}^{\infty}c_{k}x^{k-1-s} \tag{3.10}\]
We take \(n(s)=E[s]\), where \(E[s]\) is the integer part of \(s\). Therefore, in the first sum in the RHS of (3.10) all terms have power less than \(-1\) and, after integrating the equality (3.10) over the interval \((a,1)\), give rise to singular terms as \(a\to 0\). Let us find these singular terms explicitly, first for \(s=n\).
* \(s=n\). We have \[I(n,a,1) = \int_{a}^{1}\Bigl{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1} ^{n}c_{k}x^{k}\Bigr{]}\frac{dx}{x^{1+n}}+\int_{a}^{1}\frac{\log x}{x^{1+n}}dx +\sum_{k=1}^{n}c_{k}\int_{a}^{1}\frac{dx}{x^{1+n-k}}\] (3.11) \[= \int_{a}^{1}\frac{1}{x^{1+n}}\Biggl{[}\ln\left(\frac{1-e^{-x}}{x }\right)-\sum_{k=1}^{n}c_{k}x^{k}\Biggr{]}\,dx\] \[+ \frac{1}{n^{2}a^{n}}+\frac{\log a}{na^{n}}-c_{n}\log a-\frac{1}{n ^{2}}+\sum_{k=1}^{n-1}c_{k}\Biggl{[}\frac{1}{k-n}-\frac{a^{k-n}}{k-n}\Biggr{]}\] (3.12) This identity gives the representation \[I(n,a,1)=S(n,a)+F(n)+\mathcal{O}(a),\] (3.13) where \(S(a,n)\) includes all singular terms at \(a\to 0\) \[S(n,a)=\frac{\log a}{na^{n}}+\frac{1}{n^{2}a^{n}}-c_{n}\log a-\sum_{k=1}^{n-1 }c_{k}\frac{a^{k-n}}{k-n},\] (3.14)
\(F(n)\) is the finite part that contains the limit at \(a\to 0\) of the convergent integral in the line (3.11) and two terms from the line (3.12)
\[-\frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n} \tag{3.15}\]
The representation (3.13) gives the statement of Proposition 1.
* For arbitrary \(s>0\) and \(s=n+\delta\), \(0<\delta<1\) we have
\[I(s,a,1) = \int_{a}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}+\int_{a}^{1}\frac{\log x}{x^{1+s}}dx+\sum_{k=1}^{n(s)}c_{k}\int_{a}^{1}\frac{dx}{x^{1+s-k}} \tag{3.16}\] \[= \int_{a}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}\] (3.17) \[+ \frac{1}{s^{2}a^{s}}+\frac{\log(a)}{sa^{s}}-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}c_{k}\Bigg{[}\frac{1}{k-s}-\frac{a^{k-s}}{k-s}\Bigg{]},\]
\(n(s)\) is the integer part of \(s\). This identity gives representation
\[I(s,a,1)=S(s,a)+F(s)+\mathcal{O}(a), \tag{3.18}\]
where \(S(s,a)\) includes all singular terms at \(a\to 0\)
\[S(s,a)=\frac{1}{(s)^{2}a^{s}}+\frac{\log(a)}{sa^{s}}+\sum_{k=1}^{n(s)}c_{k} \left[-\frac{a^{k-n-\delta}}{k-n-\delta}\right]. \tag{3.19}\]
Several terms contribute to \(F(s)\). The integral in line (3.16) converges as \(a\to 0\) and contributes to the finite part \(F(s)\). Two terms
\[-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{1}{k-s}c_{k} \tag{3.20}\]
also contribute to the finite part \(F(s)\), and we get
\[F(s)=\int_{0}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{n(s)}c _{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{1}{ k-s}c_{k} \tag{3.21}\]
Subtracting \(S(s,a)\) and removing the regularization, we obtain the proof of Proposition \(1^{\prime}\).
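To illustrate the minimal subtraction numerically (an illustration added here, not contained in the original text; it assumes \(c_{1}=B_{1}/(1\cdot 1!)=-1/2\) from (3.9) and (A.2)), one can check with mpmath that \(I(1,a,1)-S(1,a)\) indeed approaches a finite limit as \(a\to 0\):

```python
from mpmath import mp, quad, log, exp, mpf

mp.dps = 20
c1 = mpf(-1) / 2                               # c_1 = B_1/(1*1!) = -1/2

def I_cut(a):                                  # I(1, a, 1), cf. (3.6) with s = 1
    return quad(lambda x: log(1 - exp(-x)) / x**2, [a, 1])

def S(a):                                      # singular part (3.14) for n = 1
    return log(a) / a + 1 / a - c1 * log(a)

for a in (mpf('1e-2'), mpf('1e-3'), mpf('1e-4')):
    print(a, I_cut(a) - S(a))                  # tends to the finite part F(1) as a -> 0
```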
## 4 Analytical renormalization
In this section we present the explicit form of the renormalized version of (6) after the analytical renormalization. As in Sect. 3 we distinguish two cases: integer and non-integer \(s\geq 0\).
* For \(s=n\), \(n=0,1,2,...\) the following proposition holds.
**Proposition 2.**_The renormalized version of (10) after the analytical renormalization defined by (12) is given by_
\[\mathcal{I}_{ren}(n)=-\left\{\begin{array}{cc}\frac{(-1)^{n}}{n!}\left[ \zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n) \right],&n=1,2,3...\\ \\ \frac{1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2}\right),&n=0\end{array}\right. \tag{4.1}\]
To prove this Proposition we follow the definition (12), taking \(s=n-\epsilon\) for \(n\neq 0\), and the definition (13) for \(n=0\). We have
\[\Gamma(-s)\,\zeta(-s+1)=\Gamma(\epsilon-n)\zeta(1-n+\epsilon)\] \[= \frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^{n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n)\right]+\mathcal{O}(\epsilon) \tag{4.2}\]
and for \(n=0\) we have
\[-\Gamma(-\epsilon)\zeta(1-\epsilon)=\frac{1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2}\right)-\frac{1}{\epsilon^{2}}+\mathcal{O}(\epsilon), \tag{4.3}\]
where \(\gamma\) is the Euler-Mascheroni constant, \(\gamma=0.577...\), and \(\gamma_{1}\) is the Stieltjes constant, \(\gamma_{1}=-0.0728...\). Subtracting the pole in (4.2) and the double pole in (4.3) we get the first line and the second line in (4.1), respectively.
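The subtraction in (12) is easy to reproduce numerically (a cross-check added here, not part of the original text, using mpmath): for small \(\epsilon\), the quantity \(-\Gamma(\epsilon-n)\zeta(1-n+\epsilon)\) minus its pole part read off from (4.2) already reproduces the finite values listed in Appendix B.

```python
from mpmath import mp, mpf, gamma, zeta, factorial

mp.dps = 30
eps = mpf('1e-8')
for n in (1, 2, 3):                                    # n = 0 requires the double-pole subtraction (13)
    lhs = -gamma(eps - n) * zeta(1 - n + eps)          # regularized -Gamma(-s)*zeta(1-s) at s = n - eps
    pole = -(-1)**n / factorial(n) * zeta(1 - n) / eps # pole part from (4.2)
    print(n, lhs - pole)                               # ~ -1.13033, 0.12116, -0.0050747 (Appendix B)
```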
**Proposition \(2^{\prime}\).**_The analytical regularization for \(s\notin\mathbb{Z}\) gives directly the finite answer \(\mathcal{I}(s)\)_.
The proof follows immediately from the form of \(\mathcal{I}(s)\) given by (11).
**Remark.** We have considered here the integral (4) as a whole. Note, however, that the integral (4) is equal to the product of the gamma function and the zeta function, and in fact the divergences occur only in the gamma function. One can therefore carry out the renormalization of the gamma function alone and obtain similar results.
In this case, instead of the expression (4.1), we get
\[\mathcal{I}_{ren,\Gamma}(n)=\frac{(-1)^{n}}{n!}\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n) \tag{4.4}\]
Using the expression of \(\zeta(1-n)\) in terms of the Bernoulli numbers (see Appendix B) we get
\[\mathcal{I}_{ren,\Gamma}(n)=\frac{B_{n}}{n!\,n}\left(\gamma-\sum_{k=1}^{n}\frac{1}{k}\right) \tag{4.5}\]
## 5 Equivalence of cut-off and analytical renormalizations
In this Section we prove that the renormalized free energies defined by the cut-off renormalization (9) and the analytical regularization (12)-(14) coincide. We distinguish three cases: \(s=n\neq 0\), \(s=0\) and \(s\neq 0,n\in\mathbb{Z}_{+}\).
**Proposition 3**. _The minimal renormalized free energy (9) for \(s=n\neq 0\) and the analytic renormalized free energy (12) coincide_
\[I_{ren}(n)=\mathcal{I}_{ren}(n). \tag{5.1}\]
_Explicitly (5.1) means the validity of the following identity_
\[\int_{1}^{\infty}\frac{1}{x^{n+1}}\ln\Big{(}1-e^{-x}\Big{)}dx+\int_{0}^{1}\frac{1}{x^{n+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}dx\] \[\qquad-\frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n}\] \[=\,-\frac{(-1)^{n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n)\right],\quad n=1,2,..., \tag{5.2}\]
_for_
\[c_{k}=-\frac{(-1)^{k}}{k!}\,\zeta(1-k)=\frac{B_{k}}{k\,k!},\qquad k=1,2,3,...\,. \tag{5.3}\]
**Proof.** Let us consider the function \(\psi(n,s)\) of the variable \(s\), depending on the integer parameter \(n\), \(n>0\), defined for \(\mathfrak{R}s<n+1\) as
\[\psi(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-s}+\int_{1}^{ \infty}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx \tag{5.4}\] \[+ \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}dx.\]
According to Proposition 1,
\[\psi(n,n)=I_{ren}(n). \tag{5.5}\]
For \(s<0\) the integral
\[\int_{0}^{1}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}\,dx \tag{5.6}\]
converges and after rearrangement of the terms in the RHS of (5.4) we can rewrite \(\psi(n,s)\) as
\[\psi(n,s)=H(n,s)-\,\Gamma(-s)\,\zeta(-s+1)-T(n,s),\quad s<0, \tag{5.7}\]
where
\[H(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-s},\] \[T(n,s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln x+\sum_{k=1}^{n}c_{k}x^{ k}\Big{]}dx. \tag{5.8}\]
Evaluating \(T(n,s)\) for \(\mathfrak{R}s<0\) we get
\[T(n,s)=-\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s} \tag{5.9}\]
and the RHS of (5.4) becomes equal to
\[-\Gamma(-s)\,\zeta(-s+1)-\frac{c_{n}}{n-s}, \tag{5.10}\]
which is a meromorphic function of the variable \(s\) on the whole of \(\mathbb{C}\). On one side, due to equation (5.5), at \(s=n\) this function coincides with \(I_{ren}(n)\); on the other side, it can be evaluated in the following way.
First note that the pole in (5.10) is exactly the pole that has to be subtracted in the analytical renormalization defined in (12). For this purpose we take \(s=n-\epsilon\) and consider \(\Gamma(-s)\,\zeta(-s+1)\) for small \(\epsilon\). Due to (B.8) in Appendix B we have
\[\Gamma(-s)\,\zeta(-s+1)=\Gamma(\epsilon-n)\zeta(1-n+\epsilon)\] \[=\frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^ {n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k} \right)\zeta(1-n)\right]+\mathcal{O}(\epsilon) \tag{5.11}\]
Therefore, to check that the pole in (5.10) is exactly the pole that we have in (5.11), we have to check that
\[c_{n}=-\frac{(-1)^{n}}{n!}\,\zeta(1-n) \tag{5.12}\]
The proof of equation (5.12) follows from the representation of \(\zeta(-n)\) in terms of the Bernoulli numbers, see (B.1) in Appendix B: we have
\[\zeta(1-k)=\frac{(-1)^{1-k}\,B_{k}}{k}. \tag{5.13}\]
Due to (5.13) the RHS of (5.12) is
\[-\frac{(-1)^{n}}{n!}\,\zeta(1-n)=-\frac{(-1)^{n}}{n!}\,\frac{(-1)^{1-n}\,B_{n }}{n}=\frac{1}{n\,n!}\,B_{n} \tag{5.14}\]
and the obtained expression coincides with definition (3.9) of \(c_{k}\).
Also from (5.11) we get
\[\mathcal{I}_{ren}(n)=-\frac{(-1)^{n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\,\zeta(1-n)\right], \tag{5.15}\]
in agreement with (4.1); this completes the proof of Proposition 3.
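A direct numerical check of the identity (5.2) (our own cross-check, not part of the paper; it uses mpmath and the conventions \(c_{k}=B_{k}/(k\,k!)\), \(B_{1}=-1/2\) of Appendix A):

```python
from mpmath import mp, quad, log, exp, zeta, bernoulli, euler, harmonic, factorial, inf

mp.dps = 25

def c(k):                                   # c_k = B_k/(k*k!), eq. (3.9)
    return bernoulli(k) / (k * factorial(k))

def lhs(n):                                 # left-hand side of (5.2), i.e. I_ren(n) of (3.1)
    reg = lambda x: (log((1 - exp(-x)) / x) - sum(c(k) * x**k for k in range(1, n + 1))) / x**(n + 1)
    tail = lambda x: log(1 - exp(-x)) / x**(n + 1)
    return quad(reg, [0, 1]) + quad(tail, [1, inf]) - 1 / mp.mpf(n)**2 + sum(c(k) / (k - n) for k in range(1, n))

def rhs(n):                                 # right-hand side of (5.2), cf. (4.1)
    return -(-1)**n / factorial(n) * (zeta(1 - n, derivative=1) + (harmonic(n) - euler) * zeta(1 - n))

for n in (1, 2, 3):
    print(n, lhs(n), rhs(n))                # the columns agree with the values listed at the end of Appendix B
```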
**Proposition \(3^{\prime}\)**. _The minimal renormalized free energy (3.1) and the analytic renormalized free energy (14) coincide,_
\[I_{ren}(s)=\mathcal{I}(s),\quad\text{for}\quad s>0\quad\text{and}\quad s\neq n\in\mathbb{Z}_{+}. \tag{5.16}\]
_Explicitly (5.16) means the validity of the following identity_
\[\int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}dx+\int_{1}^{\infty}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx\] \[\qquad-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{c_{k}}{k-s}\] \[=-\,\Gamma(-s)\,\zeta(-s+1),\quad n(s)=E(s),\ \text{the integer part of}\ s. \tag{5.17}\]
To prove the identity (5.17) we consider the function \(\psi(n,s)\), \(s<n+1\),
\[\psi(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s}+\int_{1}^{\infty }\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx \tag{5.18}\] \[+ \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}dx.\]
From the Proposition 1\({}^{\prime}\) we see that
\[\psi(n(s),s)=I_{ren}(s). \tag{5.19}\]
From the other side, for \(\psi(n,s)\) at \(\mathfrak{R}s<0\) we can write the representation
\[\psi(n,s)=-\,\Gamma(-s)\,\zeta(-s+1),\quad\mathfrak{R}s<0. \tag{5.20}\]
Indeed, for \(s<0\) we rearrange the terms in (5.18) and get
\[\psi(n,s)=H(s,n)-\,\Gamma(-s)\,\zeta(-s+1)-T(n,s), \tag{5.21}\]
where
\[H(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s},\] \[T(n,s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln x+\sum_{k=1}^{n}c_{k}x^{k }\Big{]}dx. \tag{5.22}\]
Evaluating \(T(n,s)\) for \(\mathfrak{R}s<0\) we get
\[T(n,s)=-\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s} \tag{5.23}\]
and \(T(n,s)\) cancels \(H(n,s)\), so we get (5.20). From the uniqueness of analytic continuation we get (5.17).
## 6 Examples
In this Section we consider a few examples of specific values of \(d,\,D,\,\alpha\) which provide the Bose gas interpretations of the Schwarzschild black hole thermodynamics. For any \(D=4,5,6,...\) and \(d=1,2,3,...\), we set \(\alpha=d/(2-D)\). Using (3) with \(I\) replaced by its renormalized value we get
\[F_{BG,ren}=\frac{\Omega_{d-1}}{\alpha}\left(\frac{L}{2\pi}\right)^{d}\lambda^{D-2}\,I_{ren}(D-2)\,\beta^{D-3}. \tag{6.1}\]
Considering the values of \(\mathcal{I}_{ren}\) listed in Appendix B, we obtain the following expressions for the Bose gas free energy
* \(-\frac{d}{\alpha}=2\). In this case \(D=4\) and, according to Appendix B, \[\mathcal{I}_{ren}(2)=\frac{1}{48}(24\log(A)+1-2\gamma)=0.121,\] (6.2) therefore, \[F_{BG,ren}=-\frac{2\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{2}\,I_{ren}(2)\beta=-0.242\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{2}\,\beta.\] (6.3) This case is not suitable for us since it gives negative entropy.
* \(-\frac{d}{\alpha}=3\). In this case \(D=5\) and, according to Appendix B, \[\mathcal{I}_{ren}(3)=\frac{1}{6}\zeta^{\prime}(-2)=-0.00507,\] (6.4) therefore we have \[F_{BG,ren}=-\frac{3\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{3}\,I_{ren}(3)\,\beta^{2}=0.0152\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{3}\,\beta^{2}.\] (6.5)
* \(-\frac{d}{\alpha}=4\). In this case \(D=6\) and, according to Appendix B, \[\mathcal{I}_{ren}(4)=\frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{34560}=-0.000747,\] (6.6) therefore we have \[F_{BG,ren}=-\frac{4\,\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{4}\,I_{ren}(4)\,\beta^{3}=0.00299\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{4}\,\beta^{3}.\] (6.7)
From the consideration above we see that, among the listed cases, only for \(D=5,6\) do we obtain a positive value of the corresponding entropy.
* \(D=5\). In this case, according to (6.5), we have \[S_{BG,ren}=0.0304\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{3}\,\beta^{3}.\] (6.8) Here \(d=1,2,3,...\) and the corresponding \(\alpha\) takes the values \(-1/3,-2/3,-1,...\).
* \(D=6\). In this case, according to (6.7), we have \[S_{BG,ren}=0.00896\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\lambda^{4}\,\beta^{4}.\] (6.9) Here \(d=1,2,3,...\) and the corresponding \(\alpha\) takes the values \(-1/4,-1/2,-3/4,...\).
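For completeness we record how the entropy coefficients above arise (a short worked step added here for the reader; it assumes the standard thermodynamic relation in units with \(k_{B}=1\)): with \(T=1/\beta\),
\[S_{BG,ren}=-\frac{\partial F_{BG,ren}}{\partial T}=\beta^{2}\,\frac{\partial F_{BG,ren}}{\partial\beta},\qquad F_{BG,ren}\propto\beta^{D-3}\ \Rightarrow\ S_{BG,ren}=(D-3)\,\beta\,F_{BG,ren},\]
which gives the coefficient \(2\times 0.0152=0.0304\) for \(D=5\) and \(3\times 0.00299\approx 0.00896\) for \(D=6\).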
Let us note that in the case of the renormalization (4.5) we get
\[F_{BG,ren,\Gamma}(D)=\frac{\Omega_{d-1}}{\alpha}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{D-2}\,\mathcal{I}_{ren,\Gamma}(D-2)\,\beta^{D-3}. \tag{6.10}\]
Since \(\alpha<0\) the sign of (6.10) is opposite to the sign of \(\mathcal{I}_{ren,\Gamma}(D-2)\) and according to (4.5) the sign of \(F_{BG,ren,\Gamma}\) is defined by the Bernoulli number, i.e. \(F_{BG,ren,\Gamma}(D)<0\) for \(D=4k\) and \(F_{BG,ren,\Gamma}(D)>0\) for \(D=4k+2\), \(k=1,2,3\). For odd dimensions \(F_{BG,ren,\Gamma}(D)=0\).
## 7 Conclusion
In this paper the Schwarzschild black hole thermodynamics is modeled by the Bose gas statistical system. It is shown that the Schwarzschild black hole in \(D=5\) and \(D=6\) space-time dimensions corresponds to the Bose gas with one-particle energy \(\varepsilon(k)=\lambda\,k^{\alpha}\) in \(d\)-dimensional space with \(d/\alpha=-3\) and \(d/\alpha=-4\), respectively. Divergences in these Bose gas models are discussed. It is shown that the cut-off minimal subtraction renormalization scheme is equivalent to the analytical renormalization. This method does not work for the case of the \(D=4\) Schwarzschild black hole, which corresponds to the Bose gas in \(d=-4\) negative dimension, as has been shown in the previous paper [3]. The microscopic statistical mechanics description of the Schwarzschild black hole thermodynamics suggested in this and the previous papers uses negative dimensions or renormalizations of the Bose gas models.
It would be interesting to obtain a similar microscopic description of more general black holes including the Reissner-Nordstrom, Kerr and other black holes. These models also violate the third law of thermodynamics, so it is natural to expect that the corresponding statistical mechanic models will also have unusual properties.
## Acknowledgments
We would like to thank D. Ageev, V. Berezin, V. Frolov, M. Khramtsov, K. Rannu, P. Slepov, A. Teretenkov, A. Trushechkin and V. Zagrebnov for fruitful discussions. This work is supported by the Russian Science Foundation (project 19-11-00320, V.A. Steklov Mathematical Institute).
## Appendix A Bernoulli numbers and \(c_{k}\)
Differentiating (3.8) one has
\[\frac{1}{e^{x}-1}=\frac{1}{x}+\sum_{k=1}^{\infty}kc_{k}\,x^{k-1}.\] (A.1)
Comparing (A.1) with the generating function for the Bernoulli numbers
\[\frac{x}{e^{x}-1}=\sum_{k=0}^{\infty}B_{k}\frac{x^{k}}{k!},\] (A.2)
we see that
\[kc_{k}=\frac{B_{k}}{k!}.\] (A.3)
which gives (3.9).
## Appendix B Values of \(\zeta\) and \(\Gamma\) functions
Here we present some known facts about gamma and zeta functions [9]. One has
\[\zeta(-n)=\frac{(-1)^{n}B_{n+1}}{n+1},\quad n=1,2,3,...\] (B.1)
where \(B_{n}\) are the Bernoulli numbers defined by the generating function (A.2).
\[\zeta(-1)=-\frac{1}{12};\qquad\zeta^{\prime}(-1)=\frac{1}{12}-\ln A.\] (B.2)
For \(n\in\mathbb{N}\):
\[\zeta^{\prime}(-2n) = (-1)^{n}\frac{(2n)!}{2(2\pi)^{2n}}\,\zeta(2n+1)\] (B.3) \[-\zeta^{\prime}(1-2n) = \left.(2(2\pi)^{-s}\Gamma(s)\,\zeta(s))\right.^{{}^{\prime}} \Big{|}_{s=2n}\,\cos\left(\pi\,n\right)\] (B.4) \[\zeta^{\prime}(1-2n) = (-1)^{n+1}\frac{2\,\Gamma(2n)}{(2\pi)^{2n}}\Big{[}\left(-\log(2 \pi)+\psi(2n)\right)\zeta(2n)+\zeta^{\prime}(2n)\Big{]},\]
here \(\psi\) is the digamma function
\[\psi(s)=\frac{\Gamma^{\prime}(s)}{\Gamma(s)}.\]
For the \(\Gamma\)-function we have
\[\frac{\Gamma(z-n)}{\Gamma(1+z)}=\frac{(-1)^{n}}{n!}\left(\frac{1}{z }+\sum_{r=0}^{\infty}A_{r}z^{r}\right), \tag{100}\] \[A_{r}=\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^{k-1}}{k^{r+1}},\qquad A _{0}=\sum_{k=1}^{n}\frac{1}{k}. \tag{101}\]
Therefore, we have
\[\Gamma(\epsilon-n) = \Gamma(1+\epsilon)\,\frac{(-1)^{n}}{n!}\left(\frac{1}{\epsilon}+ A_{0}+\mathcal{O}(\epsilon)\right) \tag{102}\] \[= \frac{(-1)^{n}}{n!}\left(\frac{1}{\epsilon}-\gamma+\sum_{k=1}^{n} \frac{1}{k}\right)+\mathcal{O}(\epsilon)\]
and
\[\Gamma(\epsilon-n)\zeta(1-n+\epsilon) \tag{103}\] \[= \frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^{n }}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k} \right)\,\zeta(1-n)\right]+\mathcal{O}(\epsilon)\]
**Particular cases of (103)**
\[n=0: -\Gamma(\epsilon)\zeta(1+\epsilon)=-\frac{1}{\epsilon^{2}}+\frac{ 1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2}\right)+\mathcal{O}(\epsilon) \tag{104}\] \[n=1: -\Gamma(\epsilon-1)\zeta(\epsilon)=-\frac{1}{2\epsilon}+\frac{1}{ 2}(-1+\gamma-\log(2\pi))+\mathcal{O}(\epsilon)\] (105) \[n=2: -\Gamma(\epsilon-2)\zeta(\epsilon-1)=\frac{1}{24\epsilon}+\frac{ 1}{48}(24\log(A)+1-2\gamma)+\mathcal{O}(\epsilon)\] (106) \[n=3: -\Gamma(\epsilon-3)\zeta(\epsilon-2)=\frac{1}{6}\zeta^{\prime}(- 2)+\mathcal{O}(\epsilon)\] (107) \[n=4: -\Gamma(\epsilon-4)\zeta(\epsilon-3)=-\frac{1}{2880\epsilon}+ \frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{34560}+\mathcal{O}(\epsilon)\] (108) \[n=5: -\Gamma(\epsilon-5)\zeta(\epsilon-4)=\frac{1}{120}\zeta^{\prime}( -4)+\mathcal{O}(\epsilon). \tag{109}\]
Here \(\gamma\) is the Euler-Mascheroni constant,
\[\gamma=0.577..., \tag{110}\]
\(\gamma_{1}\) is the Stieltjes constant,
\[\gamma_{1}=-0.0728... \tag{111}\]
and \(A\) is the Glaisher constant
\[A=1.28.... \tag{101}\]
For \({\cal I}_{ren}\) we have
\[n=0: {\cal I}_{ren}(0)=\frac{1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2} \right)=-0.728694 \tag{102}\] \[n=1: {\cal I}_{ren}(1)=\frac{1}{2}(-1+\gamma-\log(2\pi))=-1.13033\] (103) \[n=2: {\cal I}_{ren}(2)=\frac{1}{48}(24\log(A)+1-2\gamma)=0.12116\] (104) \[n=3: {\cal I}_{ren}(3)=\frac{1}{6}\zeta^{\prime}(-2)=-0.00507474\] (105) \[n=4: {\cal I}_{ren}(4)=\frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{3456 0}=-0.000747065\] (106) \[n=5: {\cal I}_{ren}(5)=\frac{1}{120}\zeta^{\prime}(-4)=0.0000665318 \tag{107}\]
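These values are easy to double-check with mpmath (a verification added here, not part of the original text), since all the constants involved are built in:

```python
from mpmath import mp, log, pi, euler, glaisher, stieltjes, zeta

mp.dps = 15
print((12*stieltjes(1) + 6*euler**2 - pi**2) / 12)              # I_ren(0) ~ -0.728694
print((-1 + euler - log(2*pi)) / 2)                             # I_ren(1) ~ -1.13033
print((24*log(glaisher) + 1 - 2*euler) / 48)                    # I_ren(2) ~  0.12116
print(zeta(-2, derivative=1) / 6)                               # I_ren(3) ~ -0.00507474
print((-1440*zeta(-3, derivative=1) - 25 + 12*euler) / 34560)   # I_ren(4) ~ -0.000747065
print(zeta(-4, derivative=1) / 120)                             # I_ren(5) ~  0.0000665318
```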
| ブラックホールは3番目の thermodynamics 規則を破るため、ブラックホールの entropi の微視的記述に困難が生じます。近年、$D = 4$ space-time 次元におけるSchwarzschild ブラックホールの thermodynamic の微視的記述は、ボーズガスの entropy の解析的継続によって提供されています。これは、非重力 одно粒子エネルギーを d=-4 の負の空間次元へ拡張したものです。この論文では、D=5 と D=6 Schwarzschild ブラックホールの thermodynamic を d 次元のボーズガスのモデルで記述することで、そのエネルギーを $\varepsilon(k)=k^\alpha$ とする条件 $\alpha=-d/3$ と $\alpha=-d/4$ で表現することができます。これらの場合、ボーズガスの自由エネルギーは発散し、カットオフを導入し最小限の再帰化を行います。また、解析的正規化を用いて再帰化を行い、最小カットオフ再 |
2309.16619 | Cubical Approximation for Directed Topology II | The paper establishes an equivalence between localizations of (diagrams of)
cubical sets and (diagrams of) directed topological spaces by those maps
defining (natural) cubical homotopy equivalences after application of the
directed singular functor and a directed analogue of fibrant replacement. This
equivalence both lifts and extends an equivalence between classical homotopy
categories of cubical sets and topological spaces. Some simple applications
include combinatorial descriptions and subsequent calculations of directed
homotopy monoids and directed singular 1-cohomology monoids. Another
application is a characterization of isomorphisms between small categories up
to zig-zags of natural transformations as directed homotopy equivalences
between directed classifying spaces. Cubical sets throughout the paper are
taken to mean presheaves over the minimal symmetric monoidal variant of the
cube category. Along the way, the paper characterizes morphisms in this variant
as the interval-preserving lattice homomorphisms between finite Boolean lattices
and describes some of the test model structure on presheaves over this variant. | Sanjeevi Krishnan | 2023-09-28T17:19:04 | http://arxiv.org/abs/2309.16619v1 | # Cubical approximation for directed topology II
###### Abstract.
The paper establishes an equivalence between localizations of (diagrams of) cubical sets and (diagrams of) directed topological spaces by those maps defining (natural) cubical homotopy equivalences after application of the directed singular functor and a directed analogue of fibrant replacement. This equivalence both lifts and extends an equivalence between classical homotopy categories of cubical sets and topological spaces. Some simple applications include combinatorial descriptions and subsequent calculations of directed homotopy monoids and directed singular 1-cohomology monoids. Another application is a characterization of isomorphisms between small categories up to zig-zags of natural transformations as directed homotopy equivalences between directed classifying spaces. Cubical sets throughout the paper are taken to mean presheaves over the minimal symmetric monoidal variant of the cube category. Along the way, the paper characterizes morphisms in this variant as the interval-preserving lattice homomorphisms between finite Boolean lattices and describes some of the test model structure on presheaves over this variant.
###### Contents
* 1 Introduction
* 2 Conventions
* 3 Directed Spaces
* 3.1 Continuous
* 3.2 Cubical
* 3.3 Comparisons
* 3.4 Cubcats
* 4 Homotopy
* 4.1 Abstract
* 4.2 Continuous
* 4.3 Cubical
* 4.4 Algebraic
* 4.5 Comparisons
* 5 Conclusion
## 1. Introduction
State spaces, which include classifying spaces of monoids and more general homotopy colimits of dynamical systems [39, 69] as well as spacetimes, often admit extra directed structure encoding causal relationships between states. Examples of such structure include
time-orientations and more general cosheaves of preorders. The qualitative behavior of a complex system often corresponds to features of a directed topological state space \(X\) invariant under continuous deformations on \(X\) that respect the given directionality. To that end, a basic goal is a formula for the set \([X,Y]\) of directed maps \(X\to Y\) between directed topological spaces up to a directed homotopy relation, directly in terms of some combinatorial description of \(X\) and \(Y\). Such a formula yields methods of calculation for representable directed cohomology theories (eg. [46, §7]), with applications to the formal validation of critical software (eg. [20]) and homological algebra for monoids [13, 59, 37]. A general enough formula in effect gives an intrinsically combinatorial directed homotopy theory, with applications to the semantics of programming languages (eg. [29, 50, 64], see also §5.)
Combinatorial descriptions of directed topological spaces often take the form of _cubical sets_. In the case \(X\) and \(Y\) are _directed realizations_ of respective cubical sets \(A\) and \(B\), a desired formula for \([X,Y]\) exists in the literature under each of the following conditions:
1. \(A\) is \(1\)-dimensional [19, Theorem 4.1]
2. \(A\) is finite and \(B\) satisfies a simplicial-like condition [44, Corollary 8.2]
3. \(B\) is fibrant in the test model structure
There are concrete reasons for wanting a formula for \([X,Y]\) under more general conditions than (1)-(3). Condition (1) rules out state spaces \(X\) representing the concurrent execution of more than \(1\) process. Condition (2) rules out non-compact directed topological spaces \(X\), like the state spaces of infinitely running but non-looping computations, the representing directed topological spaces of directed cohomology theories (eg. [46, §7]), or functorial approximations \(\left|\mathsf{sing}\ M\right|\) of spacetimes \(M\) as directed realizations of cubical sets. Condition (3) constrains \(Y\) so much so that \([X,Y]\) is just the set of classical homotopy classes of maps \(X\to Y\) of underlying topological spaces and therefore ignores information about the directionality of \(X\). In short, the three conditions collectively do not cover all possibilities for \(A\) and \(B\) needed to give a completely, intrinsically combinatorial directed homotopy theory on cubical sets.
In the search for a general formula, the main challenge is that directed topological spaces almost never decompose into homotopy colimits, with respect to directed homotopy, of simpler directed topological spaces [46, paragraph after Theorem 4.1]. This indecomposability is inextricably tied to the general difficulty of analyzing the global, qualitative behavior of a complex process, such as the execution of an asynchronous microprocessor, the concurrent operation of sequential threads in a computer, or a natural process described by a dynamical system. Small changes in the behavior of a single agent can have dramatic effects on the global behavior of a multi-agent system. Said differently, a seemingly minor local deformation in a directed state space \(X\) can sometimes drastically affect the global deformation type of \(X\). Classical approximations, such as cellular and simplicial approximation (cf. [15, Theorem 12.1]), can be constructed one cell at a time because CW complexes are homotopy colimits, with respect to classical homotopy, of their cells. Directed analogues require much greater delicacy.
Intuitively, a general formula should just be that \([X,Y]\) is the set
\[[X,Y]=\pi_{0}C^{A}\]
of connected components of a mapping cubical set \(C^{A}\) for an extension \(C\) of \(B\) to a cubical set admitting higher algebraic structure generalizing fibrancy (eg. [1, 9, 17, 32]). The desired extension will not generally define a higher category in the standard sense (eg. [9, 17, 32, 40, 68, 67]); the directed singular cubical set of a physically realistic spacetime [55],
lacking non-constant non-spacelike curves, cannot admit invertible directed singular cubes witnessing higher associativity and unitality. This paper introduces a _cubcat_ as a cubical set admitting the requisite structure, a cubical set admitting extra operations parametrized by directed maps between topological cubes and compatible composition operations [Definition 3.32].
The main point is that cubcats are directed analogues of fibrant cubical sets. Cubical sets can be replaced by fibrant cubical sets without changing classical homotopy types of topological realizations. Cubical sets can be replaced by cubcats without changing directed homotopy types of directed realizations [Proposition 3.35 and Corollary 4.23]. Fibrant cubical sets model small \(\infty\)-groupoids. Cubcats, which include cubical nerves of small categories [Proposition 3.36], singular cubical sets of directed topological spaces [Proposition 3.35], and, at least up to cubical homotopy equivalence, fibrant cubical sets [Proposition 4.13], model variants of small \((\infty,\infty)\)-categories (cf. [9, 17, 32, 40, 68, 67]) interpretable as higher order abstract rewriting systems; associativity and unitality hold not necessarily up to higher isomorphism (reversible higher order rewrites) but instead up to zig-zags of higher morphisms (the congruence defined by higher order rewrite rules). Equivalent classical homotopy categories \(h(\hat{\square})\) and \(h(\mathbf{Top})\) of cubical sets and topological spaces can be constructed by inverting those morphisms defining cubical homotopy equivalences after respective applications of fibrant replacement and the singular functor. Equivalent directed refinements \(d(\hat{\square})\) and \(d(\mathbf{DiTop})\) can be analogously constructed, with cubcat replacement playing the role of fibrant replacement [Corollary 4.27 for \(\mathscr{G}=\star\)]. This latter equivalence of directed homotopy categories in fact extends to an equivariant equivalence between diagrams [Corollary 4.27], whose classical counterpart does not follow from existing, non-algebraic (cf. [63]) Quillen equivalences.
The proofs require new techniques. The first is the use of _algebraic_ lifting properties [7, 34, 63], not only against diagrams of morphisms [Lemmas 4.5, 4.6, and 4.7] as was implicitly done in a predecessor to this paper [44] but also against _double diagrams_ of morphisms [Lemma 3.3]. Algebraic lifting properties underlie recent refinements, not used in this paper, of weak factorization systems, the small object argument, and model categories [7, 34, 63]. The second is the use [Lemmas 3.24 and 4.8] of pro-objects to encode some of the data of weak factorization systems; lifts of pro-diagrams indexed by finite acyclic categories [54, §3] mimic Reedy-cofibrant replacement. These techniques apply in principle to other homotopy theories, including those (eg. [10, 47, 61]) in which homotopy colimit decompositions are also rare.
Directed cubical approximation yields immediate consequences. One consequence is the desired formula [Corollary 4.30], tractable in practice for each tractable cubcat model \(C\) of the codomain \(Y=\mid B\mid\). Special cases include combinatorial descriptions and subsequent calculations of directed homotopy monoids [Corollary 4.28, Example 4.29] and singular directed \(1\)-cohomology monoids [Corollary 4.32]; these latter monoids in particular define functorial, computable, causal and conformal global spacetime invariants [Examples 4.33 and 4.34] (cf. [4, 42]). A localization \(d(\mathbf{Cat})\) of small categories by equivalences up to zig-zags of natural transformations, intermediate in generality between Thomason weak equivalences and categorical equivalences, has been previously studied in the literature [56]. Another consequence is that \(d(\mathbf{Cat})\)-isomorphisms are exactly the directed homotopy equivalences between directed classifying spaces [Corollary 4.26]. The following observation summarizes how directed homotopy both extends and refines classical homotopy theory, as encoded by \(h(\hat{\square}),h(\mathbf{Top})\) as well as classical homotopy categories \(h(\mathbf{Cat}),h(\mathbf{Gpd}),h(\infty\mathbf{Gpd})\) of small categories, small groupoids, and fibrant cubical sets.
**Theorem**.: _There exists a commutative diagram_
_in which the vertical arrows are induced from forgetful functors, the leftmost horizontal arrows in each row are induced from the cubical nerve, the rightmost horizontal arrows in the top and bottom rows are induced from realization functors, and the diagonal arrows pointing towards the left are induced from inclusions. Functors denoted as \(\hookrightarrow\) are fully faithful. Functors denoted as \(\twoheadrightarrow\) are essentially surjective. Functors are categorical equivalences if and only if they are labelled with \(\simeq\)._
Along the way, the category \(\square\) of cubes is enlarged from the usual minimal variant to the minimal symmetric monoidal variant. The change in setting makes it possible to explicitly characterize the \(\square\)-morphisms as the interval-preserving lattice homomorphisms between finite Boolean lattices [Theorem 3.10]. An application is an order-theoretic construction of cubical edgewise subdivision analogous to the usual order-theoretic construction of simplicial barycentric subdivision [Propositions 3.13 and 3.17]. Several of the main results likely bootstrap to larger variants of \(\square\) that, for example, include coconnections of one or both kinds. To this end, various observations are recorded for a general class of such variants [Propositions C.1 and C.4].
**Organization**.: After fixing some conventions in §2, point-set theories of directed topological spaces, cubical sets, and cubcats are recalled, introduced, and compared in §3. Homotopy theories, classical, directed, and categorical, are then compared in §4.
Figure 1. **Equivalence as different categorical structures**. The directed graphs above freely generate equivalent groupoids but freely generate mutually inequivalent categories, some of which are nonetheless directed homotopy equivalent to one another. After passage to free categories, the left two directed graphs are directed homotopy equivalent to one another, the right two directed graphs are directed homotopy equivalent to one another, but the left two and the right two are not directed homotopy equivalent to one another. Intuitively, classical equivalences ignore the structure of time in state spaces while categorical equivalences are sensitive to arbitrary subdivisions of time. Directed homotopy sidesteps some of the combinatorial explosion that bedevils geometric models of state spaces sensitive to arbitrary subdivisions in time. Section 4.4 formalizes the different notions of equivalence between small categories.
The main results are contextualized within the broader literature in §5. Some relevant facts about lattices, pro-objects, and test categories are recalled and further developed in §A, §B, and §C.
## 2. Conventions
This section first fixes some conventions. Let \(k,m,n,p,q\) denote natural numbers. Let \(\mathbb{I}\) denote the unit interval. Let \(\mathfrak{im}\,f\) denote the image of a function \(f\). Let \(\hookrightarrow\) denote an inclusion of some sort, such as an inclusion of a subset into a set, a subspace into a space, or a subcategory into a category.
#### 2.0.1. Categories
Let \(\mathscr{X},\mathscr{Y}\) denote arbitrary categories. Let \(\bigcirc,\mathcal{X},\mathcal{Y},\mathscr{G}\) denote small categories. Let \(\star\) denote a terminal object in a given category. For a given monoidal category, let \(\otimes\) denote its tensor product. For each object \(o\) in a given closed monoidal category, \(o^{(-)}\) will denote the right adjoint to the endofunctor \(o\otimes-\). Notate special categories as follows.
\begin{tabular}{l l l}
**Set** & sets (and functions) \\
**Top** & (weak Hausdorff k-)spaces (and continuous functions) \\
**Cat** & small categories (and functors) \\
**Pos** & compact pospaces with connected intervals & §3.1.1 \\
**DiTop** & (weak Hausdorff k-)streams & §3.1.2 \\
**Dis** & finite distributive lattices & §3.2.2 \\
\(\infty\)**Gpd** & fibrant cubical sets in the test model structure & §4.3.1 \\
\(\square_{1}\) & domain of abstract interval objects & §3.2.1 \\
\(\square\) & cube category & §3.2.1 \\
\end{tabular}
Write \(\hat{\bigcirc}\) for the category of **Set**-valued presheaves on \(\bigcirc\), the functor category
\[\hat{\bigcirc}=\textbf{Set}^{\bigcirc^{\text{op}}}.\]
Write \(\bigcirc[-]\) for the Yoneda embedding \(\bigcirc\hookrightarrow\hat{\bigcirc}\). Let \(F/G\) denote the comma category for diagrams \(F,G\) in the same category. For a diagram \(F\) in \(\hat{\bigcirc}\), let \(\bigcirc/F=\bigcirc[-]/F\). Let \(1_{o}\) denote the identity morphism for an object \(o\) in a given category. Write \(\mathfrak{adj}(\zeta)\) for the adjoint to a morphism \(\zeta\) across an adjunction that is clear from context. A functor \(F:\mathscr{X}\to\mathscr{Y}\) is _topological_ if, for each diagram \(D:\mathcal{X}\to\mathscr{X}\), every cone \(x\to FD\) in \(\mathscr{Y}\) admits an initial lift to a cone in \(\mathscr{X}\) along \(F\); topological functors create limits and colimits [6]. A _pointed endofunctor_ is an endofunctor \(E\) on a category \(\mathscr{X}\) equipped with a distinguished natural transformation \(1_{\mathscr{X}}\to E\), denoted by \(\eta\). Dually, a _copointed endofunctor_ is an endofunctor \(E\) on a category \(\mathscr{X}\) equipped with a distinguished natural transformation \(E\to 1_{\mathscr{X}}\), denoted by \(\epsilon\). A category \(\mathscr{X}\) is _cofiltered_ if every finite diagram in \(\mathscr{X}\) has a cone. A _cofiltered limit_ is the limit of a diagram shaped like a cofiltered small category.
#### 2.0.2. Diagrams
We will sometimes regard diagrams in a category \(\mathscr{X}\) as equivariant versions of \(\mathscr{X}\)-objects. When we do, we adopt the following terminology. We take \(\mathscr{G}\)_-streams_, \(\mathscr{G}\)_-cubical sets_, and \(\mathscr{G}\)_-categories_ to mean \(\mathscr{G}\)-shaped diagrams in the respective categories **DiTop**, \(\hat{\square}\), and **Cat**. We take \(\mathscr{G}\)_-stream maps_, \(\mathscr{G}\)_-cubical functions_, and \(\mathscr{G}\)_-functors_ to mean natural transformations between \(\mathscr{G}\)-streams, \(\mathscr{G}\)-cubical sets, and \(\mathscr{G}\)-categories.
#### 2.0.3. Pro-objects
Informally, pro-objects are formal cofiltered limits. There exists a _category of pro-objects_\(\textbf{pro-}\mathscr{X}\) in \(\mathscr{X}\), a category having all cofiltered limits together with a
full and faithful inclusion \(\mathscr{X}\hookrightarrow\textbf{pro-}\mathscr{X}\), characterized up to categorical equivalence by the property that for each functor \(G\) in the solid diagram
there exists a dotted functor, unique up to natural isomorphism, preserving cofiltered limits and making the entire diagram commute. The reader is referred elsewhere [38] for explicit constructions, unnecessary in this paper, of **pro-\(\mathscr{X}\)**. For each functor \(F:\mathscr{X}\to\mathscr{Y}\), we also write \(F\) for the extension \(\textbf{pro-}\mathscr{X}\to\textbf{pro-}\mathscr{Y}\), unique up to natural isomorphism, making the diagram above commute when \(\mathscr{M}=\textbf{pro-}\mathscr{Y}\) and \(G=(\mathscr{Y}\hookrightarrow\textbf{pro-}\mathscr{Y})F\). We say that a natural transformation \(\eta:D_{1}\to D_{2}\) between diagrams \(D_{1}\) and \(D_{2}\) in \(\mathscr{X}\) indexed by the same small cofiltered category _represents_ a **pro-\(\mathscr{X}\)**-morphism \(\lim\,D_{1}\to\lim\,D_{2}\) if the latter morphism is induced by \(\eta\).
#### 2.0.4. Supports
We employ some common notation for _supports_ and _carriers_, like the support of a point in a topological realization or the carrier of a cube in a cubical subdivision. Consider a functor \(F:\mathscr{X}\to\mathscr{Y}\) and \(\mathscr{X}\)-object \(o\) admitting a complete lattice of subobjects. Let \(\text{supp}_{F}(x,o)\) denote the minimal subobject of \(o\) to which \(\zeta\) corestricts, for each \((x/F)\)-object \(\zeta:x\to Fo\). For instance, \(\text{supp}_{|-|}(x,B)\) is the usual support of a point \(x\) in the topological realization \(|B|\) of a simplicial set \(B\), the minimal subpresheaf \(A\subset B\) with \(x\in|A|\).
#### 2.0.5. Relations
A binary relation \(R_{X}\) on a set \(X\) is the data of the set \(X\) and its _graph_, a subset of \(X^{2}\) denoted as \(graph(R_{X})\). For each binary relation \(R_{X}\) on a set \(X\), write \(x\,R_{X}\,y\) if \((x,y)\in graph(R_{X})\). A binary relation \(R_{X}\) on a set \(X\) is _reflexive_ if \(x\,R_{X}\,x\) for all \(x\in X\), _transitive_ if \(x\,R_{X}\,z\) whenever \(x\,R_{X}\,y\) and \(y\,R_{X}\,z\), _antisymmetric_ if \(x=y\) whenever \(x\,R_{X}\,y\) and \(y\,R_{X}\,x\), and _total_ if for each pair \(x,y\in X\), \(x\,R_{X}\,y\) or \(y\,R_{X}\,x\). A _preorder_ on a set \(P\) is a binary, reflexive, transitive relation on \(P\). A _partial order_ on a set \(P\) is an antisymmetric preorder on \(P\). The _lexicographic order_ on all finite sequences in \(\mathbb{N}\) is the total order \(\leqslant_{\text{lex}}\) on such sequences defined by \((s_{1},\dots,s_{m})\leqslant_{\text{lex}}(t_{1},t_{2},\dots,t_{n})\) if \(m\leqslant n\) and \(s_{i}=t_{i}\) for all \(1\leqslant i\leqslant m\) or there exists \(1\leqslant j\leqslant\min(m,n)\) such that \(s_{j}<t_{j}\) and \(s_{i}=t_{i}\) for all \(1\leqslant i<j\).
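For concreteness, the lexicographic order just defined can be implemented directly (a small illustrative snippet, not part of the paper; the function name is ours):

```python
def leq_lex(s, t):
    """s <= t in the lexicographic order on finite sequences of naturals:
    either s is an initial segment of t, or s is smaller at the first index where they differ."""
    m, n = len(s), len(t)
    if m <= n and all(s[i] == t[i] for i in range(m)):
        return True
    for j in range(min(m, n)):
        if s[j] != t[j]:
            return s[j] < t[j]
    return False

assert leq_lex((0, 1), (0, 1, 5))       # initial segment
assert leq_lex((0, 2), (1, 0))          # smaller at the first differing index
assert not leq_lex((1,), (0, 9, 9))
```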
#### 2.0.6. Preordered Sets
A _preordered set_ is a set \(P\) equipped with a preorder, which we denote as \(\leqslant_{P}\), on it. Preordered sets \(P\) will be regarded as small categories with object set given by the underlying set of \(P\) and with one morphism \(x\to y\) precisely when \(x\leqslant_{P}y\). A _poset_ is a set equipped with a partial order on it or equivalently a skeletal preordered set. The _minimum_ and _maximum_ of a poset \(P\) are the unique initial and final objects of \(P\), respectively denoted by \(\min\,P\) and \(\max\,P\) whenever such unique objects exist. The minimum and maximum of a poset having initial and final objects are the _extrema_ of \(P\). A _subposet_\(P\) of a poset \(Q\) is a poset \(P\) that is full as a subcategory of \(Q\). A subposet \(P\) of a poset \(Q\) is...
1. _...order-convex in \(Q\)_ if \(y\in P\) whenever \(x\leqslant_{Q}y\leqslant_{Q}z\) and \(x,z\in P\)
2....an _interval in \(Q\)_ if it is order-convex and has both a minimum and maximum
3....a _chain in \(Q\)_ if \(\leqslant_{P}\) is total.
In a poset \(P\), an element \(z\) is an _immediate successor_ to an element \(x\) if \(x\leqslant_{P}z\) and \(x=y\) or \(y=z\) whenever \(x\leqslant_{P}y\leqslant_{P}z\). In a poset, categorical products are called _infima_ and categorical coproducts are called _suprema_. In a poset \(P\), write \(x\vee_{P}y\) for the unique join
of \(x,y\) if it exists and \(x\wedge_{P}y\) for the unique meet of \(x,y\) if it exists. A _monotone function_ is a functor between preordered sets, a function \(\phi:P\to Q\) between preordered sets with \(\phi(x)\leqslant_{Q}\phi(y)\) whenever \(x\leqslant_{P}y\). A monotone function \(\phi:P\to Q\) of posets is _order-convex_ if images of order-convex subposets in \(P\) under \(\phi\) are order-convex subposets in \(Q\).
#### 2.0.7. Lattices
A _lattice_ is always taken in the order-theoretic sense to mean a poset having all binary infima and binary suprema. A lattice is _complete_ if it is complete as a small category, or equivalently if it has all infima, or equivalently if it has all suprema. A lattice is _distributive_ if \(x\wedge_{L}(y\vee_{L}z)=(x\wedge_{L}y)\vee_{L}(x\wedge_{L}z)\) for all \(x,y,z\in L\) or equivalently if \(x\vee_{L}(y\wedge_{L}z)=(x\vee_{L}y)\wedge_{L}(x\vee_{L}z)\) for all \(x,y,z\in L\). A _sublattice_ of a lattice \(L\) is a subposet \(K\) of \(L\) such that \(\wedge_{K},\vee_{K}:K^{2}\to K\) are respective restrictions of \(\wedge_{L},\vee_{L}:L^{2}\to L\). Write \(\omega\) for the set of natural numbers equipped with its standard total order. Let \([n]\) denote the subposet \(\{0,1,\dots,n\}\) of \(\omega\). Define functions
\[\delta_{\pm}:[0]\to[1],\quad\delta_{\pm}(0)=\nicefrac{{1}}{{2}}\pm\nicefrac{{1}}{{2}},\qquad\sigma:[1]\to[0].\]
Henceforth write \([k]^{n}\) for the \(n\)-fold \(\mathbf{Cat}\)-product of \([k]\). A poset is _Boolean_ if it is \(\mathbf{Cat}\)-isomorphic to a power set, regarded as a poset under inclusion. A monotone function \(\phi:L\to M\) of finite lattices _preserves (Boolean) intervals_ if images of (Boolean) intervals in \(L\) under \(\phi\) are (Boolean) intervals in \(M\).
**Example 2.1**.: The finite Boolean lattices are, up to \(\mathbf{Cat}\)-isomorphism,
\[[0],[1],[1]^{2},[1]^{3},\dots\]
Every interval in a Boolean lattice is Boolean. A _lattice homomorphism_ is a function \(\phi:L\to M\) between lattices preserving binary suprema and binary infima.
#### 2.0.8. Constructions
For reference, we list certain constructions defined throughout.
\begin{tabular}{l l l} \(\mathfrak{so}_{k+1}\) & subdivisions & §3.2.2, §3.2.3 \\
\(\mathfrak{ev}_{k+1}\) & right adjoint to \(\mathfrak{so}_{k+1}\) & §3.2.3 \\
\(|-|\) & topological realizations & §3.3 \\
\(|-|\) & directed realizations & §3.3 \\
\(\mathfrak{sing}\) & directed cubical singular functor & §3.3 \\
\(\mathfrak{net}\) & cubical nerves & §3.2.3 \\
\(\mathrm{T}_{1}\) & fundamental category & §3.2.3 \\
\(\Pi_{1}\) & fundamental groupoid & §3.2.3 \\
\(\pi_{0}\) & path-components & §4.2.1 \\
\(\Omega^{n}\) & \(n\)-fold directed loop space & §3.2.3 \\
\(\tau_{n}\) & \(n\)th directed homotopy monoid & §3.2.3 \\
\(\mathfrak{d}\) & canonical interval object in \(\hat{\square}\) & §4.1 \\
\(\mathfrak{h}\) & interval object in \(\mathbf{DiTop}\) that defines h-homotopy & §4.2.2 \\
\([-,-]_{\mathrm{i}}\) & homotopy classes with respect to interval object \(\mathrm{i}\) & §4.1 \\
\(H^{1}\) & cubical 1-cohomology & §4.3.1, §4.3.2 \\ \end{tabular}
## 3. Directed Spaces
Directed spaces can be modelled topologically and combinatorially. This section recalls topological models, presheaf models, and comparison functors between them. _Streams_ provide topological models of directed spaces. _Cubical sets_, presheaves over a particular variant
of the cube category, provide combinatorial models of directed spaces. _Cubcats_ are a mixture of topological and combinatorial formalisms. Streams can be constructed from cubical sets as _directed realizations_. Novel material includes a double algebraic lifting property of compact topological distributive lattices [Lemma 3.3], a characterization of morphisms in the cube category [Theorem 3.10], a subsequent order-theoretic construction of cubical subdivision [§3.2.2 and Proposition 3.17], a lifting lemma for directed singular cubes [Lemma 3.31], and the entire theory of cubcats [§3.4].
### Continuous
Directed spaces are modelled topologically in this paper as _streams_. An alternative topological model for directed spaces, common in the literature and essentially interchangeable with streams as foundations for directed homotopy, are _d-spaces_[30]. An advantage of a stream-theoretic foundation for directed topology is that it naturally subsumes some of the theory of pospaces, whose point-set theory is well-developed in the literature [48].
#### 3.1.1. Pospaces
A _pospace_ is a poset \(P\) topologized so that \(graph\,(\leqslant_{P})\) is closed in the standard product topology on \(P^{2}\). A _subpospace_ of a pospace \(Q\) is a pospace \(P\) that is at once a subposet and subspace of \(Q\). A _topological lattice_ is a lattice \(L\) topologized so that \(\vee_{L},\wedge_{L}\) are jointly continuous functions \(L^{2}\to L\). The underlying topological spaces of pospaces are necessarily Hausdorff. A _subtopological sublattice_ of a topological lattice \(L\) is a topological lattice that is at once a sublattice and subspace of \(L\). Conversely, topological lattices with Hausdorff underlying topological spaces are pospaces. The following observation is a straightforward combination of observations made elsewhere [[57], [5, Exercise SIV.8 4(b)]].
**Lemma 3.1**.: _Each compact Hausdorff topological lattice is complete as a lattice._
There should exist a continuous evolution between states \(x\leqslant_{P}y\) in a pospace \(P\) of states. We therefore define a category of compact pospaces satisfying such a continuity constraint as follows. A _monotone map_ of pospaces is a function between pospaces that is at once monotone as a function between underlying posets and continuous as a function between underlying topological spaces. Let \(\mathbf{Pos}\) be the concrete category whose objects are those compact pospaces \(P\) such that \(x=z\) if \(x\leqslant_{P}z\) and there does not exist \(y\neq x,z\) in \(P\) such that \(x\leqslant_{P}y\leqslant_{P}z\) and whose morphisms are all monotone maps between them.
**Example 3.2**.: Fix \(n\). The \(\mathbf{Pos}\)-object \(\vec{\mathbb{I}}^{n}=\vec{\mathbb{I}}^{\times_{\mathbf{Pos}}n}\), the topological hypercube \(\mathbb{I}^{n}\) with
\[(x_{1},x_{2},\ldots,x_{n})\leqslant_{\mathbb{I}^{n}}(y_{1},y_{2},\ldots,y_{n} )\iff y_{1}-x_{1},y_{2}-x_{2},\ldots,y_{n}-x_{n}\geqslant 0,\]
is a topological lattice whose underlying space is compact Hausdorff and connected. Every topological lattice whose underlying topological space is compact Hausdorff and connected is a \(\mathbf{Pos}\)-object [24, Proposition VI-5.15].
Terminal maps \(L\to\star\) from compact topological (distributive) lattices \(L\) admit the following right lifting property against a (_double_) _diagram_ of certain inclusions of pospaces [7].
**Lemma 3.3**.: _Consider the solid arrows in the left triangle in the diagram_
_where the left vertical inclusion is the inclusion of a closed order-convex subtopological sublattice into a compact Hausdorff topological lattice. There exists a choice of dotted monotone map \(r_{LM}\) making the left triangle commute such that the middle diagram commutes for all order-convex subtopological sublattices \(L^{\prime}\) and \(L^{\prime\prime}\) of respective compact Hausdorff topological lattices \(M^{\prime}\) and \(M^{\prime\prime}\) and all horizontal arrows making the outer rectangle commute such that the top horizontal arrow is an extrema-preserving monotone map and the bottom horizontal arrow is a continuous lattice homomorphism. If \(L_{3}\) is a distributive compact Hausdorff topological lattice with order-convex subpospaces \(L_{1}\subset L_{2}\) that are also topological lattices, then the right diagram commutes._
Proof.: For a closed, order-convex subtopological sublattice \(L\) of a compact Hausdorff topological lattice \(M\), \(L\) admits both a minimum min \(L\) and maximum max \(L\)[Lemma 3.1] and \(x\in M\) lies in \(L\) if and only if \(\min\,L\leqslant_{M}x\leqslant_{M}\max\,L\) by \(L\) order-convex in \(M\). It is therefore possible to define a monotone map making the left triangle commute by
\[r_{L,M}(x)=(\min\,L)\vee_{M}(x\wedge_{M}(\max\,L)).\]
The middle diagram commutes when the bottom horizontal arrow commutes with binary suprema and binary infima and the top horizontal arrrow preserves extrema.
Consider a distributive compact Hausdorff topological lattice \(L_{3}\) with order-convex subpospaces \(L_{1}\subset L_{2}\) that are also topological lattices. For brevity, write \(\bot_{i}\) for min \(L_{i}\) and \(\top_{i}\) for max \(L_{i}\). For each \(x\in L_{3}\),
\[r_{L_{1},L_{2}}(r_{L_{2},L_{3}}(x)) =r_{L_{1},L_{2}}(\bot_{2}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\] \[=\bot_{1}\vee_{L_{2}}((\bot_{2}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\wedge_{L_{2}}\top_{1})\] \[=(\bot_{1}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\wedge_{L_{2}}\top_{1}\] \[=(\bot_{1}\vee_{L_{3}}x)\wedge_{L_{3}}\top_{2}\wedge_{L_{2}}\top_{1}\] \[=(\bot_{1}\vee_{L_{2}}x)\wedge_{L_{3}}\top_{1}\] \[=\bot_{1}\vee_{L_{2}}(x\wedge_{L_{3}}\top_{1})\] \[=r_{L_{1},L_{3}}(x)\]
from repeated applications of distributivity, idempotency of lattice operations, and \(\bot_{1}\vee_{L_{2}}\bot_{2}=\bot_{1}\) by \(L_{1}\subset L_{2}\), \(\bot_{1}\vee_{L_{3}}\top_{2}=\top_{2}\) by \(L_{1}\subset L_{2}\), and \(\top_{2}\wedge_{L_{3}}\top_{1}=\top_{1}\) by \(L_{1}\subset L_{2}\). Thus the right diagram commutes.
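As a concrete instance of the retraction \(r_{L,M}\) above (an illustration we add, not taken from the paper): for the directed cube \(\vec{\mathbb{I}}^{n}\) of Example 3.2, where joins and meets are coordinatewise maxima and minima, an order-convex subtopological sublattice is a subinterval and \(r_{L,M}\) is coordinatewise clamping.

```python
def clamp_retraction(lo, hi, x):
    """r_{L,M}(x) = (min L) v (x ^ (max L)) for the coordinatewise lattice I^n,
    where L = [lo, hi] is an order-convex subinterval (function name is illustrative)."""
    return tuple(max(l, min(xi, h)) for l, xi, h in zip(lo, hi, x))

# Retract points of the unit square onto the interval [(0.2, 0.2), (0.6, 0.6)]:
print(clamp_retraction((0.2, 0.2), (0.6, 0.6), (0.9, 0.1)))   # (0.6, 0.2)
```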
#### 3.1.2. Streams
A _circulation_ on a topological space \(X\) is a function
\[\leqslant:U\mapsto\leqslant_{U}\]
assigning to each open subset \(U\subset X\) a preorder \(\leqslant_{U}\) on \(U\) such that \(\leqslant\) sends the union of a collection \(\mathcal{O}\) of open subsets of \(X\) to the preorder with smallest graph containing \(graph\,(\leqslant_{U})\)
for each \(U\in\mathcal{O}\)[43]. A _stream_ is a space equipped with a circulation on it [43]. Intuitively, \(x\leqslant_{U}y\) in a state stream whenever a system restricted to the subset \(U\) of states can evolve from \(x\) to \(y\).
**Example 3.4**.: Every topological space admits an _initial circulation_\(\leqslant\) defined by
\[x\leqslant_{U}y\iff x=y\in U\]
A continuous function \(f:X\to Y\) of streams is a _stream map_ if \(f(x)\leqslant_{U}f(y)\) whenever \(x\leqslant_{f^{-1}U}y\) for each open subset \(U\) of \(Y\)[43]. A topological space \(X\) is a _k-space_ if it is a colimit of compact Hausdorff spaces in the category of topological spaces and continuous functions. Similarly, a _k-stream_ is a colimit of compact Hausdorff streams in the category of streams and stream maps [43]. The underlying space of a k-stream is a k-space [43, Proposition 5.8]. A topological space \(X\) is _weak Hausdorff_ if images of compact Hausdorff spaces in \(X\) are Hausdorff.
**Theorem 5.4**, [43].: _Locally compact Hausdorff streams are weak Hausdorff k-streams._
Let \(\mathbf{Top}\) denote the complete, cocomplete, and Cartesian closed [52] category of weak Hausdorff k-spaces and continuous functions between them. Let \(\mathbf{DiTop}\) denote the category of weak Hausdorff k-streams and stream maps. Redefine topological space and stream, like elsewhere (eg. [43, 51]), to mean objects in the respective categories \(\mathbf{Top}\) and \(\mathbf{DiTop}\). The _forgetful functor_ \(\mathbf{DiTop}\to\mathbf{Top}\) lifts topological constructions in the following sense.
**Proposition 5.8**, [43].: _The forgetful functor \(\mathbf{DiTop}\to\mathbf{Top}\) is topological._
In other words, each class of continuous functions \(f_{i}:X\to Y_{i}\) from a topological space \(X\) to streams \(Y_{i}\) induces a terminal circulation on \(X\) making the \(f_{i}\)'s stream maps \(X\to Y_{i}\). Equivalently and dually, each class of continuous functions from streams to a fixed topological space induces a suitably initial circulation on that topological space. In particular, the forgetful functor \(\mathbf{DiTop}\to\mathbf{Top}\) creates limits and colimits. A _stream embedding_ is a stream map \(e:Y\to Z\) such that a stream map \(f:X\to Z\) corestricts to a stream map
Figure 2. **Conal manifolds**_Conal manifolds_, smooth manifolds whose tangent spaces are all equipped with convex cones, naturally encode state spaces of processes under some causal constraints. The convex cones define partial orders on an open basis of charts that uniquely extend to circulations on the entire manifold. The time-oriented Klein bottle \(K\) (left) and time-oriented torus \(T\) (right) depicted above are examples of conal manifolds that arise as directed realizations of cubical sets. Over cancellative commutative monoid coefficients \(\tau\), their directed 1-cohomologies are \(H^{1}(K;\tau)=\tau\times_{2\tau}\tau\)\(\tau\) and \(H^{1}(T;\tau)=\tau^{2}\) by a simple application of cubical approximation [Examples 4.34 and 4.33].
\(X\to Y\) whenever \(\mathfrak{im}\,f\subset\,\mathfrak{im}\,e\). A _substream_ of a stream \(Y\) is a stream \(X\) such that inclusion defines a stream embedding \(X\to Y\).
**Example 3.5**.: An open substream is an open subspace with a restricted circulation.
**Theorem 5.12**, [43].: _The category \(\mathbf{DiTop}\) is Cartesian closed._
The categories \(\mathbf{DiTop},\mathbf{Top}\) will sometimes be regarded as Cartesian monoidal. Explicit constructions of circulations are often cumbersome. Instead, circulations can be implicitly constructed from certain global partial orders in the sense of the following result, a special case of a more general observation [43, Lemmas 4.2, 4.4 and Example 4.5]. The following theorem allows us to henceforth regard \(\mathbf{Pos}\)-objects as streams and monotone maps between them as stream maps.
**Theorem 4.7**, [43].: _There exists a fully faithful and concrete embedding_
\[\mathbf{Pos}\hookrightarrow\mathbf{DiTop},\]
_sending each \(\mathbf{Pos}\)-object \(P\) to a unique stream having the same underlying topological space as \(P\) and whose circulation sends the entire space to the given partial order on \(P\)._
### Cubical
Directed cubes can be modelled as finite Boolean lattices, more general complexes of such cubes can be modelled as posets, and even more general formal colimits of such cubes can be modelled as cubical sets. The paper expands the typical setting (eg. [44]).
#### 3.2.1. Cubes
There are several variants of the cube category (eg. [8, 33]). While the predecessor [44] to this paper adopts the minimal variant, this paper adopts the minimal symmetric monoidal variant. For a monotone function \(\phi:[1]^{n_{1}}\to[1]^{n_{2}}\) and \(1\leqslant i\leqslant n\), let \(\phi_{i;n}\) denote the Cartesian monoidal product
\[\phi_{i;n}=[1]^{i-1}\otimes\phi\otimes[1]^{n-i}:[1]^{n+n_{1}-1}\to[1]^{n+n_{2} -1}.\]
_Codegeneracies_ are monotone functions of the form \(\sigma_{i;n}:[1]^{n}\to[1]^{n-1}\). _Cofaces_ are monotone functions of the form \(\delta_{\pm i;n}=(\delta_{\pm})_{i;n}:[1]^{n-1}\to[1]^{n}\).
**Example 3.6**.: The codegeneracy \(\sigma_{i;n}\) is exactly the projection
\[\sigma_{i;n}:[1]^{n}\to[1]^{n-1}\]
onto all but the \(i\)th factor.
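Concretely, a small worked instance of the definitions above: taking \(n=3\) and \(i=2\),

\[\sigma_{2;3}=[1]\otimes\sigma\otimes[1]:[1]^{3}\to[1]^{2},\qquad\sigma_{2;3}(x_{1},x_{2},x_{3})=(x_{1},x_{3}),\]

the projection away from the second coordinate.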
Let \(\square_{1}\) denote the subcategory of \(\mathbf{Cat}\) generated by \(\delta_{\pm},\sigma\). The submonoidal category of \(\mathbf{Cat}\) generated by \(\square_{1}\) is the usual definition of the cube category in the literature, the subcategory of \(\mathbf{Cat}\) generated by all cofaces and codegeneracies. Instead let \(\square\) denote the _symmetric_ monoidal subcategory of the Cartesian monoidal category \(\mathbf{Cat}\) generated by \(\square_{1}\), a category whose objects are still the lattices \([0],[1],[1]^{2},[1]^{3},\ldots\) but whose morphisms are generated by the cofaces, codegeneracies, and coordinate permutations. We write \([1]^{\infty}\) for the (**pro**-\(\square\))-object defined as the limit
\[[1]^{\infty}=\lim\left(\cdots\xrightarrow{\sigma_{4;4}}[1]^{3}\xrightarrow{ \sigma_{3;3}}[1]^{2}\xrightarrow{\sigma_{2;2}}[1]^{1}\to[0]\right). \tag{1}\]
The following observation allows us to extend certain results on the minimal variant of the cube category to the new variant \(\square\).
**Lemma 6.2**, [44].: _For each \(n\) and interval \(I\) in \([1]^{n}\), there exist unique \(m_{I}\) and composite_
\[[1]^{m_{I}}\to[1]^{n}\]
_of cofaces that has image \(I\)._
We will repeatedly use the convenient fact that \(\square\) is the free strict symmetric monoidal category generated by the category \(\square_{1}\) pointed at \([0]\), in that every solid horizontal functor to a symmetric monoidal category \(\mathscr{M}\) sending \([0]\) to the unit uniquely extends to a strict monoidal functor making the following commute by observations made elsewhere [33].
There are some advantages to adding coordinate permutations to \(\square\). One is that the class of all directed realizations of cubical sets (see §3.3) includes, for example, all closed conal manifolds whose cone bundles are fibrewise generating and free [45, Theorem 1.1]. A bigger one is an explicit characterization of \(\square\)-morphisms [Theorem 3.10] to which the rest of this section is devoted.
**Example 3.7**.: In \(\square\) the \(\dots\)
1. \(\dots\) isomorphisms are the coordinate permutations
2. \(\dots\) monos are the cofaces up to coordinate permutation
3. \(\dots\) epis are the codegeneracies, projections onto some of the coordinates, up to coordinate permutation
Let \(\tau\) denote the coordinate transposition \([1]^{2}\to[1]^{2}\). _Principal coordinate transpositions_ are \(\square\)-morphisms of the form \(\tau_{i;n}:[1]^{n+1}\to[1]^{n+1}\).
**Lemma 3.8**.: _The following are equivalent for a monotone function of the form_
\[\phi:[1]^{m}\to[1]^{n}.\]
1. \(\phi\) _is bijective_
2. \(\phi\) _is an interval-preserving lattice isomorphism_
3. \(\phi\) _is a lattice isomorphism_
4. \(\phi\) _is a coordinate permutation_
5. \(\phi\) _is a composite of principal coordinate transpositions_
6. \(\phi\) _is a_ \(\square\)_-isomorphism_
The proof uses the fact that the symmetric group on \(\{1,2,\dots,n\}\) is generated by all principal transpositions, transpositions of the form \((i\,i+1)\) for \(1\leqslant i<n\) [14, §6.2].
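For instance, transported to \(\square\): the \(\square\)-isomorphism of \([1]^{3}\) exchanging the first and third coordinates factors through principal coordinate transpositions as

\[(x_{1},x_{2},x_{3})\mapsto(x_{3},x_{2},x_{1})\;=\;\tau_{1;2}\,\tau_{2;2}\,\tau_{1;2},\]

where, following the indexing of \(\phi_{i;n}\) above, \(\tau_{1;2}=\tau\otimes[1]\) swaps the first two coordinates and \(\tau_{2;2}=[1]\otimes\tau\) swaps the last two.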
Proof.: Let \(\mathbf{0}\) denote the minimum \((0,\dots,0)\) of an object of \(\square\). Let \(\mathbf{e}_{i}\) denote the element in \([1]^{n}\) whose coordinates are all \(0\) except for the \(i\)th coordinate.
It suffices to take \(m=n\) because all of the statements imply that \(\phi\) is a bijection between finite sets and hence \(\phi\) has domain and codomain both with the same cardinality.
Suppose (1).
Then \(\phi\) preserves extrema because it is a monotone surjection. Let \(I\) be an interval in \([1]^{n}\), necessarily isomorphic to a lattice of the form \([1]^{k}\). Then \(I\) contains exactly \(k!\) distinct maximal chains of length \(k\). The function \(\phi\) preserves chains of length \(k\) because it is a monotone injection between posets. Hence \(\phi(I)\) contains \(k!\) distinct maximal chains each of length \(k\) by \(\phi\) injective and monotone. Hence \(\phi(I)\) must be an interval in \([1]^{n}\). Thus \(\phi\) maps intervals onto intervals.
Finite non-empty suprema of the \(\mathbf{e}_{i}\)s are the maxima of intervals in \([1]^{n}\) containing \(\mathbf{0}\). And \(\phi\) maps intervals in \([1]^{n}\) containing \(\mathbf{0}\) onto intervals in \([1]^{n}\) containing \(\mathbf{0}\). It therefore follows that \(\phi\) preserves finite non-empty suprema of the \(\mathbf{e}_{i}\)s because monotone surjections preserve
maxima. Hence \(\phi\) preserves all finite non-empty suprema. Similarly \(\phi\) preserves all finite non-empty infima by duality. It therefore follows that \(\phi\) is a bijective lattice homomorphism and hence a lattice isomorphism. Hence (2).
And (2) implies (3).
Suppose (3). The function \(\phi\), a monoid automorphism with respect to \(\vee_{[1]^{m}}\), permutes the unique minimal set of monoid generators \(\mathbf{e}_{1},\mathbf{e}_{2},\ldots,\mathbf{e}_{n}\). Thus there exists a permutation \(\sigma\) of \(\{1,2,\ldots,n\}\) such that \(\phi(\mathbf{e}_{i})=\mathbf{e}_{\sigma(i)}\) for each \(i\). Hence \(\phi(x_{1},\ldots,x_{n})=\phi(\vee_{x_{i}=1}\mathbf{e}_{i})=\vee_{x_{i}=1} \phi(\mathbf{e}_{i})=\vee_{x_{i}=1}\mathbf{e}_{\sigma(i)}=(x_{\sigma(1)},\ldots,x_{\sigma(n)})\). Hence (4).
If (4), then \(\phi\) is a composite of transpositions of successive coordinates, principal coordinate transpositions [14, §6.2]. Hence (5).
If (5), then \(\phi\) is a composite of \(\square\)-isomorphisms and hence a \(\square\)-isomorphism. Hence (6).
If (6), then (1) because the forgetful functor \(\square\to\mathbf{Set}\), like all functors, preserves isomorphisms.
**Lemma 3.9**.: _The following are equivalent for a function of the form_
\[\phi:[1]^{m}\to[1]^{n}.\]
1. \(\phi\) _is a surjective interval-preserving lattice homomorphism_
2. \(\phi\) _is a surjective lattice homomorphism_
3. \(\phi\) _is a composite of codegeneracies and principal coordinate transpositions_
Proof.: For clarity, let \(\wedge=\wedge_{L}\) and \(\vee=\vee_{L}\) when the lattice \(L\) is clear from context. Let \(\mathbf{e}_{i}^{\perp}\) denote the element in \([1]^{m}\) whose only coordinate having value \(0\) is its \(i\)th coordinate.
(1) implies (2).
Suppose (2). Then \(m\geqslant n\) by surjectivity. We show (3) by induction on \(m-n\).
In the base case \(m=n\), \(\phi\) is a bijection because it is a surjection between sets of the same cardinality and hence is a composite of principal coordinate transpositions [Lemma 3.8].
Consider \(m-n>0\). Inductively suppose (3) for the case \(m-n<d\) and now consider the case \(m-n=d>0\). Then \(\phi\) is not injective by \(m>n\). Thus there exist distinct \(x,y\in[1]^{m}\) such that \(\phi(x)=\phi(y)\). There exists \(j\) such that \(x_{j}\neq y_{j}\) by \(x\neq y\). Take \(0=x_{j}<y_{j}=1\) and \(x_{i}=y_{i}=1\) for \(i\neq j\) without loss of generality by reordering \(x\) and \(y\) if necessary, replacing \(x\) with \(x\vee\mathbf{e}_{j}^{\perp}\) and \(y\) with \(y\vee\mathbf{e}_{j}^{\perp}\), and noting that \(\phi(x\vee\mathbf{e}_{j}^{\perp})=\phi(x)\vee\phi(\mathbf{e}_{j}^{\perp})=\phi(y)\vee\phi(\mathbf{e}_{j}^{\perp})=\phi(y\vee\mathbf{e}_{j}^{\perp})\). It suffices to show the existence of a dotted function making
commute. For then the dotted function is a surjective lattice homomorphism by \(\phi\) a surjective lattice homomorphism and \(\sigma_{j}\) a projection. To that end, suppose distinct \(x^{\prime},y^{\prime}\in[1]^{m}\) satisfy \(\sigma_{j}(x^{\prime})=\sigma_{j}(y^{\prime})\). It suffices to show \(\phi(x^{\prime})=\phi(y^{\prime})\). Take \(0=x^{\prime}_{j}<y^{\prime}_{j}=1\) without loss of generality. Then \(\phi(x^{\prime})=\phi(y^{\prime}\wedge x)=\phi(y^{\prime})\wedge\phi(x)=\phi(y ^{\prime})\wedge\phi(y)=\phi(y^{\prime}\wedge y)=\phi(y^{\prime})\). Hence (3).
(3) implies (1) because identities, \(\sigma\), and \(\tau\) are all surjective interval-preserving lattice homomorphisms, and surjective interval-preserving lattice homomorphisms are closed under composites and under the tensor in \(\square\).
**Theorem 3.10**.: _The following are equivalent for a function \(\phi\) of the form_
\[\phi:[1]^{m}\to[1]^{n}.\]
1. \(\phi\) _is an interval-preserving lattice homomorphism_
2. \(\phi\) _is a_ \(\square\)_-morphism_
Proof.: Suppose (1). The function \(\phi\) factors into a composite of its corestriction onto its image \(I\), regarded as a subposet of \([1]^{n}\), followed by an inclusion \(I\hookrightarrow[1]^{n}\). Both functions \([1]^{m}\to I\) and \(I\hookrightarrow[1]^{n}\) are interval-preserving lattice homomorphisms because \(\phi\) is an interval-preserving lattice homomorphism. Moreover \(I\hookrightarrow[1]^{n}\) is isomorphic to a \(\square\)-morphism [Lemma 6.2, [44]]. Hence to show (2), it suffices to take \(\phi\) surjective. In that case \(\phi\) factors as a composite of tensor products of identities with \(\sigma,\tau\) [Lemma 3.9]. Hence (2).
Suppose (2). Then \(\phi\) is an interval-preserving lattice homomorphism because \(\sigma,\delta_{\pm},\tau\) are interval-preserving lattice homomorphisms and \(\otimes\) preserves interval-preserving lattice homomorphisms. Hence (1).
#### 3.2.2. Cube configurations
Just as posets encode simplicial complexes whose simplices correspond to finite chains, posets can encode cubical complexes whose cubes correspond to finite Boolean intervals. Let \(\mathbf{Dis}\) be the category whose objects are the finite distributive lattices and whose morphisms are the lattice homomorphisms between such lattices preserving Boolean intervals.
**Example 3.11**.: The category \(\mathbf{Dis}\) contains \(\square\) as a full subcategory [Theorem 3.10].
Technical observations about \(\mathbf{Dis}\) [Lemma 3.12 and Proposition 3.13], which require specialized observations about finite distributive lattices, are proven in §A.
**Lemma 3.12**.: _The following are equivalent for a function_
\[\phi:L\to M\]
_between finite distributive lattices._
1. \(\phi\) _is a_ \(\mathbf{Dis}\)_-morphism_
2. _each restriction of_ \(\phi\) _to a Boolean interval in_ \(L\) _corestricts to a surjective lattice homomorphism onto a Boolean interval in_ \(M\)_._
For each \(k\), we can make the natural identifications

\[\left([1]^{n}\right)^{[k]}=[k+1]^{n}\]

under unique isomorphisms for the case \(n=1\), sending each monotone function \(\phi\in[1]^{[k]}\) to the element \(\sum_{i}\phi(i)\) of \([k+1]\), and hence under natural Cartesian monoidal \(\mathbf{Cat}\)-isomorphisms for the general case. Thus the construction \((-)^{[k]}\) intuitively subdivides an \(n\)-cube, as encoded by the Boolean lattice \([1]^{n}\), into \((k+1)^{n}\) subcubes. Proposition 3.13 below naturally extends this subdivision construction to an endofunctor \(\mathfrak{so}_{k+1}\) on \(\mathbf{Dis}\).
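As a quick check of the identification in the smallest nontrivial case \(n=k=1\): the monotone functions \([1]\to[1]\) are the two constants and the identity, and summing values identifies

\[[1]^{[1]}=\{\mathrm{const}_{0}<\mathrm{id}_{[1]}<\mathrm{const}_{1}\}\;\cong\;[2],\]

so \((-)^{[1]}\) subdivides the \(1\)-cube \([1]\) into the two Boolean intervals \([0,1]\) and \([1,2]\) of \([2]\), i.e. into \((1+1)^{1}=2\) subcubes.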
**Proposition 3.13**.: _Consider the commutative outer square_
_in which the bottom horizontal functor is the left Kan extension of the composite of the top horizontal and right vertical arrows along the left vertical inclusion. There exists a unique dotted monoidal functor making the entire diagram commute. For each monotone injection \(\phi:[n]\to[m]\), there exists a unique monoidal natural transformation \(\mathfrak{so}_{m+1}\to\mathfrak{so}_{n+1}\) whose \(I\)-component is defined by \(I^{\phi}\) for each \(\square\)-object \(I\)._
#### 3.2.3. Cubical sets
Take _cubical sets_ and _cubical functions_ to mean the respective objects and morphisms of \(\hat{\square}\). Regard \(\hat{\square}\) as closed symmetric monoidal with tensor \(\otimes\) characterized by \(\square[-]:\square\hookrightarrow\hat{\square}\) monoidal. The \(\square\)-morphisms from tensor products defined by **Cat**-projections induce inclusions of the following form, natural in cubical sets \(A\) and \(B\):
\[A\otimes B\hookrightarrow A\times B\]
Write \((-)_{n}\) for the functor \(\hat{\square}\to\textbf{Set}\) naturally defined on objects by
\[C_{n}=C([1]^{n}).\]
For each atomic cubical set \(A\), let \(\partial A\) denote the maximal subpresheaf of \(A\) having dimension \(\dim\,A-1\). For integers \(1\leqslant i\leqslant n\), let \(\sqcup^{\pm i}[1]^{n}\) denote the maximal subpresheaf of \(\square[1]^{n}\) for which \(\delta_{\pm i;n}\notin(\sqcup^{\pm i}[1]^{n})_{n-1}\).
**Example 3.14**.: For each \(1\leqslant i\leqslant n\), we have the inclusions of cubical sets
\[\sqcup^{\pm i}[1]^{n}\subset\partial\square[1]^{n}\subset\square[1]^{n}. \tag{2}\]
For each \(n>0\), \(\partial\square[1]^{n}\) intuitively models the boundary of an \(n\)-cube, an \(n\)-cube missing its interior; for integers \(1\leqslant i\leqslant n\), \(\sqcup^{\pm i}[1]^{n}\) intuitively models an \(n\)-cube missing its interior and its \(\pm i\)th face; and \(\square[1]^{n}\) models an \(n\)-cube.
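For example, unwinding the definitions in low dimensions: for \(n=1\), \(\partial\square[1]\) consists of the two vertices, the images of \(\square[\delta_{-}]\) and \(\square[\delta_{+}]\), while \(\sqcup^{\pm 1}[1]\) retains only the vertex opposite the deleted face,

\[\sqcup^{+1}[1]=\mathfrak{im}\,\square[\delta_{-}],\qquad\sqcup^{-1}[1]=\mathfrak{im}\,\square[\delta_{+}];\]

for \(n=2\), \(\partial\square[1]^{2}\) is the four-edge boundary of the square and \(\sqcup^{\pm i}[1]^{2}\) is the open box obtained from it by discarding the non-degenerate edge \(\delta_{\pm i;2}\) while keeping all four vertices.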
**Example 3.15**.: A (**pro**-\(\hat{\square}\))-morphism of the form
\[C\to\square[1]^{\infty},\]
from a cubical set \(C\) to the image of \([1]^{\infty}\) (1) under the extension of the Yoneda embedding to a functor \(\square[-]:\textbf{pro}\text{-}\square\to\textbf{pro}\text{-}\hat{\square}\), informally, is the data of a cubical function from \(C\) to a representable of arbitrarily large dimension up to surjective morphisms between such representables.
Define a monoidal adjunction \(\mathrm{T}_{1}\dashv\mathfrak{ner}\) of the form
\[\mathrm{T}_{1}:\hat{\square}\leftrightarrow\textbf{Cat}:\mathfrak{ner}\,,\]
where the cocontinuous monoidal functor \(\mathrm{T}_{1}\) is characterized by the commutative diagram
because \(\square\) is the free symmetric monoidal category on \(\square_{1}\) having unit \([0]\)[33]. Call \(\mathfrak{ner}\,\mathcal{X}\) the _cubical nerve_ of a small category \(\mathcal{X}\). For each finite poset \(P\), let \(\square[P]\) denote the subpresheaf of \(\mathfrak{ner}\,P\) whose \(n\)-cubes are all monotone functions \([1]^{n}\to P\) preserving binary infima and binary suprema and mapping (Boolean) intervals onto Boolean intervals. For each monotone function \(\phi:P\to Q\) of posets mapping Boolean intervals onto Boolean intervals, \(\mathfrak{ner}\,\phi\) restricts and corestricts to a cubical function \(\square[\phi]:\square[P]\to\square[Q]\). In particular, \(\square[-]\) will not only denote the Yoneda embedding \(\square\to\hat{\square}\), but also its extension
\[\square[-]:\textbf{Dis}\to\hat{\square}.\]
The _vertices_ of a cubical set \(C\) are the elements of \(C_{0}\). Let \(\mathrm{Star}_{C}(v)\) denote the _closed star_ of a vertex \(v\in C_{0}\) in \(C\), the subpresheaf of \(C\) consisting of all images \(A\subset C\) of representables in \(C\) with \(v\in A_{0}\). Call \(C\,\dots\)

1. \(\dots\)_atomic_ if \(C\) is the image of a representable.
2. \(\dots\)_finite_ if \(C\) has finitely many atomic subpresheaves.
The _dimension_ of the initial cubical set \(\varnothing\) is \(-1\) and the dimension of a non-initial cubical set \(C\neq\varnothing\) is the infimum over all \(n=0,1,\ldots\) such that \(C\) is the colimit of representables of the form \(\square[1]^{n}\).
**Lemma 3.16**.: _The functor \(\square[-]:\mathbf{Dis}\to\hat{\square}\) is fully faithful and cocontinuous._
Proof.: For \(\bigcirc\) a skeleton of \(\mathbf{Dis}\) containing \(\square\),
commutes up to natural isomorphism. In this diagram, the Yoneda embedding \(\bigcirc[-]\) is fully faithful and cocontinuous. The other diagonal arrow, a functor between presheaf categories induced by a functor between sites and therefore cocontinuous, is fully faithful [Lemma 3.12]. Therefore \(\square[-]:\mathbf{Dis}\to\hat{\square}\) is naturally isomorphic to a composite of a categorical equivalence \(\mathbf{Dis}\simeq\bigcirc\) followed by fully faithful and cocontinuous functors.
We extend \(\mathfrak{so}_{k+1}\) to an endofunctor on \(\hat{\square}\) as follows.
**Proposition 3.17**.: _There exists a unique dotted monoidal left adjoint making_
_commute up to natural isomorphism._
Proof.: The left Kan extension of the composite of the top horizontal with right vertical functors along the left vertical functor makes the entire diagram commute up to natural isomorphism by the left vertical functor cocontinuous [Lemma 3.16]. This left Kan extension is monoidal by the top horizontal functor monoidal and \(\otimes\) cocontinuous.
Intuitively, \(\mathfrak{so}_{k+1}C\) is the cubical set obtained by taking \((k+1)\)-fold edgewise subdivisions of the cubes in \(C\).
**Example 3.18**.: There exists a natural isomorphism
\[\mathfrak{so}_{1}\cong 1_{\hat{\square}}:\hat{\square}\cong\hat{\square}.\]
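A useful reading of the construction, consistent with the counit component \(\epsilon_{\square[1]^{n}}:\square[3]^{n}\to\square[1]^{n}\) for \(\mathfrak{so}_{3}\) below and with Proposition 3.28: combining Proposition 3.17 with the identification \(\mathfrak{so}_{k+1}[1]^{n}=\left([1]^{n}\right)^{[k]}=[k+1]^{n}\) in \(\mathbf{Dis}\) implicit in Proposition 3.13 gives, up to natural isomorphism,

\[\mathfrak{so}_{k+1}\,\square[1]^{n}\;\cong\;\square[k+1]^{n},\]

the \(n\)-cube subdivided \((k+1)\)-fold in each coordinate direction.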
Write \(\mathfrak{ex}_{k+1}\) for the right adjoint to \(\mathfrak{so}_{k+1}\) in the adjunction

\[\mathfrak{so}_{k+1}:\hat{\square}\leftrightarrow\hat{\square}:\mathfrak{ex}_{k+1}.\]
Regard \(\mathfrak{so}_{3}\) as copointed by the unique monoidal natural transformation \(\epsilon\) such that
\[\epsilon_{\square[1]^{n}}=\square\left[(-)^{0\to 1:[0]\to[2]}\right]:\square[3]^{n}\to\square[1]^{n}.\]
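Unwinding this for \(n=1\), as a reading aid: under \(\mathfrak{so}_{3}[1]=[1]^{[2]}\cong[3]\), precomposition with \(0\mapsto 1:[0]\to[2]\) evaluates a monotone function \([2]\to[1]\) at \(1\), so \(\epsilon_{\square[1]}\) is \(\square[-]\) applied to the monotone map

\[[3]\to[1],\qquad 0,1\mapsto 0,\quad 2,3\mapsto 1,\]

which collapses the two outer thirds of the subdivided interval onto the endpoints and carries the middle third onto the whole interval.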
Define a cubical analogue \(\Omega^{n}(C,v)\) of an \(n\)-fold loop space by the following Cartesian square natural in a cubical set \(C\) equipped with vertex \(v\in C_{0}\), where \(\langle v\rangle\) denotes the
minimal subpresheaf of \(C\) containing \(v\) as its unique vertex.
**Example 3.19**.: For each monoid \(M\), \(\Omega^{1}(\mathfrak{ner}\,M,\star)\) is the discrete cubical set \(M\).
A crucial technical tool in classical proofs of simplicial approximation is the factorizability of double barycentric subdivisions through polyhedral complexes [15, §12]. There exists a directed cubical analogue. The following three lemmas adapt observations made in a predecessor to this paper [44, Lemmas 6.11, 6.12, 6.13] from the traditional setting of cubical sets to the cubical sets considered in this paper and from sixteen-fold subdivision \(\mathfrak{so}_{16}=\mathfrak{so}_{2}^{4}\) to nine-fold subdivision \(\mathfrak{so}_{9}=\mathfrak{so}_{3}^{2}\); justifications are given after all three lemmas are stated. Recall that under our conventions, \(\mathsf{supp}_{\mathfrak{so}_{3}}(v,C)\) denotes the minimal subpresheaf \(B\) of \(C\) for which \(\mathfrak{so}_{3}B\) has vertex \(v\).
**Lemma 3.20**.: _For all cubical sets \(C\) and \(v\in\mathfrak{so}_{3}C_{0}\),_
\[\epsilon_{C}(\mathrm{Star}_{\mathfrak{so}_{3}C}(v))\subset\mathsf{supp}_{\mathfrak{so}_{3}}(v,C).\]
**Lemma 3.21**.: _Fix cubical set \(C\) and atomic subpresheaf \(A\subset\mathfrak{so}_{3}C\). There exist:_
1. _unique minimal subpresheaf_ \(C_{A}\subset C\) _with_ \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\)__
2. _retraction_ \(\pi_{(C,A)}:A\to A\cap\mathfrak{so}_{3}C_{A}\)_, unique up to isomorphism_
_Moreover, \(A\cap\mathfrak{so}_{3}C_{A}\) is representable and \(\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)=\epsilon_{C}(A\cap\mathfrak{so }_{3}C_{A}\hookrightarrow\mathfrak{so}_{3}C)\pi_{(C,A)}\)._
**Lemma 3.22**.: _Consider the left of the solid commutative diagrams_
_where \(A^{\prime},A^{\prime\prime}\) are non-empty subpresheaves of atomic cubical sets. Suppose \(B^{\prime},B^{\prime\prime}\) are minimal respective subpresheaves of \(C^{\prime},C^{\prime\prime}\) such that \(A^{\prime}\cap\mathfrak{so}_{3}B^{\prime}\neq\varnothing\) and \(A^{\prime\prime}\cap\mathfrak{so}_{3}B^{\prime\prime}\neq\varnothing\). Let \(\pi^{\prime},\pi^{\prime\prime}\) be retractions of inclusions in the right diagram. There exists a unique dotted cubical function making the right square commute._
The claim that \(A\cap\mathfrak{so}_{3}C_{A}\) is representable in Lemma 3.21 follows from the fact that \(C_{A}\) and hence also \(A\cap\mathfrak{so}_{3}C_{A}\) are atomic and \(A\cap\mathfrak{so}_{3}\partial C_{A}=\varnothing\) by minimality. To show the other claims, it suffices to take the case where \(C\) is representable by naturality and hence the even more special case where \(C=\square[1]\) because all the functors and natural transformations in sight are monoidal. In that case, these other claims follow from inspection.
**Lemma 3.23**.: _Consider the top left vertical inclusion of cubical sets in_
_There exist dotted cubical functions, natural in objects \(A\hookrightarrow\mathfrak{so}_{3}C\) in the full subcategory of \((\hat{\square}/\mathfrak{so}_{3})\) consisting of inclusions of non-empty subpresheaves \(A\) of atomic subpresheaves
of \(\mathfrak{so}_{3}C\), making the diagram commute. The right vertical arrows can be chosen to have as their image the minimal subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\)._
The proof mimics a proof of an analogous result in a predecessor to this paper [44, Lemma 8.16]. That result is stated at the level of streams instead of cubical sets and for \(\mathfrak{so}_{4}=\mathfrak{so}_{2}^{2}\) instead of \(\mathfrak{so}_{3}\). We therefore include the following proof for completeness.
Proof.: Call the objects in the full subcategory of \((\hat{\square}/\mathfrak{so}_{3})\) consisting of inclusions \(A\hookrightarrow\mathfrak{so}_{3}C\) of non-empty subpresheaves \(A\) of atomic subpresheaves of \(\mathfrak{so}_{3}C\) _subatomic inclusions_. Let \(\epsilon_{(C,A)}=\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)\).
There exists a unique minimal atomic subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\) [Lemma 3.21]. The inclusion \(B_{(C,A)}\hookrightarrow A\) of \(B_{(C,A)}=A\cap\mathfrak{so}_{3}C_{A}\) admits a retraction \(\pi_{(C,A)}\) making the following diagram commute [Lemmas 3.21 and 3.22]:
(3)
The cubical set \(B_{(C,A)}\) is isomorphic to a representable [Lemma 3.21]. It therefore suffices to show that the above diagram is natural in subatomic inclusions \(A\hookrightarrow\mathfrak{so}_{3}C\). To that end, consider the solid commutative outer rectangle in the diagram
in which the top vertical arrows are subatomic inclusions. There exists a unique dotted cubical function making the upper trapezoid commute [Lemma 3.22]. The triangles commute by (3) commutative. The lower trapezoid commutes because the outer rectangle commutes and the cubical functions of the form \(\pi_{(C,A)}\) are epi. Thus the entire diagram commutes. The desired naturality of (3) follows.
The following lemma defines pro-diagrams that encode something like weak factorization systems.
**Lemma 3.24**.: _There exists a functor \(F_{C}\), natural in cubical sets \(C\), of the form_
\[F_{C}:(\mathcal{A}(\mathfrak{so}_{3}C))^{\mathrm{op}}\to\mathbf{pro}\text{-} \left(\square/C\right),\]
_where \(\mathcal{A}(\mathfrak{so}_{3}C)\) is the poset of non-empty subpresheaves of atomic subpresheaves of \(\mathfrak{so}_{3}C\) ordered by inclusion, satisfying the following. For each \(\mathcal{A}(\mathfrak{so}_{3}C)^{\mathrm{op}}\)-object \(A\), \(F_{C}A:\square[1]^{\infty}\to C\) and \(F_{C}A\) has as its image the minimal subpresheaf \(C_{A}\) of \(C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\). For each \(\mathcal{A}(\mathfrak{so}_{3}C)\)-morphism \(A^{\prime}\hookrightarrow A^{\prime\prime}\), \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) is represented by a monic natural transformation between cofiltered diagrams in \(\square/C\)._
The proof relies on the fact that parallel epis in \(\square\) are always isomorphic to one another in the arrow category \(\square^{[1]}\). For this reason the proof does not adapt to larger variants of \(\square\) that include, for example, coconnections of one or both kinds.
Proof.: Let \(A\) denote an atomic subpresheaf of \(\mathfrak{so}_{3}C\). There exists a unique minimal atomic subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\) for each \(A\) [Lemma 3.21]. Let \(n_{A}=\dim\,C_{A}\). Let \(\pi_{A}\) denote a choice, unique up to \((\square/C_{A})\)-isomorphism, of epi \(\square[1]^{n_{A}}\to C_{A}\) for each \(A\). Let \(F_{C}A\) denote the limit in \(\mathbf{pro}\text{-}(\square/C)\) of the cofiltered diagram whose morphisms are the outer triangles in commutative triangles of the form
Consider an \(\mathcal{A}(\mathfrak{so}_{3}C)\)-morphism \(A^{\prime}\hookrightarrow A^{\prime\prime}\). Then \(C_{A^{\prime\prime}}\subset C_{A^{\prime}}\) by minimality. The cubical set \(A^{\prime\prime}\) is atomic and hence \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) is an atomic subpresheaf of \(\mathfrak{so}_{3}C_{A^{\prime}}\). The top dimensional cube in the atomic cubical set \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) is not a cube in \(\mathfrak{so}_{3}\partial C_{A^{\prime}}\) because \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) contains an atomic subpresheaf \(A^{\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) which does not intersect \(\mathfrak{so}_{3}\partial C_{A^{\prime}}\) by minimality of \(C_{A^{\prime}}\). Therefore \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) has a unique atomic preimage under \(\mathfrak{so}_{3}\pi_{A^{\prime}}\). Therefore there exists a unique minimal and hence atomic subpresheaf \(P_{A^{\prime},A^{\prime\prime}}\subset\square[1]^{n_{A^{\prime}}}\) with \(\mathfrak{so}_{3}P_{A^{\prime}A^{\prime\prime}}\) intersecting the preimage of \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) under \(\mathfrak{so}_{3}\pi_{A^{\prime}}\) [Lemma 3.21]. The cubical set \(P_{A^{\prime},A^{\prime\prime}}\), an atomic subpresheaf of \(\square[1]^{n_{A^{\prime}}}\), is isomorphic to a representable. The restriction of \(\pi_{A^{\prime}}\) to \(P_{A^{\prime},A^{\prime\prime}}\) corestricts to a cubical function \(\pi_{A^{\prime},A^{\prime\prime}}\) making the diagram
commute by minimality of \(P_{A^{\prime},A^{\prime\prime}}\). Thus it is possible to define \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) as the \(\mathbf{pro}\text{-}(\square/C)\)-morphism \(F_{C}A^{\prime\prime}\to F_{C}A^{\prime}\) induced by the vertical arrows in the commutative diagram above. For each \(\mathcal{A}(\mathfrak{so}_{3}C)\)-object \(A\), \(F_{C}(1_{A})=1_{F_{C}A}\) because \(P_{A,A}=P_{A}\). It therefore suffices to show \(F_{C}\) preserves composition. For then \(F_{C}\), which preserves identities, would define the desired functor.
To that end, consider a composable sequence of \(\mathcal{A}(\mathfrak{so}_{3}C)\)-morphisms
\[A^{\prime}\hookrightarrow A^{\prime\prime}\hookrightarrow A^{\prime\prime\prime}.\]
Observe \(P_{A^{\prime},A^{\prime\prime\prime}}\subset P_{A^{\prime},A^{\prime\prime }}\) by minimality. Consider the solid arrows in
There exists a unique rightmost dotted horizontal epi in the top row whose composite with \(\pi_{A^{\prime\prime\prime}}\) is \(\pi_{A^{\prime\prime},A^{\prime\prime\prime}}\) by \(\pi_{A^{\prime\prime\prime}}\) terminal in \(\square/C\) among all epis from representables having image \(C_{A^{\prime\prime\prime}}\). There exists a unique rightmost dotted horizontal epi in the middle row whose
composite with \(\pi_{A^{\prime\prime}}\) is \(\pi_{A^{\prime},A^{\prime\prime}}\) by \(\pi_{A^{\prime\prime}}\) terminal in \(\square/C\) among all epis from representables having image \(C_{A^{\prime\prime}}\).. There exists a unique leftmost dotted horizontal epi in the top row whose composite with \(\pi_{A^{\prime\prime},A^{\prime\prime\prime}}\), the composite of the other arrows in the top row, is \(\pi_{A^{\prime},A^{\prime\prime\prime}}\) by minimality in our choice of \(P_{A^{\prime},A^{\prime\prime\prime}}\). The cofiltered limits of the top, middle, and bottom rows define the respective objects \(F_{C}A^{\prime\prime\prime},F_{C}A^{\prime\prime},F_{C}A^{\prime}\) because epis in \(\square\) are determined up to isomorphism by their domain and codomain [Lemma 3.9]. Vertical inclusions define natural transformations of these aforementioned diagrams. The top vertical arrows induce \(F_{C}(A^{\prime\prime}\hookrightarrow A^{\prime\prime\prime})\) by I commutative. The bottom vertical arrows induce \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) by II commutative. The composite of the vertical arrows induces \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime\prime})\) by I+II+III commutative. \(\square\)
### Comparisons
Define dotted functors in the commutative diagram
in which the right vertical arrow is the forgetful functor, so that \(\left\lvert\square[\delta_{\pm}]\right\rvert\) is the stream map \(\star\to\vec{\mathbb{I}}\) having image \(\nicefrac{{1}}{{2}}\pm\nicefrac{{1}}{{2}}\).
**Example 3.25**.: We can make the identifications
\[\left\lvert\square[1]^{n}\right\rvert=\mathbb{I}^{n}\quad\text{(as spaces)}\qquad\quad\left\lvert\square[1]^{n}\right\rvert=\vec{\mathbb{I}}^{n}\quad\text{(as streams)}\]
along the continuous function that naturally sends each vertex \((x_{1},\dots,x_{n})\in[1]^{n}\subset\left\lvert\square[1]^{n}\right\rvert\) to \((x_{1},\dots,x_{n})\in\mathbb{I}^{n}\).
Directed realization preserves embeddings by a straightforward adaptation of a proof under the usual definition of cubical sets [44, Theorem 6.19].
**Proposition 3.26**.: _For each monic cubical function \(\iota\), \(\left\lvert\iota\right\rvert\) is a stream embedding._
**Example 3.27**.: There exists a stream embedding of the form
\[\left\lvert A\otimes B\hookrightarrow A\times B\right\rvert\colon(\left\lvert A \right\rvert\times\left\lvert B\right\rvert)\hookrightarrow\left\lvert(A \times B)\right\rvert,\]
natural in cubical sets \(A\) and \(B\).
For each cubical set \(C\), write \(\varphi_{C;k+1}\) for the component
\[\varphi_{C;k+1}:\left\lvert\mathfrak{so}_{k+1}C\right\rvert\cong\left\lvert C\right\rvert\]
of the natural isomorphism defined by the following proposition.
**Proposition 3.28**.: _The following diagram_
_commutes up to a natural isomorphism whose \(\square[1]^{n}\)-component \(\left\lvert\mathfrak{so}_{k+1}\square[1]^{n}\right\rvert\cong\left\lvert \square[1]^{n}\right\rvert\) is linear on each cell and sends each geometric vertex \(v\in[k+1]^{n}\) in \(\left\lvert\square[k+1]^{n}\right\rvert\) to \(\nicefrac{{v}}{{k+1}}\in\mathbb{I}^{n}\)._
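For instance, with \(k+1=3\) and \(n=1\) this says that the homeomorphism \(\varphi_{\square[1];3}:\left\lvert\mathfrak{so}_{3}\square[1]\right\rvert\cong\left\lvert\square[1]\right\rvert\) is the piecewise-linear map determined on vertices by

\[0\mapsto 0,\qquad 1\mapsto\tfrac{1}{3},\qquad 2\mapsto\tfrac{2}{3},\qquad 3\mapsto 1,\]

linear on each of the three edges of \(\left\lvert\square[3]\right\rvert\).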
Let \(\mathsf{sing}\) denote the right adjoint to \(\left\lvert-\right\rvert\colon\hat{\Box}\to\mathbf{DiTop}\) naturally defined by
\[(\mathsf{sing}\,X)_{n}=\mathbf{DiTop}(\left\lvert\Box[1]^{n}\right\rvert,X).\]
The following lemma is the main method of obtaining information about edge orientations on a cubical set from the circulation on a directed realization. Recall that under our definition of supports, \(\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,L)\) is the minimal Boolean interval \(I\) in a finite distributive lattice \(L\) such that \(x\in\left\lvert\Box[I]\right\rvert\).
**Lemma 3.29**.: _Fix a \(\mathbf{Dis}\)-object \(L\). Consider \(x\leqslant_{\left\lvert\Box[L]\right\rvert}y\). Then_
\[\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,L)\leqslant_{L}\min \mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(y,L).\]
Proof.: In the case \(L=[1]^{n}\),
\[\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,[1]^{n})=(\lfloor x_{1}\rfloor,\dots,\lfloor x_{n}\rfloor)\leqslant_{[1]^{n}}(\lfloor y_{1}\rfloor,\dots,\lfloor y_{n}\rfloor)=\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(y,[1]^{n}).\]
The general case follows from transitivity of preorders.
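For a concrete instance, using the identification of Example 3.25 and assuming the global preorder on \(\vec{\mathbb{I}}^{2}\) is the coordinatewise one: if \(x=(\nicefrac{1}{3},1)\leqslant_{\left\lvert\square[1]^{2}\right\rvert}y=(1,1)\), then

\[\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,[1]^{2})=(0,1)\leqslant_{[1]^{2}}(1,1)=\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(y,[1]^{2}),\]

as the lemma predicts.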
**Remark 3.30**.: The following classes coincide [2, Theorem 2.5]:
1. \(\mathrm{CAT}(0)\) cubical complexes
2. cubical complexes in which the cubes represent the Boolean intervals in a poset of _consistent ideals_ in a _poset-with-inconsistent-pairs_
_Posets-with-inconsistent-pairs_ refer to posets with certain extra structure in which the _consistent ideals_ are the lower sets compatible with that extra structure. Stone duality generalizes to a duality between _distributive semilattices_ and structures which, in the finite case, coincide with posets-with-inconsistent-pairs [25, Propositions 5.7, 5.8]. Thus the finite \(\mathrm{CAT}(0)\) cubical complexes are precisely the cubical complexes of the form \(\left\lvert\Box[L]\right\rvert\) for \(L\) a finite distributive semilattice. The \(\mathrm{CAT}(0)\) condition has been recently studied in directed homotopy theory as a sufficient criterion for fundamental categories to faithfully embed into fundamental groupoids [28].
We end the section with an analogue of Reedy cofibrant replacement for the support of a directed singular cube and a special _right_ lifting property for this replacement against directed cubes.
**Lemma 3.31**.: _Let \(f\) denote a \((\left\lvert\Box[-]\right\rvert/\left\lvert\mathfrak{so}_{3}-\right\rvert)\)-object as in the diagram_
_Let \(\mathscr{R}\) be the full subcategory of \(\hat{\Box}\) consisting of those cubical sets whose atomic subpresheaves are all isomorphic to representables. There exist dotted \((\mathbf{pro}\text{-}(\mathscr{R}/C_{f}))\)-object \(\Lambda_{f}^{*}\) and \((\mathbf{pro}\text{-}\mathbf{DiTop})\)-morphism \(f^{*}\), both natural in \(f\), making the diagram commute._
The proof relies on the fact that the natural quotient functor
\[\mathbf{pro}\text{-}\left(\mathscr{X}^{\mathscr{G}}\right)\to(\mathbf{pro} \text{-}\mathscr{X})^{\mathscr{G}} \tag{4}\]
is a categorical equivalence for categories \(\mathscr{G},\mathscr{X}\) with \(\mathscr{G}\) finite and having only identities for endomorphisms [54, §3]. In our case, \(\mathscr{G}\) is a poset of Boolean intervals in a lattice of the form \(\mathfrak{so}_{k+1}[1]^{n_{f}}\). The acyclicity required of \(\mathscr{G}\) generalizes the inductive structure of Reedy
categories. Factorizations in \(\operatorname{\mathbf{pro}}\)-\(\mathscr{X}\) resemble weak factorization systems in model structures. Certain choices \(o\) of diagrams \(\mathscr{G}\to\mathscr{X}\) whose formal limit coincides with an object in the codomain of (4) resemble inductive constructions like Reedy-cofibrant replacement. When \(\mathscr{X}=\square/C_{f}\), the colimits of the choices \(o\) correspond to analogues \(C^{*}_{f;o}\to C_{f}\) of Reedy-cofibrant replacement. When \(\mathscr{X}\) is a more complicated category \(\mathscr{F}_{f}\) of local lifts of \(f\) up to natural directed homotopy, the choices \(o\) give the replacement \(C^{*}_{f;o}\to C_{f}\) as well as the lift of \(|\epsilon_{C_{f}}|\;f\) at once.
Proof.: For brevity, write \(\varphi_{f;k}\) for the homeomorphism
\[\varphi_{f;k}=\varphi_{\square[1]^{n_{f}};2^{k}}:|\mathfrak{so}_{2}^{k} \square[1]^{n_{f}}|\cong|\square[1]^{n_{f}}|.\]
For each \(i=0,1,2\), write \(f_{i}\) for the stream map
\[f_{i}= |\epsilon^{i}_{C_{f}}|\;f:|\square[1]^{n_{f}}|\to |\mathfrak{so}_{3}^{2-i}C_{f}|\;.\]
Let \(\mathcal{A}(C)\) denote the category whose objects are the non-empty subpresheaves of atomic subpresheaves of a cubical set \(C\) and whose morphisms are all inclusions between them. Let \(\mathcal{L}_{f}\) denote the poset, ordered under inclusion, of all order-convex subtopological sublattices of \(|\square[1]^{n_{f}}|\) that \(f\) maps into open stars of vertices. Let \(\mathcal{L}_{f;k,j}\) denote the subposet of \(\mathcal{L}_{f}\) consisting of all images of closed cells under \(\varphi_{f;k+i}\) for all \(0\leqslant i\leqslant j\). Let \(L\) denote an \(\mathcal{L}_{f}\)-object. Let \(\mathscr{R}\) be the full subcategory of \(\hat{\square}\) whose objects are those cubical sets whose atomic subpresheaves are isomorphic to representables. Let \(\mathscr{D}\) be the category of compact Hausdorff topological distributive lattices and continuous lattice homomorphisms between them. For each injective \(\square\)-morphism \(\delta\), write \(\delta^{\dagger}\) for the unique retraction in \(\square\) to \(\delta\). Let an _injection_ of the form \([m]\to[m+n]\) simply refer to an injection of underlying sets.
_terminal local lifts_: There exists a unique minimal non-empty and hence atomic subpresheaf \(C_{A}\subset C\) such that \(\mathfrak{so}_{3}C_{A}\cap A\neq\varnothing\) [Lemma 3.21] for each cubical set \(C\) and \(\mathcal{A}(\mathfrak{so}_{3}C)\)-object \(A\). There exists a choice of cubical function \(\theta_{A}:\square[1]^{\dim\;C_{A}}\to C\) unique up to \((\square/C)\)-isomorphism by minimality of \(\dim\;C_{A}\). There exists a choice of cubical function \(\psi_{A}:A\to\square[1]^{\dim\;C_{A}}\), natural in cubical sets \(C\) and \(\mathcal{A}(\mathfrak{so}_{3}C)\)-objects \(A\), lifting \(\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)\) against \(\theta_{A}\) [Lemma 3.23].
Let \(V_{f}L\) be the set of all vertices in \(\mathsf{supp}_{|-|}(f,\mathfrak{so}_{9}C_{f})\), finite by \(|\square[1]^{n_{f}}|\) compact, whose open stars contain \(f_{0}(L)\). The vertices in \(V_{f}L\), whose open stars have non-empty intersection, therefore are the vertices of a unique closed cell \(E_{f}L\) in \(|\mathfrak{so}_{9}C_{f}|\). Then \(A_{f}L=\mathsf{supp}_{\mathfrak{so}_{3}}(E_{f}L,\mathfrak{so}_{9}C_{f})\) is an \(\mathcal{A}(\mathfrak{so}_{3}C_{f})\)-object [Lemma 3.20]. Thus \(A_{f}\) defines a monotone function \(\mathcal{L}_{f}\to\mathcal{A}(\mathfrak{so}_{3}C_{f})\) natural in \(f\). Let \(n_{f;L}=\dim\;C_{A_{f}L}\). Let \(\theta_{f;L}=\theta_{A_{f}L}\). The restriction of \(f_{1}\) to \(L\) has image in \(|A_{f}L|\) and therefore corestricts to a stream map \(L\to|A_{f}L|\) [Proposition 3.26]. Let \(f_{L}^{*}:L\to|\square[1]^{n_{f;L}}|\) denote the composite of this corestriction with \(|\psi_{A_{f}L}|\), giving the choice
of horizontal arrows in the right of the diagrams
(5)
_local pro-lifts \(\Gamma_{f}L\)_: Let \(\pi_{s;\phi}\) denote the commutative triangle relating the stream maps \(s_{0}\times s_{1}\times\dots\times s_{m+n}\) and \(s_{\phi(0)}\times\dots\times s_{\phi(m)}\) along the coordinate projection \((x_{0},\dots,x_{m+n})\mapsto(x_{\phi(0)},x_{\phi(1)},\dots,x_{\phi(m)})\). \(\ldots\)
in which the vertical arrows on the right are defined by the components of a limiting cone, commute by our choice of \(P_{f}\). Define \(f_{L_{1},L_{2}}^{*}\) by the commutative diagram
Fix an \(\mathcal{I}_{L_{1}}\)-object \(s:L_{1}\to\vec{\mathbb{I}}^{i_{s}}\). To show that the diagram
commutes, it suffices to show that the outer rectangle commutes because all of the inner triangles commute. It therefore suffices to show that both possible composites \(L_{1}\to\vec{\mathbb{I}}^{n_{f;L_{1}}+i_{s}}\) of maximal composable sequences of arrows in the diagram coincide. The image of \(f_{L_{1}}^{*}\) lies in the image of \(|\square[\delta_{f;L_{1},L_{2}}]|\) by naturality of the construction. Both such composites thus coincide on the first \(n_{f;L_{2}}\) coordinates. Both such composites thus also coincide on the next \(n_{f;L_{1},L_{2}}\) coordinates because the composite \(|\square[\delta_{f;L_{1},L_{2}}^{\dagger}]||\square[\delta_{f;L_{1},L_{2}}]|\) is the identity on the image of \(|\square[\delta_{f;L_{1},L_{2}}]|\) and \(sr_{L_{1},L_{2}}(L_{1}\hookrightarrow L_{2})=s\). Finally, both such composites coincide on the last \(i_{s}\) coordinates because \(sr_{L_{1},L_{2}}(L_{1}\hookrightarrow L_{2})=s\).
For each projection \(p:\vec{\mathbb{I}}^{i_{s}}\to\vec{\mathbb{I}}\), \(p(sr_{L_{1},L_{2}})\) is uniquely determined by \(ps\). For each \(\mathcal{I}_{L_{1}}\)-morphism of the form \(\pi_{s;\phi}:s^{\prime}\to s^{\prime\prime}\), \(f_{L_{1},L_{2}}^{*}\times\pi_{s;\phi}\) defines an \(\mathcal{I}_{L_{2}}\)-morphism. It therefore follows that there exists a unique dotted (**pro**-\(\mathscr{F}_{f}\))-morphism making the right of the diagrams
in which the vertical arrows are the components of limiting cones, commute for each choice of bottom horizontal \(\mathscr{F}_{f}\)-morphism given by left commutative diagrams in which the unlabelled arrows are composites of projections, onto the first \(n_{f;L_{2}}\) and \(n_{f;L_{1}}\) coordinates, with stream maps \(|\theta_{f;L_{2}}|\) and \(|\theta_{f;L_{1}}|\) [Lemma B.1].
\(\Gamma_{f}\) _defines a functor_: In the case \(L=L_{1}=L_{2}\), \(\delta_{f;L_{1},L_{2}}=1_{[0]}\), hence the left commutative square above is an identity arrow in \(\mathscr{F}_{f}\). We therefore conclude \(\Gamma_{f}(L\hookrightarrow L)=1_{\Gamma_{f}L}\) [Lemma B.1].
For inclusions \(L_{1}\hookrightarrow L_{2}\hookrightarrow L_{3}\) in \(\mathcal{L}_{f}\),
\[\big{(}f^{*}_{L_{3}}\times f^{*}_{L_{2},L_{3}}\times f^{*}_{L_{1},L_{3}}\times sr _{L_{1},L_{2}}r_{L_{2},L_{3}}\big{)}=\big{(}f^{*}_{L_{3}}\times f^{*}_{L_{1},L_ {3}}\times sr_{L_{1},L_{3}}\big{)}\]
by \(r_{L_{1},L_{2}}r_{L_{2},L_{3}}=r_{L_{1},L_{3}}\), \(\delta_{f;L_{1},L_{3}}=(\delta_{f;L_{2},L_{3}}\otimes[1]^{n_{f;L_{1},L_{2}}}) (\delta_{f;L_{1},L_{2}})\) and hence also \(\delta^{\dagger}_{f;L_{1},L_{3}}=(\delta^{\dagger}_{f;L_{1},L_{2}})(\delta_{f ;L_{2},L_{3}}\otimes[1]^{n_{f;L_{1},L_{2}}})^{\dagger}\). We therefore conclude \(\Gamma_{f}(L_{1}\hookrightarrow L_{3})=\Gamma_{f}(L_{2}\hookrightarrow L_{3}) \Gamma_{f}(L_{1}\hookrightarrow L_{2})\) [Lemma B.1].
_global lifts_: The composite functor
\[\Gamma_{f;k,j}=\Gamma_{f}(\mathcal{L}_{f;k,j}\hookrightarrow\mathcal{L}_{f}): \mathcal{L}^{\mathrm{op}}_{f;k,j}\to\mathbf{pro\text{-}}\mathscr{F}_{f}\]
lifts under the natural quotient functor
\[\mathbf{pro\text{-}}\left(\mathscr{F}_{f}^{\mathcal{L}^{\mathrm{op}}_{f;k,j}} \right)\to(\mathbf{pro\text{-}}\mathscr{F}_{f})^{\mathcal{L}^{\mathrm{op}}_{f; k,j}}\,,\]
by \(\mathcal{L}^{\mathrm{op}}_{f;k,j}\) a finite poset [54, §3], to a cofiltered limit of diagrams of the form
\[\Gamma_{f;(k,j,c)}:\mathcal{L}_{f;k,j}\to\mathscr{F}_{f},\]
naturally indexed by objects \(c\) in some small cofiltered category. Define \(\square\)-object \(I_{f;(k,j,c)}L\), stream map \(f^{*}_{L;(k,j,c)}\) and cubical function \(\theta_{f;(k,j,c)}\) natural in \(\mathcal{L}_{f;k,j}\)-object \(L\) so that \(\Gamma_{f;(k,j,c)}L\) is the left commutative triangle in (5) with \(g=f_{L;(k,j,c)}\), \([1]^{n}=I_{f;(k,j,c)}L\) and \(\theta=\theta_{f;(k,j,c)}\). Define cubical function \(\Lambda^{*}_{f;(k,j,c)}\) and cubical set \(C^{*}_{f;(k,j,c)}\), both natural in \(c\) and \(f\), by
\[\Lambda_{f;(k,j,c)}=\big{(}\mathrm{colim}\,I_{f;(k,j,c)}:(\bullet\leftarrow \bullet\rightarrow\bullet\leftarrow\bullet\cdots\rightarrow\bullet)^{n_{f}} \rightarrow(\square/C_{f})\big{)}:C^{*}_{f;(k,j,c)}\to C_{f},\]
a finite iterated pushout of inclusions of \((\mathscr{R}/C_{f})\)-objects and hence a \((\mathscr{R}/C_{f})\)-object. There exists a unique top horizontal dotted stream map, natural in \(f\), making the top trapezoid and hence entire diagram
commute for each \(\mathcal{L}_{f;k,j}\)-object \(X\) by \(|\square[1]^{n_{f}}|\) a \(\mathbf{DiTop}\)-colimit of \(\mathcal{L}_{f;k,j}\)-objects. Inclusions \(\mathcal{L}_{f;k_{2},j_{1}}\hookrightarrow\mathcal{L}_{f;k_{1},j_{2}}\) for all \(k_{1}\leqslant k_{2}\) and \(j_{1}\leqslant j_{2}\) imply that \(C^{*}_{f;(k,j,c)}\) is natural not only in \(f\) and \(c\) but also in \(\omega^{\mathrm{op}}\)-objects \(k\gg 0\) and \(\omega\)-objects \(j\). Taking cofiltered limits indexed over all objects \(o=(k,j,c)\) gives the desired constructions.
### Cubcats
Commutative diagrams of the form
\[\begin{CD}\square[1]^{n_{\theta}}@>{\mathfrak{adj}(\varphi_{\square[1]^{n_{\theta}}})}>{}>\mathfrak{ex}_{2}(\mathsf{sing}\,|\square[1]^{n_{\theta}}|)\\ @V{\mathfrak{ex}_{2}\theta}V{}V\\ \mathrm{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|\ \ \mathfrak{ex}_{2}(\mathrm{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|),\end{CD}\]
natural in \((\square/\mathrm{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|)\)-objects \(\theta\), \(\ldots\)
minimal variant of \(\square\) admitting both the structure of a strict \(\infty\)-fold category and compatible connections [1, Theorem 8.8]. On one hand, the compositions that a cubcat must admit are not required to satisfy the associativity and unitality axioms of compositions in strict \(\infty\)-fold categories. On the other hand, a cubcat admits the symmetries implicit in our working definition of cubical sets and must admit many more compatible unary operations on cubes, parametrized by \(\blacksquare\)-morphisms, than just the connections.
**Proposition 3.35**.: _For each \(\mathscr{G}\)-stream \(X\), \(\mathsf{sing}\,X\) is a \(\mathscr{G}\)-cubcat._
The proof is formal.
Proof.: Let \(\eta^{\prime}\) and \(\epsilon^{\prime}\) denote the unit and counit of the adjunction
\[|-|^{\mathscr{G}}\dashv\mathsf{sing}^{\mathscr{G}}.\]
Let \(\epsilon^{\prime\prime}\) denote the counit of the adjunction
\[\mathfrak{so}_{2}\dashv\mathfrak{ex}_{2}.\]
Let \(S=\mathsf{sing}\,\). Let \(S^{(2)}X(g)\) be the cubical set
\[S^{(2)}X(g)=\operatorname{colim}_{\square[1]^{n}\to SX(g)}S\,|\square[1]^{n}|\,.\]
natural in \(\mathscr{G}\)-streams \(X\) and \(\mathscr{G}\)-objects \(g\). Define \(\nu_{X}\) and \(\mu_{X}\) by commutative diagrams
in which the unlabelled arrows are canonically defined. The commutativity of the diagram
implies that \(SX\) is a \(\mathscr{G}\)-cubcat.
**Proposition 3.36**.: _For each \(\mathscr{G}\)-category \(\mathcal{X}\), \(\mathfrak{ner}\,\mathcal{X}\) is a \(\mathscr{G}\)-cubcat._
The proof approximates directed topological cubes by \(\square\)-objects.
Proof.: Let \(S=\mathsf{sing}\,\) and \(N=\mathfrak{ner}\,\). In the left of the diagrams
there exists a dotted cubical function \(\zeta_{n}:S\,|\square[1]^{n}|\to N[1]^{n}\), natural in \(\square\)-objects \([1]^{n}\) and unique by \([1]^{n}\) a poset, sending each object \(x\in\mathbb{I}^{n}\) to \(\min\,\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,[1]^{n})\) [Lemma 3.29]
and thereby making the left and hence also right squares commute. Define \(\nu_{\mathcal{X}}\) and \(\mu_{\mathcal{X}}\) by commutative diagrams
the latter of which is natural in \(\square\)-objects \([1]^{n}\). The top left horizontal cubical functions, natural in \((\square/\mathcal{X})\)-objects \(\phi\), induce the dotted vertical cubical function making the entire diagram commute. The commutativity of the diagram
in which the left vertical arrow is induced by the unit of \(|-|\dashv\mathsf{sing}\), implies that \(N\mathcal{X}\) is a \(\mathscr{G}\)-cubcat.
Cubcats are algebras over the underlying pointed endofunctor of \(\mathsf{sing}\,|-|\) up to \(\mathfrak{so}_{3}\).
**Lemma 3.37**.: _Fix a \(\mathscr{G}\)-cubcat \(C\). Then there exists a dotted \(\mathscr{G}\)-cubical function making_
_commute._
Proof.: Let \(S=\mathsf{sing}\,\). Let \(g\) denote a \(\mathscr{G}\)-object. Let \(C^{\sharp}(g)\) be the cubical set
\[C^{\sharp}(g)=\operatorname{colim}_{\square[1]^{n}\to C(g)}\mathsf{sing}\,|\square[1]^{n}|.\]

\(\ldots\) It
therefore follows that there exists a dotted \(\mathscr{G}\)-cubical function making the rightmost triangle commute in the diagram
There exists a dotted \(\mathscr{G}\)-cubical function making the parallelogram commute [Lemma 3.21].
## 4. Homotopy
This section formalizes and compares different homotopy theories. §4.1 fixes some definitions of homotopy in terms of an abstract _interval object_. §§4.2, 4.3, and 4.4 explore specific instances of abstract homotopy, whether classical, directed, or categorical and whether continuous, cubical, or algebraic. §4.5 compares the different homotopy theories. In particular, §4.5.2 gives the main results. Observations about the classical homotopy theory of cubical sets are essentially formal but included for completeness, given that our operating definition of cubical sets is not standard. Observations about the classical homotopy theories of small categories and topological spaces, though well known, are included for comparison with their directed counterparts.
### Abstract
The simplest way to discuss the variety of homotopy theories of interest is in terms of abstract interval objects. The purpose of this section is to fix notation and terminology for standard concepts at this level of abstraction. Fix a closed monoidal category \(\mathscr{X}\) with terminal unit. Fix an _interval object_\(i\) in \(\mathscr{X}\), which we take in this paper to mean a functor \(\square_{1}\to\mathscr{X}\) preserving terminal objects.
**Example 4.1**.: The interval object in \(\mathbf{Top}\) naturally sending \(\delta_{\pm}\) to the functions
\[\{\nicefrac{{1}}{{2}}\pm\nicefrac{{1}}{{2}}\}\hookrightarrow\mathbb{I}\]
is the prototypical example of an interval object. Much of homotopy theory on \(\mathbf{Top}\) generalizes to a category equipped with an interval object.
We fix some general terminology for standard concepts, like relative homotopy and homotopy equivalences, in terms of the interval object \(i\). For a pair of parallel \(\mathscr{X}\)-morphisms \(\zeta_{1},\zeta_{2}:o_{1}\to o_{2}\), _left and right \(i\)-homotopies_ from \(\zeta_{1}\) to \(\zeta_{2}\) are choices of dotted \(\mathscr{X}\)-morphisms respectively making I,II commute in
Write \(\zeta_{1}\sim_{i}\zeta_{2}\) to denote a (left or right) \(i\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\) or the existence of such an \(i\)-homotopy. Say that the dotted right \(i\)-homotopy on the right side is _relative_ a
morphism \(\iota:o\to o_{1}\) if additionally III commutes for \(\zeta=\zeta_{1}\) or equivalently for \(\zeta=\zeta_{2}\). We will repeatedly use the formal fact that there exists an \(\mathfrak{i}\)-homotopy (relative a \(\mathscr{X}\)-morphism \(\zeta\) to \(o_{1}\)) between a pair of parallel \(\mathscr{X}\)-morphisms \(\zeta_{1},\zeta_{2}:o_{1}\to o_{2}\) (whose precomposites with \(\zeta\) coincide) natural in \(\zeta_{1},\zeta_{2}\) and a choice of dotted lift making IV (and V) commute in
An \(\mathscr{X}\)-morphism \(\alpha:o_{1}\to o_{2}\) is an \(\mathfrak{i}\)-_equivalence_ if there exists an \(\mathscr{X}\)-morphism \(\beta:o_{2}\to o_{1}\) with \(1_{o_{1}}\leadsto_{\mathfrak{i}}\beta\alpha\) and \(1_{o_{2}}\leadsto_{\mathfrak{i}}\alpha\beta\). Define the interval object \(\mathfrak{i}_{n}\), informally the \(n\)-fold zig-zag of \(\mathfrak{i}\), by \(\mathfrak{i}_{0}=\mathfrak{i}\) and the following commutative diagrams among which the first is co-Cartesian:
An \(\mathfrak{i}_{*}\)_-homotopy_ is an \(\mathfrak{i}_{n}\)-homotopy for some \(n\). Write \(\zeta_{1}\leftrightsquigarrow_{\mathfrak{i}}\zeta_{2}\) to denote an \(\mathfrak{i}_{*}\)-homotopy or the existence of such an \(\mathfrak{i}_{*}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\). In other words, \(\leftrightsquigarrow_{\mathfrak{i}}\) is the congruence on \(\mathscr{X}\) generated by the relation \(\leadsto_{\mathfrak{i}}\) on morphisms. An \(\mathfrak{i}_{*}\)_-equivalence_ is an \(\mathfrak{i}_{n}\)-equivalence for some \(n\), or equivalently an \(\mathscr{X}\)-morphism representing an isomorphism in the quotient category \(\mathscr{X}/\!\!\leftrightsquigarrow_{\mathfrak{i}}\).
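For orientation, two unwindings of these definitions (with \(H\) and \(\theta_{j}\) merely names for the data involved): when \(\mathscr{X}=\mathbf{Top}\) with its Cartesian monoidal structure and \(\mathfrak{i}\) is the interval object of Example 4.1, a left \(\mathfrak{i}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\) amounts to a continuous map
\[H:o_{1}\times\mathbb{I}\to o_{2},\qquad H(-,0)=\zeta_{1},\qquad H(-,1)=\zeta_{2},\]
the classical notion of homotopy, with the roles of the endpoints fixed by the orientation of I and II; such a homotopy is relative \(\iota:o\to o_{1}\) when \(H(\iota(-),t)\) does not depend on \(t\). In a general \(\mathscr{X}\), an \(\mathfrak{i}_{n}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\) unwinds to a chain of \(n+1\) elementary \(\mathfrak{i}\)-homotopies through intermediate morphisms \(\theta_{1},\ldots,\theta_{n}:o_{1}\to o_{2}\), with orientations alternating according to the gluing; this is the sense in which \(\leftrightsquigarrow_{\mathfrak{i}}\) is generated by \(\leadsto_{\mathfrak{i}}\).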
**Lemma 4.2**.: _Localization of \(\mathscr{X}\) by the \(\mathfrak{i}_{*}\)-equivalences is given by the quotient functor_
\[\mathscr{X}\to\mathscr{X}/\!\!\leftrightsquigarrow_{\mathfrak{i}}, \tag{7}\]
_for each closed monoidal category \(\mathscr{X}\) with terminal unit and interval object \(\mathfrak{i}\) in \(\mathscr{X}\)._
Proof.: Fix a functor \(F:\mathscr{X}\to\mathscr{Y}\) mapping the \(\mathfrak{i}_{*}\)-equivalences to isomorphisms. Consider a pair of \(\leftrightsquigarrow_{\mathfrak{i}}\)-equivalent \(\mathscr{X}\)-morphisms \(\alpha,\beta:o_{1}\to o_{2}\). Then there exists \(n\gg 0\) and \(\eta_{n}:\alpha\leadsto_{\mathfrak{i}_{n}}\beta\). In the diagram
the left triangle commutes because \(\delta_{\pm}\) admit \(\sigma\) as a common retraction, and hence the two solid diagonal morphisms, which are isomorphisms, admit a common retraction and hence coincide. The top and bottom triangles commute by our choice of \(\eta_{n}\). The right triangle, degenerate, commutes. Thus the outer square commutes and hence \(F\alpha=F\beta\). Thus \(F\) factors through the quotient functor (7).
Let \([o_{1},o_{2}]_{\mathrm{i}}=\mathscr{X}(o_{1},o_{2})/\!\leftrightsquigarrow_{\mathrm{i}}\), the **Set**-coequalizer
A natural transformation \(\mathrm{i}^{\prime}\to\mathrm{i}^{\prime\prime}\) of interval objects implies that
\[graph\left(\leadsto_{\mathrm{i}^{\prime}}\right)\subset graph\left(\leadsto_{ \mathrm{i}^{\prime\prime}}\right).\]
**Example 4.3**.: We have the following chain
\[graph\left(\leadsto_{\mathrm{i}_{0}}\right)\subset graph\left(\leadsto_{\mathrm{i}_{1}}\right)\subset graph\left(\leadsto_{\mathrm{i}_{2}}\right)\subset\cdots\subset graph\left(\leftrightsquigarrow_{\mathrm{i}}\right)\]
for each interval object \(\mathrm{i}\) in a cocomplete closed monoidal category.
Define the interval object \(\mathfrak{d}\) by the commutative diagram
**Example 4.4**.: The interval object defining classical homotopy [Example 4.1] is
\[|\mathfrak{d}|:\square_{1}\to\mathbf{Top}.\]
The different homotopies in the classical setting coincide: \(|\mathfrak{d}|\cong|\mathfrak{d}|_{1}\cong|\mathfrak{d}|_{2}\cong\cdots\) and
\[\leadsto_{|\mathfrak{d}|}=\leadsto_{|\mathfrak{d}|_{1}}=\leadsto_{|\mathfrak{d}|_{2}}=\cdots=\leftrightsquigarrow_{|\mathfrak{d}|}.\]
We recall and compare homotopy theories based on the interval objects \(\mathfrak{d}\), \(|\mathfrak{d}|\), \(|\mathfrak{d}|\) [Example 4.1], \(\mathfrak{h}=(\mathbf{Top}\hookrightarrow\mathbf{DiTop})|\mathfrak{d}|\), \(\mathrm{T}_{1}\mathfrak{d}:\square_{1}\hookrightarrow\mathbf{Cat}\), \(\Pi_{1}\mathfrak{d}\).
### Continuous
We recall some homotopy theories for the continuous setting. Let \(\pi_{0}X\) denote the set, natural in topological spaces \(X\), of path-components in \(X\).
#### 4.2.1. Classical
We have the natural identification
\[[X,Y]_{|\mathfrak{d}|}=\pi_{0}Y^{X}.\]
A continuous function \(f:X\to Y\) is a classical weak equivalence if
\[\pi_{0}f^{|C|}:\pi_{0}X^{|C|}\cong\pi_{0}Y^{|C|}\]
for all cubical sets \(C\). The classical weak equivalences and maps having the right lifting property against all maps of the form \(|\square[\delta_{+}\otimes 1_{[1]^{n}}]|:\mathbb{I}^{n}\to\mathbb{I}^{n+1}\) define the weak equivalences and fibrations of the _q-model structure_ on \(\mathbf{Top}\).
#### 4.2.2. Directed
We can make, by cocontinuity of \(|-|\), the identifications
\[|\mathfrak{d}_{n}|=|\mathfrak{d}|_{n},\quad n=0,1,\ldots.\]
A \(|\mathfrak{d}|_{*}\)-homotopy is sometimes referred to in the literature as a _d-homotopy_ (eg. [30].) Intuitively, a d-homotopy is a homotopy through stream maps that is additionally piecewise monotone and anti-monotone in its homotopy coordinate. The following natural convexity structure on directed hypercubes makes it possible to construct d-homotopies.
**Lemma 4.5**.: _There exists a \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy between both projections of the form_
\[\left\lvert\square[1]^{n}\right\rvert^{2}{\rightarrow}\left\lvert\square[1]^{n}\right\rvert\]
_natural in \(\square\)-objects \([1]^{n}\)._
Proof.: Let \(\pi_{1;n}\) and \(\pi_{2;n}\) denote the projections
\[\left\lvert\square[1]^{n}\right\rvert^{2}{\rightarrow}\left\lvert\square[1]^{n}\right\rvert\]
onto first and second factors, respectively. Linear interpolation defines \(\left\lvert\mathfrak{d}\right\rvert\)-homotopies
\[\pi_{1;n}\wedge_{\left\lvert\square[1]^{n}\right\rvert}\pi_{2;n}\leadsto_{ \left\lvert\mathfrak{d}\right\rvert}\pi_{1;n},\pi_{2;n}\]
natural in \(\square\)-objects \([1]^{n}\) because \(\left\lvert\square[-]\right\rvert\colon\square\rightarrow\mathbf{DiTop}\) sends each \(\square\)-morphism to a linear map of hypercubes that defines a lattice homomorphism between compact Hausdorff connected topological lattices in \(\mathbf{Pos}\). Concatenating these \(\left\lvert\mathfrak{d}\right\rvert\)-homotopies yields the desired \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy.
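One explicit choice of interpolation consistent with the proof above (with \(h_{s}\) merely notation introduced here): regarding \(\left\lvert\square[1]^{n}\right\rvert\) as the directed unit \(n\)-cube with its coordinatewise order, the first leg may be taken to be
\[h_{s}(x,y)=(1-s)\,(x\wedge y)+s\,x,\qquad s\in\mathbb{I},\]
which equals \(\pi_{1;n}\wedge_{\left\lvert\square[1]^{n}\right\rvert}\pi_{2;n}\) at \(s=0\), equals \(\pi_{1;n}\) at \(s=1\), and is coordinatewise monotone in \(s\) because \(x\wedge y\leq x\); replacing \(x\) by \(y\) gives the leg onto \(\pi_{2;n}\), and concatenating the two legs gives a \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy between the projections.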
A simple consequence is that \(\epsilon_{C}:\mathfrak{so}_{3}C\to C\) defines a natural cubical approximation to \(\varphi_{C;3}\).
**Lemma 4.6**.: _There exists a \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy_
\[\left\lvert\epsilon_{C}\right\rvert\leadsto_{\left\lvert\mathfrak{d}_{1}\right\rvert}\varphi_{C;3}:\left\lvert\mathfrak{so}_{3}C\right\rvert{\rightarrow}\left\lvert C\right\rvert\]
_natural in cubical sets \(C\)._
Proof.: There exists the desired \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy natural in representable cubical sets \(C\) [Lemma 4.5] and hence natural in general cubical sets \(C\) by naturality of \(\left\lvert\epsilon_{C}\right\rvert\) and \(\varphi_{C;3}\).
Nearby stream maps to directed realizations are \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopic.
**Lemma 4.7**.: _There exists a \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopy between stream maps_
\[f,g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{9}C_{(f,g)}\right\rvert,\]
_natural in objects \(f\times g\) in the full subcategory of \((\mathbf{Str}/\left\lvert\mathfrak{so}_{9}-\right\rvert^{2})\) consisting of those objects \(f\times g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{9}C_{(f,g)}\right\rvert ^{2}\) for which \(X_{(f,g)}\) is covered by open substreams each of which has images under \(f\) and \(g\) that lie in the open star of the same vertex._
Proof.: For a stream map \(e:X\rightarrow\left\lvert\mathfrak{so}_{9}C\right\rvert\) and substream \(U\subset X\), let
\[e_{U}=e(U\hookrightarrow X):U\hookrightarrow\left\lvert\mathfrak{so}_{9}C \right\rvert.\]
Let \(\mathscr{X}\) denote the category defined by the proposition. Let \(f\times g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{3}^{2}C_{(f,g)} \right\rvert^{2}\) denote a \(\mathscr{X}\)-object. Let \(\mathscr{O}_{(f,g)}\) be the category whose objects are all substreams of \(X_{(f,g)}\) whose images under \(f\) and \(g\) lie in the open star of the same vertex and whose morphisms are all inclusions between such substreams. Consider a commutative square of the form
in which the vertical arrows are \(\mathscr{X}\)-objects. The image of each \(\mathscr{O}_{(f_{1},g_{1})}\)-object \(U\) under the top horizontal stream map is a \(\mathscr{O}_{(f_{2},g_{2})}\)-object because the bottom horizontal stream map, the directed realization of a cubical function, maps open stars of vertices into open stars of vertices. It is in this sense that the subcategory \(\mathscr{O}_{(f,g)}\) of \(\mathbf{DiTop}\) is natural in \(\mathscr{X}\)-objects \(f\times g\).
Let \(U\) denote a \(\mathscr{O}_{(f,g)}\)-object. Thus \(f_{U},g_{U}\) corestrict to directed realizations of closed stars in \(\mathfrak{so}_{9}C_{(f,g)}\) [Proposition 3.26].
Define \(\mathscr{R}\)-morphisms \(\theta_{f;k}\) and \(\rho_{f;k}\), natural in \(f\), by commutative diagrams
There exists a unique dotted stream map \(f_{k}\), natural in \(f\) by uniqueness, making the following diagram, in which \(f_{I}\) is a suitable restriction and corestriction of the composite of the bottom row, commute:
(8)
_convexity structure on \(\mathopen{|}C^{*}_{f;k}\mathclose{|}\)_: Let \(\pi_{f;k;1}\) and \(\pi_{f;k;2}\) denote the respective projections
\[\pi_{f;k;1},\pi_{f;k;2}:\mathopen{|}C^{*}_{f;k}\mathclose{|}^{2}\mathclose{ \rightarrow}\mathopen{|}C^{*}_{f;k}\mathclose{|}\]
onto first and second factors. Define \(s_{f;k}\) by the commutative diagram
(9)
natural in \(f\). For each \(x\in\mathopen{|}C^{*}_{f;k}\mathclose{|}\), \(s_{f;k}(x)\) and \(x\) both lie in the same closed cell in \(\mathopen{|}C^{*}_{f;k}\mathclose{|}\), the directed realization of an atomic subpresheaf of \(C^{*}_{f;k}\) and hence the directed realization of a representable up to isomorphism by our assumption on \(C\). Thus there exists a \(\mathopen{|}\mathfrak{d}_{1}\mathclose{|}\)-homotopy \(s_{f;k}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}1_{\mathopen{|}C^{*}_{f;k}\mathclose{|}}\) natural in \(f\) [Lemma 4.5]. The stream maps \(\pi_{f;k;1}s_{f;k},\pi_{f;k;2}s_{f;k}\) both naturally factor through \(\mathopen{|}\Box[1]^{n_{f}}\mathclose{|}\). Thus there exists a \(\mathopen{|}\mathfrak{d}_{1}\mathclose{|}\)-homotopy \(\pi_{f;k;1}s_{f;k}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\pi_{f;k;2}s_{f;k}\) natural in \(f\) [Lemma 4.5]. Concatenating the \(\mathopen{|}\mathfrak{d}_{1}\mathclose{|}\)-homotopies
\[\pi_{f;k;1}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\pi_{f;k;1}s_{f;k}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\pi_{f;k;2}s_{f;k}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\pi_{f;k;2}\]
yields a \(\mathfrak{d}_{3}\)-homotopy \(h^{*}_{f;k}:\pi_{f;k;1}\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\pi_{f;k;2}\) natural in \(f\).
_constructing the requisite directed homotopy_: Consider the solid arrows in the diagram
(10)
The top triangle commutes by construction of \(h^{*}_{f;k}\) and the left triangle commutes up to \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopy; these homotopies concatenate into the requisite \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopy from \(f\) to \(g\).
### Cubical

#### 4.3.1. Classical

Call the localization of \(\hat{\square}\) by the classical weak equivalences the _classical homotopy category of \(\hat{\square}\)_. Call the fibrant cubical sets in the test model structure simply _fibrant_. The fundamental groupoid \(\Pi_{1}\) is a classical homotopy invariant in the sense of the following proposition, whose proof is given at the end of §4.5.1.
**Proposition 4.10**.: _For each classical weak equivalence \(\psi:A\to B\) of cubical sets,_
\[\Pi_{1}\psi:\Pi_{1}A\to\Pi_{1}B\]
_is a categorical equivalence._
As a consequence, cubical nerves of small groupoids are fibrant (cf. Proposition 3.36.)
**Corollary 4.11**.: _For each small groupoid \(\mathcal{G}\), \(\mathfrak{ner}\,\mathcal{G}\) is fibrant._
Proof.: Consider the solid functors in the left of the diagrams
Suppose \(A\hookrightarrow B\) is an acyclic cofibration in the test model structure. There exists a dotted functor \(\phi\) making the entire right diagram commute by \(\Pi_{1}(A\hookrightarrow B)\) a faithful equivalence of small categories. Therefore there exists a dotted functor making the left diagram commute.
Classical weak equivalences and monos form the respective weak equivalences and cofibrations of a model structure on presheaves over the minimal variant of \(\square\). In this model structure, the _set_ of inclusions (2) generates the acyclic cofibrations [12, §8.4.34]. It therefore follows that each inclusion (2) is an acyclic cofibration in the test model structure on \(\hat{\square}\). For each fibrant cubical set \(C\) having vertex \(v\), let
\[\pi_{n}(C,v)=\pi_{0}\Omega^{n}(C,v).\]
The set \(\pi_{n+1}(C,v)\) naturally admits the extra structure of a group whose operations come from the right lifting properties of the fibration \(C\to\star\) against (2). The groups \(\pi_{1}(C,v),\pi_{2}(C,v),\dots\) are analogous to combinatorial homotopy groups on Kan simplicial sets [41].
**Example 4.12**.: For each group \(G\), \(\pi_{n}(\mathfrak{ner}\,G,\star)=\begin{cases}G&n=1\\ 0&n\neq 1\end{cases}\).
Write \(H^{1}(C;\pi)\) for classical cubical 1-cohomology
\[H^{1}(C;\pi)=[C,\mathfrak{ner}\,\pi]_{\mathfrak{d}}=\pi_{0}(\mathfrak{ner}\,\pi)^{C},\]
an Abelian group natural in Abelian groups \(\pi\) and cubical sets \(C\). Classical cubical 1-cohomology sends classical weak equivalences to isomorphisms by \(\mathfrak{ner}\,\pi\) fibrant [Corollary 4.11]. The higher cubical cohomology groups are obtained by generalizing the cubical nerve of a (discrete cubical) Abelian group \(\pi\) to a suitable iterated fibrant cubical delooping construction \(W^{n}\pi\) (cf. [41].)
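For instance, for the cubical circle \(\square[1]/\partial\square[1]\) and an Abelian group \(\pi\), a standard computation gives
\[H^{1}(\square[1]/\partial\square[1];\pi)\cong\operatorname{Hom}(\mathbb{Z},\pi)\cong\pi,\]
since \(\Pi_{1}(\square[1]/\partial\square[1])\) is the group \(\mathbb{Z}\) and, for Abelian \(\pi\), classical homotopy classes of maps into \(\mathfrak{ner}\,\pi\) are determined by the induced homomorphism on fundamental groups. Example 4.15 below records the directed analogue of this computation.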
#### 4.3.2. Directed
Just as classical cubical homotopy theory is the \(\mathfrak{d}\)-homotopy theory of fibrant cubical sets, we can take _directed cubical homotopy_ to mean the \(\mathfrak{d}\)-homotopy theory of cubcats. This cubical directed theory extends classical cubical homotopy theory by the following proposition, whose proof is given in §4.5.2.
**Proposition 4.13**.: _For each fibrant cubical set \(B\), there exists a monic cubical function_
\[B\to C\]
_and retraction \(\rho:C\to B\) such that \(1_{C}\rightsquigarrow(B\hookrightarrow C)\rho\)._
We can generalize \(\pi_{n}\) as follows. For each cubcat \(C\) having vertex \(v\), let
\[\tau_{n}(C,v)=\pi_{0}\Omega^{n}(C,v).\]
The set \(\tau_{n+1}(C,v)\) admits the extra structure of a monoid whose products are induced by \(\infty\)-fold compositions on \(C\) compatible with the extension of \(C\) to \(\blacksquare\)\({}^{\operatorname{op}}\).
**Example 4.14**.: For each monoid \(M\), \(\tau_{n}(\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}}M,\star)=\begin{cases} M&n=1\\ 0&n\neq 1\end{cases}\).
Extend classical \(1\)-cohomology to a _directed \(1\)-cohomology_
\[H^{1}(C;\tau)=[C,\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}}\tau]_{ \mathfrak{d}}=\pi_{0}(\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}} \tau)^{C},\]
a commutative monoid natural in commutative monoids \(\tau\) and cubical sets \(C\).
**Example 4.15**.: For a cubical model \(\square[1]/\partial\square[1]\) of the circle,
\[H^{1}(\square[1]/\partial\square[1];\tau)=[\mathbb{N},\tau]_{\operatorname{ T}_{1}\mathfrak{d}}=\tau\,/_{\equiv}\]
where \(\equiv\) is the smallest monoid congruence on \(\tau\) equating two elements if they coincide after adding a common element to both of them. This congruence \(\equiv\) is trivial precisely when \(\tau\) is cancellative.
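Two illustrative choices of coefficients, not treated in the examples above:
\[H^{1}(\square[1]/\partial\square[1];\mathbb{N})=\mathbb{N},\qquad H^{1}(\square[1]/\partial\square[1];\mathbb{N}\cup\{\infty\})=0,\]
the first because \(\mathbb{N}\) is cancellative, the second because any two elements of \(\mathbb{N}\cup\{\infty\}\) (under addition) agree after adding \(\infty\) to both, so that \(\equiv\) collapses the monoid to a point.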
Group-completion induces a monoid homomorphism
\[H^{1}(C;\tau\to\tau[\tau]^{-1}):H^{1}(C;\tau)\to H^{1}(C;\tau[\tau]^{-1})\]
from directed cohomology to classical cohomology, natural in commutative monoid coefficients \(\tau\). Directed \(1\)-cohomology and this natural comparison homomorphism generalize to higher \(n>1\) by representing \(H^{n}(-;\tau)\) with a suitable iterated delooping construction \(W^{n}\tau\) on the (discrete cubical) commutative monoid \(\tau\).
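For instance, with \(\tau=\mathbb{N}\) the group completion is \(\mathbb{N}[\mathbb{N}]^{-1}=\mathbb{Z}\), and for the directed circle the comparison homomorphism is the inclusion
\[H^{1}(\square[1]/\partial\square[1];\mathbb{N})=\mathbb{N}\hookrightarrow\mathbb{Z}=H^{1}(\square[1]/\partial\square[1];\mathbb{Z});\]
in particular the comparison homomorphism need not be surjective.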
### Algebraic
We recall three homotopy theories on the category \(\mathbf{Cat}\) of small categories and functors between them, in order of increasing refinement. All three of these homotopy theories coincide on the full subcategory \(\mathbf{Gpd}\) of small groupoids.
#### 4.4.1. Classical
The class of _Thomason weak equivalences_ is the smallest retract-closed class \(\mathscr{W}\) of \(\mathbf{Cat}\)-morphisms having the \(2\)-out-of-\(3\) property and containing all terminal functors such that a functor \(\alpha:\mathcal{X}\to\mathcal{Y}\) lies in \(\mathscr{W}\) whenever the induced functor \(\beta\alpha/o\to\beta/o\) lies in \(\mathscr{W}\) for each functor \(\beta:\mathcal{Y}\to\mathcal{Z}\) and \(\mathcal{Z}\)-object \(o\)[11, Theorem 2.2.11]. The localization of \(\mathbf{Cat}\) by the Thomason weak equivalences exists [65] and will be referred to as the _classical homotopy category of \(\mathbf{Cat}\)_.
**Example 4.16**.: A sufficient and intrinsic condition for a \(\mathbf{Cat}\)-morphism
\[\zeta:\mathcal{X}\to\mathcal{Y}\]
to be a Thomason weak equivalence is if \(o/\zeta\) has a terminal object for each \(\mathcal{X}\)-object \(o\) by Quillen's Theorem A.
It is difficult to give a complete characterization of the Thomason weak equivalences that is at once explicit and intrinsic, at least without reference to the simplex category \(\Delta\) (cf. [16].) We write \(h(\mathbf{Cat})\) and \(h(\mathbf{Gpd})\) for the respective localizations of \(\mathbf{Cat}\) and \(\mathbf{Gpd}\) by their Thomason weak equivalences. Thomason weak equivalences can be defined more generally for \(n\)-fold functors between \(n\)-fold categories. These weak equivalences, part of Thomason model structures on categories of \(n\)-fold small categories for each \(n=1,2,\ldots\)[22], model classical homotopy theory in terms of strict (higher) categorical structure.
#### 4.4.2. Directed
Let \(\mathrm{T}_{1}\mathfrak{d}_{n}\) denote the interval object
\[\mathrm{T}_{1}\mathfrak{d}_{n}=\mathrm{T}_{1}(\mathfrak{d}_{n})=(\mathrm{T}_{ 1}\mathfrak{d})_{n}:\square_{1}\to\mathbf{Cat}.\]
In particular, \(\mathrm{T}_{1}\mathfrak{d}\) is the canonical interval object
\[\mathrm{T}_{1}\mathfrak{d}:\square_{1}\hookrightarrow\mathbf{Cat}.\]
The homotopy theory in which weak equivalences are the \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalences [56], as well as a slightly weaker homotopy theory [36] in which homotopy is defined by a single path object in terms of \(\mathfrak{d}_{1},\mathfrak{d}_{2},\ldots\), have been studied previously. The \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalences, while not the weak equivalences of a model structure, are the weak equivalences of a _\(\Lambda\)-cofibration category_ [36] structure on \(\mathbf{Cat}\). While each \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalence is a Thomason weak equivalence, not each Thomason weak equivalence is a \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalence. Write \(d(\mathbf{Cat})\) for the quotient of \(\mathbf{Cat}\) by the congruence relation \(\rightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}\).
**Example 4.17**.: For parallel \(\mathbf{Cat}\)-morphisms \(\alpha,\beta\), a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy
\[\alpha\leadsto\beta\]
is exactly a natural transformation \(\alpha\to\beta\). In particular, a (left or right) adjoint in \(\mathbf{Cat}\) is a \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-equivalence.
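For instance, given an adjunction between small categories with left adjoint \(F:\mathcal{X}\to\mathcal{Y}\) and right adjoint \(G:\mathcal{Y}\to\mathcal{X}\), the unit and counit are natural transformations and hence \(\mathrm{T}_{1}\mathfrak{d}\)-homotopies
\[1_{\mathcal{X}}\leadsto GF,\qquad FG\leadsto 1_{\mathcal{Y}},\]
pointing in opposite directions relative to the identities; a single \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy in each required direction need not exist, which is why the zig-zag interval \(\mathrm{T}_{1}\mathfrak{d}_{1}\) appears in the statement above.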
**Example 4.18**.: Consider a functor of small categories
\[F:\mathcal{X}\to\mathcal{Y}.\]
The functor \(F\) is sometimes referred to as a _future equivalence_[31] if \(F\) is a \(\mathrm{T}_{1}\mathfrak{d}\)-equivalence and a _past equivalence_[31] if \(F\) is a \((\mathrm{T}_{1}\mathfrak{d})^{\mathrm{op}}\)-equivalence. Future equivalences and past equivalences preserve certain properties of interest in state space analyses, such as terminal objects and initial objects respectively [27].
#### 4.4.3. Categorical
There exist natural isomorphisms
\[\Pi_{1}\mathfrak{d}\cong\Pi_{1}(\mathfrak{d}_{n})=(\Pi_{1}\mathfrak{d})_{n},\quad n=0,1,2,\ldots\]
A categorical equivalence between small categories is exactly a \(\Pi_{1}\mathfrak{d}\)-equivalence. Every categorical equivalence is a \(\mathrm{T}_{1}\mathfrak{d}\)-equivalence because localization defines a natural transformation \(\mathrm{T}_{1}\mathfrak{d}\to\Pi_{1}\mathfrak{d}\).
### Comparisons
The different homotopy theories can be compared. The classical homotopy theories of small categories, cubical sets, simplicial sets, and topological spaces are all equivalent to one another. The directed homotopy theories of cubical sets and streams are equivalent to one another, with the directed homotopy theory of small categories acting as a special case.
#### 4.5.1. Classical
We can compare different classical homotopy theories.
**Proposition 4.19**.: _Topological realization defines the left adjoint of a Quillen equivalence_
\[|-|:\hat{\Box}\leftrightarrow\mathbf{Top}\]
_between \(\hat{\Box}\) equipped with its test model structure and \(\mathbf{Top}\) equipped with its q-model structure._
A simple consequence is a cubical description of homotopy groups.
**Corollary 4.20**.: _For each \(n\), the function_
\[\pi_{n}(C,v)\to\pi_{n}(|C|,|v|)\]
_induced from the unit of the adjunction with left adjoint \(|-|:\hat{\Box}\to\mathbf{Top}\) is bijective, and in particular is a group isomorphism in the case \(n>0\), for all fibrant cubical sets \(C\) and vertices \(v\in C_{0}\)._
Previously established equivalences [[11, Theorem 2.2.11], [60, §II.3, §VI.3.3.1]] and a categorical equivalence between classical homotopy categories of simplicial sets and cubical sets in the sense of this paper [Proposition C.4] imply the following.
**Corollary 4.21**.: _The functor \(\mathfrak{ner}\) induces a categorical equivalence_
\[h(\mathbf{Cat})\simeq h(\hat{\Box}).\]
Proof of Proposition 4.10.: Consider a cubical function \(\psi:A\to B\). The diagram
in which \(\Pi_{1}\) in the bottom row denotes the fundamental groupoid of a topological space and the vertical arrows are inclusions of fundamental groupoids induced by topological realization, commutes. The vertical arrows are categorical equivalences because every point in a topological realization is path-connected to a vertex. Thus if \(\psi\) is a classical weak equivalence, \(|\psi|\) is a classical homotopy equivalence [Proposition 4.19], hence \(\Pi_{1}|A|\to\Pi_{1}|B|\) is a categorical equivalence, and hence \(\Pi_{1}\psi:\Pi_{1}A\to\Pi_{1}B\) is a categorical equivalence.
#### 4.5.2. Directed
We now give the main results.
**Theorem 4.22**.: _There exist \(\mathfrak{h}_{*}\)-homotopies_
\[\left\lvert\eta_{C}\right\rvert\,\epsilon_{\left\lvert C\right\rvert}\leftrightsquigarrow_{\mathfrak{h}}1_{\left\lvert\mathsf{sing}\,\left\lvert C\right\rvert\right\rvert}\]
_natural in cubical sets \(C\)._
Proof.: Write \(S\) for \(\mathsf{sing}\). Write \(\eta^{\prime},\epsilon^{\prime}\) for the respective unit and counit of the adjunction \(\left\lvert-\right\rvert\dashv\mathsf{sing}\). Let \(\mathscr{R}\) denote the full subcategory of \(\hat{\Box}\) consisting of cubical sets whose atomic subpresheaves are all isomorphic to representables.
Consider the solid stream maps in the diagram
Let \(\theta\) denote a \((\square/S\left\lvert-\right\rvert)\)-object. For \(k\gg 0\), consider the stream map \(\mathfrak{adj}(\theta)\varphi_{\square[1]^{n_{\theta}};2^{k}}\).
1. \(\mathsf{sing}\) \(f\) _is a_ \(\mathfrak{d}_{*}^{\mathscr{G}}\)_-equivalence_
2. \(\left\lvert\mathsf{sing}\,f\right\rvert\) _is a_ \(\left\lvert\mathfrak{d}\right\rvert^{\mathscr{G}}\)_-equivalence_
Proof.: Let \(\eta\) denote the unit of the adjunction \(\left\lvert-\right\rvert^{\mathscr{G}}\dashv\mathsf{sing}^{\mathscr{G}}\).
Proof.: If (3) then \([\mathfrak{ner}\,\zeta,C]_{\mathfrak{d}^{\mathscr{G}}}\) is a bijection for each \(\mathscr{G}\)-cubcat \(C\) and hence (2) [Corollary 4.25] because cubical nerves are cubcats [Proposition 3.36]. If (2) then (1) because \(\mathrm{T}_{1}^{\mathscr{G}}\) sends \(\mathfrak{d}_{*}^{\mathscr{G}}\)-equivalences to \((\mathrm{T}_{1}\mathfrak{d})_{*}^{\mathscr{G}}\)-equivalences. If (1) then (3) because \(\left\lvert\mathfrak{ner}\,-\right\rvert^{\mathscr{G}}\) sends \((\mathrm{T}_{1}\mathfrak{d})_{*}^{\mathscr{G}}\)-equivalences to \(\left\lvert\mathfrak{d}\right\rvert_{*}^{\mathscr{G}}\)-equivalences.
Our main result, when specialized for the case \(\mathscr{G}=\star\) of trivial diagrams, is a directed analogue of the classical Quillen equivalence between cubical sets and topological spaces. Recall that a class \(\mathscr{W}\) of morphisms in a category \(\mathscr{X}\) for which the localization \(\mathscr{X}[\mathscr{W}^{-1}]\) exists is _saturated_ if it coincides with the isomorphisms in the localization \(\mathscr{X}[\mathscr{W}^{-1}]\) of \(\mathscr{X}\) by \(\mathscr{W}\).
**Corollary 4.27**.: _There exist dotted localizations in the diagram_
_by the following respective saturated classes of morphisms: the \(\mathrm{T}_{1}\mathfrak{d}^{\mathscr{G}}\)-equivalences; those \(\mathscr{G}\)-cubical functions \(\psi\) for which \(\left\lvert\psi\right\rvert^{\mathscr{G}}\) is a \(\left\lvert\mathfrak{d}\right\rvert^{\mathscr{G}}\)-equivalence; and those \(\mathscr{G}\)-stream maps \(f\) for which \(\mathsf{sing}\,f\) is a \(\mathfrak{d}_{*}^{\mathscr{G}}\)-equivalence. There exist dotted horizontal functors making the entire diagram commute up to natural isomorphism with the left dotted horizontal functor a fully faithful embedding and the right dotted horizontal functor an adjoint categorical equivalence. A \(\mathscr{G}\)-cubical function \(\psi\) represents an isomorphism in \(d(\hat{\Box}^{\mathscr{G}})\) if and only if \([\psi,C]_{\mathfrak{d}^{\mathscr{G}}}\) is a bijection for all cubcats \(C\)._
Proof.: Let \(\hat{d}(\hat{\Box}^{\mathscr{G}})\) and \(\hat{d}(\mathbf{DiTop}^{\mathscr{G}})\) denote the quotient categories
\[\hat{d}(\hat{\Box}^{\mathscr{G}})=\hat{\Box}^{\mathscr{G}}/\!\leftrightsquigarrow_{\mathfrak{d}^{\mathscr{G}}},\qquad\hat{d}(\mathbf{DiTop}^{\mathscr{G}})=\mathbf{DiTop}^{\mathscr{G}}/\!\leftrightsquigarrow_{\left\lvert\mathfrak{d}\right\rvert^{\mathscr{G}}}.\]
**Corollary 4.28**.: _The unit of the adjunction \(\left\lvert-\right\rvert\dashv\mathsf{sing}\)\(\ldots\)_
**Example 4.33**.: Fix a cancellative commutative monoid \(\tau\). Then
\[H^{1}(T;\tau)=\tau^{2},\]
where \(T\) is the unique underlying stream of a time-oriented Lorentzian torus [Figure 2], by identifying \(T\) with the directed realization of \((\square[1]/\partial\square[1])^{\otimes 2}\), whose fundamental category is \(\mathbb{N}^{2}\).
**Example 4.34**.: Fix a cancellative commutative monoid \(\tau\). Then
\[H^{1}(K;\tau)=\tau\times_{2\tau}\tau,\]
where \(K\) is the unique underlying stream of a time-oriented Lorentzian Klein bottle [Figure 2], by identifying \(K\) with the directed realization of the quotient \(C\) of \(\square[1]^{2}\) by the smallest equivalence relation identifying \(\delta_{\pm 1;2}\) with \(\delta_{\mp 2;2}\) and calculating \(\operatorname{T}_{1}C\) to be the monoid \(\langle x,y\mid x^{2}=y^{2}\rangle\).
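For instance, taking \(\tau=\mathbb{N}\) in the two examples above,
\[H^{1}(T;\mathbb{N})=\mathbb{N}^{2},\qquad H^{1}(K;\mathbb{N})=\{(a,b)\in\mathbb{N}^{2}:2a=2b\}\cong\mathbb{N},\]
since a monoid homomorphism \(\langle x,y\mid x^{2}=y^{2}\rangle\to\mathbb{N}\) is a pair \((a,b)\) with \(2a=2b\), forcing \(a=b\).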
Small categories are insufficient for modelling all directed homotopy types.
**Example 4.35**.: For each \(n>1\) and small category \(\mathcal{X}\), every stream map
\[\left\lvert\square[1]^{n}/\partial\square[1]^{n}\right\rvert\to\left\lvert\mathfrak{ner}\,\mathcal{X}\right\rvert\]
is \(\leftrightsquigarrow_{\left\lvert\mathfrak{d}\right\rvert}\)-homotopic to a constant stream map. It therefore follows that higher directed spheres \(\left\lvert\square[1]^{n}/\partial\square[1]^{n}\right\rvert\) do not have the h-homotopy type, much less the d-homotopy type, of directed realizations of cubical nerves of small categories. Intuitively, the cubical model \(\square[1]^{n}/\partial\square[1]^{n}\) of a directed sphere presents a cubcat freely generated by a single \(n\)-cell on a single vertex. In fact, these higher directed spheres likely do not have the h-homotopy type of directed realizations of cubical models of \((1,\infty)\)-categories [9, 17]. Thus directed homotopy types encode higher categories, albeit up to directed homotopy, more general than \((1,\infty)\)-categories (cf. [18]).
The theorem in §1 follows from Corollary 4.27, Example 4.35, and equivalent formulations of the classical homotopy category.
## 5. Conclusion
Much early work in directed homotopy theory went into generalizing categorical equivalences between groupoids to notions of equivalences between small categories that preserve computational behavior of interest (eg. [21, 26, 27] and [Example 4.18]). These directed equivalences are stronger than \(\operatorname{T}_{1}\mathfrak{d}_{*}\)-equivalences but weaker than categorical equivalences. Unfortunately, these directed equivalences have poor formal properties compared to \(\operatorname{T}_{1}\mathfrak{d}_{*}\)-equivalences; \(\mathbf{Cat}\) admits a localization with respect to the latter but not the former. The typical application was to capture the behavior of executions in a concurrent program having a directed state space \(X\) by computing a minimal model of \(\operatorname{T}_{1}\mathsf{sing}X\) with respect to the relevant notion of directed equivalence. It is in this sense that many original applications of directed homotopy were \(1\)-categorical in nature, albeit up to generalizations of \(1\)-categorical equivalence.
It is also in this sense that later applications have often [23, 62] been \((1,\infty)\)-categorical in nature. For example, more subtle computational behavior of executions in a concurrent program having directed state space \(X\) appears in the properties of _Moore path categories_ on \(X\), topological categories in which the morphisms form spaces of directed paths on \(X\). In fact, directed state spaces \(X\) have sometimes been _defined_ as topological categories of some sort [23]. Moore path categories on directed spaces in the applications have minimal models with tractable descriptions [62] and are in fact conjectured to model all \((1,\infty)\) categories
[18]. Unfortunately, the class of stream maps preserving the relevant \((1,\infty)\)-categories of interest also has poor formal properties compared to stream maps \(f\) for which \(\mathsf{sing}\,f\) are \(\mathfrak{d}_{*}\)-equivalences; **DiTop** admits a localization with respect to the latter but likely not the former.
Recent years have seen computations modelled abstractly by homotopy types. A (dependently) typed higher order programming language for reversible computations has been shown to admit semantics in \(\infty\)-groupoids (fibered over other \(\infty\)-groupoids) [3]. Objects represent states, 1-morphisms represent reversible executions, and higher order morphisms represent reversible transformations of those executions, or equivalently, concurrent executions of sequential computations. Since \(\infty\)-equivalences between \(\infty\)-groupoids ignore differences like subdivisions, state space reduction is built into the very syntax of the language. This higher order language can thus be used to reason efficiently about computations expressed in the same language. The recent literature has seen extensions of (dependent) type theory to synthetic theories of (fibered) higher categories [50, 64]. These more expressive languages model irreversible computations [50] because the morphisms in higher categories need not be invertible.
Ideally, (dependent) type theory can be alternatively extended so that edges in (fibered) cubcats represent computations, higher cubes in (fibered) cubcats represent higher order transformations, and directed homotopy invariance is built into the syntax of the language (cf. [58]). Such a language ought to share both some of the efficiency of automated reasoning within dependent type theory as well as some of the expressiveness of synthetic higher category theory.
## 6. Acknowledgements
This work was supported by AFOSR grant FA9550-16-1-0212. The author is grateful to Robert Ghrist for conceiving of and producing the visualizations of conal manifolds behind Figure 2. The author would like to thank Emily Rudman for pointing out some simplifications in earlier proofs of Lemmas 3.8 and 3.9.
## Appendix A Lattices
A lattice \(L\) is _modular_ if for all \(x,y,z\in L\),
\[(x\wedge_{L}y)\vee_{L}(x\wedge_{L}z)=((x\wedge_{L}y)\vee_{L}z)\wedge_{L}x.\]
**Example A.1**.: Distributive lattices are modular.
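The modular identity can be checked mechanically on small examples. The following script is an illustration added here, not part of the original argument: it tabulates meet and join for the diamond lattice \(M_{3}\) (an encoding chosen only for this sketch) and verifies that \(M_{3}\) is modular but not distributive.

```python
from itertools import product

def check_modular(elems, meet, join):
    """Check (x ∧ y) ∨ (x ∧ z) == ((x ∧ y) ∨ z) ∧ x for all triples."""
    return all(
        join[meet[x, y], meet[x, z]] == meet[join[meet[x, y], z], x]
        for x, y, z in product(elems, repeat=3)
    )

def check_distributive(elems, meet, join):
    """Check x ∧ (y ∨ z) == (x ∧ y) ∨ (x ∧ z) for all triples."""
    return all(
        meet[x, join[y, z]] == join[meet[x, y], meet[x, z]]
        for x, y, z in product(elems, repeat=3)
    )

# The diamond lattice M3 = {0, a, b, c, 1}: a, b, c pairwise incomparable atoms.
elems = ["0", "a", "b", "c", "1"]
meet, join = {}, {}
for x, y in product(elems, repeat=2):
    if x == y:
        meet[x, y] = x; join[x, y] = x
    elif "0" in (x, y):
        meet[x, y] = "0"; join[x, y] = x if y == "0" else y
    elif "1" in (x, y):
        meet[x, y] = x if y == "1" else y; join[x, y] = "1"
    else:  # two distinct atoms
        meet[x, y] = "0"; join[x, y] = "1"

print(check_modular(elems, meet, join))       # True:  M3 is modular
print(check_distributive(elems, meet, join))  # False: M3 is not distributive
```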
The following _Diamond Isomorphism Theorem_ characterizes modular lattices.
**Diamond Isomorphism Theorem**.: _The following are equivalent for a lattice \(L\)._
1. \(L\) _is modular._
2. _For each_ \(x,y\in L\)_, the rules_ \(x\vee_{L}-\) _and_ \(y\wedge_{L}-\) _define respective bijections_ \([x\wedge_{L}y,y]\cong[x,x\vee_{L}y]\) _and_ \([x,x\vee_{L}y]\cong[x\wedge_{L}y,y]\)_, where_ \([x^{\prime},y^{\prime}]\) _denotes the smallest interval in_ \(L\) _containing_ \(x^{\prime}\) _as its minimum and_ \(y^{\prime}\) _as its maximum._
**Theorem**, [49].: _The following are equivalent for a finite lattice \(L\)._
1. \(L\) _is distributive_
2. _For all_ \(x,y,z\in L\) _with_ \(y,z\) _either both immediate successors to_ \(x\) _or both immediate predecessors to_ \(x\) _in_ \(L\)_,_ \(\{y\wedge_{L}z,y\lor_{L}z,y,z\}\) _is a Boolean interval in_ \(L\)_._
3. _The smallest interval in_ \(L\) _containing Boolean intervals_ \(I,J\) _in_ \(L\) _with_ \(\max\,I=\max\,J\) _or_ \(\min\,I=\min\,J\) _is also Boolean._
**Lemma A.2**.: _For Boolean intervals \(I,J\) in a finite distributive lattice \(L\), the images_
\[I\vee_{L}J,I\wedge_{L}J\]
_of \(I\times J\) under \(\vee_{L},\wedge_{L}\) are Boolean intervals in \(L\)._
Proof.: The intervals \(I\vee_{L}\min\,J,J\vee_{L}\min\,I\) are Boolean by the Diamond Isomorphism for Modular Lattices and hence \(I\vee_{L}J\) is Boolean [Theorem, [49]]. Thus \(I\wedge_{L}J\) is also Boolean by duality.
While every finite poset, including every finite lattice, is a colimit of its (1-dimensional) Boolean intervals in the category of posets and monotone functions, not every finite lattice is such a colimit _in the category_ **Cat**_._
**Lemma A.3**.: _Every finite distributive lattice is a_ **Cat**_-colimit of its Boolean intervals._
Proof.: Consider a finite distributive lattice \(L\). Let \(\mathcal{X}\) be the **Cat**-colimit
\[\mathcal{X}=\operatorname{colim}_{I\to L}I\]
over the Boolean intervals \(I\) in \(L\). The object sets of \(\mathcal{X}\) and \(L\) coincide. There exists a relation \(x\leqslant_{L}y\) if and only if there exists a \(\mathcal{X}\)-morphism \(x\to y\) because \(\mathcal{X}\) admits as generators relations of the form \(x\leqslant_{L}y\) with \(y\) an immediate successor to \(x\) in \(L\). Consider parallel \(\mathcal{X}\)-morphisms \(\alpha,\beta:x\to y\). It thus suffices to show
\[\alpha=\beta.\]
We induct on the length \(k\) of a maximal chain in \(L\) having minimum \(x\) and maximum \(y\). In the base case \(k=1\), \(\alpha,\beta\) are identities and hence \(\alpha=\beta\). Inductively assume that \(\alpha=\beta\) when there exists a maximal chain in \(L\) having minimum \(x\) and maximum \(y\) with length less than \(k\). Consider the case \(k>1\). Then \(\alpha,\beta\) both factor as composites \(x\to a\to y\) and \(x\to b\to y\) with \(a\) and \(b\) both immediate successors to \(x\) in \(L\). Then \(\alpha\) and \(\beta\) are choices of dotted monotone function respectively making the left and bottom triangles commute in
a diagram in \(\mathcal{X}\). There exists a dotted morphism making the top and right triangles commute by the inductive hypothesis. The outer square, whose elements form a Boolean interval in \(L\) [Theorem, [49]], commutes in \(\mathcal{X}\). Thus \(\alpha=\beta\).
We can now give a proof of Lemma 3.12.
proof of Lemma 3.12.: Suppose (1). Let \(I\) be a Boolean interval in \(L\). The restriction of \(\phi\) to \(I\) corestricts to a surjection \(\phi_{I}:I\to J_{I}\) with \(J_{I}\) a Boolean interval in \(M\) because \(\phi\) preserves Boolean intervals. The function \(\phi_{I}:I\to J_{I}\), surjective by construction, is a lattice homomorphism by \(I\hookrightarrow L\) and \(J_{I}\hookrightarrow M\) both inclusions of sublattices into lattices. Thus (2).
Suppose (2). Consider \(x,y\in L\). It suffices to show that
\[\phi(x\vee_{L}y)=\phi(x)\vee_{M}\phi(y). \tag{11}\]
by double induction on the minimal lengths \(m,n\) of maximal chains in \(L\) having as their extrema \(x\wedge_{L}y\) and, respectively, \(x\) and \(y\). For then \(\phi\) preserves binary suprema, hence also binary infima by duality, and hence \(\phi\) is a lattice homomorphism, mapping Boolean intervals onto Boolean intervals by (2).
In the case \(m=1\), \(x\wedge_{L}y=x\), hence \(x\lor_{L}y=y\), hence \(\phi(x)\leqslant_{M}\phi(x\lor_{L}y)=\phi(y)\), and consequently (11). The case \(n=1\) follows by symmetry.
Consider the case \(m=n=2\). Then \(x,y,x\wedge_{L}y,x\lor_{L}y\) form the elements of a Boolean interval \(I\) in \(L\) [Theorem, [49]]. Then the restriction of \(\phi\) to \(I\) corestricts to a Boolean interval \(J_{I}\) in \(M\). It therefore follows from (2) and the preservation of finite non-empty suprema and infima by \(I\hookrightarrow L\) and \(J_{I}\hookrightarrow M\) that
\[\phi(x\lor_{L}y)=\phi(x\lor_{I}y)=\phi(x)\lor_{J_{I}}\phi(y)=\phi(x)\lor_{M} \phi(y).\]
Consider the case \(m\leqslant 2\). Suppose \(n>2\). Then there exists an immediate successor \(y^{\prime}\neq y\) to \(x\wedge_{L}y\) such that \(y^{\prime}\leqslant_{L}y\). Then \(y\wedge_{L}(x\lor_{L}y^{\prime})=(x\wedge_{L}y)\lor_{L}y^{\prime}=y^{\prime}\) by \(L\) distributive and hence the length of a maximal chain in \(L\) having as its extrema \(y\wedge_{L}(x\lor_{L}y^{\prime})\) and \(y\) is strictly less than \(n\). And \(x\wedge_{L}y^{\prime}=x\wedge_{L}y\) and hence the length of a maximal chain in \(L\) having as its extrema \(x\wedge_{L}y^{\prime}\) and \(y^{\prime}\) is \(m=2\). Inductively assume \(\phi(x\lor_{L}y^{\prime})=\phi(x)\lor_{L}\phi(y^{\prime})\) and \(\phi(y\lor_{L}(x\lor_{L}y^{\prime}))=\phi(y)\lor_{M}\phi(x\lor_{L}y^{\prime})\). It therefore follows that \(\phi(x\lor_{L}y)=\phi(x\lor_{L}y^{\prime}\lor_{L}y)=\phi(x\lor_{L}y^{\prime}) \lor_{M}\phi(y)=\phi(x)\lor_{M}\phi(y)\). Then (11) follows from induction on \(n\) for the case \(m\leqslant 2\). Thus (11) holds whenever \(\min(m,n)\leqslant 2\) by symmetry.
Consider the general case. To show (11), it suffices to take the case \(m>2\). Then there exists an immediate successor \(x^{\prime}\neq x\) to \(x\wedge_{L}y\) such that \(x^{\prime}\leqslant_{L}x\). Then \(x\wedge_{L}(x^{\prime}\lor_{L}y)=(x\wedge_{L}y)\lor_{L}x^{\prime}=x^{\prime}\) by \(L\) distributive and hence the length of a maximal chain in \(L\) having as its extrema \(x\wedge_{L}(x^{\prime}\lor_{L}y)\) and \(x\) is strictly less than \(m\). And \(x^{\prime}\wedge_{L}y=x\wedge_{L}y\) and hence the length of a maximal chain from \(x^{\prime}\wedge_{L}y\) to \(x^{\prime}\) is \(2\). Inductively assume \(\phi(x\lor_{L}(x^{\prime}\lor_{L}y))=\phi(x)\lor_{M}\phi(x^{\prime}\lor_{L}y)\). Then \(\phi(x\lor_{L}y)=\phi(x\lor_{L}x^{\prime}\lor_{L}y)=\phi(x)\lor_{M}\phi(x^{\prime}\lor_{L}y)=\phi(x)\lor_{M}\phi(x^{\prime})\lor_{M}\phi(y)=\phi(x)\lor_{M}\phi(y)\). Hence (11).
Besides the previous observations, the preservation of fully faithful embeddings by **Cat**-pushouts [66] is used in the following proof of Proposition 3.13.
proof of Proposition 3.13.: Let \(F_{k}\) denote the bottom left Kan extension. Uniqueness follows by the right vertical arrow an inclusion. To show existence, it suffices to show \(F_{k}\) preserves **Dis**-objects and **Dis**-morphisms.
\(F_{k}\) _preserves_ **Dis**_-objects:_ Let \(I\) denote a Boolean interval in \(L\). Inclusions of the forms \((I\hookrightarrow L)^{[k]}:I^{[k]}\to L^{[k]}\) are fully faithful embeddings. It follows that the natural functor \(F_{k}L\to L^{[k]}\), an iterated pushout of inclusions of the form \((I\to L)^{[k]}\) [Lemma A.3], is a full and faithful embedding and hence can henceforth be regarded as an inclusion of posets. In other words, we can identify \(\mathfrak{so}_{k+1}L\) with the poset of all monotone functions \([k]\to L\) which corestrict to Boolean intervals in \(L\), with partial order \(\leqslant_{\mathfrak{so}_{k+1}L}\) defined by \(\alpha\leqslant_{\mathfrak{so}_{k+1}L}\beta\) if and only if \(\alpha(i)\leqslant_{L}\beta(i)\) for each \(0\leqslant i\leqslant k\).
Consider \(\alpha,\beta\in\mathfrak{so}_{k+1}L\). The monotone functions \(\alpha\lor_{L}\beta\) and \(\alpha\wedge_{L}\beta\) corestrict to Boolean intervals in \(L\) [Lemma A.2]. Thus \(F_{k}L\) is a sublattice of the finite distributive lattice \(L^{[k]}\) and hence finite distributive.
\(F_{k}\) _preserves_ **Dis**_-morphisms:_ Consider a general **Dis**-morphism \(\phi:L\to M\). To show that \(F_{k}\phi\) is a **Dis**-morphism, it suffices to take the case \(\phi\) a \(\square\)-morphism [Lemma 3.12]. Then \(\phi\) is an iterated Cartesian monoidal product in **Cat** of \(\delta_{\pm},\sigma\). Then \(\phi^{[k]}\) is an iterated Cartesian monoidal product in **Cat** of \(\delta_{\pm}^{[k]}\) and \(\sigma^{[k]}\) by \((-)^{[k]}\) a right adjoint and
hence product-preserving. The functions \(\delta_{\pm}^{[k]}\) and \(\sigma^{[k]}\) are monotone functions to or from a terminal object and hence **Dis**-morphisms. Hence \(\phi\) is a **Dis**-morphism.
_last claim:_ In order to show that the natural transformation \(F_{m}\to F_{n}\) induced from \(\phi\) component-wise corestricts to the desired natural transformation \(\mathfrak{so}_{m+1}\to\mathfrak{so}_{n+1}\), it suffices to show that \(J^{\phi}\) is a **Dis**-morphism for each \(\square\)-object \(J\) [Lemma 3.12]. It therefore suffices to take the case \(J=[1]\) because \((-)^{\phi}\) is a Cartesian monoidal natural transformation \(\mathbf{Cat}^{\mathrm{op}}\to\mathbf{Cat}\). In that case, non-singleton Boolean intervals in \(J^{[m]}=[m+1]\) and \(J^{[n]}=[n+1]\) are intervals between elements and their immediate successors. Consider a non-singleton Boolean interval \(I\) in \(J^{[m]}=[1]^{[m]}\). Let \(\zeta_{-}=\min I\) and \(\zeta_{+}=\max I\). Then there exists \(0\leqslant j\leqslant m\) such that \(\zeta_{-}(i)=\zeta_{-}(i+1)=\zeta_{+}(i)=0\) for all \(i<j\), \(\zeta_{+}(i)=1\) for all \(i\geqslant j\), and \(\zeta_{-}(i)=1\) for all \(i>j\). The preimage of \(j\) under \(\phi\) is either a singleton or empty by \(\phi\) injective.
In the case that the preimage is empty, then \(\zeta_{-}\phi=\zeta_{+}\phi\).
In the case that the preimage contains the unique element \(j^{*}\), then \(\phi(i)<j\) for all \(i<j^{*}\), \(\phi(i)\geqslant j\) for all \(i\geqslant j^{*}\), and consequently \(\zeta_{-}\phi(i)=\zeta_{-}\phi(i+1)=\zeta_{+}\phi(i)=0\) for all \(i<j^{*}\), \(\zeta_{+}\phi(i)=1\) for all \(i\geqslant j^{*}\), \(\zeta_{-}\phi(i)=1\) for all \(i>j^{*}\), and consequently \(\zeta_{+}\phi\) is an immediate successor to \(\zeta_{-}\phi\) in \([1]^{[n]}\).
In either case, \(\{\zeta_{-}\phi,\zeta_{+}\phi\}\) is a Boolean interval in \([1]^{[n]}\).
Thus \(J^{\phi}\) is a **Dis**-morphism.
## Appendix B Pro-objects
We recall a characterization of the data of pro-morphisms as follows.
**Lemma B.1**.: _Fix a category \(\mathscr{X}\). Consider the following data:_
1. _cofiltered diagrams_ \(X:\mathcal{X}\to\mathscr{X}\) _and_ \(Y:\mathcal{Y}\to\mathscr{X}\)_._
2. _choices of_ \(\mathcal{X}\)_-object_ \(x_{y}\) _and_ \(\mathscr{X}\)_-morphism_ \(\zeta_{y}:X(x_{y})\to Y(y)\) _for each choice_ \(y\) _of_ \(\mathcal{Y}\)_-object._
_Suppose that for each \(\mathcal{Y}\)-morphism \(v:y_{1}\to y_{2}\), there exist \(\mathcal{X}\)-morphisms \(\chi_{1}:x\to x_{y_{1}}\) and \(\chi_{2}:x\to x_{y_{2}}\) such that the left of the diagrams below commutes. Then there exists a unique \((\mathbf{pro}\)-\(\mathscr{X})\)-morphism \(\zeta:\lim X\to\lim Y\) such that the following diagram, in which the vertical arrows are canonically defined, commutes for each \(\mathcal{Y}\)-object \(y\)._
## Appendix C Test Categories
A _test model structure_ on a presheaf category \(\hat{\bigcirc}\) is a model structure on \(\hat{\bigcirc}\) in which the cofibrations are the monos and the weak equivalences are those \(\hat{\bigcirc}\)-morphisms \(\psi:A\to B\) for which \(\bigcirc/\psi\) are Thomason weak equivalences. A small category \(\bigcirc\) is a _test category_ if there is a Thomason weak equivalence \(\bigcirc\to\star\) and \(\hat{\bigcirc}\) admits a test model structure [Theorem 1.4.3, [12]]. The reader is referred elsewhere [12] for details. Test categories can be recognized by the following criteria.
**Proposition, p.86 44(d), [35].**_A small category \(\bigcirc\) is a test category if there exist functors_
\[\zeta:\bigcirc\to\mathbf{Cat},\quad\mathfrak{i}:\mathscr{I}\to\hat{\bigcirc},\]
_with \(\mathfrak{i}\) an interval object in \(\hat{\bigcirc}\), satisfying the following:_
1. \(\bigcirc\to\star\) _is a Thomason weak equivalence_
2. \(\zeta(o)\) _has a terminal object for each_ \(\bigcirc\)_-object_ \(o\)__
3. _the equalizer of_ \(\mathfrak{i}(\delta_{-}),\mathfrak{i}(\delta_{+})\) _is initial in_ \(\hat{\bigcirc}\)__
4. \((\bigcirc\!/\mathfrak{i}([1]))\to\star\) _is a Thomason weak equivalence_
5. _there exists a natural transformation_ \(\zeta_{\mathfrak{i}}\mathfrak{i}\to(\square_{1}\hookrightarrow\mathbf{Cat})\)_, where_ \(\zeta_{\mathfrak{i}}\) _denotes the left Kan extension of_ \(\zeta\) _along the Yoneda embedding_ \(\bigcirc\![-]\)_._
In abstract homotopical parlance [35], conditions (1), (4) require that \(\bigcirc,\mathfrak{i}\) are _aspherical_ and condition (3) requires that \(\mathfrak{i}\) be _separated_.
**Proposition C.1**.: _Consider a small category \(\bigcirc\) contained in a chain_
\[\square\subset\bigcirc\subset\mathbf{Cat}\]
_of subcategories, with \(\square\) wide in \(\bigcirc\). Then \(\bigcirc\) is a test category. In particular, \(\square\) is a test category._
Proof.: Let \(\zeta\) denote inclusion \(\bigcirc\hookrightarrow\mathbf{Cat}\). Let \(\mathfrak{i}\) be the interval object \(\bigcirc\![-](\mathscr{I}\hookrightarrow\bigcirc)\) in \(\hat{\bigcirc}\).
The functor \(\bigcirc\to\star\) is a Thomason weak equivalence by \([0]\) a terminal object in \(\square\), \(\mathbf{Cat}\), and hence \(\bigcirc\).
Each small category \(\zeta([1]^{n})=[1]^{n}\) has terminal object \((1,\cdots,1)\) for each \(\bigcirc\)-object \([1]^{n}\).
The equalizer of \(\mathfrak{i}(\delta_{-})\) and \(\mathfrak{i}(\delta_{+})\), whose restriction to a cubical set is the initial cubical set because it defines the empty equalizer of \(\mathfrak{d}(\delta_{-})\) and \(\mathfrak{d}(\delta_{+})\), is the initial presheaf in \(\hat{\bigcirc}\).
The category \((\bigcirc\!/\bigcirc\![1])\) has final object \(1_{\bigcirc\![1]}:\bigcirc\![1]\to\bigcirc\![1]\) by \(\square\) wide in \(\bigcirc\) and therefore admits a Thomason weak equivalence to \(\star\).
There exist natural isomorphisms \(\zeta_{\mathfrak{i}}\mathfrak{i}\cong(\square\hookrightarrow\mathbf{Cat})_{ \mathfrak{i}}\mathfrak{d}\cong\mathrm{T}_{1}\mathfrak{d}:\square_{1} \hookrightarrow\mathbf{Cat}\).
The desired conclusion follows [Proposition, p.86 44(d), [35]].
**Lemma C.2**.: _There exists a \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-equivalence, natural in cubical sets \(C\), of the form_
\[(\Delta/(\mathfrak{tri}_{\bigcirc}C))\simeq(\bigcirc\!/C)\]
_for each small subcategory \(\bigcirc\subset\mathbf{Cat}\) containing \(\square\) as a wide subcategory such that each \(\bigcirc\)-morphism admits a factorization, unique up to isomorphism, into a surjective \(\bigcirc\)-morphism followed by an injective \(\bigcirc\)-morphism and the poset of subobjects of every \(\bigcirc\)-object is a lattice._
Proof.: Let \(\bigtimes\) denote one of \(\Delta,\bigcirc\). It is possible to define an endofunctor
\[E_{P}:(\bigtimes\!/P)\to(\bigtimes\!/P),\]
natural in \(\hat{\bigtimes}\)-objects \(P\), naturally sending each \((\bigtimes\!/P)\)-object \(\theta\) to the terminal \((\bigtimes\!/P)\)-object having the same image as the \(\hat{\bigtimes}\)-morphism \(\theta\) because each \(\bigtimes\)-morphism admits a unique factorization up to isomorphism into a surjection followed by an injection. Then \(E_{P}\) is pointed, uniquely and hence naturally in \(\hat{\bigtimes}\)-objects \(P\). Thus there exists a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(1_{\bigtimes\!/P}\rightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}E_{P}\).
Define functors \(F_{C},G_{C}\), natural in \(\hat{\bigcirc}\)-objects \(C\), of the forms
\[F_{C}:(\Delta/(\mathfrak{tri}_{\bigcirc}C))\to(\bigcirc\!/C)\quad G_{C}:( \bigcirc\!/C)\to\Delta/(\mathfrak{tri}_{\bigcirc}C)\]
as follows. We can take the \((\bigcirc/C)\)-object \(F_{C}\psi\), natural in \((\Delta/(\mathsf{tri}_{\bigcirc}C))\)-objects \(\psi\), to be terminal among all \((\bigcirc/C)\)-objects \(\theta\) with \(\mathsf{im}\,\psi\subset\mathsf{im}\,\mathsf{tri}\,\theta\) because the poset of subobjects of each \(\bigcirc\)-object is a finite and hence complete lattice. The \((\Delta/(\mathsf{tri}_{\bigcirc}C))\)-object \(G_{C}\theta\), natural in \((\bigcirc/C)\)-objects \(\theta:\bigcirc[1]^{n}\to C\), is defined by the commutative diagram
Dotted simplicial nerves of extrema-preserving monotone functions \([1]\to[n]\) make
commute and therefore define the components of a natural transformation or equivalently a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(G_{C}F_{C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{\mathsf{tri}_{\bigcirc}C}\). Concatenating this \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy with \(1_{\Delta/\mathsf{tri}_{\bigcirc}C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{ \mathsf{tri}_{\bigcirc}C}\) and the \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(1_{\bigcirc/C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{C}=F_{C}G_{C}\) with a constant \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy yields the desired \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-homotopies.
**Lemma C.3**.: _There exists a \(\mathrm{T}_{1}\mathfrak{d}_{2}\)-equivalence, natural in simplicial sets \(S\), of the form_
\[(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\simeq(\Delta/S)\]
_for each small subcategory \(\bigcirc\subset\mathbf{Cat}\) containing \(\square\) as a wide subcategory._
Proof.: Define functors \(F_{S},G_{S}\), natural in simplicial sets \(S\), of the forms
\[F_{S}:(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\to(\Delta/S)\quad G_{S}:(\Delta/ S)\to(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\]
by natural commutative diagrams of the following forms:
Let \(\mathfrak{diag}_{[1]}=1_{[1]}^{\times_{\mathbf{Cat}}n}:[1]\to[1]^{n}\) and \(\delta_{++}=\delta_{+1}\cdots\delta_{+1}:[1]\to[1]^{n}\). For all \(x\in[1]\),
\[\mathfrak{diag}_{[1]}(x)=(x,\ldots,x)\leqslant_{[1]^{n}}(1,\ldots,1,x)=\delta_{++}(x).\]
Therefore the function \(\phi:[1]^{2}\to[1]^{n}\) characterized by
\[\phi\delta_{-1}=\mathfrak{diag}_{[1]}\quad\phi\delta_{+}=\delta_{++}\]
is monotone. Hence there exists a dotted simplicial nerve of \(\phi\) making
commute. The arrows in the top row define the components of three \(\mathrm{T}_{1}\mathfrak{d}\)-homotopies, which, when concatenated with a constant \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy, yield a \(\mathrm{T}_{1}\mathfrak{d}_{2}\)-homotopy \(G_{S}F_{S}\leftrightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}1_{(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)}\).
Dotted simplicial nerves of extrema-preserving monotone functions \([1]\to[n]\) make
commute and hence define the components of natural transformations \(F_{S}G_{S}\to 1_{(\Delta/S)}\) or equivalently \(\mathrm{T}_{1}\mathfrak{d}\)-homotopies \(F_{S}G_{S}\leftrightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}1_{(\Delta/S)}\).
Fix a subcategory \(\bigcirc\) of \(\mathbf{Cat}\) defining a test category. Even though there exists a zig-zag of Quillen equivalences between \(\hat{\bigcirc}\) and \(\hat{\Delta}\), it is not necessarily the case that triangulation directly defines a Quillen equivalence between them.
**Proposition C.4**.: _Triangulation defines the left map of a Quillen equivalence_
\[\mathsf{tri}_{\bigcirc}:\hat{\bigcirc}\leftrightarrows\hat{\Delta}\]
_between presheaf categories equipped with test model structures, where \(\bigcirc\) is a subcategory of \(\mathbf{Cat}\) containing \(\square\) as a wide subcategory such that each \(\bigcirc\)-morphism admits a factorization, unique up to isomorphism, into a surjective \(\bigcirc\)-morphism followed by an injective \(\bigcirc\)-morphism and the poset of subobjects of each \(\bigcirc\)-object is a complete lattice._
Proof.: Both \(\hat{\Delta}\) and \(\hat{\bigcirc}\) admit test model structures [Proposition C.1]. There exist dotted Thomason weak equivalences making the left [Lemma C.2] and right [Lemma C.3] diagrams below commute for maps \(\psi\) of presheaves, where \(\mathsf{qua}\vdash\mathsf{tri}_{\bigcirc}\):
In each of the commutative diagrams, the top horizontal arrow is a classical weak equivalence if and only if the bottom horizontal arrow is a classical weak equivalence. Thus \(\mathsf{tri}_{\bigcirc}\) and its right adjoint both preserve and reflect weak equivalences in test model structures. Additionally \(\mathsf{tri}_{\bigcirc}\) preserves monos, the cofibrations. | |
2309.09739 | Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive
Consistency Constraints | 3D scene reconstruction from 2D images has been a long-standing task. Instead
of estimating per-frame depth maps and fusing them in 3D, recent research
leverages the neural implicit surface as a unified representation for 3D
reconstruction. Equipped with data-driven pre-trained geometric cues, these
methods have demonstrated promising performance. However, inaccurate prior
estimation, which is usually inevitable, can lead to suboptimal reconstruction
quality, particularly in some geometrically complex regions. In this paper, we
propose a two-stage training process, decouple view-dependent and
view-independent colors, and leverage two novel consistency constraints to
enhance detail reconstruction performance without requiring extra priors.
Additionally, we introduce an essential mask scheme to adaptively influence the
selection of supervision constraints, thereby improving performance in a
self-supervised paradigm. Experiments on synthetic and real-world datasets show
the capability of reducing the interference from prior estimation errors and
achieving high-quality scene reconstruction with rich geometric details. | Xinyi Yu, Liqin Lu, Jintao Rong, Guangkai Xu, Linlin Ou | 2023-09-18T13:05:23 | http://arxiv.org/abs/2309.09739v1 | # Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive Consistency Constraints
###### Abstract
3D scene reconstruction from 2D images has been a long-standing task. Instead of estimating per-frame depth maps and fusing them in 3D, recent research leverages the neural implicit surface as a unified representation for 3D reconstruction. Equipped with data-driven pre-trained geometric cues, these methods have demonstrated promising performance. However, inaccurate prior estimation, which is usually inevitable, can lead to suboptimal reconstruction quality, particularly in some geometrically complex regions. In this paper, we propose a two-stage training process, decouple view-dependent and view-independent colors, and leverage two novel consistency constraints to enhance detail reconstruction performance without requiring extra priors. Additionally, we introduce an essential mask scheme to adaptively influence the selection of supervision constraints, thereby improving performance in a self-supervised paradigm. Experiments on synthetic and real-world datasets show the capability of reducing the interference from prior estimation errors and achieving high-quality scene reconstruction with rich geometric details.
## I Introduction
3D scene reconstruction from multiple images is a fundamental vision task with diverse applications, including robotics [1, 2, 3], virtual reality, augmented reality, etc. In robotics, reconstructions are used in trajectory planning [2] and mapping [4]. Given posed images, traditional algorithms usually estimate depth maps and lift them into 3D space; these algorithms can be categorized into multi-view stereo methods and monocular depth estimation methods. Multi-view stereo (MVS [5, 6, 7]) leverages accurate feature correspondences between keyframes to recover the 3D structure, and monocular depth estimation relies on large-scale training datasets to improve the generalization to diverse scenes. However, feature matching is unreliable under lighting changes, occlusion, and in low-texture regions, and robust monocular depth estimation usually suffers from an unknown scale; as a result, both struggle with multi-frame inconsistency and their performance is less than satisfactory. Although some RGB-D fusion algorithms [8, 9] and post-processing optimization methods [10, 11, 12] are committed to ensuring consistency, they still have difficulty handling inaccurate depth predictions.
In order to tackle the consistency problem, there is an urgent need for a unified 3D representation instead of per-frame 2D depth maps. Some learning-based methods [13, 14] project 2D features into 3D space and directly predict TSDF values at 3D positions. On the other hand, armed with volume rendering theory, optimization-based methods usually encode a specific scene with a neural implicit scene representation by fitting the mapping from input 3D positions to output color and geometry. However, unlike object-centric cases, the sparse views and texture-less areas of indoor scenes may lead to limited surface quality and local minima in optimization. To address the issue, some approaches integrate affine-invariant depth [15, 16, 17, 18, 19] and predicted normal priors [20, 21] as supervision. Although promising results have been achieved, they struggle to handle both the unknown scale-shift values of depth priors and inaccurate prior estimations, which results in poor reconstruction quality of complex geometric details.
In this study, we propose mask-guided adaptive consistency constraints to improve the detail reconstruction performance of neural surface representation. Similar to previous works [20, 21, 22], we also use two kinds of MLPs to predict signed distance and color information and employ the surface normal predicted by a pre-trained model [23] as priors. Specifically, we divide our training process into two stages. In the first stage, we focus on optimizing the RGB image reconstruction constraint as our primary objective while incorporating the estimated normal vector cues as additional supervision signals. This allows us to obtain an initial scene geometry. In the second stage, based on the principle that excellent reconstruction quality is closely associated with multi-view consistency, we categorize the reconstructed components into two groups: the accurate part and the inaccurate part, based on the difference in the rendered normal vectors at different viewpoints. During training, in addition to the color reconstruction constraint, we employ a geometric consistency constraint on the sampled rays passing through the inaccurate part to improve the accuracy of the rendered depth; for the accurate part, we continue to apply normal priors as supervision. Besides, we decouple the color map into a view-dependent one and a view-independent one, and leverage another photometric consistency constraint to supervise the view-independent color. This two-stage training strategy adaptively distinguishes the accuracy of surface normal priors and adopts different supervision paradigms accordingly. Experimental results on both synthetic and real-world datasets demonstrate that our method achieves high-quality reconstructions with rich geometric details, outperforming other existing methods. Our main contributions are summarized as follows:
* We propose a two-stage training process for neural surface reconstruction, which decouples the view-dependent and view-independent colors and leverages two novel consistency constraints to improve the quality.
* Based on the essential mask scheme, our model can reduce the side effects of inaccurate surface normal priors and enhance performance in a self-supervised paradigm during training.
* Experimental results on both synthetic and real-world datasets show that we can achieve high-quality scene reconstruction performance with rich geometric details.
## II related work
### _Multi View Stereo_
For years, 3D reconstruction from multi-view perspectives has remained a challenging yet significant task. Multi-view stereo (MVS) is a traditional reconstruction approach that leverages feature matching and triangulation methods to estimate 3D positions corresponding to pixels or features across multiple views. Using the estimated positions and bundle adjustment, MVS recovers depth or normal maps to reconstruct geometry [24, 25, 5, 7]. While MVS has achieved significant success, the estimated depth suffers from inaccuracy and scale inconsistency in regions with abundant specular reflection or little texture, such as indoor scenes. Although some RGB-D fusion [8, 9] and post-processing optimization methods [10, 11, 12] have handled scale consistency issues, they still fail on inaccurate estimations. With the advancements in deep learning, learning-based methods have witnessed significant development in recent years. These methods employ neural networks to estimate depth or TSDF (truncated signed distance function) values end-to-end. Depth-based methods estimate depth maps and fuse them to reconstruct the scene, but encounter challenges with noisy surfaces and scale ambiguities. TSDF-based methods like NeuralRecon [14] propose a novel framework to lift 2D features, fuse them spatially and temporally in 3D space, and predict the TSDF volume directly. These methods often produce overly smooth reconstructions.
### _Implicit Representation of Geometry_
Recently, with the success of volume rendering theory, some methods leverage Multi-Layer Perceptrons (MLPs) to implicitly represent geometry. These approaches are supervised by RGB images and fit the mapping from 3D coordinates to the corresponding color and geometric properties, such as volume density or occupancy [26, 27, 28, 29, 30]. One notable advancement in this area is the Neural Radiance Field (NeRF) technique, which has demonstrated remarkable results in both novel view synthesis [31, 32] and implicit surface reconstruction [33, 22, 30]. These approaches utilize volume rendering methods to supervise the implicit scene representation through a 2D photometric loss. However, volume density representations often fall short in geometric details due to a lack of sufficient constraints. Some methods have attempted to improve the volume rendering framework, such as VolSDF [33] and NeuS [30], which have achieved better surface reconstructions but still face challenges in reconstructing geometric details. Consequently, certain methods aim to integrate geometric cues acquired from sensors [34, 35] or predicted by models [20, 21, 36] to strengthen geometric constraints. MonoSDF [20] integrates estimated monocular geometric clues into a neural volume rendering framework to enhance the overall quality of the reconstruction. NeuRIS [21] adaptively utilizes predicted normal cues by patch matching between neighboring images. While these methods have achieved accurate reconstruction results, they are sensitive to the accuracy of the priors, especially when applied to real scenes. Compared to these methods, our method utilizes normal vector cues more efficiently and performs better in real-world scenarios.
## III Method
Aiming at 3D indoor scene reconstruction from posed images \(\{I_{k}\}_{k=0\cdots M}\), we use a neural surface representation optimized through the supervision of rendered RGB images. To enhance the robustness, normal priors obtained from pre-trained models are adopted. However, directly supervising the rendered normal may disturb the training process if the normal priors are inaccurate. Therefore, We optimize it with color and normal constraints to get the initial shape in the first stage. Then, we introduce geometric and photometric constraints to further improve reconstruction quality. What's more, all the training constraints except the color one are guided by our proposed mask scheme in stage two. The overall pipeline of the second stage is shown in Fig. 1.
Concretely, for each selected image \(I_{k}\), we sample \(m\) rays \(\{\mathbf{r}_{j}\}_{j=0\cdots m}\) passing through it. We randomly generate a virtual ray \(\mathbf{r}_{j}^{v}\) to pair with the current sampled ray \(\mathbf{r}_{j}\). Using the MLPs and the volume rendering framework, we render color \(\hat{\mathbf{C}}\), decomposed color \(\hat{\mathbf{C}}_{vi}\), depth \(\hat{D}\), and normal \(\hat{\mathbf{N}}\) for both \(\mathbf{r}_{j}\) and \(\mathbf{r}_{j}^{v}\). To train our model, we minimize the color and normal differences between \(\hat{\mathbf{C}}(\mathbf{r}_{j})\) and the given color \(\mathbf{C}(\mathbf{r}_{j})\), and between \(\hat{\mathbf{N}}(\mathbf{r}_{j})\) and the normal prior \(\bar{\mathbf{N}}(\mathbf{r}_{j})\) predicted by pre-trained models [23], respectively. Furthermore, we introduce a multi-view consistency constraint to guide the training process. During optimization, we use a masking scheme to adaptively select different training constraints for different sampled rays.
### _Neural Surface Representation_
As shown in Fig. 2, the neural surface representation is composed of two kinds of MLPs: a geometry network \(f_{g}\) and two color networks \(f_{c}\). The geometry network maps a 3D coordinate \(\mathbf{x}\in\mathbb{R}^{3}\) to a feature vector \(\mathbf{z}\) and a signed distance function (SDF) value \(\hat{s}\in\mathbb{R}\), whose magnitude indicates the shortest distance to the closest surface and whose sign shows whether the point is inside or outside the object.
\[[\hat{s},\mathbf{z}]=f_{g}(\mathbf{x};\theta_{g}) \tag{1}\]
where \(\theta_{g}\) is the trainable parameters. The surface \(S\) is defined as the set of \(\mathbf{x}\) where the corresponding \(\hat{s}\) is equal to 0.
The color networks \(f_{c}\) generate color \(\hat{\mathbf{c}}\in\mathbb{R}^{3}\) from the input feature vectors. We use two color networks to obtain
view-dependent color \(\hat{\mathbf{c}}^{vd}\) and view-independent color \(\hat{\mathbf{c}}^{vi}\). This setting helps us to decompose color which we will discuss in Section III-B3.
To recover the surface under the supervision of input RGB images, we render the colors of sampled rays. Taking rendering along the ray \(\mathbf{r}\) as an example, we feed \(n\) sampling points \(\mathbf{x}_{i}=\mathbf{o}+t_{i}\mathbf{v}\) along the ray into the geometry network \(f_{g}\) to obtain the corresponding SDF values \(\hat{s}_{i}\) and geometric features \(\mathbf{z}_{i}\). Here, \(\mathbf{o}\in\mathbb{R}^{3}\) and \(\mathbf{v}\in\mathbb{R}^{3}\) represent the camera position and ray direction with \(|\mathbf{v}|=1\) and \(t_{i}\geq 0\), respectively. Subsequently, we concatenate \(\mathbf{x}_{i}\), \(\mathbf{v}\), \(\hat{\mathbf{n}}_{i}\) and \(\mathbf{z}_{i}\) into two feature vectors and obtain the corresponding view-dependent color \(\hat{\mathbf{c}}^{vd}_{i}=f_{c}(\mathbf{x}_{i},\hat{\mathbf{n}}_{i},\mathbf{v},\mathbf{z}_{i};\theta_{c})\) and view-independent color \(\hat{\mathbf{c}}^{vi}_{i}=f_{c}(\mathbf{x}_{i},\hat{\mathbf{n}}_{i},\mathbf{z}_{i};\theta_{c})\). The normal vector \(\hat{\mathbf{n}}_{i}\in\mathbb{R}^{3}\) is the analytical gradient of the corresponding SDF value \(\hat{s}_{i}\), and the final color \(\hat{\mathbf{c}}_{i}\) is obtained by adding the two color components together:
\[[\hat{s}_{i},\mathbf{z}_{i}]=f_{g}(\mathbf{x}_{i}),\quad\hat{\mathbf{n}}_{i} =\frac{\partial\hat{s}_{i}}{\partial\mathbf{x}_{i}} \tag{2}\]
\[\hat{\mathbf{c}}_{i}=\hat{\mathbf{c}}^{vd}_{i}+\hat{\mathbf{c}}^{vi}_{i} \tag{3}\]
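As a rough PyTorch-style sketch (an illustration added here, not the authors' implementation: layer widths, activations and variable names are assumptions), Eqs. (1)-(3) amount to one geometry MLP whose analytic gradient supplies the normal, plus two color heads whose outputs are summed:

```python
import torch
import torch.nn as nn

class GeometryNet(nn.Module):
    """Maps a 3D point x to (sdf, feature z); cf. Eq. (1)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 256), nn.Softplus(beta=100),
            nn.Linear(256, 256), nn.Softplus(beta=100),
            nn.Linear(256, 1 + feat_dim),
        )
    def forward(self, x):
        out = self.mlp(x)
        return out[..., :1], out[..., 1:]          # sdf, feature z

class ColorNet(nn.Module):
    """One color head; the view-dependent head also receives the ray direction v."""
    def __init__(self, feat_dim=256, view_dependent=True):
        super().__init__()
        in_dim = 3 + 3 + feat_dim + (3 if view_dependent else 0)  # x, n, z, (v)
        self.view_dependent = view_dependent
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Sigmoid(),
        )
    def forward(self, x, n, z, v=None):
        feats = [x, n, v, z] if self.view_dependent else [x, n, z]
        return self.mlp(torch.cat(feats, dim=-1))

f_g = GeometryNet()
f_vd, f_vi = ColorNet(view_dependent=True), ColorNet(view_dependent=False)

x = torch.rand(1024, 3, requires_grad=True)       # sample points along rays
v = torch.nn.functional.normalize(torch.rand(1024, 3), dim=-1)
sdf, z = f_g(x)
n = torch.autograd.grad(sdf.sum(), x, create_graph=True)[0]   # Eq. (2): n = d(sdf)/dx
c = f_vd(x, n, z, v) + f_vi(x, n, z)              # Eq. (3): c = c_vd + c_vi
```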
Following the volume rendering framework [22, 26], the color \(\hat{\mathbf{C}}(\mathbf{r})\) is accumulated along the ray.
\[\hat{\mathbf{C}}(\mathbf{r})=\sum_{i=1}^{n}T_{i}\alpha_{i}\hat{\mathbf{c}}_{i} \tag{4}\]
where \(T_{i}=\prod_{j=1}^{i-1}(1-\alpha_{j})\) and \(\alpha_{i}=1-exp(-\sigma_{i}\delta_{i})\) denote the transmittance and alpha value, respectively. \(\delta_{i}\) is the distance between neighbouring points, and \(\sigma_{i}\) is the density value corresponding to \(\mathbf{x}_{i}\). To improve the geometric representation and enhance the smoothness of the reconstructed surface, we compute density values \(\sigma_{i}\) from \(\hat{s}_{i}\)[20, 33]:
\[\sigma_{i}(\hat{s}_{i})=\left\{\begin{array}{ll}\frac{1}{2\beta}exp(\frac{ -\hat{s}_{i}}{\beta}),&if\ \hat{s}_{i}>0.\\ \frac{1}{\beta}(1-\frac{1}{2}exp(\frac{\hat{s}_{i}}{\beta})),&if\ \hat{s}_{i}\leq 0. \end{array}\right. \tag{5}\]
where \(\beta\) is trainable. As \(\beta\) approaches 0, the sensitivity of \(\sigma_{i}(\hat{s}_{i})\) to \(\hat{s}_{i}\) increases, contributing to edge reconstruction.
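A minimal numerical sketch of Eqs. (4)-(5), again illustrative only (the tensor shapes and the handling of the last sample interval are assumptions following common volume rendering practice), converts SDF values along each ray into densities and alpha-composites arbitrary per-point quantities:

```python
import torch

def sdf_to_density(sdf, beta):
    """Laplace-CDF density of Eq. (5)."""
    return torch.where(
        sdf > 0,
        0.5 / beta * torch.exp(-sdf / beta),
        1.0 / beta * (1.0 - 0.5 * torch.exp(sdf / beta)),
    )

def render(sdf, values, t, beta):
    """Accumulate per-point `values` (color, depth, normal, ...) along each ray.

    sdf, t: (R, N)  SDF values and ray depths of N samples on R rays
    values: (R, N, C)
    """
    sigma = sdf_to_density(sdf, beta)                     # (R, N)
    delta = torch.diff(t, dim=-1)                         # distances between neighbours
    delta = torch.cat([delta, 1e10 * torch.ones_like(delta[..., :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)               # alpha_i
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)    # prod_{j<=i}(1 - alpha_j)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)  # T_i
    weights = trans * alpha                               # T_i * alpha_i, Eq. (4)
    return (weights.unsqueeze(-1) * values).sum(dim=-2)   # (R, C)

# e.g. rendered depth: render(sdf, t.unsqueeze(-1), t, beta).squeeze(-1)
```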
### _Supervision Constraints_
#### Iii-B1 Color and Normal Constraints
Since we have obtained each sample rays' rendering color \(\hat{\mathbf{C}}(\mathbf{r})\), we can learn the weights of \(f_{g}\) and \(f_{c}\) by minimizing the difference between \(\hat{\mathbf{C}}(\mathbf{r})\) and the given color \(\mathbf{C}(\mathbf{r})\) :
\[\mathcal{L}_{rgb}=\sum_{\mathbf{r}\in\mathcal{R}}\left\|\hat{\mathbf{C}}( \mathbf{r})-\mathbf{C}(\mathbf{r})\right\|_{1} \tag{6}\]
where \(\mathcal{R}\) represents the sampled rays in a batch.
Fig. 1: We sample rays passing through the selected image \(I_{k}\), and randomly generate a virtual ray \(\mathbf{r}_{j}^{v}\) corresponding to each sample ray \(\mathbf{r}_{j}\). The color \(\hat{\mathbf{C}}\), view-independent color \(\hat{\mathbf{C}}_{vi}\), depth \(\hat{D}\) and normal \(\hat{\mathbf{N}}\) are rendered along these rays, and a pre-trained model is used to estimate the normal priors \(\bar{\mathbf{N}}\) of \(\mathbf{r}_{j}\). To learn the MLPs’ weights, we minimize the difference between \(\hat{\mathbf{C}}(\mathbf{r}_{j})\) and the given color \(\mathbf{C}(\mathbf{r}_{j})\). Besides, we utilize the mask-guided consistency and normal constraints.
Fig. 2: Network architecture of our method.
Geometric properties like normal vectors \(\hat{\mathbf{N}}(\mathbf{r})\) and depth \(\hat{D}(\mathbf{r})\) can be rendered by accumulating sample points' features along the ray, similar to rendering colors. We utilize normal constraints to guide the training process:
\[\hat{D}(r)=\sum_{i=1}^{n}T_{i}\alpha_{i}t_{i},\quad\hat{\mathbf{N}}(\mathbf{r})= \sum_{i=1}^{n}T_{i}\alpha_{i}\hat{\mathbf{n}}_{i} \tag{7}\]
\[\mathcal{L}_{normal}=\frac{1}{|\mathcal{M}_{r}|}\sum_{\mathbf{r} \in\mathcal{M}_{r}}\left\|\hat{\mathbf{N}}(\mathbf{r})-\bar{\mathbf{N}}( \mathbf{r})\right\|_{1} \tag{8}\] \[+\left\|1-\hat{\mathbf{N}}(\mathbf{r})^{T}\bar{\mathbf{N}}( \mathbf{r})\right\|_{1}\]
where \(\mathcal{M}_{r}\) denotes the ray mask. We will describe the details of \(\mathcal{M}_{r}\) in Section III-C.
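The two constraints can be written compactly as below; this is an illustrative sketch (not the authors' code), assuming the rendered and prior quantities are batched per ray as torch tensors and \(\mathcal{M}_{r}\) is a boolean mask:

```python
def color_loss(c_pred, c_gt):
    """Eq. (6): L1 difference between rendered and observed colors, averaged over rays."""
    return (c_pred - c_gt).abs().sum(dim=-1).mean()

def normal_loss(n_pred, n_prior, mask):
    """Eq. (8): L1 + angular terms, restricted to rays selected by the mask M_r."""
    n_pred, n_prior = n_pred[mask], n_prior[mask]
    l1 = (n_pred - n_prior).abs().sum(dim=-1)
    ang = (1.0 - (n_pred * n_prior).sum(dim=-1)).abs()
    return (l1 + ang).mean()
```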
#### Iii-B2 Geometric Consistency Constraint
The proposed geometric consistency is based on the principle that geometric properties of the surface, such as depth or normal, should be consistent among different viewpoints in unobstructed regions. We utilize these consistencies, visualized in Fig. 3, to constrain the optimization process.
Specifically, for each sampled ray \(\mathbf{r}\) passing through the current sampled pixel, we calculate the corresponding rendered depth \(\hat{D}(\mathbf{r})\) and normal \(\hat{\mathbf{N}}(\mathbf{r})\) (Eq. 7). Using render depth \(\hat{D}(\mathbf{r})\), we compute the target point \(\mathbf{x}_{t}\) (Eq. 9), which serves a similar purpose as the feature point in MVS but in 3D form. It is worth noting that this setting helps to avoid inaccuracies in feature extraction and matching errors. After that, we randomly generate a virtual viewpoint \(\mathbf{o}^{v}\). Based on target point \(\mathbf{x}_{t}\) and \(\mathbf{o}^{v}\), we can calculate the virtual ray's direction \(\mathbf{v}^{v}\). Consequently, we obtain a virtual ray \(\mathbf{r}_{v}\) originating from \(\mathbf{o}^{v}\) in direction \(\mathbf{v}^{v}\), and virtual sampled points \(\mathbf{x}_{i}^{v}=\mathbf{o}^{v}+t_{i}^{v}\mathbf{v}^{v}\), where \(t_{i}^{v}\geq 0\), positioned along this ray.
\[\mathbf{x}_{t}=\mathbf{o}+\hat{D}(\mathbf{r})\mathbf{v},\quad\mathbf{v}^{v}= \frac{\mathbf{x}_{t}-\mathbf{o}^{v}}{\left\|\mathbf{x}_{t}-\mathbf{o}^{v} \right\|_{2}} \tag{9}\]
Using the volume rendering framework, we render the depth \(\hat{D}(\mathbf{r}_{v})\) and normal \(\hat{\mathbf{N}}(\mathbf{r}_{v})\) of \(\mathbf{r}_{v}\). Due to the geometric consistency between the depth of both rays, we propose a novel optimization target:
\[\mathcal{L}_{gc}=\frac{1}{2|\mathcal{M}_{v}|}\sum_{\mathbf{r}_{v}\in\mathcal{M }_{v}}|\hat{D}(\mathbf{r}_{v})-\bar{D}(\mathbf{r}_{v})|^{2} \tag{10}\]
where \(\bar{D}(\mathbf{r}_{v})=\left\|\mathbf{x}_{t}-\mathbf{o}^{v}\right\|_{2}\), and \(\mathcal{M}_{v}\) denotes the mask for sample rays that are valid but fail the multi-view normal consistency check. The details of \(\mathcal{M}_{v}\) will be described in Section III-C.
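A sketch of Eqs. (9)-(10) follows (illustrative only; `render_depth_fn` is a hypothetical helper standing in for the volume renderer, and the virtual viewpoints are assumed to be given):

```python
import torch

def geometric_consistency(o, v, d_hat, o_virt, render_depth_fn, mask):
    """Eqs. (9)-(10): depth agreement between a sampled ray and its virtual ray.

    o, v:    (R, 3) ray origins and unit directions
    d_hat:   (R,)   rendered depth along the sampled rays
    o_virt:  (R, 3) randomly generated virtual viewpoints
    mask:    (R,)   boolean mask M_v
    """
    x_t = o + d_hat.unsqueeze(-1) * v                        # target point, Eq. (9)
    v_virt = torch.nn.functional.normalize(x_t - o_virt, dim=-1)
    d_bar = (x_t - o_virt).norm(dim=-1)                      # expected virtual depth
    d_virt = render_depth_fn(o_virt, v_virt)                 # rendered virtual depth
    diff = (d_virt - d_bar)[mask]
    return 0.5 * (diff ** 2).mean()
```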
#### Iii-B3 Photometric Consistency Constraint
Similar to the geometric consistency across views, the appearance of the scene also exhibits consistency. Due to changes in illumination or material properties, colors may appear different from various viewpoints. Inspired by [37], we decompose the rendered color of each sample point. Concretely, we leverage two color networks to predict view-dependent color \(\hat{\mathbf{c}}_{i}^{vd}\) and view-independent color \(\hat{\mathbf{c}}_{i}^{vi}\), as shown in Fig. 2. The final rendering color \(\hat{\mathbf{c}}_{i}\) is obtained by summing these two terms, as shown in Eq. 3.
The view-independent colors \(\hat{\mathbf{C}}_{vi}\) of two kinds of rays are accumulated (Eq. 4). We propose an additional photometric consistency:
\[\mathcal{L}_{pc}=\frac{1}{|\mathcal{M}_{r}|}\sum_{\mathbf{r}\in\mathcal{M}_{r }}\left\|\hat{\mathbf{C}}_{vi}(\mathbf{r})-\hat{\mathbf{C}}_{vi}(\mathbf{r}_{ v})\right\|_{1} \tag{11}\]
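In code form, Eq. (11) is a masked L1 term between the view-independent colors rendered along each ray pair (an illustrative sketch with the same batching assumptions as above):

```python
def photometric_consistency(c_vi, c_vi_virt, mask):
    """Eq. (11): view-independent colors rendered along a sampled ray and its
    paired virtual ray should agree on the rays selected by M_r."""
    return (c_vi - c_vi_virt).abs().sum(dim=-1)[mask].mean()
```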
### _Mask Scheme_
In this section, we will introduce our mask scheme applied in the second training stage and utilize the AND operation to combine masks into \(\mathcal{M}_{v}\) and \(\mathcal{M}_{r}\). It is worth noting that we present the valid rays as 1 in all of our masks.
#### Iii-C1 Sample Mask
To enforce multi-view consistency, a virtual viewpoint \(\mathbf{o}^{v}\) is randomly generated for each sampled ray. However, this setting may result in \(\mathbf{o}^{v}\) being positioned outside the scene or inside objects. To address this issue, we propose a sample mask \(\mathcal{M}_{s}\) to select valid virtual viewpoints. Specifically, we primarily utilize the SDF value in \(\mathbf{o}^{v}\). Our reconstruction starts with a sphere that encloses all the given camera poses. As the training progresses, this sphere gradually approaches our target. Consequently, the outside part can be considered as the interior of the object, which means that if \(\mathbf{o}^{v}\) is valid, we will get a positive corresponding SDF value \(\hat{s}(\mathbf{o}_{v})\). Our sample mask is as follows:
\[\mathcal{M}_{s}=\left\{\begin{array}{ll}1,&if\ \hat{s}(\mathbf{o}_{v})>0.\\ 0,&otherwise.\end{array}\right. \tag{12}\]
#### Iii-C2 Occlusion Mask
To address the problem of errors in depth consistency caused by occlusion along both rays, we propose an occlusion mask \(\mathcal{M}_{o}\). Following the sampling algorithm in [33], our sampled points are concentrated near the surfaces where the rays pass through. Hence, we can identify the presence of occlusion by analyzing the sign change in the SDF values associated with the sampling points along the ray.
\[\mathcal{M}_{o}^{s}=\left\{\begin{array}{ll}1,&if\ \left\|\text{diff}\left(\text{sgn}(\hat{ \mathbf{s}})\right)\right\|_{1}\leq 2.\\ 0,&otherwise.\end{array}\right.\]
Fig. 3: Illustration of Consistency Constraints
\[\mathcal{M}_{o}^{v}=\left\{\begin{array}{ll}1,&if\ \ \left\|\mathit{diff}(\mathit{ sgn}(\hat{\mathbf{s}}^{v}))\right\|_{1}\leq 2.\\ 0,&otherwise.\end{array}\right.\]
\[\mathcal{M}_{o}=\mathcal{M}_{o}^{s}\ \&\ \mathcal{M}_{o}^{v} \tag{13}\]
where \(\mathit{diff}(\cdot)\) computes the \(n\)-th forward difference along the given vector's dimension, and \(\mathit{sgn}(\cdot)\) is the sign function. \(\hat{\mathbf{s}}\) and \(\mathcal{M}_{o}^{s}\) denote the vector of SDF values along the sample ray and the occlusion mask of this ray. Similarly, \(\hat{\mathbf{s}}^{v}\) and \(\mathcal{M}_{o}^{v}\) represent the corresponding values for the virtual rays. Finally, the final occlusion mask \(\mathcal{M}_{o}\) is obtained through the AND operation between \(\mathcal{M}_{o}^{s}\) and \(\mathcal{M}_{o}^{v}\).
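The two masks can be sketched as follows (illustrative only; the SDF values at the virtual viewpoints and along both rays are assumed to be precomputed tensors):

```python
import torch

def sample_mask(sdf_at_virtual_origin):
    """Eq. (12): keep virtual viewpoints with positive SDF, i.e. outside the surface."""
    return sdf_at_virtual_origin > 0

def occlusion_mask(sdf_along_ray, sdf_along_virtual_ray):
    """Eq. (13): keep ray pairs whose SDF sign pattern shows at most one surface crossing.

    Both arguments have shape (R, N): SDF values of the N points sampled on each
    of the R sampled rays / virtual rays.
    """
    def at_most_one_crossing(s):
        return torch.diff(torch.sign(s), dim=-1).abs().sum(dim=-1) <= 2
    return at_most_one_crossing(sdf_along_ray) & at_most_one_crossing(sdf_along_virtual_ray)
```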
#### Iii-C3 Adaptive Check Mask
As described in Section III-B2, high-quality reconstruction conforms to geometric consistency in multi-views. Therefore, we utilize the consistency of the rendered normal in multi-views as an adaptive check. Specifically, we use the normal Cosine Similarity (Eq. 14) to compute the difference between the sample ray's render normal \(\hat{\mathbf{N}}(\mathbf{r})\) and the virtual ray's render normal \(\hat{\mathbf{N}}(\mathbf{r}_{v})\). We compare this difference to a certain threshold value \(\epsilon\). Rays with significant differences are identified by \(\mathcal{M}_{a}\):
\[cos(\hat{\mathbf{N}}(\mathbf{r}),\hat{\mathbf{N}}(\mathbf{r}_{v}))=\frac{ \hat{\mathbf{N}}(\mathbf{r})\cdot\hat{\mathbf{N}}(\mathbf{r}_{v})}{\left\| \hat{\mathbf{N}}(\mathbf{r})\right\|_{2}\left\|\hat{\mathbf{N}}(\mathbf{r}_{v })\right\|_{2}} \tag{14}\]
\[\mathcal{M}_{a}=\left\{\begin{array}{ll}1,&if\ cos(\hat{\mathbf{N}}(\mathbf{ r}),\hat{\mathbf{N}}(\mathbf{r}_{v}))<\epsilon.\\ 0,&otherwise.\end{array}\right. \tag{15}\]
#### Iii-C4 Mask integration
To better utilize the estimated normal cues and multi-view consistency, we organize the sample ray masks by AND operations:
\[\mathcal{M}_{v}=\mathcal{M}_{s}\ \&\ \mathcal{M}_{o}\ \&\ \mathcal{M}_{a} \tag{16}\] \[\mathcal{M}_{r}=\mathcal{M}_{s}\ \&\ \mathcal{M}_{o}\ \&\ (1-\mathcal{M}_{a})\]
Rays selected by \(\mathcal{M}_{v}\) have valid virtual viewpoints and no occlusion issues but fail to check in multi-view normal consistency. We put the geometric consistency constraint on these rays' training process. \(\mathcal{M}_{r}\) chooses rays that conform to the normal consistency check. In those rays, predicted normal cues contribute to the reconstruction and we continue to apply them. In addition, we incorporate photometric consistency to further improve the quality.
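A sketch of Eqs. (14)-(16); the threshold \(\epsilon\) and the boolean representation of the masks are assumptions of this illustration:

```python
import torch.nn.functional as F

def adaptive_check_mask(n_ray, n_virtual, eps):
    """Eqs. (14)-(15): flag rays whose rendered normals disagree across the two views."""
    return F.cosine_similarity(n_ray, n_virtual, dim=-1) < eps   # True = inconsistent

def integrate_masks(m_s, m_o, m_a):
    """Eq. (16): route each ray to one group of constraints."""
    m_v = m_s & m_o & m_a     # inconsistent rays -> geometric consistency (Eq. 10)
    m_r = m_s & m_o & ~m_a    # consistent rays   -> normal prior (Eq. 8) + photometric (Eq. 11)
    return m_v, m_r
```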
## IV Experiments
### _Implementation Detail_
We implement our method with PyTorch and perform the network training on one NVIDIA RTX 3090 GPU. The normal priors in our method are predicted by the Omnidata model [23]. Each batch consists of 1024 sampled rays, and we train the network for 200k iterations. We first optimize the model directly guided by normal priors and RGB images over 25k iterations. In the second stage, in addition to the color constraint, our model is trained under mask-guided normal and geometric consistency constraints until 75k iterations. After that, we add photometric consistency into our optimization target. Following the optimization, we discretize our implicit function into voxel grids with a resolution of 512 and extract the mesh using Marching Cubes [38].
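A hedged sketch of the final extraction step (not the authors' code): the SDF is evaluated on a dense grid and its zero level set is meshed with scikit-image's Marching Cubes. The bounding box, chunk size and the `GeometryNet` interface from the earlier sketch are assumptions, and the resolution defaults to 128 here for brevity rather than the 512 used in the paper.

```python
import torch
from skimage import measure

@torch.no_grad()
def extract_mesh(geometry_net, resolution=128, bound=1.0, chunk=65536):
    """Evaluate the SDF on a dense grid and mesh its zero level set with Marching Cubes."""
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1).reshape(-1, 3)
    sdf = torch.cat([geometry_net(p)[0] for p in grid.split(chunk)], dim=0)
    volume = sdf.reshape(resolution, resolution, resolution).cpu().numpy()
    spacing = (2 * bound / (resolution - 1),) * 3
    verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0, spacing=spacing)
    return verts - bound, faces, normals   # shift vertices back into [-bound, bound]^3
```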
### _Experimental Settings_
#### Iv-B1 Datasets
Since our method primarily focuses on indoor scenes, we conduct quantitative evaluations on the Replica dataset [39] and the ScanNet dataset [40]. Replica consists of high-quality reconstructions of various indoor spaces. Each scene in Replica offers clean dense geometry and high-resolution images captured from multiple viewpoints. ScanNet is an RGB-D video dataset that comprises over 1500 indoor scenes with 2.5 million views. It is annotated with ground-truth camera poses, surface reconstructions, and instance-level semantic segmentations.
#### Iv-B2 Baselines
We conduct a comparative analysis of our method with other methods. (1) COLMAP [5]: Traditional MVS reconstruction method, using screened Poisson Surface reconstruction (sPSR) to reconstruct mesh from point clouds. (2) NeuralRecon [14]: A learning-based TSDF fusion module. (3) MonoSDF(MLP version) [20]: Implicit method using predicted normal and depth priors directly. (4) NeuRIS [21]: Implicit method adaptive using normal priors.
#### Iv-B3 Evaluation Metrics
To evaluate the quality of scene representation, following [13, 20, 41], we mainly report _Accuracy_, _Completeness_, _Chamfer Distance_, _Precision_, _Recall_ and _F-score_ with the threshold of 0.05 meter. We further report _Normal Consistency_ measure [20] to better evaluate reconstructions under the synthetic dataset.
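For reference, the point-cloud metrics can be sketched as follows (illustrative only; the exact sampling protocol, and the convention of Chamfer-\(L_{1}\) as the mean of accuracy and completeness, follow common practice in the cited benchmarks and are assumptions here):

```python
from scipy.spatial import cKDTree

def reconstruction_metrics(pred_pts, gt_pts, thresh=0.05):
    """Accuracy/completeness/Chamfer-L1 and precision/recall/F-score for (N, 3) point arrays."""
    d_pred_to_gt = cKDTree(gt_pts).query(pred_pts)[0]   # accuracy distances
    d_gt_to_pred = cKDTree(pred_pts).query(gt_pts)[0]   # completeness distances
    acc, comp = d_pred_to_gt.mean(), d_gt_to_pred.mean()
    prec = (d_pred_to_gt < thresh).mean()
    recall = (d_gt_to_pred < thresh).mean()
    fscore = 2 * prec * recall / max(prec + recall, 1e-8)
    return {"Acc": acc, "Comp": comp, "Chamfer-L1": 0.5 * (acc + comp),
            "Prec": prec, "Recall": recall, "F-score": fscore}
```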
### _Results in Realistic Dataset_
We conducted experiments using the ScanNet dataset, which provides real-world data. We selected four scenarios and chose every 10th image from the original sets (about 2k-4k images). After the whole training process, we saved the reconstructed mesh and evaluated the final trained model. In Fig. 4, we compare our generated mesh with the original reconstructions from the baselines. It shows that ours has more geometric detail and fills in missing regions caused by inaccurate normal estimation. Additionally, we quantitatively evaluated our method compared to others in Table I. It can be seen that our method achieves more accurate results.
COLMAP [5] exhibits limitations in low-texture regions. NeuralRecon [14] relies on TSDF values for supervision and suffers from limitations in unseen scenarios. The performance of MonoSDF [20] is dependent on the quality of predicted geometric cues, which may lead to inaccuracy. NeuRIS [21] incorporates normal priors adaptively but is susceptible to noise in real-world datasets, resulting in artifacts in reconstructions.
### _Results in Synthetic Datasets_
We conducted a quantitative evaluation of five scenarios from the Replica dataset and averaged the results for comparison. It is worth noting that Replica is a synthetic dataset with little noise, so the predicted cues have minimal errors. To demonstrate the effectiveness of our approach in utilizing normal priors, we compare ours with a modified MonoSDF* that directly utilizes normal priors and images for supervision. We conducted a comparison in Table II, and it indicates that our method achieves more accurate results, even though the majority of the normal priors are already accurate.
### _Ablation Studies_
We conducted an ablation study to analyze the impact of consistency constraints and the adaptive mask in our method. To do this, we removed each setting from our framework and evaluated the results. We randomly selected three scenarios from the ScanNet and averaged the final evaluated metrics to perform the ablation experiment. The comprehensive results of the ablation study can be found in Table III.
#### Iv-E1 Effectiveness of Geometric Consistency
In regions that failed in normal consistency checks, we add geometric constraints into the training process to improve geometric detail reconstruction. In Table III, geometric constraints improved quantitative results in terms of F-score and Chamfer-\(L_{1}\) metrics, compared with directly utilizing normal priors. This demonstrates the effectiveness of the geometric consistency.
#### Iv-E2 Effectiveness of Photometric Consistency
We utilize photometric consistency for further improvement in areas that conform to normal consistency. Because the majority of normal priors are accurate, most of the optimization will be constrained by photometric consistency rather than geometric consistency. Therefore, this setting is more effective than only using geometric consistency, as shown in Table III.
#### Iv-E3 Effectiveness of Adaptive Mask
To enhance the effectiveness of constraints, we employ an adaptive mask for selection. As shown in Table III, applying all constraints without selection has limited improvement, due to the conflict between the inaccurate normal priors constraint and consistency constraints.
Table III shows that all of the settings contribute to enhancing reconstruction results, and lead to higher-quality reconstructions when employed in conjunction. Furthermore, to demonstrate the effectiveness of color decomposition, we conducted a comparative experiment on color. As shown in Table IV, color decomposition also contributes to reconstruction.
## V Conclusion
In this paper, we propose a 3D indoor scene reconstruction method using a two-stage training process. We decompose the color by view dependency and utilize two novel consistency constraints, in both geometry and photometry, to improve the reconstruction quality. Besides, we introduce an essential mask scheme to effectively select constraints in the training process. The experiments show that our approach recovers more geometric detail and achieves high-quality reconstruction.
Fig. 4: Qualitative comparison of indoor 3D surface reconstruction on the ScanNet dataset. | 2D画像からの3次元シーンの再構築は、 longstanding taskです。代わりに、フレームごとに深さを推定して3Dで融合するのではなく、近年では、神経Implicit surfaceという統一的な表現として3次元再構築を処理する研究が進んでいます。データ駆動の事前トレーニングされた幾何学的ヒントを備えるこれらの手法は、期待通りのパフォーマンスを示しています。しかし、避けられない不正確な事前推定は、通常は避けられないため、その結果として、特に幾何学的に複雑な領域では、再構築の質が低下する可能性があります。この論文では、2段階のトレーニングプロセスを提案し、視覚依存的なと視覚独立的な色の分離、そして、詳細な再構築性能を向上させるための2つの新しい一致の制約を組み込みます。さらに、この論文では、监督制約の選択を適応的に影響させるための重要なマスクスキームを導入し、自己教師化パラダイ |
2309.11176 | Effects of electrons on nuclear clock transition frequency in $^{229}$Th
ions | We perform calculations of the energy shift of the nuclear clock transition
frequency $^{229}$Th as a function of the number of electrons in Th ion.
We demonstrate that the dependence of the nuclear frequency on electron
configuration is significant. E.g., removing one electron from the atom leads
to relative shift of the nuclear frequency $\sim 10^{-7}$, which is twelve
orders of magnitude larger than expected relative uncertainty of the nuclear
clock transition frequency ($\sim 10^{-19}$). This leads to difference of the
nuclear clock frequencies in Th~IV, Th~III, Th~II and Th~I.
The relative change of the nuclear frequency between neutral Th and its bare
nucleus is 1\%. We also calculate the field shift constants for isotopic and
isomeric shifts of atomic electron transitions in Th ions. | V. A. Dzuba, V. V. Flambaum | 2023-09-20T09:52:14 | http://arxiv.org/abs/2309.11176v1 | # Effects of electrons on nuclear clock transition frequency in \({}^{229}\)Th ions
###### Abstract
We perform calculations of the energy shift of the nuclear clock transition frequency \({}^{229}\)Th as a function of the number of electrons in Th ion. We demonstrate that the dependence of the nuclear frequency on electron configuration is significant. E.g., removing one electron from the atom leads to relative shift of the nuclear frequency \(\sim 10^{-7}\), which is twelve orders of magnitude larger than expected relative uncertainty of the nuclear clock transition frequency (\(\sim 10^{-19}\)). This leads to difference of the nuclear clock frequencies in Th IV, Th III, Th II and Th I. The relative change of the nuclear frequency between neutral Th and its bare nucleus is 1%. We also calculate the field shift constants for isotopic and isomeric shifts of atomic electron transitions in Th ions.
The nucleus of the \({}^{229}\)Th isotope has the unique feature of a very low-energy excitation connected to the ground state by a magnetic dipole (M1) transition (see, e.g., Reviews [1; 2] and references therein). The latest, most precise measurements give a value of 8.338(24) eV [3] (see also [4; 5; 6; 7; 8]) for the energy of this excitation, which is very small on the nuclear scale. This feature has attracted many researchers with plans to build a nuclear clock of exceptionally high accuracy - see, e.g., [9; 10]. The projected relative uncertainty is expected to reach \(10^{-19}\)[11]. In addition, there are strong arguments that this nuclear clock would be very sensitive to physics beyond the standard model, including space-time variation of the fundamental constants, violation of Lorentz invariance and the Einstein equivalence principle, and searches for scalar and axion dark matter fields [12; 13; 14; 15; 16; 17; 18; 19; 20]. There are plans to use Th ions of different ionisation degrees [11; 21; 22] and even a solid-state Th nuclear clock [23; 24; 25]. In this work we show that in all these systems the frequency of the nuclear clock will be different. This is due to the Coulomb interaction of the atomic electrons with the nucleus, which leads to a significant electronic shift of the nuclear transition frequency. There is also a smaller shift due to the magnetic interaction.
This electronic shift depends on electron configuration and it is different in different systems, like Th IV, Th III, Th II and Th I, leading to different nuclear frequencies. This shift for electronic state \(a\) is given by
\[\Delta E_{a}=F_{a}\delta\langle r^{2}\rangle, \tag{1}\]
where \(F_{a}\) is the field shift constant of state \(a\) which can be obtained from atomic calculations; \(\delta\langle r^{2}\rangle\) is the change of the nuclear root-mean square radius between the excited and ground nuclear states. The most accurate value for \(\delta\langle r^{2}\rangle\) was recently derived in Ref. [22], \({}^{229m,229}\delta\langle r^{2}\rangle=0.0105(13)\) fm\({}^{2}\). This enables us to determine the electronic shift of nuclear transition frequency for different thorium systems by calculating the field shift constants \(F_{a}\) and using (1). For example, difference of the nuclear frequencies between Th III and Th IV is given by
\[\Delta\omega_{N}=(F_{a}(\mbox{Th III})-F_{a}(\mbox{Th IV}))\delta\langle r^{2}\rangle, \tag{2}\]
State \(a\) in this case is the ground electronic state of the ion.
Note that these field shift constants \(F\) also appear in the calculations of the isotopic and isomeric field shifts of electronic transition frequencies. The difference is that for the isotopic and isomeric shifts we need the difference of \(F\) between the final state \(b\) and the initial state \(a\) of the electronic transition. The nuclear state does not change in this electronic transition. For the isotope shift it is usually the ground nuclear state. For the isomeric shift it is the isomeric (excited) state or the ground state of the same nucleus. The isotopic and isomeric field shifts of the electronic transition frequency are given by
\[\Delta\omega_{ab}=(F_{b}-F_{a})\delta\langle r^{2}\rangle, \tag{3}\]
Numerical values of \(\Delta\omega_{N}\) and \(\Delta\omega_{ab}\) can be calculated using values of the constants \(F\) for different electron states in Th IV, Th III, Th II and Th I presented in the Table I. Note that we do not include a contribution of core electrons which cancels out in the difference of the values of \(F\). For the isomeric shifts one may use \({}^{229m,229}\delta\langle r^{2}\rangle=0.0105(13)\) fm\({}^{2}\) measured in Ref. [22].
We use a combination of the single-double coupled-cluster and configuration interaction methods (SD+CI, [26]) and the random-phase approximation (RPA) method to perform the calculations. The SD+CI method gives us the wave functions, while the RPA method gives an effective operator of the field shift. The corresponding RPA equations have the form (see, e.g., [27])
\[(\hat{H}^{\rm HF}-\epsilon_{c})\delta\psi_{c}=-(\hat{F}+\delta V_{\rm core})\psi_{c}, \tag{4}\]
where \(H^{\rm HF}\) is the relativistic Hartree-Fock operator for the atomic core, the index \(c\) enumerates single-electron states in the core, \(\psi_{c}\) and \(\delta\psi_{c}\) are the corresponding single-electron functions and their corrections due to the field shift operator \(\hat{F}\), and \(\delta V_{\rm core}\) is the change of the self-consistent Hartree-Fock potential due to the change in all core functions. Solving Eqs. (4) self-consistently allows us to determine \(\delta V_{\rm core}\). Note that the core is the same for Th IV, Th III, Th II and Th I. Therefore the SD+CI and RPA equations need to be solved only once. Then the field shift constant is given by
\[F_{a}=\langle a|\hat{F}+\delta V_{\rm core}|a\rangle. \tag{5}\]
We use hat to distinguish between the field shift constant \(F\) and the field shift operator \(\hat{F}=\delta V_{\rm nuc}/\delta\langle r^{2}\rangle\). The wave function \(|a\rangle\) in (5) is the many-electron wave function for valence electrons found in the SD+CI calculations. It has one, two, three or four valence electrons.
The results of the calculations are presented in Table 1. We present energy levels and field shift constants for the ground and some excited states of Th IV, Th III, Th II, and Th I. We have chosen low-energy excited states and also some other states of Th III and Th I for which other calculations and experimental data on isotope shift are available [22]. The values of the field shift constants are compared with earlier calculations in Ref. [22].
The difference of the field shift constants between our calculations and the calculations in Ref. [22] is a few per cent. This difference may be used as an accuracy estimate since the calculations have been done by different methods. The largest difference is for the ground state of Th II, which is 10%. However, our number leads to more consistent results for values of \(\delta\langle r^{2}\rangle\) extracted from the isotope shift measurements in the ions Th II and Th III. Indeed, using our numbers, \(F\)=49.6 GHz/fm\({}^{2}\) for the ground state and \(F\)=-29.1 GHz/fm\({}^{2}\) for the state at \(E\)=17122 cm\({}^{-1}\), to extract the difference in root-mean-square radii \(\langle r^{2}\rangle^{232,229}\) from the isotope shift data [22] leads to the value \(\delta\langle r^{2}\rangle^{232,229}=0.321(32)\) fm\({}^{2}\) (we assume 10% uncertainty for the values of \(F\)), which is closer to the data extracted from four transitions in Th III (0.315(32), 0.312(42), 0.338(44), 0.322(53), see Table 1 in [22]). When all five numbers are taken into account, four numbers for Th III from Ref. [22] and our number for Th II, 0.321(32), the final result is \(\delta\langle r^{2}\rangle^{232,229}=0.320(15)\) fm\({}^{2}\) (the final value of [22] is \(\delta\langle r^{2}\rangle^{232,229}=0.299(15)\) fm\({}^{2}\)). Our result is in better agreement with the latest, most accurate literature value \(\delta\langle r^{2}\rangle^{232,229}\)=0.334(8) fm\({}^{2}\) presented in Ref. [29]. The new value of \(\delta\langle r^{2}\rangle^{232,229}\) leads to a slightly different value of \(\delta\langle r^{2}\rangle^{229m,229}\). Using the ratio of the isomeric and isotopic shifts from Ref. [10] we get \(\delta\langle r^{2}\rangle^{229m,229}=0.0112(13)\) fm\({}^{2}\). It is 7% larger but agrees within error bars with the value \(\delta\langle r^{2}\rangle^{229m,229}=0.0105(13)\) fm\({}^{2}\) presented in [22]. We are going to use our new number in further analysis.
It is instructive to explain why the field shift constants \(F\) have different signs for different electron states. The \(s_{1/2}\) and \(p_{1/2}\) orbitals penetrate the nucleus and are highly sensitive to the nuclear radius (we remind the reader that the lower component of the Dirac spinor of the relativistic \(p_{1/2}\) orbital has the angular quantum numbers of an \(s_{1/2}\) orbital). An increase of the nuclear radius leads to a decrease of the attraction to the nucleus; therefore, the \(s_{1/2}\) and \(p_{1/2}\) energies move up and the constant \(F\) is positive. Higher orbitals \(p_{3/2}\), \(d\) and \(f\) do not penetrate the nucleus, so the direct term \(\hat{F}\) in Eq. (5) is negligible. The effect comes from the correction to the electron core potential \(\delta V_{\rm core}\), which is dominated by the Coulomb field of the \(s_{1/2}\) electrons. An increase of the nuclear radius makes the attraction to the nucleus weaker, increases the radii of the \(s_{1/2}\) orbitals and makes a negative correction \(\delta V_{\rm core}\) to the core electron Coulomb potential. This is why \(F\) for \(p_{3/2}\), \(d\) and \(f\) electrons is negative. We may also explain this sign from the other end. Adding a valence \(p_{3/2}\), \(d\) or \(f\) electron increases the positive Coulomb energy of the electron repulsion. As a result, the \(s_{1/2}\) electron energies and distances from the nucleus increase and their sensitivity to the change of the nuclear radius decreases. Thus, the effect of the higher-wave valence electron is negative.
Using the field shift constants for the ground states of each ion from Table 1 (we use our numbers for consistency), the value \(\delta\langle r^{2}\rangle^{229m,229}=0.0112(13)\) fm\({}^{2}\) (see above) and formula similar to (2) we obtain the differences between nuclear frequencies in different thorium ions. The results are presented in Table 2. We see that the difference is huge. It exceeds the projected relative uncertainty of the nuclear clocks by many orders of magnitude. It is worth noting that the shift does not contribute to the uncertainty budget. It only means that the frequency of the nuclear transition is different in different thorium systems.
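For orientation, a back-of-the-envelope evaluation of Eq. (2), added here purely as an illustration and not a value taken from Table 2, using the ground-state constants of Th III and Th IV from Table 1 and \(\delta\langle r^{2}\rangle^{229m,229}=0.0112\) fm\({}^{2}\) gives

\[\Delta\omega_{N}\approx\left[F_{a}(\mbox{Th III})-F_{a}(\mbox{Th IV})\right]\delta\langle r^{2}\rangle^{229m,229}=(-68.0+55.0)\ \mbox{GHz/fm}^{2}\times 0.0112\ \mbox{fm}^{2}\approx-0.15\ \mbox{GHz},\]

i.e. a fractional shift of order \(10^{-7}\) of the \(\approx 2\times 10^{6}\) GHz (8.338 eV) nuclear transition frequency, consistent with the scale quoted in the abstract.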
It is interesting to determine the nuclear frequency difference between neutral (or nearly neutral) \({}^{229}\)Th and bare \({}^{229}\)Th nucleus. This difference is strongly dominated by contributions from \(1s\) electrons. Using the RPA calculation (4) we get \(F(1s)=8.23\times 10^{8}\) MHz/fm\({}^{2}\). The
\begin{table}
\begin{tabular}{l l l c c} Atom & State & Expt. energy & \(F\) (GHz/fm\({}^{2}\)) \\ or ion & & & (cm\({}^{-1}\)) [28] & Present & Ref.[22] \\ \hline Th IV & \(5f\) & \({}^{2}\)F\({}^{0/2}_{5}\) & 0 & -55.0 & \\ & \(5f\) & \({}^{2}\)F\({}^{0/2}_{7/2}\) & 4325 & -53.0 & \\ & \(6d\) & \({}^{2}\)D\({}_{3/2}\) & 9193 & -23.3 & \\ & \(6d\) & \({}^{2}\)D\({}_{5/2}\) & 14586 & -20.5 & \\ & \(7s\) & \({}^{2}\)S\({}_{1/2}\) & 23130 & 92.1 & \\ & \(7p\) & \({}^{2}\)P\({}^{0}_{1/2}\) & 60239 & 2.7 & \\ & \(7p\) & \({}^{2}\)P\({}^{0}_{3/2}\) & 73055 & -5.3 & \\ Th III & \(5f\)6d & \({}^{3}\)H\({}^{0}_{4}\) & 0 & -68.0 & -68.7 \\ & \(6d\)\({}^{2}\) & F\({}_{2}\) & 63 & -39.9 & -36.6 \\ & \(5f^{2}\) & H\({}_{4}\) & 15148 & -83.3 & -89.5 \\ & \(5f\)\(6d\) & \({}^{1}\)P\({}^{0}_{1}\) & 20711 & -62.2 & -63.6 \\ & \(5f^{2}\) & \({}^{3}\)F\({}_{4}\) & 21784 & -86.5 & -85.5 \\ & \(5f^{2}\) & \({}^{3}\)P\({}_{0}\) & 29300 & -82.2 & -84.1 \\ Th II & \(6d^{2}\)\(7s\) & \({}^{2}\)D\({}_{3/2}\) & 0 & 49.6 & 54.6 \\ & \(5f\)\(6d\)\({}^{2}\) & \({}^{*}\)\({}_{3/2}\) & & -65.0 & \\ & \(5f\)\(6d\)\({}^{2}\) & \({}^{*}\)\({}_{3/2}\) & 15145 & -45.8 & \\ & \(5f\)\(6d\)\(7s\) & \({}^{*}\)\({}_{3/2}\) & 15711 & -36.9 & \\ & \(5f\)\(6d\)\(7s\) & \({}^{*}\)\({}_{3/2}\) & 17122 & -29.1 & -31.6 \\ & \(5f\)\(6d\)\(7s\) & \({}^{2}\)F\({}_{5/2}\) & 12472 & -18.3 & \\ & \(5f\)\(6d\)\(7s\) & \({}^{*}\)\({}_{5/2}\) & & -36.3 & \\ & \(5f\)\(6d\)\(7s\) & \({}^{4}\)D\({}^{*}_{5/2}\) & 14545 & -63.9 & \\ & \(5f\)\(6d\)\(7s\) & \({}^{*}\)\({}_{5/2}\) & 16033 & -46.8 & \\ Th I & \(6d^{2}\)\(7s^{2}\) & \({}^{3}\)F\({}_{2}\) & 0 & 58.6 & \\ \end{tabular}
\end{table}
Table 1: Field shift constant \(F\) for the ground and some excited states of Th IV, Th III, Th II, and Th I.
total energy shift caused by the two \(1s\) electrons is \(1.73\times 10^{7}\) MHz; the total shift from all core electrons is \(2.07\times 10^{7}\) MHz \(=8.57\cdot 10^{-2}\) eV, which is \(\sim 1\%\) of the nuclear frequency.
An electronic correction to the nuclear frequency also comes from the magnetic interaction between the electrons and the nucleus. The first order gives the ordinary magnetic hyperfine splitting of the transition frequencies. The magnetic shift is given by the second-order magnetic dipole hyperfine correction to the energy
\[\delta E_{g}^{\rm hfs}=\sum_{n}\frac{\langle g|\hat{H}^{\rm hfs}|n\rangle^{2}} {E_{g}-E_{n}} \tag{6}\]
Here the index \(g\) stands for the ground electronic state and \(\hat{H}^{\rm hfs}\) is the magnetic dipole hyperfine structure operator. Values of \(\delta E_{g}^{\rm hfs}\) are different for the ground and isomeric nuclear states since their magnetic moments and spins are different. Magnetic moment values can be found in Ref. [10]. In addition, there is a second-order contribution from the mixing of the ground and isomeric nuclear states by the magnetic field of the electrons. Preliminary estimates show that the second-order magnetic shift is significantly smaller than the electronic shift considered in the present work. A more detailed analysis might be a subject of further work.
###### Acknowledgements.
This work was supported by the Australian Research Council Grants No. DP230101058 and DP200100150.
| We calculate the energy shift of the nuclear clock transition frequency in \(^{229}\)Th
as a function of the number of electrons in the Th ion. The dependence of the nuclear
frequency on the electron configuration is significant: for example, removing one electron
from the atom produces a relative shift of the nuclear frequency of about \(10^{-7}\),
twelve orders of magnitude larger than the expected relative uncertainty of the nuclear
clock transition frequency (\(\sim 10^{-19}\)). This leads to different nuclear clock
frequencies in Th IV, Th III, Th II and Th I. The relative change of the nuclear frequency
between neutral Th and its bare nucleus is 1%. We also calculate the field shift constants
for the isotopic and isomeric shifts of atomic electron transitions in Th ions.
2309.05103 | AGent: A Novel Pipeline for Automatically Creating Unanswerable
Questions | The development of large high-quality datasets and high-performing models
have led to significant advancements in the domain of Extractive Question
Answering (EQA). This progress has sparked considerable interest in exploring
unanswerable questions within the EQA domain. Training EQA models with
unanswerable questions helps them avoid extracting misleading or incorrect
answers for queries that lack valid responses. However, manually annotating
unanswerable questions is labor-intensive. To address this, we propose AGent, a
novel pipeline that automatically creates new unanswerable questions by
re-matching a question with a context that lacks the necessary information for
a correct answer. In this paper, we demonstrate the usefulness of this AGent
pipeline by creating two sets of unanswerable questions from answerable
questions in SQuAD and HotpotQA. These created question sets exhibit low error
rates. Additionally, models fine-tuned on these questions show comparable
performance with those fine-tuned on the SQuAD 2.0 dataset on multiple EQA
benchmarks. | Son Quoc Tran, Gia-Huy Do, Phong Nguyen-Thuan Do, Matt Kretchmar, Xinya Du | 2023-09-10T18:13:11 | http://arxiv.org/abs/2309.05103v1 | # _AGent_: A Novel Pipeline for Automatically Creating Unanswerable Questions
###### Abstract
The development of large high-quality datasets and high-performing models have led to significant advancements in the domain of Extractive Question Answering (EQA). This progress has sparked considerable interest in exploring unanswerable questions within the EQA domain. Training EQA models with unanswerable questions helps them avoid extracting misleading or incorrect answers for queries that lack valid responses. However, manually annotating unanswerable questions is labor-intensive. To address this, we propose _AGent_, a novel pipeline that automatically creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer. In this paper, we demonstrate the usefulness of this _AGent_ pipeline by creating two sets of unanswerable questions from answerable questions in SQuAD and HotpotQA. These created question sets exhibit low error rates. Additionally, models fine-tuned on these questions show comparable performance with those fine-tuned on the SQuAD 2.0 dataset on multiple EQA benchmarks. 1
Footnote 1: Our code is publicly available at [https://github.com/sonqt/agent-unanswerable](https://github.com/sonqt/agent-unanswerable).
## 1 Introduction
Extractive Question Answering (EQA) is an important task of Machine Reading Comprehension (MRC), which has emerged as a prominent area of research in natural language understanding. Research in EQA has made significant gains thanks to the availability of many challenging, diverse, and large-scale datasets Rajpurkar et al. (2016, 2018); Kwiatkowski et al. (2019); Yang et al. (2018); Trivedi et al. (2022). Moreover, recent advancements in datasets also lead to the development of multiple systems in EQA Huang et al. (2018); Zaheer et al. (2020) that have achieved remarkable performance, approaching or even surpassing human-level performance across various benchmark datasets.
Matching the rapid progress in EQA, the sub-field of unanswerable questions has emerged as a new research area. Unanswerable questions are those that cannot be answered based only on the information provided in the corresponding context. Unanswerable questions are a critical resource in training EQA models because they allow the models to learn how to avoid extracting misleading answers when confronted with queries that lack valid responses. Incorporating unanswerable questions in the training set of EQA models enhances the overall reliability of these models for real-world applications Tran et al. (2023).
Nevertheless, the manual annotation of unanswerable questions in EQA tasks can be prohibitively labor-intensive. Consequently, we
Figure 1: Examples of an answerable question \(Q1\) from SQuAD 1.1, and two unanswerable questions \(Q2\) from SQuAD 2.0 and \(Q3\) from SQuAD _AGent_. In SQuAD 2.0, crowdworkers create unanswerable questions by replacing “large numbers” with “decimal digits.” On the other hand, our automated _AGent_ pipeline matches the original question \(Q1\), now \(Q3\), with a new context \(C3\). The pair \(C3-Q3\) is unanswerable as context \(C3\) does not indicate whether the **trial division** can **conveniently** test the primality of **large** numbers.
present a novel pipeline to automate the creation of high-quality unanswerable questions given a dataset comprising answerable questions. This pipeline uses a retriever to re-match questions with paragraphs that lack the necessary information to answer them adequately. Additionally, it incorporates the concept of adversarial filtering for identifying challenging unanswerable questions. The key contributions of our work can be summarized as follows:
1. We propose _AGent_ which is a novel pipeline for automatically creating unanswerable questions. In order to prove the utility of _AGent_, we apply our pipeline on two datasets with different characteristics, SQuAD and HotpotQA, to create two different sets of unanswerable questions. In our study, we show that the two unanswerable question sets created using _AGent_ pipeline exhibit a low error rate.
2. Our experiments show that the two unanswerable question sets created using our proposed pipeline are challenging for models fine-tuned using human annotated unanswerable questions from SQuAD 2.0. Furthermore, our experiments show that models fine-tuned using our automatically created unanswerable questions show comparable performance to those fine-tuned on the SQuAD 2.0 dataset on various EQA benchmarks, such as SQuAD 1.1, HotpotQA, and Natural Questions.
## 2 Related Work
### Unanswerable Questions
In the early research on unanswerable questions, Levy et al. (2017) re-defined the BiDAF model (Seo et al., 2017) to allow it to output whether the given question is unanswerable. Their primary objective was to utilize MRC as indirect supervision for relation extraction in zero-shot scenarios.
Subsequently, Rajpurkar et al. (2018) introduced a crowdsourcing process to annotate unanswerable questions, resulting in the creation of the SQuAD 2.0 dataset. This dataset later inspired similar works in other languages, such as French (Heinrich et al., 2022) and Vietnamese (Nguyen et al., 2022). However, recent research has indicated that models trained on SQuAD 2.0 exhibit poor performance on out-of-domain samples (Sulem et al., 2021).
Furthermore, apart from the adversarially-crafted unanswerable questions introduced by Rajpurkar et al. (2018), Natural Question (Kwiatkowski et al., 2019) and Tydi QA (Clark et al., 2020) present more naturally constructed unanswerable questions. While recent language models surpass human performances on adversarial unanswerable questions of SQuAD 2.0, natural unanswerable questions in Natural Question and Tidy QA remain a challenging task (Asai and Choi, 2021).
In a prior work, Zhu et al. (2019) introduce a pair-to-sequence model for generating unanswerable questions. However, this model requires a substantial number of high-quality unanswerable questions from SQuAD 2.0 during the training phase to generate its own high-quality unanswerable questions. Therefore, the model introduced by Zhu et al. (2019) cannot be applied on the HotpotQA dataset for generating high-quality unanswerable questions. In contrast, although our _AGent_ pipeline cannot generate questions from scratch, it distinguishes itself by its ability to create high-quality unanswerable questions without any preexisting sets of unanswerable questions.
### Robustness of MRC Models
The evaluation of Machine Reading Comprehension (MRC) model robustness typically involves assessing their performance against adversarial attacks and distribution shifts. The research on adversarial attacks in MRC encompasses various forms of perturbations (Si et al., 2021). These attacks include replacing words with WordNet antonyms (Jia and Liang, 2017), replacing words with words having similar representations in vector space (Jia and Liang, 2017), substituting entity names with other names (Yan et al., 2022), paraphrasing question (Gan and Ng, 2019; Ribeiro et al., 2018), or injecting distractors into sentences (Jia and Liang, 2017; Zhou et al., 2020). Recently, multiple innovative studies have focused on enhancing the robustness of MRC models against adversarial attacks (Chen et al., 2022; Zhang et al., 2023; Tran et al., 2023).
On the other hand, in the research line of robustness under distribution shift, researchers study the robustness of models in out-of-domains settings using test datasets different from training dataset (Miller et al., 2020; Fisch et al., 2019; Sen and Saffari, 2020).
## 3 Tasks and Models
In the task of EQA, models are trained to extract a list of prospective outputs (answers), each accompanied by a probability (output of softmax function) that represents the machine's confidence in the answer's accuracy. When the dataset includes unanswerable questions, a valid response in the extracted list can be an "empty" response indicating that the question is unanswerable. The evaluation metric commonly used to assess the performance of the EQA system is the F1-score, which measures the average overlap between the model's predictions and the correct answers (gold answers) in the dataset. For more detailed information, please refer to the work by Rajpurkar et al. (2016).
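For concreteness, the token-overlap F1 can be sketched as follows. This simplified version omits the answer normalization (lowercasing, stripping punctuation and articles) performed by the official SQuAD evaluation script and is meant only to make the metric explicit.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Simplified SQuAD-style F1: token overlap between a prediction and a gold answer."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    # Empty predictions ("unanswerable") only match empty gold answers.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the lost city of z", "lost city of z"))  # ~0.89
```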
### Datasets
In our work, we utilize three datasets: SQuAD Rajpurkar et al. (2016, 2018), HotpotQA Yang et al. (2018), and Natural Questions Kwiatkowski et al. (2019). In the SQuAD dataset, each question is associated with a short paragraph from Wikipedia. HotpotQA is a dataset designed for multi-hop reasoning question answering where each question requires reasoning over multiple supporting paragraphs. Additionally, the Natural Questions dataset comprises real queries from the Google search engine, and each question is associated with a Wikipedia page.
### Models
We employ three transformer-based models in our work: BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), and SpanBERT Joshi et al. (2020). **BERT** is considered the pioneering application of the Transformer model architecture Vaswani et al. (2017). BERT is trained on a combination of English Wikipedia and BookCorpus using masked language modeling and next-sentence prediction as pre-training tasks. Later, a replication study by Liu et al. (2019) found that BERT was significantly under-trained. Liu et al. (2019) built **RoBERTa** from BERT by extending the pre-training time and increasing the size of the pre-training data. Joshi et al. (2020) developed **SpanBERT** by enhancing BERT's ability to represent and predict text spans by masking random contiguous spans and replacing NSP with a span boundary objective.
Each of these three models has two versions: base and large. Our study uses all six of these models.
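A minimal sketch of how the six backbones could be instantiated for span-extraction QA with the HuggingFace `transformers` library is given below; the checkpoint identifiers are our assumption of reasonable Hub names, not something specified in the paper.

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Assumed Hub checkpoints for the base and large versions of BERT, RoBERTa and SpanBERT.
CHECKPOINTS = [
    "bert-base-uncased", "bert-large-uncased",
    "roberta-base", "roberta-large",
    "SpanBERT/spanbert-base-cased", "SpanBERT/spanbert-large-cased",
]

def load_qa_model(name):
    # AutoModelForQuestionAnswering puts a span-prediction head (start/end logits)
    # on top of the pre-trained encoder; it still needs fine-tuning on EQA data.
    return AutoTokenizer.from_pretrained(name), AutoModelForQuestionAnswering.from_pretrained(name)

models = {name: load_qa_model(name) for name in CHECKPOINTS}
```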
## 4 Automatically Creating Unanswerable Questions
### Criteria
In order to guarantee the quality of our automatically created unanswerable questions, we design our pipeline to adhere to the following criteria:
**Relevance.** The created unanswerable questions should be closely related to the subject matter discussed in the corresponding paragraph. This criterion ensures that the unanswerability of the question is not easily recognizable by simple heuristic methods and that the created question "makes sense" regarding the provided context.
**Plausibility.** Our pipeline also ensures that the created unanswerable questions have at least one plausible answer. For instance, when considering a question like "What is the name of one algorithm useful for conveniently testing the primality of large numbers?", there should exist a plausible answer in the form of the name of an algorithm in Mathematics that is closely linked to the primality within the corresponding context. See Figure 1 for an example showcasing an unanswerable question with strong plausible answer(s).
**Fidelity.** Our pipeline adds an additional step to ensure a minimal rate of error or noise in the set of automatically created unanswerable questions. It is important that the newly created questions are genuinely unanswerable. This quality control measure bolsters the reliability of the pipeline. The effectiveness of this step is verified in the study in Section 4.3.
### _AGent_ Pipeline
Figure 2 provides a summary of all the steps in the _AGent_ pipeline for automatically creating unanswerable questions corresponding to each dataset of answerable questions. Our proposed _AGent_ pipeline consists of three steps which align with the three criteria discussed in Section 4.1:
**Step 1**
**Matching questions with new contexts.** In the EQA task, the input consists of a question and a corresponding context. By matching the question with a new context that differs from the original context, we can create a new question-context pair that is highly likely to be unanswerable. This step prioritizes the criterion of **relevance**. We employ the term frequency-inverse document frequency (TF-IDF) method to retrieve the \(k\) most relevant
paragraphs from the large corpus containing all contexts from the original dataset (while obviously discarding the context that was originally matched with this question). The outcome of this step is a set of **unanswerable candidates**. It's important to note that the unanswerable candidates created in this step may includes some answerable questions, and these answerable questions will be filtered out in step 3 of the pipeline.
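This re-matching step can be sketched with scikit-learn's TF-IDF implementation as below; the preprocessing choices and the default value of `k` are illustrative assumptions, since the text above only specifies retrieving the \(k\) most relevant contexts and discarding the originally paired one.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_new_contexts(question, original_context, corpus, k=5):
    """Return the k corpus contexts most similar to the question, excluding
    the context the question was originally paired with."""
    vectorizer = TfidfVectorizer(stop_words="english")
    context_matrix = vectorizer.fit_transform(corpus)      # (num_contexts, vocab)
    question_vec = vectorizer.transform([question])        # (1, vocab)
    scores = cosine_similarity(question_vec, context_matrix)[0]
    ranked = np.argsort(scores)[::-1]
    candidates = [corpus[i] for i in ranked if corpus[i] != original_context]
    return candidates[:k]
```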
**Step 2**
**Identifying hard unanswerable questions.** In this step, we give priority to both the **relevance** and **plausibility** criteria. We aim to identify unanswerable questions with a highly relevant corresponding context and at least one strong plausible answer. To achieve this, we leverage the concept of adversarial filtering where the adversarial model(s) is applied to filter out easy examples (Yang et al., 2018; Zellers et al., 2018; Zhang et al., 2018).
We first fine-tune six models using a dataset comprising answerable questions from the original dataset and randomly selected unanswerable candidates. We acknowledge that some unanswerable questions in this training set may be answerable. Nevertheless, the percentage of answerable questions among the unanswerable candidates is minimal and within an acceptable range (Appendix A.2). To ensure training integrity, we then exclude all unanswerable questions utilized for training these six models from the set of unanswerable candidates. Then, we employ the six fine-tuned models to evaluate the difficulty of each sample in the set of unanswerable candidates. If at least two of the six models predict that a given question is answerable, we consider it to be a challenging unanswerable question and include it in our set of **challenging unanswerable candidates**.
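The voting rule of this step can be sketched as follows; `predictions` is an assumed mapping from each of the six fine-tuned models to its predicted answer for a candidate, with an empty string standing for an "unanswerable" prediction.

```python
def is_challenging(predictions, min_votes=2):
    """Keep a candidate as 'challenging' when at least `min_votes` of the six
    models attempt a (non-empty) answer, i.e. predict it to be answerable."""
    num_attempting = sum(1 for answer in predictions.values() if answer.strip())
    return num_attempting >= min_votes

example = {"bert-base": "", "bert-large": "trial division",
           "roberta-base": "", "roberta-large": "",
           "spanbert-base": "primality test", "spanbert-large": ""}
print(is_challenging(example))  # True: two of the six models attempted an answer
```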
**Step 3**
**Filtering out answerable questions.** The set of challenging unanswerable questions consists of questions that at least two out of the six models predict as answerable. Consequently, there may be a considerable percentage of questions that are indeed answerable. Therefore, this specific step in our pipeline aims to ensure the **fidelity** of the _AGent_ pipeline, ensuring that all questions created by our pipeline are genuinely unanswerable. We leverage the predicted answers and confidence scores from
Figure 2: The _AGent_ pipeline for generating challenging high-quality unanswerable questions in Extractive Question Answering given a dataset with answerable questions. The six models used in this pipeline are the base and large versions of BERT, RoBERTa, and SpanBERT. In step 3 of the pipeline, the blue dots represent the calculated values (using formula discussed in §4.2) for unanswerable questions, while the red dots represent the calculated values for answerable questions. The threshold for discarding questions from the final extracted set of unanswerable questions is determined by finding the minimum value among all answerable questions. Any question with a calculated value greater than the threshold will not be included in our final extracted set.
the six deployed models in the previous step to achieve this. Subsequently, we devise a filtering model with four inputs: \(c_{a}\), representing the cumulative confidence scores of the models attempting to answer (or predicting as answerable); \(c_{u}\), representing the cumulative confidence scores of the models not providing an answer (or predicting as unanswerable); \(n_{a}\), denoting the number of models attempting to answer; and \(n_{u}\), indicating the number of models not providing an answer. The output of this filtering model is a value \(V(q)\) for each question \(q\). The filtering models must be developed independently for different datasets.
In order to determine the filtering threshold and develop the filtering model, we manually annotate \(200\) challenging unanswerable candidates from each dataset. The filtering threshold is established by identifying the minimum value \(V(q_{a})\) where \(q_{a}\) represents an answerable question from our annotated set. This approach ensures a precision of \(100\%\) in identifying unanswerable questions on the annotated 200 questions. The filtering model then acts to minimize the number of false positives (the number of unanswerable candidates that are answerable) at the expense of tossing out some candidate questions that are unanswerable. However, as the filtering model is applied to unseen challenging unanswerable candidates, the precision of the filtering model in this step would not be \(100\%\), as it is on the \(200\) manually annotated samples. Therefore, in the next section, we use human experts to evaluate the precision exhibited by the filtering model.
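A sketch of the thresholding logic is shown below. The functional form of \(V(q)\) is not spelled out here, so a simple stand-in score (total confidence of the models that attempt an answer minus total confidence of those that abstain) is used purely for illustration; only the rule of keeping candidates whose value does not exceed the minimum value over the manually annotated answerable questions follows the description above.

```python
def v_score(c_a, c_u, n_a, n_u):
    # Placeholder for the filtering model: higher values mean the candidate looks
    # more answerable to the six models. The real model also uses n_a and n_u.
    return c_a - c_u

def filter_candidates(candidates, annotated_answerable):
    """Both arguments are lists of (c_a, c_u, n_a, n_u) tuples; candidates whose
    score exceeds the minimum score of any annotated answerable question are discarded."""
    threshold = min(v_score(*q) for q in annotated_answerable)
    return [q for q in candidates if v_score(*q) <= threshold]
```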
Further details for the _AGent_ pipeline are outlined in Appendix A.
### Human Reviewing
This section presents our methodology for evaluating the data quality of unanswerable questions automatically created by _AGent_.
We use three experts to validate 100 random unanswerable questions from each development set of SQuAD _AGent_ and HotpotQA _AGent_. In order to prevent an overwhelming majority of unanswerable questions in our annotation set, which could potentially undermine the integrity of the annotation, we incorporate 20 manually annotated answerable questions during step 3 of the pipeline. Consequently, we provide a total of 120 questions to each expert for each set.
The process of expert evaluation involves two distinct phases. During the first phase, each of the three experts independently assesses whether a given question is answerable and provides the reasoning behind their annotation. In the second phase, all three experts are presented with the reasons provided by the other experts for any conflicting samples. They have the opportunity to review and potentially modify their final set of annotations based on the reasons from their peers.
We observe that the annotations provided by our three experts demonstrate exceptional quality. Table 1 presents the Fleiss' Kappa score [17] for our three experts after the completion of both phases, as well as the error rate of the _AGent_ development set. Notably, the Fleiss' Kappa score in phase 1 is remarkably high (\(0.76\) on SQuAD _AGent_, and \(0.83\) on HotpotQA _AGent_), suggesting that the annotations obtained through this process are reliable. Besides, after the second phase, all three experts agree that the \(20\) answerable questions we include in the annotation sets are indeed answerable.
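Fleiss' Kappa for the three annotators can be computed with standard tooling, for example via `statsmodels` as sketched below; the label matrix here is a toy example rather than our actual annotation data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# labels[i, j] = label of expert j for question i (0 = unanswerable, 1 = answerable).
labels = np.array([[0, 0, 0],
                   [0, 0, 1],
                   [1, 1, 1],
                   [0, 0, 0]])
counts, _ = aggregate_raters(labels)   # per-question counts for each category
print(fleiss_kappa(counts))
```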
As demonstrated in Table 1, the high-quality annotations provided by three experts indicate an exceptionally low error rate for the unanswerable questions created using _AGent_ (\(6\%\) for SQuAD and \(5\%\) for HotpotQA). For comparison, this error rate is slightly lower than that of SQuAD 2.0, a dataset annotated by humans.
## 5 Experiments and Analysis
We now shift our attention from the _AGent_ pipeline to examining the effectiveness of our _AGent_ questions in training and benchmarking EQA models.
### Training Sets
The models in our experiments are trained using SQuAD 2.0, SQuAD _AGent_, and HotpotQA _AGent_. It is important to note that the two _AGent_ datasets include all answerable questions from the original datasets together with the _AGent_ unanswerable questions.
\begin{table}
\begin{tabular}{c l l l} \hline \hline & & \multicolumn{1}{c}{**Phase**} & \multicolumn{1}{c}{**Phase**} \\ & & **1** & **2** \\ \hline
**SQuAD** & Fleiss’ Kappa & \(0.76\) & \(0.95\) \\ _AGent_ & Data Error & \(0.10\) & **0.06** \\ \hline
**HotpotQA** & Fleiss’ Kappa & \(0.83\) & \(0.97\) \\ _AGent_ & Data Error & \(0.09\) & **0.05** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Fleiss’ Kappa score and _AGent_ data error for the annotations collected from human experts after two distinct phases.
### Testing Sets
In our experiments, we use eight sets of EQA questions as summarized in Table 2. In addition to two sets of _AGent_ unanswerable questions, we also incorporate the following six types of questions.
**SQuAD.** We use all **answerable** questions from SQuAD 1.1. We use all **unanswerable** questions from SQuAD 2.0.
**HotpotQA.** In preprocessing **answerable** questions in HotpotQA, we adopt the same approach outlined in MRQA 2019 (Fisch et al., 2019) to convert each dataset to the standardized EQA format. Specifically, we include only two supporting paragraphs in our answerable questions and exclude distractor paragraphs.
In preprocessing **unanswerable** questions in HotpotQA, we randomly select two distractor paragraphs provided in the original HotpotQA dataset, which are then used as the context for the corresponding question.
**Natural Questions (NQ).** In preprocessing **answerable** questions in NQ, we again adopt the same approach outlined in MRQA 2019 to convert each dataset to the standardized EQA format. This format entails having a single context, limited in length. Specifically, we select examples with short answers as our answerable questions and use the corresponding long answer as the context.
For **unanswerable** questions in NQ, we select questions with no answer and utilize the entire Wikipedia page, which is the input of original task of NQ, as the corresponding context. However, in line with the data collection process of MRQA 2019, we truncate the Wikipedia page, limiting it to the first 800 tokens.
### Main Results
Table 2 presents the results of our experiments. Firstly, our findings demonstrate that unanswerable questions created by _AGent_ pose significant challenges for models fine-tuned on SQuAD 2.0, a dataset with human-annotated unanswerable questions. The average performance of the six models fine-tuned on SQuAD 2.0 and tested on SQuAD _AGent_ is \(49.38\); the average score for testing these models on HotpotQA _AGent_ data is \(58.98\). Notably, unanswerable questions from HotpotQA _AGent_ are considerably more challenging compared to their unanswerable counterparts from HotpotQA.
Secondly, models fine-tuned on two _AGent_ datasets exhibit comparable performance to models fine-tuned on SQuAD 2.0. On unanswerable questions from HotpotQA and NQ, models fine-tuned on _AGent_ datasets significantly outperform those fine-tuned on SQuAD 2.0. On answerable questions from SQuAD and HotpotQA, models fine-tuned on SQuAD _AGent_ also demonstrate significant improvement over those fine-tuned on SQuAD 2.0 (\(86.96-84.55\) on SQuAD and \(63.26-51.05\) on HotpotQA). This finding highlights the applicability of models fine-tuned on _AGent_ datasets to various question types.
However, on answerable questions from NQ and unanswerable questions from SQuAD 2.0, models fine-tuned on _AGent_ datasets exhibit lower performance than those fine-tuned on SQuAD 2.0. On the one hand, the lower performance of models fine-tuned on _AGent_ datasets on unanswerable questions from SQuAD 2.0 reflects an unfair comparison: models fine-tuned on _AGent_ datasets are tested on out-of-domain samples, whereas models fine-tuned on SQuAD 2.0 are tested on in-domain samples. In the next section, we provide a comprehensive explanation for the lower performance of models fine-tuned on _AGent_ datasets on NQ answerable questions.
\begin{table}
\begin{tabular}{|c|c c c|c c c|c c|} \hline
_Test_\(\rightarrow\) & \multicolumn{3}{c|}{**SQuAD**} & \multicolumn{3}{c|}{**HotpotQA**} & \multicolumn{2}{c|}{**Natural Questions**} \\
_Train_\(\downarrow\) & answerable & unanswerable & _AGent_ & answerable & unanswerable & _AGent_ & answerable & unanswerable \\ \hline
SQuAD & \(84.55\pm 3.43\) & \(\textbf{79.16}\pm 5.16\) & \(49.38\pm 5.21\) & \(51.05\pm 15.15\) & \(86.28\pm 2.68\) & \(58.98\pm 4.64\) & \(\textbf{44.30}\pm 6.36\) & \(60.55\pm 12.95\) \\ \hline
SQuAD _AGent_ & \(\textbf{86.96}\pm 1.86\) & \(29.63\pm 3.97\) & \(81.38\pm 4.52\) & \(63.26\pm 2.88\) & \(90.01\pm 4.20\) & \(50.61\pm 5.56\) & \(41.05\pm 6.81\) & \(78.66\pm 13.22\) \\ \hline
HotpotQA _AGent_ & \(59.06\pm 5.26\) & \(46.13\pm 4.46\) & \(\textbf{87.61}\pm 2.72\) & \(\textbf{77.75}\pm 1.92\) & \(\textbf{99.70}\pm 0.06\) & \(\textbf{95.94}\pm 2.13\) & \(24.11\pm 7.04\) & \(\textbf{84.20}\pm 11.37\) \\ \hline
\end{tabular}
\end{table}
Table 2: Performance of 6 models fine-tuned on SQuAD 2.0, SQuAD _AGent_, and HotpotQA _AGent_ datasets evaluated on SQuAD, HotpotQA, and Natural Questions. Each entry in the table is the mean and standard deviation of the F1 scores of the six MRC models. The left column indicates the dataset used to train the six MRC models. The top row indicates the dataset used to test the six MRC models. _AGent_ refers to the unanswerable questions generated using the _AGent_ pipeline. For a more detailed version of this table, refer to Table 8.
### Analysis on Natural Questions
To delve deeper into the underperformance of models fine-tuned on the _AGent_ datasets on answerable questions from NQ, we analyze two sets of answerable questions. The first set is \(100\) answerable questions that models fine-tuned on SQuAD _AGent_ predict as unanswerable; the second one is 100 answerable questions that models fine-tuned on SQuAD 2.0 predict as unanswerable. For the sake of simplicity, we limit our reporting in this section to the analysis of the RoBERTa-base models. Our analysis uncovers two potential issues that can arise when evaluating models with answerable questions from the NQ dataset. Table 3 summarizes our findings in this section.
Firstly, a considerable difference between the original NQ dataset and the NQ data used in the EQA task (following the prevailing approach in the research community) is the provided context. While the EQA task uses the long answer as the context (Fisch et al., 2019), NQ supplies an entire Wikipedia page as the context for a given question. This difference presents a potential problem of inadequate context for answering the question. For instance, in Table 3, we observe that the long answer associated with the question "Who dies in the lost city of z?" fails to mention "the lost city of z". Using the long answer as the context makes this question unanswerable due to the insufficient context provided. We find that most answerable questions predicted as unanswerable by models fine-tuned on SQuAD 2.0 and SQuAD _AGent_ belong to this specific question type (\(65\%\) and \(76\%\) respectively). This finding highlights the potential unreliability when comparing models using the NQ dataset in the same way as it is commonly done in multiple EQA studies. This analysis forms the basis for our decision not to employ our _AGent_ pipeline on the NQ dataset.
Secondly, the questions in the NQ dataset are sourced from real users who submitted information-seeking queries to the Google search engine under natural conditions. As a result, a small portion of these questions may inevitably contain typographical errors or misspellings. In our analysis, we observe that models fine-tuned on our _AGent_ training set tend to predict the questions of this type as unanswerable more frequently. Nevertheless, due to the relatively small proportion of questions with typographical errors in our randomly surveyed sets, we refrain from drawing a definitive conclusion at this point. Therefore, in the subsequent section, we will delve further into this matter by adopting an adversarial attack on the EQA task.
## 6 Robustness against Syntactic Variations
In this section, we apply the adversarial attack technique TextBugger to EQA.
### TextBugger
Our adversarial attack in this section is inspired by the TextBugger attack (Li et al., 2019). We use the black-box variant of TextBugger, which means that the attack algorithm does not have access to the gradients of the model. TextBugger generates attack samples that closely resemble the typographical errors commonly made by real users. We perform adversarial attacks on questions from the SQuAD 1.1 dataset.
Algorithm 1 in Appendix E provides the pseudocode outlining the process of generating attacked questions. Table 4 provides examples of how
\begin{table}
\begin{tabular}{l|c c} \hline \hline & **SQuAD** & **SQuAD** \\ & **2.0** & _AGent_ \\ \hline \multirow{4}{*}{\begin{tabular}{l} Insufficient \\ context for \\ question \\ \end{tabular} } & \begin{tabular}{l} Murray survives and, in front of the RGS trustees, accuses Fawcett of abandoning him in \\ the junge. Fawcett effects to resign from the society rather than apologize. World War I \\ breaks out in Europe, and Fawcett goes to France to fight. Manley dies in the trenches at \\ the Battle of the Sonme, and Fawcett is temporarily blinded in a chlorine gas attack. Jack, \\ Fawcett’s edket son \(-\) who had long accused Fawcett of abandoning the family \(-\) reconciles \\ with his father as he recovers. \\ \cline{2-3} & **Question**: who dies in the lost city of z? \\ \hline \multirow{3}{*}{\begin{tabular}{l} typographical \\ errors of key \\ words \\ \end{tabular} } & \begin{tabular}{l} Gimme Gimme has broadcast three series and 19 episodes in total. The first series \\ premiered on BBC Two on 8 January 1999 and lasted for six episodes, concluding on 12 \\ February 1999. [-] \\ **Question**: when did gim me gim me gim me start? \\ \hline \hline \end{tabular} & \multirow{3}{*}{
\begin{tabular}{l} 3 \\ 3 \\ 3 \\ 3 \\ \end{tabular} } \\ \end{tabular}
\end{table}
Table 3: Examples of two types of answerable questions in Natural Questions that can pose challenges for EQA models fine-tuned solely on unanswerable questions. We conduct a survey to measure the failure rates of RoBERTa models fine-tuned on both SQuAD 2.0 and SQuAD _AGent_ for these question types.
TextBugger generates bugs in a given token.
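The four character-level bug types of Table 4 can be sketched as below; the table of visually similar characters and the choice of which position to perturb are illustrative assumptions and only loosely follow the published TextBugger method.

```python
# Rough stand-ins for visually confusable characters (illustrative only).
VISUALLY_SIMILAR = {"o": "0", "l": "1", "i": "1", "a": "@", "s": "5", "e": "3"}

def generate_bugs(token, i=1):
    """Produce one bug of each of the four Table 4 types, perturbing position i."""
    return {
        "insert": token[:i] + " " + token[i:],                        # break the token with a space
        "delete": token[:i] + token[i + 1:],                          # drop one character
        "swap": token[:i] + token[i + 1] + token[i] + token[i + 2:],  # swap adjacent characters
        "substitute": token[:i] + VISUALLY_SIMILAR.get(token[i].lower(), token[i]) + token[i + 1:],
    }

print(generate_bugs("South", i=1))
# {'insert': 'S outh', 'delete': 'Suth', 'swap': 'Suoth', 'substitute': 'S0uth'}
```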
### Robustness against TextBugger
We investigate the impact of TextBugger attacks on models fine-tuned using different datasets, namely SQuAD 1.1, SQuAD 2.0, and SQuAD _AGent_. To accomplish this, we generate attacked questions by modifying 1, 2, 3, and 4 tokens in the questions from the SQuAD 1.1 dataset.
Figure 3 reports the performance of three RoBERTa-base models fine-tuned on SQuAD 1.1, SQuAD 2.0, and SQuAD _AGent_. Firstly, we see that the performance of the model fine-tuned on SQuAD 1.1 shows small decreases (from \(92.2\) to \(71.9\)). The TextBugger adversarial attack does not present a significant challenge to the EQA model when the model is designed only to handle answerable questions.
Secondly, we can observe that the model fine-tuned on unanswerable questions from SQuAD 2.0 demonstrates significantly better robustness compared to the model fine-tuned on SQuAD _AGent_ (\(86.1-56.8\) compared to \(88.6-34.5\)). This finding confirms our initial hypothesis that the lower performance of models fine-tuned on _AGent_ datasets for answering questions in the NQ dataset is partly attributable to misspelled keywords in the questions from the NQ dataset.
## 7 Conclusion and Future Works
In this work, we propose _AGent_, a novel pipeline designed to automatically generate two sets of unanswerable questions from a dataset of answerable questions. We systematically apply _AGent_ on SQuAD and HotpotQA to generate unanswerable questions. Through a two-stage process of human reviewing, we demonstrate that _AGent_ unanswerable questions exhibit a low error rate.
Our experimental results indicate that unanswerable questions generated using AGent pipeline present significant challenges for EQA models fine-tuned on SQuAD 2.0. We also demonstrate that models fine-tuned using _AGent_ unanswerable questions exhibit competitive performance compared to models fine-tuned on human-annotated unanswerable questions from SQuAD 2.0 on multiple test domains. The good performance of models fine-tuned on two _AGent_ datasets with different characteristics, SQuAD _AGent_ and HotpotQA _AGent_, demonstrate the utility of _AGent_ in creating high-quality unanswerable questions and its potential for enhancing the performance of EQA models.
Furthermore, our research sheds light on two potential issues when utilizing EQA models designed to handle both answerable and unanswerable questions. Specifically, we identify the problems of insufficient context and typographical errors as considerable challenges in this context. In calling for further study on typographical errors, we propose the inclusion of the TextBugger adversarial attack in EQA. Our analysis reveals that TextBugger presents a novel challenge for EQA models designed to handle both answerable and unanswerable questions. It is important to address this challenge comprehensively before the real-world deployment of EQA models. By acknowledging and effectively tackling the influence of typographical errors, we can enhance the robustness and reliability of EQA models in practical applications.
## Limitations
We acknowledge certain limitations in our work. Firstly, our study primarily focuses on evaluating the pipeline using multiple pre-trained transformer-based models in English, which can be prohibitively expensive to create, especially for
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Original** & Insert & Delete & Swap &
\begin{tabular}{c} Substitute \\ Character \\ \end{tabular} \\ \hline
**South** & Sou th & Souh & Souht & S0uth \\ \hline \multicolumn{5}{|c|}{What **Souh** African law **recognized** two **typ es**} \\ \multicolumn{5}{|c|}{of schools?} \\ \hline \end{tabular}
\end{table}
Table 4: Examples of how TextBugger generates bugs in a given token ”South” and a full question after the TextBugger attack. The attacked tokens are highlighted in red.
Figure 3: Robustness of RoBERTa-base trained on SQuAD 1.1, SQuAD 2.0, SQuAD _AGent_ against TextBugger.
languages with limited resources. Furthermore, given the empirical nature of our study, there is no guarantee that all other transformer-based models or other deep neural networks would demonstrate the same level of effectiveness when applied in the _AGent_ pipeline. Consequently, the impact of the AGent pipeline on low-resource languages may be challenged due to this limitation. Potential future research could complement our findings by investigating the effectiveness of implementing _AGent_ pipeline in other languages.
Secondly, our analysis does not encompass a comprehensive examination of the models' robustness against various types of adversarial attacks in EQA when fine-tuned on _AGent_ datasets. We believe that such an analysis is crucial in determining the effectiveness of the _AGent_ pipeline in real-world applications, and its absence deserves further research.
Finally, our study has not discussed underlying factors for the observed phenomenon: a model fine-tuned on SQuAD AGent is less robust against TextBugger attack than its peer model fine-tuned on SQuAD 2.0. The study in this direction requires remarkably intricate investigation, which we deem beyond the scope of our present research. We leave this for our future work where we will propose our hypotheses that may shed light on this phenomenon and potential solutions to improve the robustness of EQA models against TextBugger.
| The development of large, high-quality datasets and high-performing models has led to significant advances in the field of Extractive Question Answering (EQA). This progress has brought growing interest in exploring unanswerable questions within the EQA domain. Training EQA models with unanswerable questions helps them avoid extracting misleading or incorrect answers for queries that lack valid responses. However, manually annotating unanswerable questions is labor-intensive. To address this, we propose AGent, a novel pipeline that automatically creates unanswerable questions by re-matching a question with a context that lacks the information necessary for a correct answer. In this paper, we demonstrate the usefulness of the AGent pipeline by creating two sets of unanswerable questions from the answerable questions in SQuAD and HotpotQA. These created question sets exhibit low error rates. In addition, models fine-tuned on these questions show performance comparable to models fine-tuned on the SQuAD 2.0 dataset across multiple EQA benchmarks.
2303.00020 | A shared accretion instability for black holes and neutron stars | Accretion disks around compact objects are expected to enter an unstable
phase at high luminosity. One instability may occur when the radiation pressure
generated by accretion modifies the disk viscosity, resulting in the cyclic
depletion and refilling of the inner disk on short timescales. Such a scenario,
however, has only been quantitatively verified for a single stellar-mass black
hole. Although there are hints of these cycles in a few isolated cases, their
apparent absence in the variable emission of most bright accreting neutron
stars and black holes has been a lingering puzzle. Here we report the presence
of the same multiwavelength instability around an accreting neutron star.
Moreover, we show that the variability across the electromagnetic spectrum-from
radio to X-ray-of both black holes and neutron stars at high accretion rates
can be explained consistently if the accretion disks are unstable, producing
relativistic ejections during transitions that deplete or refill the inner
disk. Such new association allows us to identify the main physical components
responsible for the fast multiwavelength variability of highly accreting
compact objects. | F. M. Vincentelli, J. Neilsen, A. J. Tetarenko, Y. Cavecchi, N. Castro Segura, S. del Palacio, J. van den Eijnden, G. Vasilopoulos, D. Altamirano, M. Armas Padilla, C. D. Bailyn, T. Belloni, D. J. K. Buisson, V. A. Cuneo, N. Degenaar, C. Knigge, K. S. Long, F. Jimenez-Ibarra, J. Milburn, T. Muñoz Darias, M. Ozbey Arabaci, R. Remillard, T. Russell | 2023-02-28T19:00:22 | http://arxiv.org/abs/2303.00020v1 | # A shared accretion instability for black holes and neutron stars
###### Abstract
Accretion disks around compact objects are expected to enter an unstable phase at high luminosity[1]. One instability may occur when the radiation pressure generated by accretion modifies the disk viscosity, resulting in the cyclic depletion and refilling of the inner disk on short timescales[2]. Such a scenario, however, has only been quantitatively verified for a single stellar-mass black hole[3; 4; 5]. Although there are hints of these cycles in a few isolated cases[6; 7; 8; 9; 10], their apparent absence in the variable emission of most bright accreting neutron stars and black holes has been a lingering puzzle[11]. Here we report the presence of the same multiwavelength instability around an accreting neutron star. Moreover, we show that the variability across the electromagnetic spectrum--from radio to X-ray--of both black holes and neutron stars at high accretion rates can be explained consistently if the accretion disks are unstable, producing relativistic ejections during transitions that deplete or refill the inner disk. Such new association allows us to identify the main physical components responsible for the fast multiwavelength variability of highly accreting compact objects.
Swift J1858.6\(-\)0814 (hereafter Swift J1858) is a low mass X-ray binary (LMXB) that was first detected in November 2018[12] and reached a maximum X-ray luminosity of \(\approx\) 10\({}^{37}\) erg s\({}^{-1}\) (0.6-79 keV)[13]. Spectral analysis showed peculiar properties, including significant obscuration[13, 14] (N\({}_{H}\)\(\approx\) 10\({}^{23}\) cm\({}^{-2}\)) and outflows in X-rays[15], optical[16] and UV[17]. Moreover, for more than a year after its discovery, the source showed remarkable flaring activity from radio to hard X-rays[13, 18, 15, 19]. The source returned to quiescence in 2020, but not before exhibiting X-ray eclipses[19] and Type-I X-ray bursts[20] indicating the presence of an accreting neutron star with an orbital inclination \(>\)70\({}^{\circ}\) at a distance of \(\approx\)13 kpc.
On the 6th of August 2019, we coordinated a multiwavelength campaign to observe Swift J1858 simultaneously for \(\sim\)4 h with high time resolution in 5 bands: X-rays (3-79 keV) with _NuSTAR_ ; UV (150 nm) with the _Cosmic Origins Spectrograph_ onboard the Hubble Space Telescope; optical (_i+z sdss_ band, effective wavelength \(\lambda_{\rm eff}\)\(=\) 720 nm) with the _RISE_ at the Liverpool Telescope; near-IR (\(K_{s}\) band, \(\lambda_{\rm eff}\)\(=\) 2.2 \(\mu\)m) with HAWK-I on the Very Large Telescope; and radio (4.5 and 7.5 GHz) with the Karl G. Jansky Very Large Array. The source showed very strong variability with similar patterns in UV, optical, IR (UV/O/IR), and X-ray (see Figure 1-a-b). On long timescales, Swift J1858 exhibited a repetitive behaviour, alternating between quiet and active/variable phases (Figure 1 and Figure 2). The active phases showed oscillatory behavior on timescales of \(\approx\)100 s; we refer to these as "beats," given their visual similarity to the "heartbeat" variability pattern in GRS 1915+105[5]. On timescales of seconds, the source showed episodic fast flaring events (seen only in IR), which we refer to as "flares".
To explore the multiwavelength temporal behavior, we computed the cross-correlation function (CCF) between _NuSTAR_ and HAWK-I for all the simultaneous segments in our dataset (see Methods). We measured a clear correlation between the two bands, but the IR lags the X-ray variability with a delay that changes from \(\approx\) 2.5 s to \(\approx\) 5.5 s (see Figure 1-c). The magnitude and orbital phase dependence of these lags are fully consistent with a model where the UV/O/IR beats originate from the irradiation of X-ray beats on a disk and donor star with high orbital inclination (\(\approx\) 80\({}^{\circ}\)) and the orbital period of Swift J1858 (\(\approx\)21.3 h[19]).
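For illustration, the lag measurement can be sketched on evenly sampled, simultaneous light curves as below; the real analysis involves detrending, per-segment cross-correlation functions and uncertainty estimation that are omitted here, and the sampling step of the toy example is an assumption.

```python
import numpy as np

def ccf_lag(xray, ir, dt):
    """Cross-correlate two evenly sampled light curves and return the lag (in seconds)
    at which the correlation peaks; positive values mean the IR lags the X-rays."""
    x = (xray - xray.mean()) / xray.std()
    y = (ir - ir.mean()) / ir.std()
    corr = np.correlate(y, x, mode="full") / len(x)
    lags = np.arange(-len(x) + 1, len(x)) * dt
    return lags[np.argmax(corr)]

# Toy example: an "IR" curve that is the "X-ray" curve delayed by 4 s (dt = 1 s).
t = np.arange(0, 600, 1.0)
xray = np.sin(2 * np.pi * t / 100.0) + 0.1 * np.random.randn(t.size)
ir = np.roll(xray, 4)
print(ccf_lag(xray, ir, dt=1.0))   # close to 4 s
```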
Simple mass accretion rate variations in a hot inflow are not likely to explain the driving X-ray lightcurve[2]. The X-ray variability observed in Swift J1858 shows significant spectral evolution not compatible with standard variability of accreting compact objects[21, 3, 4]. In addition, similar variability has been seen in the archetypal high accretion rate stellar-mass black holes GRS 1915+105 and V404 Cyg[13]. These sources also share other important properties with Swift J1858, such as high luminosity (40% of the Eddington luminosity for Swift J1858), obscuration and outflows[13, 14]. This association is strengthened by the remarkable similarity between the IR lightcurve of Swift J1858 and the X-ray lightcurve of the so-called "\(\beta\)" variability class of GRS 1915+105[21] (Figure 2). Even though the patterns are less discernible in the X-ray band for Swift J1858 (probably due to variable line-of-sight obscuration, given its high inclination[9, 13, 15, 16]), the irradiation origin of the UV/O/IR lightcurve strongly suggests a common physical mechanism for the driving variability in both sources.
From a physical point of view, it is commonly accepted that the recurrent behaviour of GRS 1915+105 (i.e., heartbeats and other limit cycles) is due to a radiation pressure instability in the disk at high accretion rates[2, 3, 4, 5]. Although not fully confirmed by GRMHD simulations, this instability is believed to drive cyclic accretion or ejection and rebuilding of the inner disk, generating repeating patterns in X-rays on 10-1000 s timescales[3, 4, 5]. If this emission irradiates the disk and companion star, it will give rise to a delayed UV/O/IR lightcurve, such as the one observed in Swift J1858. The interpretation of beats as a disk instability can be tested: both models[4] and observations[5] of GRS 1915+105 need short-lived jet ejections near the peak luminosity (roughly coincident with the depletion of the disk).
The fast IR flares in Swift J1858 appear to verify this hypothesis, giving credence to the radiation
pressure instability interpretation of the limit cycles. Aligning and averaging the flares, including 200 s of data before and after each flare, reveals that they take place after the peak of the slower IR beats (see Figure 1-d). But these flares are inconsistent with a thermal origin (see Methods), and, given their red color, we interpret them as direct evidence of optically-thin synchrotron emission from transient/short-lived relativistic jet ejections expected to occur[4] during these beat oscillations.
Swift J1858 also showed significant radio variability throughout our campaign[18], which requires explanation. The fast IR flares cannot be responsible for the observed low-frequency variability because their amplitude and duration would naturally lead to their radio emission being completely self-absorbed (\(\tau\gg 1\) at 10 GHz; see Methods). However, observations of GRS 1915+105 also show "baby jets": strong radio flares (though their synchrotron emission can contribute significantly in the IR band[22, 23]) that are consistent with emission from adiabatically expanding blobs[24] (although their launching mechanism is still not clear). To search for baby jets in Swift J1858 and make a comparison to GRS 1915+105, we modeled its variable radio emission as the sum of multiple ejecta[25], performing the same analysis on an archival radio observation of GRS 1915+105 (coincident with the \(\beta\)-class X-ray lightcurve shown in Figure 2). The results presented in Figure 3 show that the radio variability of both sources is well reproduced by our modelling. For Swift J1858, the model suggests baby jet ejection times (grey shaded areas in Figure 3) near quiet/active phase transitions; most of the ejecta in GRS 1915+105 occur during quiet phases but several fall close to quiet/active transitions as well.
For self-consistency, we then tested whether Swift J1858's baby jets would be detectable in the IR as for GRS 1915+105. Past studies[24, 5] show accretion instabilities in GRS 1915+105 when the X-ray and radio luminosity are \(L_{\rm BH_{x}}\approx 10^{38}\) erg s\({}^{-1}\) and \(L_{\rm BH_{radio}}\approx 10^{30}\) erg s\({}^{-1}\), respectively. For Swift J1858, we find \(L_{\rm NS_{X}}\approx 10^{37}\) erg s\({}^{-1}\) and \(L_{\rm NS_{radio}}\approx 10^{29}\) erg s\({}^{-1}\)[18]. Even under the conservative assumption that the ratio between the IR and radio flux from the jet in Swift J1858 is the same as the one observed in GRS 1915+105 during the \(\beta\)-class instability (IR/radio \(\approx 1.4\))[24], then we expect an IR baby jet flux of only \(\approx\)0.24 mJy. This is almost a factor of two fainter than the reprocessed emission during the beats (\(\approx\)0.4 mJy). This indicates that the two sources share the same disk-jet coupling, despite having qualitatively different radio and IR lightcurves. More broadly, regardless of the jet launching mechanism, this shows how the appearance of accretion instabilities can depend not only on the accretion rate and disk-jet geometry, but also on the binary orbit and the mass of the compact object.
There is growing evidence that high-accretion rate black hole sources such as GRS 1915+105, V4641 Sgr, Cyg X-3, and V404 Cygni all share common X-ray spectral variability properties[14]. However, multi-wavelength parallels have proven more difficult due to their different extinctions, hampering efforts to produce a unified physical scenario for this class of sources. Yet, as envisioned from our conclusions, Swift J1858 shows clear analogies with all these objects. Simultaneous multiwavelength observations of the 2015 outburst of V404 Cygni revealed repetitive optical/X-ray patterns with a lag consistent with reprocessing[26, 10, 27] and fast non-thermal flares[28]. Furthermore, its extreme radio variability is consistent with jet ejections taking place _during X-ray spectral transitions[25]_. Moreover, similar O-IR recurrent patterns with comparable timescales have also been observed in V4641 Sgr[29] and Cyg X-3[30]. Finally, we note that X-ray heartbeats have also been detected in sources like the LMXB IGR J17091\(-\)3624[7] and the ULX NGC 3261[31], which also shows significant line-of-sight obscuration despite having a lower inclination. Thus, the recent association of Swift J1858 as a low-luminosity Z-source[32], and the isolated presence of X-ray "GRS 1915-like" patterns in other accreting NSs such as the Rapid Burster[6] and the Bursting Pulsar[33], strongly indicate that Swift J1858 represents the missing link for multiwavelength variability in high accretion rate sources (Figure 2, and Extended Data Figure 1).
It was also noted during review that while the limit cycle timescale is similar in GRS 1915+105 and
Swift J1858 (despite their very different masses; see Methods), the beat timescale is much shorter around the black hole in the example lightcurves shown in Figure 2. In fact, GRS 1915+105 exhibits a wide range of beat durations in similar limit cycles[21], which suggests that the beats may represent a second instability timescale[4] or may be affected by other factors in the accretion flow. One possibility is the jet power, which is expected to have a significant impact on the disk structure, and thus on the observed X-ray lightcurve[4, 3]. A careful comparison of the time-dependent radio/O-IR properties in states or sources with different beat timescales[34] could further elucidate the role of jets in shaping these instabilities.
Our results draw a new coherent picture that links together key aspects of the multiwavelength variability of both black holes and neutron stars at high accretion rate: recurrent repetitive patterns, radio oscillations and fast flaring. At these high accretion rates, the accretion disk becomes unstable, resulting in disk-jet cycles on timescales of \(\sim 10\) s to \(\sim 1000\) s. These have historically been observed in X-rays, but our work shows that given the right conditions (e.g., inclination, orbital period, obscuration, and the relative brightness of the jet), accretion instabilities may in fact be more readily observable at UV/O/IR wavelengths. These instabilities are also observationally associated with radio-emitting discrete ejections: therefore, for the first time we can define a consistent physical scenario which can _quantitatively_ account for most of the multiwavelength variability observed from accreting compact objects at high luminosity. We argue that accretion instabilities, irradiation/obscuration, and jet ejecta should be seen as three fundamental pillars that can be used to study other classes of objects accreting near the Eddington limit. With this insight, future time-resolved multiwavelength campaigns on compact objects will lead to better constraints on the physics of these instabilities and their hosts, independently of the nature of the central object[8]. | Accretion disks around compact objects are expected to enter an unstable phase at high luminosity. In one such instability, the radiation pressure of the accreted material changes the disk viscosity, leading to cyclic depletion and replenishment of the inner disk. This scenario, however, has been quantitatively verified only in a single stellar-mass black hole. While such cycles are seen in a few isolated cases, the variable emission of most bright accreting neutron stars and black holes suggests their absence, which is a puzzle to be solved. Here we report the observation of the same multiwavelength instability around an accreting neutron star. Moreover, for black holes and neutron stars at high accretion rates, a consistent description of the variability from radio to X-rays becomes possible, in which jet ejections driven by the disk instability deplete or replenish the inner disk.
2310.00118 | Transforming Materials Discovery for Artificial Photosynthesis:
High-Throughput Screening of Earth-Abundant Semiconductors | We present a highly efficient workflow for designing semiconductor structures
with specific physical properties, which can be utilized for a range of
applications, including photocatalytic water splitting. Our algorithm generates
candidate structures composed of earth-abundant elements that exhibit optimal
light-trapping, high efficiency in \ce{H2} and/or \ce{O2} production, and
resistance to reduction and oxidation in aqueous media. To achieve this, we use
an ionic translation model trained on the Inorganic Crystal Structure Database
(ICSD) to predict over thirty thousand undiscovered semiconductor compositions.
These predictions are then screened for redox stability under Hydrogen
Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions before
generating thermodynamically stable crystal structures and calculating accurate
band gap values for the compounds. Our approach results in the identification
of dozens of promising semiconductor candidates with ideal properties for
artificial photosynthesis, offering a significant advancement toward the
conversion of sunlight into chemical fuels. | Sean M. Stafford, Alexander Aduenko, Marcus Djokic, Yu-Hsiu Lin, Jose L. Mendoza-Cortes | 2023-09-29T20:12:08 | http://arxiv.org/abs/2310.00118v1 | # Transforming Materials Discovery for Artificial Photosynthesis:
###### Abstract
We present a highly efficient workflow for designing semiconductor structures with specific physical properties, which can be utilized for a range of applications, including photocatalytic water splitting. Our algorithm generates candidate structures composed of earth-abundant elements that exhibit optimal light-trapping, high efficiency in H\({}_{2}\) and/or O\({}_{2}\) production, and resistance to reduction and oxidation in aqueous media. To achieve this, we use an ionic translation model trained on the Inorganic Crystal Structure Database (ICSD) to predict over thirty thousand undiscovered semiconductor compositions. These predictions are then screened for redox stability under Hydrogen Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions before generating thermodynamically stable crystal structures and calculating accurate band gap values for the compounds. Our approach results in the identification of dozens of promising semiconductor candidates with ideal properties for artificial photosynthesis, offering a significant advancement toward the conversion of sunlight into chemical fuels.
## I Introduction
Alarmingly, humanity's consumption of fossil fuels continues to grow rapidly despite widespread awareness of their connection to the climate crisis. [1; 2; 3] The sun offers the best path to wean ourselves off these pollutants as it provides about as much energy to Earth every hour as humanity uses throughout an entire year. [2; 3; 4] Solar currently accounts for a discouraging 1.5% share of our energy consumption, but thanks to investment in the past decade, this share is growing exponentially. [1; 2; 3]
The vast majority of investment in solar energy has been dedicated to the research and production of photovoltaic (PV) cells, primarily in the form of solar panels. As a result of this investment, the technology has matured significantly and become increasingly accessible. In fact, the price of solar panels has plummeted by over 99.6% since 1976, when their power generation capacity was a million times less than it is today. These trends are documented by multiple sources of solar panel price and uptake data. [5; 6; 7; 8]
Photovoltaic (PV) cells, while a promising source of renewable energy, face a significant challenge due to their inherent intermittency. [9; 10; 11; 12] As they generate electricity by converting sunlight into a potential difference between photoelectrode components, [13] they do not store energy, resulting in an output that is dependent on sunlight availability. The power output of PV cells is, therefore, subject to daily and annual oscillations, as well as fluctuations in weather conditions and regional climate differences. [9; 10; 11; 12]
A promising alternative to traditional solar technology is the photo-electrolyzer. This cutting-edge system harnesses electricity generated by a PV material to power a water-splitting reaction on a catalyst. By separating the functions of trapping sunlight and generating fuel into two distinct components, the photo-electrolyzer generates Hydrogen and Oxygen fuel from sunlight indirectly. This innovative approach circumvents the intermittency problem associated with conventional solar power systems, ensuring energy remains available even when sunlight is not. However, there are still a few hurdles to overcome. For instance, the current system requires wired connections, which can result in significant energy loss. Additionally, the high cost of the water-splitting catalyst (typically made of Platinum or other scarce elements) has been a significant barrier to the scalability of photo-electrolyzer technology. A third, unrealized technology - a "no-wires" photo-electrolyzer that performs photovoltaic and catalytic functions in a single material - shows great promise. With a cost-effective material, this groundbreaking photocatalytic water-splitting process could address the efficiency and scalability problems of photo-electrolyzers, as well as the intermittency problem of PV cells.
This paper outlines our quest for a breakthrough photocatalytic water-splitting material that meets the critical requirements of stability, efficiency, and scalability. Unfortunately, no existing material is currently able to meet all these essential criteria. Our search is guided by the demanding specifications of the artificial photosynthesis process we are striving to achieve. To effectively split water, a photocatalyst must possess discrete electronic excitations, which require a semiconductor material. The material's electronic structure governs photoabsorption, with the band gap \(E_{g}\) acting as a filter for lower energy photons that are unable to promote an electron to the conduction band and initiate an excitation. To achieve maximum photoabsorption rates, an efficient photocatalyst must be sensitive to light in the high solar availability range of approximately 1-3 eV. Furthermore, the band gap must be direct to ensure optimal performance. [13; 14; 15] In addition to electronic properties, the material must also exhibit excellent stability in an aqueous solution. The photocathode may undergo a reduction reaction with itself and decompose if its reduction potential \(\phi_{red}\) is positive relative to the Normal Hydrogen Electrode (NHE). Similarly, the photoanode may decompose if its oxidation potential \(\phi_{ox}\) is less than 1.23 V with respect to the NHE, which is the oxidation potential of water. Consequently, the redox potentials of the material must be compatible with aqueous stability requirements.
Finally, any successful artificial photosynthesis technology must be composed of Earth-abundant elements to keep the material cost-effective and accessible. This critical constraint ensures that the material is far cheaper than Platinum, making it more widely available for research and development.[14] In summary, our search for the ideal photocatalytic water-splitting material is restricted to Earth-abundant elements that possess compatible redox potentials and band gaps for both aqueous stability and efficient photocatalysis.
In the past, searching for a material with a specific set of properties relied heavily on heuristic models, which often proved inadequate due to the vastness of structure space and the complexity of structure-property relationships. This made the search for an optimal material a daunting task. However, recent advancements in computational techniques, such as the use of modern processing power and sophisticated simulation software, have significantly improved the ability to search structure space more effectively.[16] This materials design revolution can be largely attributed to the substantial improvements in density functional theory (DFT), which can now predict the properties of previously unknown materials with reasonable reliability. Despite recent improvements in density functional theory (DFT), a brute-force approach to materials discovery remains impractical. However, researchers have developed strategic improvements over brute force methods, such as the use of large databases of known materials to identify patterns and make inferences about new materials to guide the search.[17; 18] One such tool in this vein is the substitution likelihood matrix. It was introduced by Hautier _et al._[19] about a decade ago to assess the plausibility of the existence of compounds that differ from known compounds by the swap of ionic components. Recently, this tool has been enhanced and updated by Stafford et al. (2023b, in preparation).
Another strategic improvement is the use of structure prediction algorithms, which can significantly improve the efficiency of materials discovery. One such algorithm is the Universal Structure Predictor: Evolutionary Xtallography (USPEX), an evolutionary structure search algorithm that interfaces with a DFT code to generate stable crystal structures for a given composition.[20; 21; 22] By utilizing structure prediction algorithms like USPEX alongside other strategies and tools, such as large databases of known materials and substitution likelihood matrices, we have designed a novel and more efficient materials discovery process.
This paper aims to not only introduce our novel materials discovery process but also to showcase its practical application in the field of artificial photosynthesis. In Section II, we present SALSA, our systematic approach to materials discovery that combines database mining, substitution likelihood analysis, and evolutionary structure prediction algorithms. In Section III, we demonstrate the efficacy of SALSA by applying it to the search for a photocatalytic water-splitter, a crucial component of artificial photosynthesis. In Section IV, we analyze and contextualize the results of our application, highlighting the benefits of our approach compared to traditional methods.
Figure 1: Introducing the SALSA workflow: A Comprehensive Approach to Materials Discovery. Our novel workflow begins with a curated dataset of compounds with known structures and properties. Leveraging an enhanced substitution matrix we constructed from the full ICSD, we generate a vast library of candidate compounds. We then filter these candidates by identifying structural interpolations with desired properties, ultimately using the USPEX algorithm to determine their structures. Lastly, we employ the high-fidelity CRYSTAL software to perform accurate calculations of both structures and properties
Furthermore, in Section V, we provide more detailed descriptions of the computational techniques used in SALSA, including density functional theory and crystal structure prediction algorithms. Finally, in Section VI, we conclude with some reflections on the potential impact of SALSA on the development of materials for photocatalytic water-splitting and other important applications in materials science.
## II SALSA - (S)ubstitution, (A)pproximation, evo(L)utionary (S)earch, and (A)b-initio calculations
We developed a highly efficient and versatile materials discovery process, dubbed SALSA, which is an acronym for **S**ubstitution, **A**pproximation, evo**L**utionary **S**earch, and **A**b-initio calculations. An overview of SALSA is provided in Figure 1. The process starts by taking a target property or set of properties as input and returns a set of candidate structures as output. Instead of relying on brute-force approaches, SALSA harnesses the power of a large database of compounds with known structures and properties to rapidly search for new materials. The process begins with swapping ionic components between pairs of known compounds that have similar ionic species, as guided by a substitution likelihood matrix, to produce a dataset of hybrid compounds with defined compositions but undefined structures. We then infer approximate properties for these hybrid compounds using a weighted sum of properties of parent compounds and discard hybrids without desirable properties. Promising hybrids are then subjected to an evolutionary structure search using the USPEX algorithm, which generates stable crystal structures for a given composition whenever possible. High-fidelity DFT calculations are then used to recalculate the properties of the generated structures, and structures with undesirable properties are discarded. The process produces a set of undiscovered materials that are promising candidates for various applications, including the application to artificial photosynthesis discussed in Section III. Furthermore, SALSA is highly versatile and can be applied to other materials science problems as well.
_Substitution by Chemical Similarity._ Our group reconstructed and expanded the scope of the substitution likelihood matrix introduced by Hautier _et al._[19] In our construction, we used the entirety of the Inorganic Crystal Structure Database (ICSD)[23] and do not restrict substitutions to preserve the space group of the crystal structure (Stafford et al., 2023b, in preparation, will describe the details of this construction). High values of our matrix correspond to pairs of ionic species empirically observed to exist in similar chemical environments. Above a chosen threshold, a value designates substitution between an ion pair as likely. Applying these likely substitutions to compounds of our initial dataset forms a hypothetical set of new candidate compounds. The resulting candidate dataset is too large for us to feasibly calculate properties of all compounds unless we are overly restrictive with unit cell size or substitution threshold. Therefore, we narrow the scope of our investigation to a subset for which we can efficiently approximate properties.
_Approximation by Linear Interpolation._ We examine the class of candidate compounds which are compositional interpolations between two initial compounds, i.e. hybrid compounds. We derive estimates for the properties of hybrids by summing the properties of parent compounds with the same ratio used in the corresponding hybrid composition. Next, we define the boundary of a target region of property space appropriate for our application. Finally, we eliminate hybrids that do not lie within this region. This step allows us to filter out the sizeable portion of our candidate compounds that are far removed from the target region before proceeding to intensive calculations. While this is an extremely simplistic model of property space, it is a computationally cheap way to approximate values close enough to eliminate most of the unsuitable candidates without a high risk of eliminating suitable ones. Note that we reduce this risk by extending the boundary of our target region beyond the ideal region of property space by enough to include some tolerance for the error that comes with our interpolation method. See Figure 2 for a summary of this scheme.
_Evolutionary Search of Structure Space._ Until this point, we have defined our hybrid compounds by their composition alone, but reliable property calculations require structural information. Crystal structure prediction from first principles is prohibitively difficult using just composition. Instead, we turn to an evolutionary structure search code, USPEX, to generate crystal structures for our hybrids. We provide USPEX with a hybrid composition and enable all available stochastic variation operations, which includes variation of the space group. If USPEX is unable to converge a structure for a given composition, that indicates the composition is unlikely to have a thermodynamically stable structure and is eliminated from further consideration. See Section V.5 for a more detailed look at our USPEX methodology.
Figure 2: SALSA’s composition-property interpolation scheme illustrated for generic properties \(\alpha\), \(\beta\) and \(\gamma\). Parent and hybrid compounds are represented by points outside and within a target region, respectively. Target region represented by green cuboid. For simplicity in depiction, each property has an upper and lower bound here, but this is not required.
_Ab-initio Property Calculations._ Our candidate set is now vastly narrowed down and contains structural information, so high-fidelity property calculations are computationally feasible. Therefore we perform geometry optimization and property calculation with another DFT code, CRYSTAL17, at the hybrid functional level of theory.[24; 25] Some candidate compounds located within the target region according to interpolation-inferred values shift outside the region upon replacement by CRYSTAL17-calculated values, while others do not converge with CRYSTAL17 at all. We discard these and are left with the final products of SALSA - the structures which CRYSTAL17 converges and determines to have properties in the target region.
## III SALSA applied to photocatalytic water-splitting
We found that millions of candidate compounds could be generated from our initial dataset with the ion exchanges suggested by our substitution matrix. Of these, about 13,600 were compatible with our structural interpolation scheme, that is, they could be constructed as hybrids of compounds within our initial dataset of known semiconductors. See Section V.2 for details on this dataset construction.
\begin{table}
\begin{tabular}{l c c c} Compound & Band gap (eV) & Oxidation (V) & Reduction (V) \\ \hline Ag\({}_{2}\)Te - AgBr & & & \\ \hline Ag\({}_{3}\)TeBr & 1.26 & 1.69 & \(-\)0.41 \\ Ag\({}_{4}\)TeBr\({}_{2}\) & 1.72 & 1.83 & \(-\)0.27 \\ Ag\({}_{5}\)TeBr\({}_{3}\) & 1.98 & 1.90 & \(-\)0.20 \\ \hline Ag\({}_{2}\)S - AgBr & & & \\ \hline Ag\({}_{5}\)SBr\({}_{3}\) & 2.23 & 1.61 & 0.03 \(\mathbf{\downarrow}\) \\ Ag\({}_{3}\)SBr & 1.70 & 1.17 \(\mathbf{\parallel}\) & \(-\)0.01 \\ Ag\({}_{4}\)SBr\({}_{2}\) & 2.04 & 1.45 & 0.01 \(\mathbf{\downarrow}\) \\ \hline TiO\({}_{2}\) - CuO & & & \\ \hline Ti\({}_{2}\)CuO\({}_{5}\) & 2.55 & 1.30 & \(-\)0.48 \\ Ti\({}_{3}\)CuO\({}_{7}\) & 2.67 & 1.42 & \(-\)0.58 \\ TiCuO\({}_{3}\) & 2.28 & 1.03 \(\mathbf{\parallel}\) & \(-\)0.28 \\ \hline TiO\({}_{2}\) - PbO & & & \\ \hline Ti\({}_{2}\)PbO\({}_{5}\) & 2.93 \(\mathbf{\parallel}\) & 1.58 & \(-\)0.56 \\ TiPb\({}_{2}\)O\({}_{4}\) & 2.83 \(\mathbf{\parallel}\) & 1.36 & \(-\)0.22 \\ TiPbO\({}_{3}\) & 2.88 \(\mathbf{\parallel}\) & 1.48 & \(-\)0.40 \\ \end{tabular}
\end{table}
Table 1: A selection of ternary hybrid compounds including silver telluride-bromides, silver sulfide-bromides, titanium cuprates and titanium-lead oxides. All interpolated band gaps and redox potentials lie within target ranges. One \(\mathbf{\downarrow}\)-symbol appears next to a value for each 0.05 eV/V it lies outside of the ideal range (rounded down).
Figure 3: a) A visualization of band gap - oxidation potential - reduction potential space from a perspective that highlights possible interpolations into the ideal property space. Any compound in our initial dataset that could produce one or more interpolations of interest is represented here. Those which had suitable \(\phi_{ox}\), \(\phi_{red}\) or both are labeled with “O”, “R” and “&”, respectively. Lines represent interpolations, with line thickness proportional to a distance within the ideal region. Dashed oval identifies an influential high-\(\phi_{ox}\) cluster. Extra 0.2 eV/V boundary region not depicted here. b) A “top-down” 2D projection of this space excluding the \(\phi_{red}\) dimension. “R” indicates a compound with suitable \(\phi_{red}\).
### Candidate Compounds
Overall, we found about 1250 hybrid compounds within our target region, including 484 within our ideal region. This corresponds to roughly one out of every 10 and 30 of all possible hybrids, respectively. Most interpolation pairings involved binary compounds with no elements in common so more hybrids were quaternary rather than ternary. Furthermore, the binary parents of ternary compounds tended to be located more closely to each other in property space, without any portion of the target region between them, so ternary compounds were relatively underrepresented in the regions of interest. The quaternary:ternary ratio was about 5:1 overall, 7:1 in the target region, and 8:1 in the ideal region.
Figure 3 provides insight into how certain interpolation patterns emerged as dominant. These patterns can be understood in relation to the initial distribution of compounds in property space. Few initial compounds had acceptable \(E_{g}\) or \(\phi_{ox}\) and none had both simultaneously; however, acceptable \(\phi_{red}\) was much more common. This combination advantageously positioned those with relatively high \(\phi_{ox}\), especially the circled cluster containing the five highest \(\phi_{ox}\) compounds, because many partners were located across the ideal region from them.
Figure 4: a) and d) A visualization of band gap - oxidation potential - reduction potential space from a perspective which highlights some interpolations that yielded USPEX-converged structures. Depicted hybrid compounds are represented by blue points. Initial compounds that were parents to the depicted hybrid compounds are represented by peach-colored points. Extra 0.2 eV/V boundary region depicted in translucent green. b) and c) Crystal structures we found for the hybrid compounds. Atom sizes are consistent throughout figure for a given element, except atoms in the legend, which are 2 times as large in diameter. For each structure, dashed gray lines indicate the extent of a single conventional unit cell.
In fact, compounds from this cluster constituted one partner in nearly all interpolation pairings depicted in Figure 3, with the other partner being out-of-cluster and usually low \(\phi_{red}\).
These pairings had the largest interpolation distance within the ideal region when the out-of-cluster partner was among the highest \(\phi_{ox}\) of the low-\(E_{g}\) compounds. Larger interpolation distance correlates with a greater number of possible hybrid compounds so this was the most dominant type of interpolation in our hybrid compound dataset. Thus we can roughly understand the interpolation opportunities available to our dataset by focusing on just a small subset of low-\(E_{g}\) and high-\(E_{g}\) compounds which are least oxidizable.
The four highest \(\phi_{ox}\) compounds in the high-\(E_{g}\) cluster were AgBr, TiO\({}_{2}\), AgCl, and CuCl, ordered by the number of hybrids derived from them. 95% of hybrid compounds had a parent in this group, including 42% from AgBr alone. The four highest \(\phi_{ox}\) compounds with low-\(E_{g}\) were the binary combinations of Pb and Ag with Se and Te. 40% of hybrids had a parent in this group, and 36% had one parent from each group. Table 1 provides some example hybrid compounds from the target region, including three hybrids of different composition from pairs of AgBr and TiO\({}_{2}\) with lower \(E_{g}\) compounds. The variety of hybrids included represents how different parents produced hybrids in different regions, e.g. TiO\({}_{2}\) - PbO hybrids tended to have low \(E_{g}\).
### Candidate Structures
We used the procedure for USPEX and VASP laid out in Section V.5 to search for the crystal structures of hybrids in our target region. USPEX was able to converge structures for about 50 hybrid compounds. The elemental composition of these structures mostly coincides with the composition in the hybrid compounds highlighted in the previous Section. For example, Ag has the greatest occurrence by far, due to its presence in both the low and high \(E_{g}\) groups. However, Br has a surprisingly much lower occurrence and Cd has a relatively higher occurrence. Figure 4 and Table 2 show example results of USPEX converged structures. Figure 4 also connects shifts in composition to changes in structure and property space.
## IV Discussion
Figure 6 (a) presents the interpolations into property space from our initial compounds which yielded our final structures, as well as shifts from interpolated predictions. Trends within this subset of interpolations suggest certain paths are favored for producing a photocatalytic water-splitter.
Our final materials can be divided into two groups. One group is made up of materials containing Silver, halides and group 16 compounds. Among the few compounds in our initial dataset which had good oxidation potentials, most contained Silver, so this group emerged from the interpolation between a pair of materials which had good oxidation potentials, but which had band gaps that were too low and too high respectively. Consequently, these materials are robust to hole oxidation - all interpolated oxidation potentials are at least 0.2 V greater than the ideal minimum. However, their interpolated reduction potentials lie close to the threshold for rejection - none are more than 0.01 V under the ideal maximum. Additionally, these structures have low symmetry and are expensive due to their Silver content.
The other group contains Lead instead of Silver. Redox suitability of these Lead compounds is inverted relative to the Silver group. That is, these compounds are robust to electron reduction due to Lead's especially negative reduction potential - all have reduction potentials more than 0.2 V under the ideal maximum. However, none have an interpolated oxidation potential more than 0.03 V above the ideal minimum. The Lead structures are also higher in symmetry and relatively cheap. Figure 6 (b) highlights how compounds from different groups lie near different planes of the desired property space, demonstrating the strengths and weaknesses of these groups.
The Lead group is about 50 times cheaper, so it may offer more scalability.[27, 28, 29, 30, 31] However, the Silver group follows a more regular compositional formula. This means it may be easier to find more compounds in this group with different interpolation ratios if the ones we have discovered do not prove to be as effective as they appear to be. Both paths should be investigated experimentally.
Furthermore, we envision the materials design approach we used to be generalizable. For a different application, we would see a similar picture, but with different starting compounds and different boundaries for the target property space.
Figure 5: Crystal structures of final compounds with properties suitable for photocatalytic water-splitting. Atom sizes are consistent throughout figure for a given element, except atoms in the legend, which are 2 times as large in diameter. For each structure, dashed gray lines indicate the extent of a single conventional unit cell.
\begin{table}
\begin{tabular}{c c c c c c} Compound & Band Gap (eV) & Oxidation Potential (V) & Reduction Potential (V) & Space Group & Price (USD/kg) \\ \hline Ti\({}_{2}\)O\({}_{4}\)Pb\({}_{3}\)Se\({}_{3}\) & 2.333 & 1.257 & \(-\)0.717 & 1 & 8 \\ PbCuSeCl & 1.512 & 1.225 \(\pm\) & \(-\)0.246 & 156 & 7 \\ Ag\({}_{4}\)Br\({}_{2}\)S & 2.741 & 1.451 & 0.014 \(\pm\) & 1 & 307 \\ Ag\({}_{4}\)Cl\({}_{2}\)Se & 1.058 \(\pm\) & 1.907 & \(-\)0.007 & 1 & 299 \\ Ag\({}_{4}\)Cl\({}_{2}\)S & 1.060 \(\pm\) & 1.527 & 0.099 \(\pm\) & 1 & 301 \\ \end{tabular}
\end{table}
Table 3: Final compounds with band gaps and redox potentials suitable for photocatalytic water-splitting. One \(\pm\)-symbol appears next to a value for each 0.05 eV/V it lies outside of the ideal range (rounded down).
## V Methods
### Initial Dataset
We first collected a dataset of experimentally determined semiconductor band gaps. We then applied the method described in Stafford et al. 2023c in prep to calculate reduction and oxidation potentials. This formed an initial dataset containing 34 compounds. We sought compounds with band gaps between 1.23-2.80 eV to enable efficient photoabsorption. We also sought reduction potentials below 0.00 V and oxidation potentials above 1.23 V, with respect to the NHE, for materials which are stable in an aqueous environment. None of our original materials had suitable values for all three properties. Figure 7 presents an overview of the collection of initial compounds and the region of property space described above. Table 4 lists each initial compound, its band gap and its redox potentials.
Figure 8 in the Supplementary Material presents a closer look at the swarm of compounds that hover just outside of the target space. Few compounds are stable to photo-catalyzed decomposition. This is mainly because most have oxidation potentials that are too low, leaving them prone to hole oxidation (Figure 8 (d)). However, some of the few with acceptable oxidation potentials have reduction potentials that are too high (Figure 8 (b)). Additionally, compounds are roughly evenly divided into three groups which cannot absorb sunlight efficiently, cannot absorb it at all in the regions of higher solar intensity, and which have an acceptable band gap (Figure 8 (e) and (c)). No matter which angle we look at this property space, we see there is great room for improvement.
### Parameters Used for Candidate Generation
_Substitution Threshold._ We used a substitution threshold of 0, that is, we did not consider substitutions associated with negative values in our substitution likelihood matrix.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Compound & Band gap (eV) & Oxidation (V) & Reduction (V) \\ \hline Figure 5 & & & \\ \hline Ag\({}_{2}\)S & 0.90 \(\downarrow\) & 0.50 \(\downarrow\) & \(-\)0.07 \(\checkmark\) \\ Ag\({}_{2}\)Se & 0.15 \(\downarrow\) & 1.39 \(\checkmark\) & \(-\)0.31 \(\checkmark\) \\ Ag\({}_{2}\)Te & 0.17 \(\downarrow\) & 1.38 \(\checkmark\) & \(-\)0.74 \(\checkmark\) \\ AgBr & 2.89 \(\uparrow\) & 2.16 \(\checkmark\) & 0.07 \(\uparrow\) \\ AgCl & 3.28 \(\uparrow\) & 2.30 \(\checkmark\) & 0.22 \(\uparrow\) \\ CuCl & 3.40 \(\uparrow\) & 1.69 \(\checkmark\) & 0.12 \(\uparrow\) \\ PbSe & 0.27 \(\downarrow\) & 0.76 \(\downarrow\) & \(-\)0.61 \(\checkmark\) \\ TiO\({}_{2}\) & 3.00 \(\uparrow\) & 1.75 \(\checkmark\) & \(-\)0.83 \(\checkmark\) \\ \hline Other & & & \\ \hline AlAs & 2.20 \(\checkmark\) & \(-\)1.11 \(\downarrow\) & 0.64 \(\uparrow\) \\ AlN & 6.00 \(\uparrow\) & \(-\)0.53 \(\downarrow\) & \(-\)0.90 \(\checkmark\) \\ AlPb & 2.80 \(\uparrow\) & \(-\)0.94 \(\downarrow\) & \(-\)0.62 \(\checkmark\) \\ BN & 6.00 \(\uparrow\) & \(-\)0.06 \(\downarrow\) & \(-\)0.70 \(\checkmark\) \\ CdS & 2.58 \(\checkmark\) & 0.35 \(\downarrow\) & \(-\)0.67 \(\checkmark\) \\ CdSe & 1.85 \(\checkmark\) & 0.78 \(\downarrow\) & \(-\)0.83 \(\checkmark\) \\ CdTe & 1.61 \(\checkmark\) & 0.51 \(\downarrow\) & \(-\)0.99 \(\checkmark\) \\ Cu\({}_{2}\)O & 2.10 \(\checkmark\) & 0.64 \(\downarrow\) & 0.44 \(\uparrow\) \\ Cu\({}_{2}\)S & 1.20 \(\downarrow\) & 0.89 \(\downarrow\) & \(-\)0.30 \(\checkmark\) \\ CuO & 1.20 \(\downarrow\) & \(-\)0.05 \(\downarrow\) & 0.54 \(\uparrow\) \\ GaN & 3.40 \(\uparrow\) & \(-\)0.06 \(\downarrow\) & \(-\)0.36 \(\checkmark\) \\ GaSb & 0.73 \(\downarrow\) & \(-\)0.38 \(\downarrow\) & \(-\)0.65 \(\checkmark\) \\ InzS\({}_{3}\) & 1.98 \(\checkmark\) & 0.49 \(\downarrow\) & \(-\)0.57 \(\checkmark\) \\ InAs & 0.41 \(\downarrow\) & \(-\)0.03 \(\downarrow\) & \(-\)0.42 \(\checkmark\) \\ InP & 1.42 \(\checkmark\) & 0.05 \(\downarrow\) & \(-\)0.31 \(\checkmark\) \\ InSb & 0.23 \(\downarrow\) & \(-\)0.13 \(\downarrow\) & \(-\)0.60 \(\checkmark\) \\ MgO & 7.80 \(\uparrow\) & 0.18 \(\downarrow\) & \(-\)1.73 \(\checkmark\) \\ PbO & 2.70 \(\checkmark\) & 1.07 \(\downarrow\) & 0.24 \(\uparrow\) \\ PbS & 0.37 \(\downarrow\) & 0.29 \(\downarrow\) & \(-\)0.37 \(\checkmark\) \\ PbTe & 0.32 \(\downarrow\) & 0.60 \(\downarrow\) & \(-\)0.88 \(\checkmark\) \\ SnO\({}_{2}\) & 3.50 \(\uparrow\) & 1.56 \(\checkmark\) & \(-\)0.12 \(\checkmark\) \\ SnS & 1.00 \(\downarrow\) & 0.42 \(\downarrow\) & \(-\)0.37 \(\checkmark\) \\ ZnO & 3.37 \(\uparrow\) & 0.48 \(\downarrow\) & \(-\)0.45 \(\checkmark\) \\ ZnS & 3.84 \(\uparrow\) & 0.35 \(\downarrow\) & \(-\)0.90 \(\checkmark\) \\ ZnSe & 2.83 \(\uparrow\) & 0.40 \(\downarrow\) & \(-\)0.93 \(\checkmark\) \\ ZnTe & 2.39 \(\checkmark\) & 0.29 \(\downarrow\) & \(-\)1.25 \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The compounds of our initial dataset with their known band gaps and redox potentials. \(\checkmark\), \(\downarrow\), and \(\uparrow\) symbols indicate whether property values are suitable, too low or too high for photocatalytic water splitting, respectively.
Figure 6: a) Property space diagram with undiscovered semiconductors with desirable band gaps (eV), oxidation potentials (V) and reduction potentials (V) produced by SALSA depicted in blue. Interpolated predictions depicted as unfilled circles. Original compounds are depicted as peach-colored points. b) A 2D \(\phi_{ox}\)-\(\phi_{red}\) projection that demonstrates which boundaries final Pb and Ag compounds lie near. Final compounds are lettered in correspondance with a) to conserve space. This projection is 1.0 V\(\times\)1.0 V in extent.
This parameter can be adjusted as governed by the computational resources available to a search. A lower threshold enables a more thorough exploration of composition space, but is more computationally expensive and less efficient at finding suitable materials.
_Substitution Implementation._ We allowed substitution to constitute a complete or partial replacement of the original ion. For example, \(\mathrm{Br}^{-}\leftrightarrow\mathrm{I}^{-}\) is a matrix-allowed substitution and AgBr is in our initial dataset, so compounds of the form \(\mathrm{Ag}_{n}\,\mathrm{Br}_{n-m}\,\mathrm{I}_{m}\) with \(n,m\in\mathbb{Z}\) are in our candidate dataset. We limited substitutions to be first or second order, i.e., at most two substitutions could be used to generate an individual candidate. In Section III, first and second-order substitutions correspond to ternary and quaternary compounds, respectively. Theoretically, a second-order substitution could consist of exchanging a single, original ionic component for two new ions. However, second-order substitutions that formed hybrid compounds consisted of a single substitution of each of the original components, as this is the only way a second-order substitution could correspond to interpolation between two binary compounds. Building on the previous example, the substitution \(\mathrm{Ag}^{+}\leftrightarrow\mathrm{Cu}^{+}\) could be used in second-order substitutions to produce quaternary compounds of the form \(\mathrm{Ag}_{n-p}\,\mathrm{Cu}_{p}\,\mathrm{Br}_{n-m}\,\mathrm{I}_{m}\) with \(n,p,m\in\mathbb{Z}\). For the purpose of enumerating a complete dataset, the new components of candidate compounds were limited to half or less of the final composition. So all second-order substitutions were partial, and one could not generate CuI (\(n=p=m=1\)) from just AgBr. However, this limitation does not affect hybrid compounds.
_Unit Cell Size Limit._ In practice, the enumeration of candidate compounds requires some constraint on the values of \(n,m,p\). Results presented in Section III implemented this constraint by imposing a maximum of 20 atoms in a unit cell. This is equivalent to the constraints \(1\leq n,p,m\leq 10\) in the previous example.
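A minimal sketch of this enumeration is given below. It reflects our reading of the constraints just described (at most 20 atoms per cell, at most second-order substitution, and the new components limited to at most half of their sublattice); it is illustrative only and not the authors' code.

```python
from itertools import product

def candidate_compositions(max_atoms=20):
    """Enumerate Ag(n-p) Cu(p) Br(n-m) I(m) candidates derived from AgBr.

    Assumptions (our reading of the constraints described above):
      * at most `max_atoms` atoms per unit cell, i.e. n <= 10 here;
      * at most second-order substitution (Ag -> Cu and/or Br -> I);
      * the new components (Cu, I) make up at most half of their sublattice.
    """
    candidates = []
    for n in range(1, max_atoms // 2 + 1):        # each formula unit has 2n atoms
        for p, m in product(range(n // 2 + 1), repeat=2):
            if p == 0 and m == 0:
                continue                           # that is just AgBr itself
            candidates.append({"Ag": n - p, "Cu": p, "Br": n - m, "I": m})
    return candidates

cands = candidate_compositions()
print(len(cands))      # 135 compositions under these assumptions
print(cands[:3])
```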
### Property Space Selection Criteria
With our interpolation scheme we filtered compounds that did not meet the following criteria: \(1.03<\) band gap (\(\mathrm{eV}\)) \(<\) 3.00, oxidation potential (\(\mathrm{V}\)) \(>\) 1.03, and reduction potential (\(\mathrm{V}\)) \(<\) 0.2. This includes an extra window of 0.20 eV for the band gaps and 0.20 V for the potentials to allow for materials that might ultimately arrive in the desired region of property space by deviating slightly from their linear interpolation. To illustrate this process, consider PbSe and CuCl. PbSe's band gap is too small, at 0.27 eV, and its oxidation potential is too low, at 0.76 V, while CuCl has too high a band gap at 3.40 eV. However, the 50:50 interpolation between these two, PbCuSeCl, has band gap, oxidation and reduction potentials of 1.84 eV, 1.23 V and \(-\)0.25 V, respectively, which places it just inside the threshold of our target region.
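The numbers in this example can be reproduced in a few lines; the property values below are simply copied from Table 4 and the thresholds are those quoted above (this is an illustrative sketch, not part of the SALSA implementation).

```python
def interpolate(a, b, n_a=1, n_b=1):
    """Per-atom weighted average of two parents mixed in an n_a:n_b ratio."""
    w = n_a * a["atoms"] / (n_a * a["atoms"] + n_b * b["atoms"])
    return {k: w * a[k] + (1.0 - w) * b[k] for k in ("gap", "ox", "red")}

def in_target(p):
    """Target region quoted above: ideal window widened by 0.20 eV / 0.20 V."""
    return 1.03 < p["gap"] < 3.00 and p["ox"] > 1.03 and p["red"] < 0.20

# Parent properties taken from Table 4 ("atoms" = atoms per formula unit).
PbSe = {"atoms": 2, "gap": 0.27, "ox": 0.76, "red": -0.61}
CuCl = {"atoms": 2, "gap": 3.40, "ox": 1.69, "red": 0.12}

props = interpolate(PbSe, CuCl)     # the 50:50 hybrid, i.e. PbCuSeCl
print(props)                        # gap ~1.84 eV, ox ~1.23 V, red ~ -0.25 V
print(in_target(props))             # True
```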
### Interpolation
We construct hybrid compositions which are integer ratios of two parent compositions. We then estimate the properties of the corresponding hybrid compounds to be linear interpolations of the parent compounds on a per-atom basis. In other words, we weight the initial property values by the number of atoms contributed to the hybrid. Furthermore, we don't restrict our interpolations to be single-substitution. For example, both \(\mathrm{Pb}^{2+}\leftrightarrow\mathrm{Cu}^{+}\) and \(\mathrm{Se}^{2-}\leftrightarrow\mathrm{Cl}^{-}\) are matrix-allowed substitutions so if we start with initial compounds PbSe and CuCl, we generate interpolated compositions such as PbCuSeCl.
To better understand the per-atom weighting procedure, consider an illustrative example in which we have initial compounds Ag\({}_{2}\)S and AgBr that have a property with values 0 and \(P\). \(\mathrm{S}^{2-}\leftrightarrow\mathrm{Br}^{-}\) is a matrix-allowed substitution, so we consider composition ratios of Ag\({}_{2}\)S and AgBr such as 2:1, 1:1, and 1:2, which correspond to Ag\({}_{5}\)BrS\({}_{2}\), Ag\({}_{3}\)BrS, and Ag\({}_{4}\)Br\({}_{2}\)S, respectively. According to our interpolation procedure, these new candidate compounds have estimated property values of \(\frac{1}{4}P\), \(\frac{2}{5}P\), and \(\frac{4}{7}P\), respectively. Note that Ag\({}_{3}\)BrS has a property value of \(\frac{2}{5}P\) rather than \(\frac{1}{2}P\), despite being a 1:1 ratio of initial compositions. To understand this potentially nonintuitive result, recognize that 2 of the 5 atoms in Ag\({}_{3}\)BrS were contributed by AgBr so its interpolation weight is \(\frac{2}{5}\) and accordingly, the interpolation weight of Ag\({}_{2}\)S is \(\frac{3}{5}\). Therefore, \(P_{new}=\frac{2}{5}\times P+\frac{3}{5}\times 0=\frac{2}{5}P\).
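For concreteness, this bookkeeping can be written out in a few lines of code; the sketch below is ours (not part of the SALSA implementation) and simply reports the AgBr weight that multiplies the symbolic property value \(P\).

```python
def agbr_weight(n_ag2s, n_agbr):
    """Fraction of the hybrid's atoms contributed by AgBr when mixing n_ag2s
    formula units of Ag2S (3 atoms each) with n_agbr units of AgBr (2 atoms each)."""
    return 2 * n_agbr / (3 * n_ag2s + 2 * n_agbr)

for n_ag2s, n_agbr, name in [(2, 1, "Ag5BrS2"), (1, 1, "Ag3BrS"), (1, 2, "Ag4Br2S")]:
    w = agbr_weight(n_ag2s, n_agbr)
    print(f"{name}: estimated property = {w:.3f} * P")
# Ag5BrS2: 0.250 * P,  Ag3BrS: 0.400 * P,  Ag4Br2S: 0.571 * P
```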
### USPEX Settings
We provide USPEX with a composition and allow it to perform all stochastic modifications it has at its disposal. We do not constrain the structure by space group.
Figure 7: An overlook on the initial dataset of compounds used in our application of SALSA to artificial photosynthesis. Here they are depicted in a band gap – oxidation potential – reduction potential property space. The region of property space suitable for photocatalytic water-splitting is indicated. All 34 initial compounds are depicted.
For energy evaluation, we elect USPEX's option to interface with the DFT code, Vienna Ab initio Simulation Package (VASP) [32, 33, 34]. All VASP calculations were performed in the plane-wave DFT framework at the Generalized Gradient Approximation (GGA) level of theory and used the Perdew, Burke, and Ernzerhof (PBE) functional [35]. Projector-augmented wave (PAW) pseudopotentials were used to represent the core electrons and ion-electron interactions [36, 37]. We used a plane-wave cutoff of 500 eV, an energy convergence criterion of \(10^{-4}\) eV, and a force convergence criterion of 0.02 eV/A. Dispersive interactions were accounted for using DFT-D3 corrections [38] with Becke-Johnson damping [39]. We also included spin polarization effects.
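For orientation, the relaxation settings listed above correspond roughly to the following standard VASP input tags; this mapping is our illustration and not the actual input files used in this work.

```python
# Rough INCAR-style translation of the settings quoted above (illustrative only).
vasp_settings = {
    "ENCUT": 500,      # plane-wave cutoff (eV)
    "EDIFF": 1e-4,     # electronic energy convergence (eV)
    "EDIFFG": -0.02,   # stop ionic relaxation when forces fall below 0.02 eV/Angstrom
    "ISPIN": 2,        # spin-polarized calculation
    "IVDW": 12,        # DFT-D3 dispersion correction with Becke-Johnson damping
    "GGA": "PE",       # PBE exchange-correlation functional
}
print(vasp_settings)
```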
### CRYSTAL17 Settings
We used the hybrid DFT code CRYSTAL17 to conduct higher fidelity geometry optimization on our candidate structures. CRYSTAL17 uses basis sets of atom-centered Gaussian-type functions [24, 25]. We used the hybrid Heyd-Scuseria-Ernzerhof (HSE06) functional [40, 41]. We also considered spin-polarization effects and used relativistic compact effective potentials and efficient, shared-exponent basis sets [42, 43]. The effective potentials were used for O, Cu, Se, Ag, Te and Pb. We included full geometry optimization of atomic positions and lattice parameters. We sampled the reciprocal space for all the structures using a \(\Gamma\)-centered Monkhorst-Pack scheme with a resolution of \(a_{i}n_{k_{i}}\geq 40\)\(\AA\) where \(a_{i}\) and \(n_{k_{i}}\) represent a lattice constant along the \(i^{th}\) axis in real space and the number of k-points along the \(i^{th}\) axis in reciprocal space, respectively. We optimized geometry with an SCF energy convergence criterion of \(2.72\times 10^{-6}\) eV, an RMS force criterion of \(1.54\times 10^{-2}\) eV/A, a max force criterion of \(2.31\times 10^{-2}\) eV/A, an RMS displacement of \(6.35\times 10^{-4}\) A, a max displacement criterion of \(9.53\times 10^{-4}\) A and a between-geometry energy convergence criterion of \(2.72\times 10^{-6}\) eV. For this application we also performed a single-point SCF optimization on the converged geometry to acquire a band gap, although this is not necessary for the SALSA workflow in general.
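The k-point rule above translates into a simple per-axis formula; the cell dimensions in the example below are hypothetical.

```python
import math

def gamma_centered_grid(lattice_constants_angstrom, min_product=40.0):
    """Smallest Monkhorst-Pack subdivisions n_k_i satisfying a_i * n_k_i >= min_product."""
    return [max(1, math.ceil(min_product / a)) for a in lattice_constants_angstrom]

print(gamma_centered_grid([5.5, 5.5, 12.0]))   # hypothetical cell -> [8, 8, 4]
```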
## VI Conclusions
We have introduced a general materials design process that can be used for many applications. The process only requires a dataset of known compounds with known properties and the ability to calculate some of the properties from first-principles for a small set of structures. We applied our new process to an unrealized artificial photosynthesis technology and were able to discover materials that are good candidates for photocatalytic water-splitting. This includes PbCuSeCl, a material with a novel structure, which we were able to discover because our process allows for an expansive search of structure space. It also includes Ti\({}_{2}\)O\({}_{4}\)Pb\({}_{3}\)Se\({}_{3}\) which has band gap and interpolated redox potentials within the ideal range for photocatalytic water-splitting.
Furthermore, work is underway to improve several methods used in the SALSA process. We may further expand and enhance the substitution matrix. We are also working on a way to generalize the redox potential calculation method with larger datasets.
###### Acknowledgements.
SMS is supported by the Mendoza Lab start-up funds. JLMC acknowledges start-up funds from Michigan State University. This work was supported in part by computational resources and services provided by the Institute for Cyber-Enabled Research at Michigan State University.
**Author Contributions**. AA and JLMC started the project in 2012-2013. JLMC conceived the idea and executed the first iterations of the search algorithms. AA and JLMC wrote the first draft. AA and JLMC implemented and developed the first iteration of the algorithms. SMS, MD, YL continued and finished the project. SMS implemented the next generation of the algorithm. Conceptualization: AA, JLMC. Methodology: AA, SMS, MD, YL, JLMC. Software: AA, SMS, MD, YL, JLMC. Validation: AA, SMS, MD, YL, JLMC. Formal Analysis: SMS, MD, JLMC. Investigation: AA, SMS, MD, JLMC. Resources: JLMC. Writing--original draft preparation: AA, JLMC. Writing--review and editing: SMS, AA, MD, YL, JLMC. Visualization: SMS, MD, JLMC. Supervision: JLMC. Project administration: JLMC. Funding Acquisition: JLMC. All authors have read and agreed to the published version of the manuscript.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
| We present a highly efficient workflow for designing semiconductor structures with specific physical properties, which can be used for a wide range of applications, including photocatalytic water splitting. The algorithm generates candidate structures composed of earth-abundant elements that exhibit optimal light trapping, high efficiency for H2 and/or O2 production, and resistance to reduction and oxidation in aqueous media. To this end, we use an ionic translation model trained on the Inorganic Crystal Structure Database (ICSD) to predict more than thirty thousand undiscovered semiconductor compositions. These predictions are screened for redox stability under Hydrogen Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions, after which thermodynamically stable crystal structures are generated and accurate band gap values are computed for the compounds. Our approach leads to the identification of dozens of promising semiconductor candidates with ideal properties for artificial photosynthesis.
2309.03572 | Operator relations characterizing higher-order differential operators | Let $r$ be a positive integer, $N$ be a nonnegative integer and $\Omega
\subset \mathbb{R}^{r}$ be a domain. Further, for all multi-indices $\alpha \in
\mathbb{N}^{r}$, $|\alpha|\leq N$, let us consider the partial differential
operator $D^{\alpha}$ defined by \[
D^{\alpha}= \frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots
\partial x_{r}^{\alpha_{r}}}, \] where $\alpha= (\alpha_{1}, \ldots,
\alpha_{r})$. Here by definition we mean $D^{0}\equiv \mathrm{id}$. An easy
computation shows that if $f, g\in \mathscr{C}^{N}(\Omega)$ and $\alpha \in
\mathbb{N}^{r}, |\alpha|\leq N$, then we have \[ \tag{$\ast$} D^{\alpha}(f\cdot
g) = \sum_{\beta\leq \alpha}\binom{\alpha}{\beta}D^{\beta}(f)\cdot D^{\alpha -
\beta}(g). \] This paper is devoted to the study of identity $(\ast)$ in the
space $\mathscr{C}(\Omega)$. More precisely, if $r$ is a positive integer, $N$
is a nonnegative integer and $\Omega \subset \mathbb{R}^{r}$ is a domain, then
we describe those mappings $T_{\alpha} \colon \mathscr{C}(\Omega)\to
\mathscr{C}(\Omega)$, $\alpha \in \mathbb{N}^{r}, |\alpha|\leq N$ that satisfy
identity $(\ast)$ for all possible multi-indices $\alpha\in \mathbb{N}^{r}$,
$|\alpha|\leq N$. Our main result says that if the domain is
$\mathscr{C}(\Omega)$, then the mappings $T_{\alpha}$ are of a rather special
form. Related results in the space $\mathscr{C}^{N}(\Omega)$ are also
presented. | Włodzimierz Fechner, Eszter Gselmann, Aleksandra Świątczak | 2023-09-07T09:05:01 | http://arxiv.org/abs/2309.03572v1 | # Operator relations characterizing higher-order differential operators
###### Abstract
Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Further, for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), let us consider the partial differential operator \(D^{\alpha}\) defined by
\[D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots \partial x_{r}^{\alpha_{r}}},\]
where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Here by definition we mean \(D^{0}\equiv\mathrm{id}\). An easy computation shows that if \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), then we have
\[D^{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}D^{\beta}(f) \cdot D^{\alpha-\beta}(g).\] ( \[\ast\] )
This paper is devoted to the study of identity \((\ast)\) in the space \(\mathscr{C}(\Omega)\). More precisely, if \(r\) is a positive integer, \(N\) is a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) is a domain, then we describe those mappings \(T_{\alpha}\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) that satisfy identity \((\ast)\) for all possible multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\). Our main result says that if the domain is \(\mathscr{C}(\Omega)\), then the mappings \(T_{\alpha}\) are of a rather special form. Related results in the space \(\mathscr{C}^{N}(\Omega)\) are also presented.
## 1 Introduction and preliminaries
In this paper the set of real numbers is denoted by \(\mathbb{R}\), the set of complex numbers by \(\mathbb{C}\), and the set of nonnegative integers by \(\mathbb{N}\).
Let \(r\) be a positive integer. Elements of \(\mathbb{N}^{r}\) will be called \(r\)-dimensional multi-indices. Sums and differences of multi-indices (of the same dimension) are meant to be componentwise, i.e., if \(\alpha,\beta\in\mathbb{N}^{r}\), then
\[\alpha\pm\beta=(\alpha_{1}\pm\beta_{1},\ldots,\alpha_{r}\pm\beta_{r})\]
Further, if \(\alpha,\beta\in\mathbb{N}^{r}\), then we write \(\alpha\leq\beta\) if for all \(i=1,\ldots,r\) we have \(\alpha_{i}\leq\beta_{i}\), where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\) and \(\beta=(\beta_{1},\ldots,\beta_{r})\). If for the multi-indices \(\alpha,\beta\in\mathbb{N}^{r}\) we have \(\alpha\leq\beta\) and \(\alpha\neq\beta\), we will write \(\alpha<\beta\). By the height of a multi-index \(\alpha\in\mathbb{N}^{r}\) we understand \(|\alpha|=\sum_{i=1}^{r}\alpha_{i}\), where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Finally, we will also use the notion of factorial and binomial coefficients in this multi-index setting. If \(\alpha,\beta\in\mathbb{N}^{r}\), then
\[\alpha!=\alpha_{1}!\cdots\alpha_{r}!\]
and
\[\binom{\alpha}{\beta}=\binom{\alpha_{1}}{\beta_{1}}\cdots\binom{\alpha_{r}}{ \beta_{r}}=\frac{\alpha!}{\beta!(\alpha-\beta)!},\]
where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\) and \(\beta=(\beta_{1},\ldots,\beta_{r})\).
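For instance, if \(r=2\), \(\alpha=(2,1)\) and \(\beta=(1,1)\), then

\[|\alpha|=3,\qquad \alpha!=2!\cdot 1!=2,\qquad \binom{\alpha}{\beta}=\binom{2}{1}\binom{1}{1}=2.\]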
Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain (i.e. a nonempty, open and connected set). For all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), let us consider the partial differential operator \(D^{\alpha}\) defined by
\[D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots \partial x_{r}^{\alpha_{r}}},\]
where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Here by definition we mean \(D^{0}=\mathrm{id}\). Let further
\[\mathcal{C}^{N}(\Omega)=\{f\colon\Omega\to\mathbb{R}\,|\,f\text{ is $N$ times continuously differentiable}\}\,.\]
An easy computation shows that if \(f,g\in\mathcal{C}^{N}(\Omega)\) and \(\alpha\in\mathbb{N}^{r},|\alpha|\leq N\), then we have
\[D^{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}D^{\beta}(f) \cdot D^{\alpha-\beta}(g). \tag{1}\]
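As a concrete illustration of identity (1) (our own check, not part of the original text), the following SymPy snippet verifies the multivariate Leibniz rule symbolically for \(r=2\) and the multi-index \(\alpha=(2,1)\) on two smooth test functions.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)          # a smooth test function
g = x**3 + x * y**2                # another smooth test function
alpha = (2, 1)                     # the multi-index alpha = (2, 1)

def D(h, beta):
    """Apply the partial differential operator D^beta to h."""
    return sp.diff(h, x, beta[0], y, beta[1])

lhs = D(f * g, alpha)
rhs = sum(
    sp.binomial(alpha[0], b0) * sp.binomial(alpha[1], b1)
    * D(f, (b0, b1)) * D(g, (alpha[0] - b0, alpha[1] - b1))
    for b0 in range(alpha[0] + 1)
    for b1 in range(alpha[1] + 1)
)
print(sp.simplify(lhs - rhs))      # prints 0, as identity (1) predicts
```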
The main aim of this paper concerns the converse of this identity in some sense. More precisely, we will study the solutions \(T_{\alpha}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) of the operator equation
\[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)T _{\alpha-\beta}(g)\qquad\big{(}f,g\in\mathcal{C}^{N}(\Omega)\big{)}\]
for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\).
Equations analogous to (1) have an important role not only in connection to characterization theorems related to differential operators but also in harmonic and spectral analysis.
In the following, we will use the notations and the terminology of the monographs Szekelyhidi [7, 8] and while considering moment sequences of higher rank, the terminology of [2].
Let \((G,\cdot)\) be an Abelian semigroup. A nonzero function \(m\colon G\to\mathbb{C}\) is called _exponential_, if
\[m(x\cdot y)=m(x)m(y)\]
holds for all \(x,y\) in \(G\). Let \(N\) be a nonnegative integer. A function \(\varphi\colon G\to\mathbb{C}\) is termed to be a _moment function of order \(N\)_, if there exist functions \(\varphi_{k}\colon G\to\mathbb{C}\) such that \(\varphi_{0}=1\), \(\varphi_{N}=\varphi\) and
\[\varphi_{k}(x\cdot y)=\sum_{j=0}^{k}\binom{k}{j}\varphi_{j}(x)\varphi_{k-j}(y) \tag{2}\]
for all \(x\) and \(y\) in \(G\) and \(k=0,1,\ldots,N\). If \(G\) is a monoid with the neutral element \(1\), then this concept can be extended by relaxing the assumption \(\varphi_{0}\equiv 1\) to \(\varphi_{0}(1)=1\). In this case, \(\varphi_{0}\) is an arbitrary exponential function and we say that \(\varphi_{0}\)_generates the generalized moment sequence of order \(N\)_ and the function \(\varphi_{k}\) is a _generalized moment function of order \(k\)_, or, if we want to specify the exponential \(\varphi_{0}\), then we say that \(\varphi_{k}\) is a _generalized moment function of order \(k\) associated with the exponential \(\varphi_{0}\)_.
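A simple example (ours, for illustration only): on the additive monoid \((\mathbb{R},+)\), the functions \(\varphi_{k}(x)=x^{k}e^{\lambda x}\) form a generalized moment sequence associated with the exponential \(m(x)=e^{\lambda x}\); for \(\lambda=0\) one recovers the ordinary moment sequence \(\varphi_{k}(x)=x^{k}\) with \(\varphi_{0}\equiv 1\). The short SymPy sketch below verifies identity (2) symbolically for \(k\leq 3\).

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')

def phi(k, t):
    # Candidate moment functions on (R, +): phi_k(t) = t^k * exp(lambda * t)
    return t**k * sp.exp(lam * t)

for k in range(4):
    lhs = phi(k, x + y)
    rhs = sum(sp.binomial(k, j) * phi(j, x) * phi(k - j, y) for j in range(k + 1))
    assert sp.simplify(sp.expand(lhs - rhs)) == 0

print("phi_k(x) = x^k * exp(lambda*x) satisfies (2) for k = 0, ..., 3")
```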
**Definition 1**.: Let \(G\) be an Abelian semigroup, \(r\) a positive integer, and for each multi-index \(\alpha\) in \(\mathbb{N}^{r}\) let \(f_{\alpha}\colon G\to\mathbb{C}\) be a function. We say that \((f_{\alpha})_{\alpha\in\mathbb{N}^{r}}\) is a _generalized moment sequence of rank \(r\)_, if
\[f_{\alpha}(x\cdot y)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}f_{\beta}(x)f_ {\alpha-\beta}(y) \tag{3}\]
holds whenever \(x,y\) are in \(G\). The function \(f_{\mathbf{0}}\), where \(\mathbf{0}\) is the zero element in \(\mathbb{N}^{r}\), is called the _generating function_ of the sequence.
_Remark 1_.: For \(r=1\), instead of multi-indices, we have nonnegative integer indices. Thus generalized moment functions of rank \(1\) are simply moment sequences.
_Remark 2_.: Assume now that \((G,\cdot)\) is an Abelian group (not only a semigroup). For \(\alpha=\mathbf{0}\) we have
\[f_{\mathbf{0}}(x\cdot y)=f_{\mathbf{0}}(x)\cdot f_{\mathbf{0}}(y)\]
for each \(x,y\) in \(G\), hence \(f_{\mathbf{0}}=m\) is always an exponential, or identically zero. It can be proved by induction on the height of the multi-index \(\alpha\in\mathbb{N}^{r}\) that if \(f_{\mathbf{0}}\) is the identically zero function, then for every multi-index \(\alpha\), the mapping \(f_{\alpha}\) must be identically zero, too.
In a rather natural way, the above notions can be extended from complex-valued mappings to mappings whose range is a (commutative) ring. Indeed, if \((G,\cdot)\) is an Abelian semigroup and \(Q\) is a commutative ring, \(r\) is a positive integer, and \(\alpha\in\mathbb{N}^{r}\) is a multi-index, then a function \(f\colon G\to Q\) is a generalized moment function of rank \(r\) and of order \(N\), where \(N=|\alpha|\), if for all multi-indices \(\beta\in\mathbb{N}^{r}\) with \(|\beta|\leq N\), there exists a function \(f_{\beta}\colon G\to Q\) such that \(f=f_{\alpha}\) and we have
\[f_{\beta}(x\cdot y)=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}f_{\gamma}(x) f_{\beta-\gamma}(y) \tag{4}\]
holds whenever \(x,y\in G\) and for all multi-indices \(\beta\in\mathbb{N}^{r}\) with \(|\beta|\leq N\).
_Remark 3_.: In view of the above definition, if \(N\geq 1\) and we consider \(\mathcal{C}^{N}(\Omega)\) with the pointwise product of functions (so that it becomes an Abelian semigroup) and take \(\mathcal{C}(\Omega)\) to be the range, then the sequence of mappings \((D^{\alpha})_{|\alpha|\leq N}\) forms a moment sequence of rank \(r\).
## 2 Characterizations of higher order differential operators
The main aim of this paper is to investigate the following problem: Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain (i.e. a nonempty, open and connected set). Determine the mappings \(T_{\alpha}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r},|\alpha|\leq N\) if they fulfill
\[T_{\beta}(f\cdot g)=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}T_{\gamma}(f) T_{\beta-\gamma}(g) \tag{5}\]
for all \(f,g\in\mathcal{C}^{N}(\Omega)\) and for all multi-indices \(\beta\in\mathbb{N}^{r}\), \(|\beta|\leq N\).
Observe that if \(\beta=\mathbf{0}=(0,\ldots,0)\), then the above identity becomes
\[T_{\mathbf{0}}(f\cdot g)=T_{\mathbf{0}}(f)\cdot T_{\mathbf{0}}(g)\qquad\left( f,g\in\mathcal{C}^{N}(\Omega)\right).\]
This means, that similarly to the group case, the first element of the sequence, i.e. \(T_{\mathbf{0}}\) is an 'exponential'.
Recall again that if \((G,\cdot)\) is an Abelian _group_, then a nonzero function \(m\colon G\to\mathbb{C}\) is an exponential, if
\[m(x\cdot y)=m(x)m(y)\]
holds for all \(x,y\) in \(G\). In the case of this concept, the fact that the range of \(m\) is the set of complex numbers plays a key role. Indeed, if \(m\) is an exponential on the Abelian group \(G\), then either \(m\) is identically zero, or nowhere zero. At the same time, as we will see below, analogous statements are not true for mappings \(T\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\).
The study of multiplicative maps between function spaces has a quite extensive literature. Here we quote only two results, but the interested reader can consult e.g. Artstein-Avidan-Faifman-Milman [1], Milgram [4], Mrcun [5] and Mrcun-Semrl [6].
A result from [6] concerning _bijective_ multiplicative mappings between the function spaces \(\mathcal{C}(X)\) and \(\mathcal{C}(Y)\) says that if we are given compact Hausdorff spaces \(X\) and \(Y\), \(\tau\colon\,Y\to X\) is a homeomorphism and \(p\in\mathcal{C}(Y)\) is a positive function, then the mapping \(\mathcal{M}\colon\,\mathcal{C}(X)\to\mathcal{C}(Y)\) defined by
\[\mathcal{M}(f)(x)=|f(\tau(x))|^{p(x)}\cdot\mathrm{sgn}\left(f(\tau(x)) \right)\qquad(x\in Y,f\in\mathcal{C}(X))\]
is a bijective and multiplicative map, i.e. we have
\[\mathcal{M}(f\cdot g)(x)=\mathcal{M}(f)(x)\cdot\mathcal{M}(g)(x)\]
for all \(x\in Y\) and \(f,g\in\mathcal{C}(X)\).
In view of this, if \(K\subset\mathbb{R}^{r}\) is a _compact_ set and \(\tau\colon\,K\to K\) is a homeomorphism, then the mapping \(\mathcal{M}\colon\,\mathcal{C}(K)\to\mathcal{C}(K)\) defined by
\[\mathcal{M}(f)(x)=|f(\tau(x))|^{p(x)}\cdot\mathrm{sgn}\left(f(\tau(x))\right) \qquad(x\in K,f\in\mathcal{C}(K))\]
is a bijective and multiplicative map. Firstly observe that this is only one direction and not an 'if and only if' statement. Further, in general, we intend to work on a _domain_\(\Omega\subset\mathbb{R}^{r}\) and we cannot a priori assume that the mapping in question is _bijective_.
A corollary of a result from Mrcun [5] describes _bijective_ multiplicative self-mappings of \(\mathcal{C}^{N}(\Omega)\), where \(N\) is a fixed positive integer. Let \(N\) and \(r\) be positive integers and \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathcal{C}^{N}\)-manifold. Then for any multiplicative bijection \(\mathcal{M}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}^{N}(\Omega)\) there exists a unique \(\mathcal{C}^{N}\)-diffeomorphism \(\tau\colon\,\Omega\to\Omega\) such that
\[\mathcal{M}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}( \Omega)\right)\]
holds.
In the cases we intend to study, unfortunately, the range of the mappings is not \(\mathcal{C}^{N}(\Omega)\), but the much larger function space \(\mathcal{C}(\Omega)\). In addition, in general, it cannot be guaranteed that the mapping \(T_{\mathbf{0}}\) is bijective. However, without the assumption of bijectivity, we cannot expect to be able to describe the multiplicative mappings in these spaces. Thus, in this paper, we will determine the moment functions of the spaces in question in the case of some important multiplicative mappings.
### A non-bijective case
Let \(r\) be a positive integer, \(N\) be a nonnegative integer, and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Then the mapping \(T_{\mathbf{0}}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) defined by
\[T_{\mathbf{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right)\]
is multiplicative (and non-bijective). Therefore, it can be suitable to generate a moment sequence. As we will see, this mapping generates a fairly trivial moment sequence.
**Theorem 1**.: _Let \(r\) be a positive integer, \(N\) be a nonnegative integer, and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Assume further that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leqslant N\), we are given a mapping \(T_{\alpha}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) such that_
\[T_{\mathbf{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right)\]
_and \((T_{\alpha})_{|\alpha|\leqslant N}\) forms a moment sequence of rank \(r\) and of order \(N\). Then for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(\alpha\neq\mathbf{0}\) and \(|\alpha|\leqslant N\) we have_
\[T_{\alpha}(f)(x)=0\]
_for all \(x\in\Omega\) and \(f\in\mathcal{C}^{N}(\Omega)\)._
Proof.: We prove the statement by induction on the height of the multi-index \(\alpha\in\mathbb{N}^{r}\). Accordingly, let \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index with \(|\alpha|=1\). Then
\[T_{\alpha}(f\cdot g)=T_{\boldsymbol{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{ \boldsymbol{0}}(g)\]
holds for all \(f,g\in\mathcal{C}^{N}(\Omega)\). Since
\[T_{\boldsymbol{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega) \right),\]
this means that
\[T_{\alpha}(f\cdot g)=T_{\alpha}(f)+T_{\alpha}(g)\]
for all \(f,g\in\mathcal{C}^{N}(\Omega)\). Choosing \(f\) and \(g\) to be the identically zero function on \(\Omega\), we get that
\[T_{\alpha}(0)=2T_{\alpha}(0),\]
so \(T_{\alpha}(0)=0\). This however yields that
\[T_{\alpha}(f\cdot 0)=T_{\alpha}(f)+T_{\alpha}(0)\]
for all \(f\in\mathcal{C}^{N}(\Omega)\). Since \(f\cdot 0\) is the identically zero function, the left-hand side equals \(T_{\alpha}(0)=0\), and thus
\[T_{\alpha}(f)(x)=0\]
for all \(f\in\mathcal{C}^{N}(\Omega)\) and \(x\in\Omega\).
Let now \(k\in\{1,\ldots,N-1\}\) be arbitrary, and suppose that
\[T_{\beta}(f)(x)=0\qquad\left(f\in\mathcal{C}^{N}(\Omega),x\in\Omega\right)\]
holds for all multi-indices \(\beta\) with \(|\beta|\leq k\). Let further \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index with \(|\alpha|=k+1\). Then
\[T_{\alpha}(f\cdot g) =\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_ {\alpha-\beta}(g)\] \[=T_{\boldsymbol{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{\boldsymbol{ 0}}(g)+\sum_{0<\beta<\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha- \beta}(g)\] \[=T_{\alpha}(f)+T_{\alpha}(g)\]
holds for all \(f,g\in\mathcal{C}^{N}(\Omega)\), where the middle sum vanishes by the induction hypothesis. This is exactly the same equation that we solved above. Thus
\[T_{\alpha}(f)(x)=0\]
for all \(f\in\mathcal{C}^{N}(\Omega)\) and \(x\in\Omega\).
### A bijective case
Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathcal{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathcal{C}^{N}\)-diffeomorphism. Define \(\tilde{T}_{\boldsymbol{0}}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) through
\[\tilde{T}_{\boldsymbol{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in \mathcal{C}^{N}(\Omega)\right).\]
Then \(\tilde{T}_{\boldsymbol{0}}\) is a multiplicative mapping. Thus it can be an appropriate candidate to generate a moment sequence on \(\mathcal{C}^{N}(\Omega)\).
**Lemma 1**.: _Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathscr{C}^{N}\)-diffeomorphism. Further, let us consider the mappings \(T_{\mathbf{0}},\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C }(\Omega)\) defined by_
\[T_{\mathbf{0}}(f)(x)=f(x)\qquad\text{and}\qquad\tilde{T}_{\mathbf{0}}(f)(x)=f( \tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^{N}(\Omega)\right),\]
_respectively. Then the following statements are equivalent:_
1. _the sequence of mappings_ \(T_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\)_,_ \(\alpha\in\mathbb{N}^{r}\)_,_ \(|\alpha|\leq N\) _is a moment sequence generated by_ \(T_{\mathbf{0}}\)__
2. _the sequence of mappings_ \(\tilde{T}_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\)_,_ \(\alpha\in\mathbb{N}^{r}\)_,_ \(|\alpha|\leq N\) _is a moment sequence generated by_ \(\tilde{T}_{\mathbf{0}}\)__
Proof.: Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathscr{C}^{N}\)-diffeomorphism. Further, let us consider the mappings \(T_{\mathbf{0}},\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) defined by
\[T_{\mathbf{0}}(f)(x)=f(x)\qquad\text{and}\qquad\tilde{T}_{\mathbf{0}}(f)(x)=f( \tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^{N}(\Omega)\right),\]
respectively.
To prove the direction (i)\(\Rightarrow\)(ii), assume that the sequence of mappings \(T_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) is a moment sequence generated by \(T_{\mathbf{0}}\). This means that for all \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) we have
\[T_{\alpha}(f\cdot g)(x)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}( f)(x)\cdot T_{\alpha-\beta}(g)(x)\]
for all \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(x\in\Omega\). Thus we also have
\[T_{\alpha}(f\cdot g)(\tau(x))=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{ \beta}(f)(\tau(x))\cdot T_{\alpha-\beta}(g)(\tau(x))\qquad\left(f,g\in\mathscr{ C}^{N}(\Omega),x\in\Omega\right).\]
For all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), define the mapping \(\tilde{T}_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) by
\[\tilde{T}_{\alpha}(f)(x)=T_{\alpha}(f)(\tau(x))\qquad\left(f\in\mathscr{C}^{N }(\Omega),x\in\Omega\right)\]
to deduce that
\[\tilde{T}_{\alpha}(f\cdot g)(x)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta} \tilde{T}_{\beta}(f)(x)\cdot\tilde{T}_{\alpha-\beta}(g)(x)\]
for all \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(x\in\Omega\). Thus the sequence of mappings \((\tilde{T}_{\alpha})_{|\alpha|\leq N}\) is a moment sequence of rank \(r\) generated by \(\tilde{T}_{0}\).
The proof of the implication (ii)\(\Rightarrow\)(i) is analogous. It is enough to consider a point \(x=\tau(y)\) with arbitrary \(y\in\Omega\) and use the fact that \(\tau\) is a diffeomorphism.
As we saw above, if \(r\) and \(N\) are positive integers, \(\Omega\subset\mathbb{R}^{r}\) is a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) is a \(\mathscr{C}^{N}\)-diffeomorphism, then the mapping \(\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) defined by
\[\tilde{T}_{\mathbf{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^ {N}(\Omega)\right),\]
is a multiplicative mapping. Thus it can be an appropriate candidate to generate a moment sequence on \(\mathscr{C}^{N}(\Omega)\). Nevertheless, the previous lemma says that instead of multiplicative mappings of this form, it suffices to consider the identity mapping. Accordingly, below we will describe moment sequences generated by the identity mapping. Further, observe that while describing the solutions of equation (5), not only the generator, i.e., the operator \(T_{\mathbf{0}}\), but also the domain \(\mathscr{C}^{N}(\Omega)\) can play a crucial role. In the second part of this section, we focus on the largest possible domain, that is, we will work on \(\mathscr{C}(\Omega)\).
During the proof of Theorem 2 we will use a corollary of [3, Theorem 3.5] and also [3, Theorem 7.1] which are the following statements. Before stating these results, however, we need two more notions from the theory of operator relations.
**Definition 2**.: Let \(k\) be a nonnegative integer, \(r\) be a positive integer and \(\Omega\subset\mathbb{R}^{r}\) be an open set. An operator \(A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\) is _non-degenerate_ if for each nonvoid open subset \(U\subset\Omega\) and all \(x\in U\), there exist functions \(g_{1},g_{2}\in\mathscr{C}^{k}(\Omega)\) with supports in \(U\) such that the vectors \((g_{i}(x),Ag_{i}(x))\in\mathbb{R}^{2}\), \(i=1,2\) are linearly independent in \(\mathbb{R}^{2}\).
**Definition 3**.: Let \(k\) and \(r\) be positive integers with \(k\geq 2\) and \(\Omega\subset\mathbb{R}^{r}\) be an open set. We say that the operator \(A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\)_depends non-trivially on the derivative_ if there exists \(x_{0}\in\Omega\) and there are functions \(f_{1},f_{2}\in\mathscr{C}^{k}(\Omega)\) such that
\[f_{1}(x_{0})=f_{2}(x_{0})\quad\text{and}\quad Af_{1}(x_{0})\neq Af_{2}(x_{0})\]
holds.
**Proposition 1**.: _Let \(r\) be a positive integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Suppose that the operator \(T\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\) satisfies the Leibniz rule, i.e.,_
\[T(f\cdot g)=f\cdot T(g)+T(f)\cdot g\qquad\left(f,g\in\mathscr{C}(\Omega) \right).\]
_Then there exists a function \(c\in\mathscr{C}(\Omega)\) such that for all \(f\in\mathscr{C}(\Omega)\) and \(x\in\Omega\)_
\[T(f)(x)=c(x)\cdot f(x)\cdot\ln\left(\left|f(x)\right|\right).\]
_Conversely, any such map \(T\) satisfies the Leibniz rule._
**Proposition 2**.: _Let \(r\) be a positive integer, \(k\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Assume that \(T,A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\) satisfy_
\[T(f\cdot g)=T(f)\cdot g+f\cdot T(g)+2A(f)\cdot A(g)\qquad\left(f,g\in\mathscr{C }^{k}(\Omega)\right)\]
_and that in case \(k\geq 2\) the mapping \(A\) is non-degenerate and depends non-trivially on the derivative. Then there are continuous functions \(a\colon\Omega\to\mathbb{R}\) and \(b,c\colon\Omega\to\mathbb{R}^{r}\) such that we have_
\[T(f)(x) = \left\langle f^{\prime\prime}(x)c(x),c(x)\right\rangle+R(f)(x) \qquad\left(f\in\mathscr{C}^{k}(\Omega),x\in\Omega\right),\] \[A(f)(x) = \left\langle f^{\prime}(x),c(x)\right\rangle\]
_where_
\[R(f)(x)=\left\langle f^{\prime}(x),b(x)\right\rangle+a(x)f(x)\ln\left(\left|f (x)\right|\right)\qquad\left(f\in\mathscr{C}^{k}(\Omega)\right).\]
_If \(k=1\), then necessarily \(c\equiv 0\). Further, if \(k=0\), then necessarily \(b\equiv 0\) and \(c\equiv 0\)._
_Conversely, these operators satisfy the above second-order Leibniz rule._
Our main result for operators defined on \(\mathscr{C}(\Omega)\) is the following theorem.
**Theorem 2**.: _Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a domain and assume that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), with \(|\alpha|\leq N\) we are given a mapping \(T_{\alpha}\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\) such that \(T_{\mathbf{0}}\) is the identity mapping and for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(0\neq|\alpha|\leq N\) we have_
\[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f) \cdot T_{\alpha-\beta}(g) \tag{6}\]
_for all \(f,g\in\mathscr{C}(\Omega)\). Then there exists a family of functions \(\{c_{\alpha}\in\mathscr{C}(\Omega):0\neq|\alpha|\leq N\}\) such that_
\[\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}(x)\cdot c_{\alpha-\beta}(x)=0\qquad(x\in\Omega,\ 0\neq|\alpha|\leq N) \tag{7}\]
_and_
\[T_{\alpha}(f)(x)=c_{\alpha}(x)f(x)\ln(|f(x)|)\qquad(x\in\Omega,\ f\in\mathscr{C}(\Omega),\ 0\neq|\alpha|\leq N)\,. \tag{8}\]
_And also conversely, if \(T_{\mathbf{0}}\) is the identity mapping on \(\mathcal{C}(\Omega)\), we are given a family of functions that satisfies (7) and we define the mappings \(T_{\alpha}\) on \(\mathcal{C}(\Omega)\) by the formula (8), then they satisfy equation (6) for all multi-indices \(\alpha\) such that \(0\neq|\alpha|\leq N\)._
Proof.: Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a domain and assume that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), with \(|\alpha|\leq N\) we are given a mapping \(T_{\alpha}\colon\mathcal{C}(\Omega)\to\mathcal{C}(\Omega)\) such that \(T_{\mathbf{0}}\) is the identity mapping and for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(0\neq|\alpha|\leq N\) we have
\[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f) \cdot T_{\alpha-\beta}(g)\]
for all \(f,g\in\mathcal{C}(\Omega)\).
We prove the statement by induction on the multi-index \(\alpha\).
Let \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index for which \(|\alpha|=1\) holds. Then
\[T_{\alpha}(f\cdot g)=T_{\mathbf{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{\mathbf{0 }}(g)=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g\qquad(f,g\in\mathcal{C}(\Omega ))\,,\]
since \(T_{\mathbf{0}}=\operatorname{id}\) was assumed. Using Proposition 1, we obtain that there exists a continuous function \(c_{\alpha}\in\mathcal{C}(\Omega)\) such that
\[T_{\alpha}(f)(x)=c_{\alpha}(x)f(x)\ln(|f(x)|)\qquad(x\in\Omega,f\in\mathcal{C }(\Omega))\,.\]
Let now \(k\in\{1,\ldots,N-1\}\) be arbitrary and assume that the statement of the theorem holds for all multi-indices \(\beta\in\mathbb{N}^{r}\) for which we have \(|\beta|\leq k\). Let further \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index for which \(|\alpha|=k+1\). Then
\[T_{\alpha}(f\cdot g) =\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=T_{\mathbf{0}}(f)\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot T_{\mathbf{0}}(g)+\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g+\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}f\ln(|f|)\cdot c_{\alpha-\beta}g\ln(|g|)\] \[=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g+\left[\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}\cdot c_{\alpha-\beta}\right]\cdot f\ln(|f|)\cdot g\ln(|g|)\]
holds for all \(f,g\in\mathcal{C}(\Omega)\). Using Proposition 2, taking into account that \(k=0\), we obtain that there exists a continuous function \(c_{\alpha}\) such that
\[T_{\alpha}(f)(x)=c_{\alpha}(x)\cdot f(x)\cdot\ln(|f(x)|)\]
is fulfilled for all \(f\in\mathcal{C}(\Omega)\) and \(x\in\Omega\). Further, the family of functions \(\{c_{\alpha}\in\mathcal{C}(\Omega):0\neq|\alpha|\leq N\}\) necessarily satisfies (7).
The converse implication is an easy computation.
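To make the converse tangible, here is a small numerical sanity check of Theorem 2 (our own sketch, not part of the proof), with \(r=1\) and \(N=2\): condition (7) then forces \(c_{1}\equiv 0\), and the operators defined by (8) satisfy (6).

```python
import numpy as np
from math import comb

# Illustrative choice (r = 1, N = 2): condition (7) for alpha = 2 reads
# 2 * c1(x)^2 = 0, so c1 must vanish identically; c2 may be arbitrary.
c1 = lambda x: 0.0 * x
c2 = lambda x: np.cos(x)

def T(alpha, f):
    if alpha == 0:
        return f                                    # T_0 is the identity mapping
    c = c1 if alpha == 1 else c2
    return lambda x: c(x) * f(x) * np.log(np.abs(f(x)))

f = lambda x: np.exp(x) + 2.0                       # smooth and nonvanishing
g = lambda x: x**2 + 1.0
fg = lambda x: f(x) * g(x)

x = np.linspace(-2.0, 2.0, 7)
for alpha in (1, 2):
    lhs = T(alpha, fg)(x)
    rhs = sum(comb(alpha, b) * T(b, f)(x) * T(alpha - b, g)(x)
              for b in range(alpha + 1))
    assert np.allclose(lhs, rhs)
print("T_alpha(f) = c_alpha * f * ln|f| satisfies (6) for alpha = 1, 2")
```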
As we saw in the previous theorem, the moment sequences are quite poor on the \(\mathcal{C}(\Omega)\) space. We note that if \(N\geqslant 1\), then there are substantially more diverse moment sequences in the space \(\mathcal{C}^{N}(\Omega)\), see Remark 3. However, this will be dealt with in a future work.
_Acknowledgment_.: The research of Eszter Gselmann has been supported by project no. K134191 that has been implemented with the support provided by the National Research, Development and Innovation Fund of Hungary, financed under the K_20 funding scheme.
The work of Aleksandra Swiatczak is implemented under the project "Curriculum for advanced doctoral education & training - CADET Academy of TUL" co-financed by the STER Programme - Internationalization of doctoral schools.
This article has been completed while one of the authors (Aleksandra Swiatczak), was the Doctoral Candidate in the Interdisciplinary Doctoral School at the Lodz University of Technology, Poland.
| Let $r$ be a positive integer, $N$ a nonnegative integer, and $\Omega \subset \mathbb{R}^{r}$ a domain in real $r$-dimensional space. Further, for all multi-indices $\alpha \in \mathbb{N}^{r}$ with $|\alpha| \leq N$, define the partial differential operator $D^{\alpha}$ by
\[D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{r}^{\alpha_{r}}},\]
where $\alpha = (\alpha_{1}, \ldots, \alpha_{r})$. Here we define $D^{0} \equiv \mathrm{id}$. It is easily derived from this definition that, for $f, g \in \mathscr{C}^{N}(\Omega)$ and $\alpha \in \mathbb{N}^{r}$ |
2301.00659 | On partial monotonicity of some extropy measures | Gupta and Chaudhary [14] introduced general weighted extropy and studied
related properties. In this paper, we study conditional extropy and define the
monotonic behaviour of conditional extropy. Also, we obtain results on the
convolution of general weighted extropy. | Nitin Gupta, Santosh Kumar Chaudhary | 2022-11-29T23:15:58 | http://arxiv.org/abs/2301.00659v1 | ###### Abstract
Gupta and Chaudhary [14] introduced general weighted extropy and studied related properties. In this paper, we study conditional extropy and define the monotonic behaviour of conditional extropy. Also, we obtain results on the convolution of general weighted extropy.
**Key Words**: _Entropy, Extropy, Log-concavity, Log-convexity, Partial monotonicity._
**Mathematical Subject Classification**: _94A17; 62N05; 60E15._
**On partial monotonicity of some extropy measures**
**Nitin Gupta* and Santosh Kumar Chaudhary**
**Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India**
**[email protected]**
**Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India**
**[email protected]**
**Corresponding author E-mail: [email protected]**
## 1 Introduction
In the technological age we live in, technology is a part of almost everything. In the field of computer science, the most well-known technology for allowing a computer to automatically learn from the past is called machine learning. Entropy and extropy in machine learning are two of the many techniques and concepts that are being used to solve complex problems easily. Further, entropy and extropy are also useful in the fields of information theory, physics, probability and statistics, computer science, economics, communication theory etc (see Balakrishnan et al. [3], Becerra et al. [7], Kazemi et al. [17], Sati and Gupta [25], Tahmasebi and Toomaj [29], Tuli [32]).
Shannon [27] introduced the notion of information entropy, which measures the average amount of uncertainty about an occurrence associated with a certain probability distribution. Let \(Y\) be a discrete random variable having probability mass function \(p_{i},\ i=1,2,\ldots,N\). The discrete version of Shannon entropy is given by
\[H(Y)=-\sum_{i=1}^{N}p_{i}\log(p_{i}).\]
For a continuous random variable \(Y\) with probability density function \(g_{Y}(y)\), the generalized (Renyi) entropy of order \(\theta\), for \(\theta>0\), \(\theta\neq 1\), is given by
\[H_{\theta}(Y)=\frac{1}{1-\theta}\log\left(\int_{-\infty}^{\infty}(g_{Y}(y))^{\theta}dy\right).\]
Tsallis [31] defined the generalized entropy for \(\theta>0\), \(\theta\neq 1\), which is given by
\[S_{\theta}(Y)=\frac{1}{\theta-1}\left(1-\int_{-\infty}^{\infty}(g_{Y}(y))^{ \theta}dy\right).\]
Kapur [18] gave the Kapur entropy of order \(\theta\) and type \(\lambda\), for \(\theta\neq\lambda,\ \theta>0,\,\lambda>0\), which is given by
\[H_{\theta,\lambda}(Y)=\frac{1}{\lambda-\theta}\left[\log\left(\int_{-\infty} ^{\infty}(g_{Y}(y))^{\theta}dy\right)-\log\left(\int_{-\infty}^{\infty}(g_{Y}( y))^{\lambda}dy\right)\right].\]
Varma [33] defined a generalized entropy of order \(\theta\) and type \(\lambda\), for \(\lambda-1<\theta<\lambda,\ \ \lambda\geq 1\), which is given by
\[H_{\theta}^{\lambda}(Y)=\frac{1}{\lambda-\theta}\log\left(\int_{-\infty}^{ \infty}\left(g_{Y}(y)\right)^{\theta+\lambda-1}dy\right).\]
The conditional Shannon entropy of \(Y\) given \(S\), where \(S=\{c<Y<d\}\), is given by
\[H\left(Y|S\right)=-\int_{c}^{d}g_{Y|S}(y)\log\left(g_{Y|S}(y)\right)dy,\]
where
\[g_{Y|S}(y)=\frac{g_{Y}(y)}{G_{Y}(d)-G_{Y}(c)}\,,\ \ c<y<d.\]
One may refer to Sunoj et al. [28] for a review of conditional Shannon entropy (\(H\left(Y|S\right)\)). Convolution and monotonic behaviour of the conditional Shannon entropy, Renyi entropy, Tsallis entropy, Kapur's and Varma's entropies have been studied in the literature (see Chen et al. [9], Gupta and Bajaj [13], Sati and Gupta [25] and Shangari and Chen [26]). Bansal and Gupta [6] studied the monotonicity properties of conditional cumulative past extropy \(\xi(Y|S)\) and convolution results for conditional extropy \(J(Y|S)\). In this paper, we study the monotonicity of conditional extropy and convolution results for general weighted extropy.
As described in Chen et al. [9] and Shangari and Chen [26], \(H\left(Y|S\right)\) may serve as an indicator of uncertainty for an interval \(S\). The measure of uncertainty shrinks/expands as the interval providing the information about the outcome shrinks/expands. For intervals \(S_{1}\) and \(S_{2}\) such that \(S_{2}\subseteq S_{1}\), the entropy \(H\) is partially increasing (decreasing) if \(H(Y|Y\in S_{2})\leq(\geq)H(Y|Y\in S_{1})\). Under the condition that \(G_{Y}\left(y\right)\) is a log-concave function (for more on log-concave probability and its applications, see Bagnoli and Bergstrom [2]), Shangari and Chen [26] proved that \(H(Y|Y\in S)\) is a partially increasing function in the interval \(S\). Under the same condition, they also proved that the conditional Renyi entropy \(H_{\theta}(Y|S)\) of \(Y\) given \(S=(c,d)\) is a partially increasing function in the interval \(S\) for \(\theta\geq 0\), \(\theta\neq 1\). Under the condition that \(G_{Y}\left(y\right)\) is concave, Gupta and Bajaj [13] proved that the conditional Kapur entropy \(H_{\theta,\lambda}(Y|S)\) of \(Y\) given \(S=(c,d)\) is a partially increasing function in the interval \(S\). They also showed that if \(G_{Y}\left(y\right)\) is a log-concave function then the conditional Tsallis entropy \(S_{\theta}(Y|S)\) of \(Y\) given \(S\) is a partially increasing function in the interval \(S\), where \(S=(c,d)\). Sati and Gupta [25] studied the monotonic behaviour of the conditional Varma entropy \(H_{\theta}^{\lambda}(Y|S)\). Under the condition that \(G_{Y}(y)\) is a log-concave function and \(\theta+\lambda>(<)2\), they showed that the \(H_{\theta}^{\lambda}(Y|S)\) is partially decreasing (increasing) in \(S=(c,d)\).
Ash [1], Cover and Thomas [10] and Yeung [34] provide an excellent review of detailed properties which play an important role in information theory. Bansal and Gupta [6] proposed a new conditional extropy measure which is based on cumulative past extropy and defined the monotonic behaviour of \(\xi(Y|S)\). They proved that if \(\int_{c}^{y}G_{Y}(u)du\) is a log-concave function then the conditional cumulative past extropy of \(Y\) given \(S\), i.e. \(\xi(Y|S)\), is increasing in \(d\), where \(S=(c,d)\).
Entropies are significant in the study of likelihood bases, inference principles, and large deviation theory because of their relevance in these fields. Shannon, Renyi, Tsallis and Varma entropies have operational meaning in terms of data compression. They also find a role as a measure of complexity and uncertainty in different areas such as coding theory, computer science, electronics and physics. For more details, one may refer to Cover and Thomas [10].
Extropy, an alternative measure of uncertainty, was defined by Lad et al. [12] and proved to be the complement dual of the Shannon entropy. Entropy and extropy measures may be thought of as the positive and negative images of a photographic film, related to each other. Extropy and its generalisations have numerous applications in the literature, including information theory, economics, communication theory, computer science, and physics (see Balakrishnan et al. [3], Becerra et al. [7], Kazemi et al. [17], Tahmasebi and Toomaj [30]). Becerra et al. [7] used extropy in speech recognition; it can also be used to score the forecasting distribution. Balakrishnan et al. [3] used Tsallis extropy in pattern recognition. Based on a generalisation of extropy known as negative cumulative extropy, Tahmasebi and Toomaj [29] investigated the stock market in OECD nations. The fractional Deng extropy, a generalisation of extropy, was investigated by Kazemi et al. [17] in relation to a classification problem. To solve the compressive sensing problem, Tahmasebi et al. [30] applied certain extropy measures. Extropy provides some conceptual advantages over entropy in particular circumstances, despite the mathematical similarities between entropy and extropy. Extropy of a discrete random variable \(Y\) with probability mass function \(p_{i}\) for \(i=1,2,\ldots,N\) is defined as
\[J_{N}(Y)=-\sum_{i=1}^{N}(1-p_{i})\log(1-p_{i}).\]
The entropy and extropy are identical for \(N=2\), that is, \(J_{2}(Y)=H_{2}(Y)\). The extropy, also called the differential extropy, of a continuous distribution with probability density function \(g_{Y}(y)\) is
\[J(Y)=-\frac{1}{2}\int_{-\infty}^{\infty}g_{Y}^{2}(y)dy.\]
The conditional extropy of \(Y\) given \(S=(c,d)\) is given as
\[J(Y|S)=-\frac{1}{2}\!\!\int_{-\infty}^{\infty}g_{Y|S}^{2}(y)dy.\]
Qiu [21] studied the extropy for order statistics and record values, including characterisation results, lower bounds, monotone properties, and statistical applications. Balakrishnan et al. [4] and Bansal and Gupta [5] independently introduced the weighted extropy as
\[J^{y}(Y)=-\frac{1}{2}\int_{-\infty}^{\infty}yg_{Y}^{2}(y)dy.\]
The conditional weighted extropy of \(Y\) given \(S=(c,d)\) is given as
\[J^{y}(Y|S)=-\frac{1}{2}\!\int_{-\infty}^{\infty}yg_{Y|S}^{2}(y)dy.\]
The general weighted extropy of \(Y\) with weight \(w(y)\geq 0\) is given as
\[J^{w}(Y)=-\frac{1}{2}\!\int_{-\infty}^{\infty}w(y)g_{Y}^{2}(y)dy.\]
The conditional general weighted extropy of \(Y\) with weight \(w(y)\geq 0\) given \(S=(c,d)\) is given as
\[J^{w}(Y|S)=-\frac{1}{2}\!\int_{-\infty}^{\infty}w(y)g_{Y|S}^{2}(y)dy.\]
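To fix ideas, the following sketch (ours, not from the paper) evaluates \(J(Y)\), the weighted extropy with \(w(y)=y\), and the conditional general weighted extropy by numerical quadrature for a standard exponential distribution; the distribution and the interval \((c,d)=(0.5,2)\) are illustrative choices.

```python
import numpy as np
from scipy import integrate, stats

dist = stats.expon()                      # standard exponential: g_Y(y) = exp(-y), y >= 0
w = lambda y: y                           # the weight giving the weighted extropy J^y(Y)

def extropy():
    return -0.5 * integrate.quad(lambda y: dist.pdf(y) ** 2, 0, np.inf)[0]

def weighted_extropy():
    return -0.5 * integrate.quad(lambda y: w(y) * dist.pdf(y) ** 2, 0, np.inf)[0]

def conditional_weighted_extropy(c, d):
    mass = dist.cdf(d) - dist.cdf(c)      # G_Y(d) - G_Y(c)
    dens = lambda y: dist.pdf(y) / mass   # g_{Y|S}(y) on (c, d)
    return -0.5 * integrate.quad(lambda y: w(y) * dens(y) ** 2, c, d)[0]

print(extropy())                          # exact value is -1/4
print(weighted_extropy())                 # exact value is -1/8
print(conditional_weighted_extropy(0.5, 2.0))
```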
The \(J^{w}(Y)\) is the generalization of weighted \(J(Y).\) Balakrishnan et al. [4] studied characterization results and bounds for weighted versions of extropy, residual extropy, past extropy, bivariate extropy and bivariate weighted extropy whereas Bansal and Gupta [5] discussed the results for weighted extropy and weighted residual extropy of \(Y\). Gupta and Chaudhary [14] defined \(J^{w}(Y)\) and provided some results related to ranked set sampling. Bansal and Gupta [6] studied \(\xi(Y|S)\) and its partial monotonicity. This motivated us to find partial monotonicity of conditional extropy. Further, convolution of extropy has also been studied by Bansal and Gupta [6]. That motivated us to derive results on convolution of general weighted extropy. This paper is arranged as follows. Section 2 provides the result on the monotonicity of conditional extropy. In Section 3, we studied convolution of general weighted extropy. Section 4 concludes this paper.
## 2 Monotonicity of conditional extropy
The conditional extropy of \(Y\) given \(S=(c,d)\) is given as
\[J(Y|S) =-\frac{1}{2}\!\int_{-\infty}^{\infty}g_{Y|S}^{2}(y)dy\] \[=-\frac{1}{2}\!\int_{c}^{d}\left(\frac{g_{Y}(y)}{G_{Y}(d)-G_{Y}( c)}\right)^{2}dy.\]
The measure of uncertainty shrinks/expands as the interval providing the information about the outcome shrinks/expands. Here we study conditions under which \(J(Y|S)\) is increasing in \(d\), where \(S=(c,d)\); this is given by the following theorem.
**Theorem 1**: _Let \(S=\{c<Y<d\}\). If \(G_{Y}(y)\) is log-concave (log-convex), then \(J(Y|S)\) is increasing (decreasing) in \(d\), for fixed \(c\)._
**Proof :** From definition,
\[J(Y|S)=\frac{-1}{2}{\int_{c}^{d}\left(\frac{g_{Y}(y)}{G_{Y}(d)-G_{Y}(c)} \right)^{2}dy}.\]
Now for fixed \(c\), differentiating \(J(Y|S)\) with respect to \(d\), we get
\[\frac{\mathrm{d}(J(Y|S))}{\mathrm{d}d} =\frac{-1}{2\left(G_{Y}(d)-G_{Y}(c)\right)^{4}}\left(g_{Y}^{2}(d)\left(G_{Y}(d)-G_{Y}(c)\right)^{2}-2g_{Y}(d)\left(G_{Y}(d)-G_{Y}(c)\right)\int_{c}^{d}g_{Y}^{2}(y)dy\right)\] \[=\frac{g_{Y}(d)\psi_{1}(d)}{2\left(G_{Y}(d)-G_{Y}(c)\right)^{3}}, \tag{2.1}\]
where
\[\psi_{1}(y)=2\int_{c}^{y}g_{Y}^{2}(u)du-g_{Y}(y)\left(G_{Y}(y)-G_ {Y}(c)\right),\ \ \psi_{1}(c)=0,\]
and
\[\psi_{1}^{\prime}(y)=g_{Y}^{2}(y)-g_{Y}^{\prime}(y)\left(G_{Y}(y)-G_{Y}(c) \right).\]
Note that since \(G_{Y}(y)\) is a log-concave function, \(\frac{G_{Y}(y)-G_{Y}(c)}{G_{Y}(d)-G_{Y}(c)}\) is also a log-concave function. Hence we have
\[g_{Y}^{2}(y)-g_{Y}^{\prime}(y)\left(G_{Y}(y)-G_{Y}(c)\right)\geq 0,\ \mbox{for all}\ y.\]
Therefore \(\psi_{1}^{\prime}(y)\geq 0\), that is, \(\psi_{1}(y)\) is increasing in \(y\). Now for \(d>c\) we have \(\psi_{1}(d)\geq\psi_{1}(c)\), that is, \(\psi_{1}(d)\geq 0\). Hence from (2.1) we have \(\frac{\mathrm{d}J(Y|S)}{\mathrm{d}d}\geq 0\). Therefore \(J(Y|S)\) is increasing in \(d\), for fixed \(c\). The log-convex case follows analogously, with all of the above inequalities reversed.
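A quick numerical illustration of Theorem 1 (our own sketch, assuming a standard exponential \(Y\), whose distribution function is log-concave): the conditional extropy computed by quadrature is indeed nondecreasing in \(d\) for fixed \(c\).

```python
import numpy as np
from scipy import integrate, stats

dist = stats.expon()                      # log-concave distribution function

def conditional_extropy(c, d):
    mass = dist.cdf(d) - dist.cdf(c)
    val, _ = integrate.quad(lambda y: dist.pdf(y) ** 2, c, d)
    return -0.5 * val / mass ** 2

c = 0.2
ds = np.linspace(0.5, 5.0, 10)
vals = [conditional_extropy(c, d) for d in ds]
print(np.round(vals, 4))
assert all(v2 >= v1 - 1e-12 for v1, v2 in zip(vals, vals[1:]))   # increasing in d
```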
In the next section, we provide a result on the convolution of \(J^{w}(Y)\). We will prove that the conditional general weighted extropy of \(V=|Y_{1}-Y_{2}|\) given \(S=\{c\leq Y_{1},\ Y_{2}\leq d\}\), i.e. \(J^{w}(V|S)\), is partially increasing in \(S\).
## 3 Convolution of general weighted extropy
Let \(Y\) be a random experiment which is repeated in order to measure its reproducibility, its precision, or both. A measure of the uncertainty of the experiment is then the function \(V=|Y_{1}-Y_{2}|\), where \(Y_{1}\) and \(Y_{2}\) are independent and identically distributed random variables from the experiment \(Y\) with probability density function \(g_{Y}(y)\). The difference \(V=|Y_{1}-Y_{2}|\) is a measure of the uncertainty between two outcomes. Uncertainty should reduce if further information of the form \(S=\{c<Y_{1},Y_{2}<d\}\) is provided.
The marginal probability density function of \(V=|Y_{1}-Y_{2}|\) given \(S=\{c<Y_{1},Y_{2}<d\}\) is
\[h(v;c,d)=\int\limits_{c+v}^{d}\frac{g_{Y}(y-v)g_{Y}(y)dy}{(G_{Y}(d)-G_{Y}(c))^{ 2}},\mbox{ for all }v\in[0,d-c].\]
Chen et al. [8] proved that \(H(V|Y\in S)\) is partially monotonic in \(S\) provided the random variables \(Y_{1}\) and \(Y_{2}\) have log-concave probability density functions that take values in \(S\). Shangari and Chen [26] claimed, and Gupta and Bajaj [13] proved, that if \(Y_{1}\) and \(Y_{2}\) have log-concave probability density functions which take values in \(S\), then the conditional Tsallis and Renyi entropy of \(V\) given \(S\) is a partially increasing function in \(S\) if \(\theta>0\), \(\theta\neq 1\). Sati and Gupta [25] studied the partial monotonicity of the \(H^{\lambda}_{\theta}(Y|S)\). Bansal and Gupta [6] studied the convolution results for conditional extropy.
The proof of the next result of this section will be using the following lemma from Chen et al. [9] (also see Sati and Gupta [25]).
**Lemma 1**:
1. _Let the probability density functions of random variables_ \(Y_{1}\) _and_ \(Y_{2}\) _be log-concave functions. If the function_ \(\phi(v)\) _is increasing in_ \(v\)_, then_ \(E(\phi(V)|S)\) _is increasing in_ \(d\) _for any_ \(c\)_, and decreasing in_ \(c\) _for any_ \(d\)_; where_ \(V=|Y_{1}-Y_{2}|\) _and_ \(S=\{c<Y_{1},Y_{2}<d\}\)_._
2. _If_ \(g_{Y}(y)\) _is a log-concave function, then_ \(h(v;c,d)\) _is a decreasing function of_ \(v\) _on_ \(v\in[0,d-c]\)_._
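For intuition about Lemma 1 (b), the next sketch (our own illustration, using a standard normal parent density, which is log-concave, and the interval \((c,d)=(-1,2)\)) evaluates \(h(v;c,d)\) directly from its defining integral and confirms that it decreases in \(v\) on \([0,d-c]\).

```python
import numpy as np
from scipy import integrate, stats

dist = stats.norm()                        # standard normal: log-concave density
c, d = -1.0, 2.0

def h(v):
    """Conditional density h(v; c, d) of V = |Y1 - Y2| given c < Y1, Y2 < d."""
    mass = dist.cdf(d) - dist.cdf(c)
    val, _ = integrate.quad(lambda y: dist.pdf(y - v) * dist.pdf(y), c + v, d)
    return val / mass ** 2

vs = np.linspace(0.0, d - c, 8)
hv = np.array([h(v) for v in vs])
print(np.round(hv, 5))
assert np.all(np.diff(hv) <= 1e-9)         # h(v; c, d) is decreasing in v
```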
Now, we will prove the following theorem which provides the conditions for \(J^{w}(V|S)\) to be a partially increasing/decreasing in \(S\).
**Theorem 2**: _Let the probability density functions of random variables \(Y_{1}\) and \(Y_{2}\) be log-concave functions. Let the weight \(w(y)\geq 0\) be decreasing in \(y\), and let \(S=\{c<Y_{1},\ Y_{2}<d\}\). Then \(J^{w}(V|S)\) is partially increasing in \(S\)._
**Proof :** The conditional general weighted extropy of \(V\) given \(S\) is
\[J^{w}(V|S)=\frac{-1}{2}\int_{0}^{d-c}w(v)\left(h(v;c,d)\right)^{2}dv.\]
For fixed \(c\) and any \(d_{1}\leq d_{2}\), choose
\[\psi_{1}(v)=(w(v))^{1/2}\,h(v;c,d_{1})\ \ \mbox{and}\ \ \psi_{2}(v)=(w(v))^{1/2}\,h(v;c,d_{2});\]
clearly \(\psi_{1}(v)\) and \(\psi_{2}(v)\) are non-negative functions. Also, let \(p=2\), \(q=2\); then \(p>0,\ q>0\) and \(\frac{1}{p}+\frac{1}{q}=1\). With the help of Hölder's inequality, we now obtain
\[\int\psi_{1}(v)\psi_{2}(v)dv\leq\left(\int(\psi_{1}(v))^{p}dv\right)^{1/p}\left(\int(\psi_{2}(v))^{q}dv\right)^{1/q},\]
that is,
\[\int w(v)h(v;c,d_{1})h(v;c,d_{2})dv\leq\left(\int w(v)h^{2}(v;c,d_{1})dv\right)^{1/2}\left(\int w(v)h^{2}(v;c,d_{2})dv\right)^{1/2}. \tag{3.1}\]
For fixed \(c\), let \(\phi_{1}(v)=-w(v)h(v;c,d_{2})\). Since \(h(v;c,d_{2})\) is a decreasing function of \(v\) on \([0,d_{2}-c]\) (using Lemma 1 (b)), \(w(v)\geq 0\) and \(w(v)\) is decreasing in \(v\), the function \(\phi_{1}(v)\) is increasing in \(v\). Hence, by Lemma 1 (a), for any \(d_{1}\leq d_{2}\),
\[E(\phi_{1}(V)|c\leq Y_{1},Y_{2}\leq d_{1})\leq E(\phi_{1}(V)|c\leq Y_{1},Y_{2}\leq d_{2}),\]
that is,
\[\int w(v)\ (h(v;c,d_{2}))^{2}dv\leq\int w(v)\ h(v;c,d_{1})h(v;c,d_{2})dv. \tag{3.2}\]
Now, (3.1) and (3.2) imply
\[\int w(v)\ (h(v;c,d_{2}))^{2}dv\leq\int w(v)\ (h(v;c,d_{1}))^{2}dv, \tag{3.3}\]
and therefore \(J^{w}(V|c<Y_{1},Y_{2}<d_{1})\leq J^{w}(V|c<Y_{1},Y_{2}<d_{2})\) for \(d_{1}\leq d_{2}\). As a result, for fixed \(c\), the \(J^{w}(V|S)\) is increasing in \(d\).
Now, for fixed \(d\) and any \(c_{1}\leq c_{2}\), let
\[\psi_{3}(v)=\left(w(v)\,h(v;c_{1},d)\,h(v;c_{2},d)\right)^{2}\ \ \mbox{and}\ \ \psi_{4}(v)=\left(w(v)\,(h(v;c_{1},d))^{2}\right)^{-1}.\]
Clearly \(\psi_{3}(v)\) and \(\psi_{4}(v)\) are non-negative. Also, let \(p=\frac{1}{2}\), \(q=-1\); then \(p<1,\ q<0\) and \(\frac{1}{p}+\frac{1}{q}=1\). Now Hölder's inequality provides
\[\left(\int(\psi_{3}(v))^{p}dv\right)^{1/p}\left(\int(\psi_{4}(v))^{q}dv\right)^{1/q}\leq\int\psi_{3}(v)\psi_{4}(v)dv,\]
that is,
\[\left(\int w(v)h(v;c_{1},d)\,h(v;c_{2},d)dv\right)^{2}\left(\int w(v)(h(v;c_{1},d))^{2}dv\right)^{-1}\leq\int w(v)(h(v;c_{2},d))^{2}dv,\]
that is,
\[\int w(v)h(v;c_{1},d)\,h(v;c_{2},d)dv\leq\left(\int w(v)(h(v;c_{1},d))^{2}dv\right)^{\frac{1}{2}}\left(\int w(v)(h(v;c_{2},d))^{2}dv\right)^{\frac{1}{2}}. \tag{3.4}\]
For fixed \(c_{1}\) and \(d\), let
\[\phi_{2}(v)=-w(v)h(v;c_{1},d);\]
then,
\[\phi_{2}^{\prime}(v)=-w^{\prime}(v)h(v;c_{1},d)-w(v)h^{\prime}(v;c_{1},d)\geq 0,\]
as the probability density function \(h(v;c,d)\) is a decreasing function of \(v\) for \(0\leq v\leq d-c\) (using Lemma 1 (b)), \(w(v)\geq 0\), and \(w(v)\) is a decreasing function of \(v\). Hence \(\phi_{2}(v)\) increases in \(v\). By Lemma 1 (a), for any \(c_{1}<c_{2}<d\), we have
\[E(\phi_{2}(V)|c_{2}\leq Y_{1},Y_{2}\leq d) \leq E(\phi_{2}(V)|c_{1}\leq Y_{1},Y_{2}\leq d),\] \[\mbox{that is, }\int w(v)\ (h(v;c_{1},d))^{2}dv \leq\int w(v)\ h(v;c_{1},d)h(v;c_{2},d)dv. \tag{3.5}\]
Now, (3.4) and (3.5) imply
\[\int w(v)\ (h(v;c_{1},d))^{2}dv\leq\int w(v)\ (h(v;c_{2},d))^{2}dv. \tag{3.6}\]
Therefore we have
\[-\frac{1}{2}\int w(v)\ (h(v;c_{2},d))^{2}dv \leq-\frac{1}{2}\int w(v)\ (h(v;c_{1},d))^{2}dv,\] \[\mbox{that is, }\ J^{w}(V|c_{2}<Y_{1},Y_{2}<d) \leq J^{w}(V|c_{1}<Y_{1},Y_{2}<d);\ \mbox{ for }c_{1}\leq c_{2}.\]
As a result, for fixed \(d\), the \(J^{w}(V|S)\) is decreasing in \(c\). Therefore the \(J^{w}(V|S)\) is partially increasing in \(S\).
**Remarks:** It is observable from the above theorem that, under specific circumstances, \(J^{w}(V|S)\) is partially increasing in \(S\), demonstrating its reasonableness as a complement dual of the entropy measure.
**Remark 1**: _In Theorem 2 if we take \(w(y)=1\), we get the result of Bansal and Gupta [6]._
The following examples to Theorem 2 may be provided.
**Example 1**:
1. _Let_ \(Y_{1}\) _and_ \(Y_{2}\) _be two independent and identically distributed Weibull random variables with probability density function for_ \(\theta\geq 1,\ \lambda\geq 0\)_,_ \[g_{Y}(y)=\theta\lambda^{\theta}y^{\theta-1}e^{-(\lambda y)^{\theta}},\ y\geq 0.\] _Since_ \(g_{Y}(y)\) _is a log-concave function for_ \(\theta\geq 1\)_, taking_ \(w(y)=1/y\) _and using Theorem_ 2_,_ \(J^{w}(V|S)\) _is partially increasing in_ \(S\)_._
2. _Let_ \(Y_{1}\) _and_ \(Y_{2}\) _be two independent and identically distributed gamma random variables with probability density function for_ \(\theta\geq 1,\ \lambda\geq 0\)_,_ \[g_{Y}(y)=\frac{\lambda^{\theta}}{\Gamma(\theta)}y^{\theta-1}e^{-\lambda y},\ \ y\geq 0.\] _Since the probability density function of the gamma distribution is a log-concave function for_ \(\theta\geq 1\)_, taking_ \(w(y)=1/y\) _and using Theorem_ 2_,_ \(J^{w}(V|S)\) _is partially increasing in_ \(S\) _(a numerical illustration is sketched below)._
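The sketch below (our own illustration) follows Example 1 (b) with the gamma distribution of shape \(\theta=2\), but uses the bounded decreasing weight \(w(v)=e^{-v}\) of our own choosing; for fixed \(c\), the computed \(J^{w}(V|S)\) should be nondecreasing in \(d\), as Theorem 2 predicts.

```python
import numpy as np
from scipy import integrate, stats

dist = stats.gamma(a=2.0)                  # gamma with shape 2: log-concave density
w = lambda v: np.exp(-v)                   # a nonnegative, decreasing weight (our choice)

def h(v, c, d):
    mass = dist.cdf(d) - dist.cdf(c)
    val, _ = integrate.quad(lambda y: dist.pdf(y - v) * dist.pdf(y), c + v, d)
    return val / mass ** 2

def Jw_V(c, d):
    val, _ = integrate.quad(lambda v: w(v) * h(v, c, d) ** 2, 0.0, d - c)
    return -0.5 * val

c = 0.5
vals = [Jw_V(c, d) for d in (1.5, 2.5, 3.5, 5.0)]
print(np.round(vals, 5))                   # values should be nondecreasing in d (Theorem 2)
```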
## 4 Conclusion
The entropy measure and its generalisations are now widely used in all scientific domains. General weighted entropy is a generalisation of extropy. We proposed conditional extropy and studied its partial monotonicity. We also obtained some results on convolution of general weighted extropy.
### Funding
Santosh Kumar Chaudhary is getting financial assistance for research from the Council of Scientific and Industrial Research (CSIR), Government of India (File Number 09/0081 (14002)/2022-EMR-I).
### Conflict of interest
No conflicts of interest are disclosed by the authors.
## Acknowledgement
The authors are thankful to the reviewers for their insightful comments, which significantly improved this manuscript. | Gupta and Chaudhary [14] introduced general weighted extropy and studied related properties. In this paper, we study conditional extropy and define its monotonic behaviour. We also obtain results on the convolution of general weighted extropy.
2309.13761 | Text Classification: A Perspective of Deep Learning Methods | In recent years, with the rapid development of information on the Internet,
the number of complex texts and documents has increased exponentially, which
requires a deeper understanding of deep learning methods in order to accurately
classify texts using deep learning techniques, and thus deep learning methods
have become increasingly important in text classification. Text classification
is a class of tasks that automatically classifies a set of documents into
multiple predefined categories based on their content and subject matter. Thus,
the main goal of text classification is to enable users to extract information
from textual resources and process processes such as retrieval, classification,
and machine learning techniques together in order to classify different
categories. Many new techniques of deep learning have already achieved
excellent results in natural language processing. The success of these learning
algorithms relies on their ability to understand complex models and non-linear
relationships in data. However, finding the right structure, architecture, and
techniques for text classification is a challenge for researchers. This paper
introduces deep learning-based text classification algorithms, including
important steps required for text classification tasks such as feature
extraction, feature reduction, and evaluation strategies and methods. At the
end of the article, different deep learning text classification methods are
compared and summarized. | Zhongwei Wan | 2023-09-24T21:49:51 | http://arxiv.org/abs/2309.13761v1 | # Text Classification: A Perspective of Deep Learning Methods
###### Abstract
In recent years, with the rapid development of information on the Internet, the number of complex texts and documents has increased exponentially, which requires a deeper understanding of deep learning methods in order to accurately classify texts using deep learning techniques, and thus deep learning methods have become increasingly important in text classification. Text classification is a class of tasks that automatically classifies a set of documents into multiple predefined categories based on their content and subject matter. Thus, the main goal of text classification is to enable users to extract information from textual resources and process processes such as retrieval, classification, and machine learning techniques together in order to classify different categories. Many new techniques of deep learning have already achieved excellent results in natural language processing. The success of these learning algorithms relies on their ability to understand complex models and non-linear relationships in data. However, finding the right structure, architecture, and techniques for text classification is a challenge for researchers. This paper introduces deep learning-based text classification algorithms, including important steps required for text classification tasks such as feature extraction, feature reduction, and evaluation strategies and methods. At the end of the article, different deep learning text classification methods are compared and summarized.
Text Classification Machine Learning Deep Learning
## 1 Introduction
Text classification is a classical problem in natural language processing. The task is to assign predefined categories to a given sequence of texts. In recent years, the study of text classification has become increasingly important due to the rapid growth of social networks, blogs and forums, and the increase in the size of online academic libraries. As a result, text classification is widely used in information retrieval systems and search engine applications. At the same time, text classification can also be used for email and SMS spam filtering. Most of the text classification techniques include feature extraction of text, data reduction and deep learning model selection, and model evaluation. Also, text classification systems can classify text by its size, such as document level, paragraph level, sentence level, and clause level [1].
Before deep learning became the dominant model, traditional machine learning had a wide range of applications in text classification, such as using ensemble learning techniques like boosting and bagging for text classification and analysis [2]. At the same time, [3] used simple logistic regression to classify textual information for information retrieval. [4] uses a Naive Bayesian classifier to classify documents, because Naive Bayes requires little memory and computation and is one of the classifiers most often used in traditional machine learning.
Therefore, this review consists of several parts. The first part briefly introduces several deep learning algorithms for feature extraction in text classification tasks, such as Word2Vec [5], Global Vectors for Word Representation (GloVe) [6], and several other word embedding algorithms. In the second part, we briefly introduce several data reduction algorithms that may be used in traditional machine learning-based text classification tasks, such as Principal Component Analysis (PCA) [7] and Linear Discriminant Analysis (LDA) [8], which can in some cases improve the accuracy of traditional machine learning text classification. We then focus on several conventional deep learning-based text classification algorithms, such as LSTM [9] and GRU [5], and several state-of-the-art attention models, such as Transformer [10] and improved versions based on Transformer such as XL-Net [11], BERT [12], and several improved models of BERT, among others. Finally, several common model evaluation techniques are essential in text classification tasks, such as accuracy, Fb score, receiver operating characteristics (ROC), and area under the ROC curve (AUC).
## 2 Feature extraction
Word embedding is a key technique in the feature extraction process of text classification. Although a tokenizer is used before the feature extraction task to divide sentences into words, count the number of occurrences of each word, and generate syntactic word representations, this process does not capture the semantic information between words. This can seriously limit the model's understanding of the semantic information in sentences; for example, the n-gram model does not find word-to-word similarities. Google researchers, in work presented at NIPS, addressed this problem with word vector embeddings. [5] is one of the foundational papers of word2vec, presented by Tomas Mikolov of Google; the paper proposed two word2vec model structures, CBOW and Skip-gram. [13] is another foundational paper on word2vec, in which the Skip-gram model is described in detail, including the specific form of the model and two feasible training methods, Hierarchical Softmax and Negative Sampling. [6] presented GloVe, an improvement over word2vec that combines global matrix factorization (as in LSA) and local context windows (as in word2vec) to fully utilize statistical information, training the model on the non-zero entries of the word co-occurrence matrix.
From the three papers on word embedding presented by the above researchers, word embedding is a feature learning technique in which each word from a vocabulary is mapped to an \(X\)-dimensional vector. Two of the key techniques, word2vec and GloVe, have been successfully used in deep learning models. In word2vec, the training goal of the Skip-gram model is to find word representations that are useful for predicting words in the context of a sentence or document; its objective is:
\[\frac{1}{T}\sum_{t=1}^{T}\sum_{-c\leq j\leq c,\,j\neq 0}\log p\left(w_{t+j}\mid w_{t}\right) \tag{1}\]
The CBOW model uses the contextual words to predict the central words, and the diagram of the two models is shown below.
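As a brief illustration of how such embeddings are trained in practice (a toy sketch, not from the paper; parameter names follow the gensim 4.x API), the snippet below fits Skip-gram vectors on a tiny corpus; setting `sg=0` would train the CBOW model instead.

```python
from gensim.models import Word2Vec

# A tiny toy corpus; in practice this would be a large tokenized text collection.
sentences = [
    ["deep", "learning", "for", "text", "classification"],
    ["word", "embeddings", "capture", "semantic", "information"],
    ["skip", "gram", "predicts", "surrounding", "words"],
    ["cbow", "predicts", "the", "current", "word", "from", "its", "context"],
]

model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the word vectors
    window=2,         # context window size c
    min_count=1,
    sg=1,             # 1 = Skip-gram, 0 = CBOW
    negative=5,       # negative sampling instead of hierarchical softmax
    epochs=50,
    seed=1,
)

print(model.wv["text"].shape)                 # (50,): embedding for the word "text"
print(model.wv.most_similar("text", topn=3))  # nearest neighbours in embedding space
```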
## 3 Feature reduction
In this section, we will briefly introduce possible feature reduction techniques that can be used in text classification tasks. Many text sequences in term-based vector models consist of many complex features, and therefore many researchers use machine learning-based feature reduction techniques to reduce the feature space size and thus the temporal and spatial complexity of the model. We will briefly introduce two common dimensionality reduction techniques such as PCA [7], LDA [8] in the following.
Figure 1: Pipeline of Text classification [1]
### Principal Component Analysis (PCA)
The core idea of PCA is to reduce dimensionality by finding approximate subspaces of the data distribution. The \(n\)-dimensional features are mapped by PCA to \(k\) dimensions; these new, mutually orthogonal features, known as principal components, are reconstructed from the original \(n\)-dimensional features and depend only on the data itself. The first new axis is chosen to be the direction with the greatest variance in the original data, the second new axis is chosen to be the one with the greatest variance in the plane orthogonal to the first axis, and the third axis is the one with the greatest variance in the plane orthogonal to the first and second axes. By analogy, \(n\) such axes can be obtained. Finally, by eigenvalue decomposition or SVD decomposition of the covariance matrix of the data, the eigenvectors corresponding to the \(k\) largest eigenvalues are obtained and multiplied with the (centered) original data matrix to obtain the dimensionality-reduced features.
\[Cov(X,Y)=E[(X-E(X))(Y-E(Y))]=\frac{1}{n-1}\sum_{i=1}^{n}\left(x_{i}-\bar{x} \right)\left(y_{i}-\bar{y}\right) \tag{2}\]
\[A=U\Sigma V^{T} \tag{3}\]
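A minimal NumPy sketch of this procedure (illustrative only): center the data, form the covariance matrix as in (2), take its SVD as in (3), and project onto the leading \(k\) directions.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its first k principal components."""
    Xc = X - X.mean(axis=0)                     # center the data
    cov = Xc.T @ Xc / (X.shape[0] - 1)          # covariance matrix, as in (2)
    U, S, Vt = np.linalg.svd(cov)               # SVD of the covariance, as in (3)
    components = U[:, :k]                       # directions of largest variance
    return Xc @ components, S[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10)) @ rng.normal(size=(10, 10))   # correlated features
Z, variances = pca(X, k=3)
print(Z.shape, np.round(variances, 3))          # (100, 3) and the top-3 variances
```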
### Linear Discriminant Analysis (LDA)
LDA differs from the unsupervised learning technique of PCA in that it is a supervised dimensionality reduction technique, which means that each sample of the dataset has a category output. The core idea of LDA is that we want to
Figure 3: Two-dimensional PCA projection of the 1000-dimensional Skip-gram vectors of countries and their capital cities[62]
Figure 2: The CBOW architecture predicts the current word based on the context, and the Skip-gram predicts surrounding words given the current word[5].
project the data into a low dimension such that, after projection, the projection points of each category are as close to each other as possible, while the distance between the centers of different categories is as large as possible. Since we are projecting multiple categories to a low dimension, the projected low-dimensional space is not a straight line but a hyperplane. Suppose we project onto a low-dimensional space of dimension \(d\) spanned by the basis vectors \((w_{1},w_{2},\ldots,w_{d})\). In the core formulation of LDA, \(S_{w}\) is defined as the within-class scatter matrix and \(S_{b}\) as the between-class scatter matrix, and the optimization objective of the LDA dimensionality reduction algorithm is as follows.
\[\underbrace{\arg\max}_{W}J(W)=\frac{\prod_{\text{diag}}\ W^{T}S_{b}W}{\prod_{ \text{diag}}\ W^{T}S_{w}W} \tag{4}\]
\[S_{b}=\sum_{j=1}^{k}N_{j}\left(\mu_{j}-\mu\right)\left(\mu_{j}-\mu\right)^{T} \tag{5}\]
\[S_{w}=\sum_{j=1}^{k}S_{w_{j}}=\sum_{j=1}^{k}\sum_{x\in X_{j}}\left(x-\mu_{j}\right)\left(x-\mu_{j}\right)^{T} \tag{6}\]
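The following NumPy sketch (illustrative only) builds the scatter matrices \(S_{b}\) and \(S_{w}\) of (5) and (6) on synthetic two-class data and solves the resulting eigenproblem to obtain a projection in the spirit of objective (4).

```python
import numpy as np

def lda_projection(X, y, dim):
    """LDA projection: maximize between-class scatter (5) against within-class scatter (6)."""
    mu = X.mean(axis=0)
    n_features = X.shape[1]
    Sb = np.zeros((n_features, n_features))
    Sw = np.zeros((n_features, n_features))
    for cls in np.unique(y):
        Xc = X[y == cls]
        mu_c = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mu_c - mu, mu_c - mu)   # between-class scatter, eq. (5)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)                # within-class scatter, eq. (6)
    # Solve the generalized eigenproblem Sb w = lambda Sw w via pinv(Sw) @ Sb.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:dim]].real

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 5)), rng.normal(2.0, 1.0, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
W = lda_projection(X, y, dim=1)
print((X @ W).shape)      # (40, 1): samples projected onto one discriminant axis
```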
## 4 Deep Learning models
Deep learning models have achieved state-of-the-art results in many fields and are in many ways superior to traditional machine learning algorithms. Traditional machine learning algorithms require a separate feature extraction and classifier selection process, which increases the labor cost. Deep learning models are end-to-end trainable for many computer vision and natural language tasks, and in many tasks they have much better fitting and generalization capabilities than traditional machine learning algorithms. In the following, we introduce the most common deep learning backbone models used in text classification, LSTM [9], GRU [5], and Transformer [10], as well as state-of-the-art pre-trained models such as ELMo [14], GPT [15], BERT [12], GPT-2 [16], and XLNet [11].
### LSTM and GRU
Recurrent neural networks (RNNs) are the most commonly used neural network architectures for text data mining and classification, especially for sequential data such as textual information. RNNs therefore have advantages in classifying text, string, and sequential data, since they incorporate information from previous time steps in temporal order.
#### 4.1.1 Long Short-Term Memory (LSTM)
LSTM is an improved version of the RNN which, compared with the original RNN, better overcomes the vanishing gradient problem through structures such as input gates, memory cells, forget gates, output gates, and hidden units, thereby maintaining long-term dependencies. As shown in Figure 4, the forget gate controls whether the information from the memory cell of the previous time step is passed to the current time step, while the input gate controls how the input of the current time step flows into the memory cell of the current time step through the candidate memory cell. This design copes with the gradient decay problem in recurrent neural networks and better captures dependencies across large time-step distances in a sequence. The basic formulas of the LSTM model are as follows.
\[i_{t}= \sigma(W_{i}[x_{t},h_{t-1}]+b_{i}),\] \[\tilde{C_{t}}= \tanh(W_{c}[x_{t},h_{t-1}]+b_{c})\] \[f_{t}= \sigma(W_{f}[x_{t},h_{t-1}]+b_{f}),\] \[C_{t}= i_{t}*\tilde{C_{t}}+f_{t}C_{t-1},\] \[o_{t}= \sigma(W_{o}[x_{t},h_{t-1}]+b_{o}),\] \[h_{t}= o_{t}\tanh(C_{t}),\]
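To make the gate computations concrete, below is a minimal NumPy sketch of a single LSTM time step following the equations above; the weight matrices and toy dimensions are illustrative placeholders.

```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step; W/b hold the parameters for the i, f, o, c gates."""
    z = np.concatenate([x_t, h_prev])          # [x_t, h_{t-1}]
    i_t = sigmoid(W["i"] @ z + b["i"])         # input gate
    f_t = sigmoid(W["f"] @ z + b["f"])         # forget gate
    o_t = sigmoid(W["o"] @ z + b["o"])         # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])     # candidate memory cell
    c_t = i_t * c_tilde + f_t * c_prev         # new memory cell
    h_t = o_t * np.tanh(c_t)                   # new hidden state
    return h_t, c_t

# toy dimensions: input size 4, hidden size 3
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 7)) for k in "ifoc"}
b = {k: np.zeros(3) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, b)
```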
#### 4.1.2 Gated Recurrent Unit (GRU)
Compared to LSTM, the GRU is a simplified variant with a reset gate and an update gate. As shown in Figure 4, the state of the previous time step is discarded when the elements of the reset gate are close to 0, and the hidden state of the previous time step is retained when they are close to 1. The update gate controls how the hidden state should be updated by the candidate hidden state, which contains information about the current time step. Finally, the hidden state of the current time step combines, via the update gate, the candidate hidden state with the hidden state of the previous time step; one GRU step is sketched below.
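A minimal NumPy sketch of one GRU step with the reset and update gates described above; the gating convention (whether the update gate weights the previous state or the candidate) varies across references, so this is one common formulation with illustrative parameter names.

```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, b):
    """One GRU time step with reset gate r_t and update gate z_t."""
    v = np.concatenate([x_t, h_prev])
    r_t = sigmoid(W["r"] @ v + b["r"])               # reset gate
    z_t = sigmoid(W["z"] @ v + b["z"])               # update gate
    v_reset = np.concatenate([x_t, r_t * h_prev])    # reset the previous state
    h_tilde = np.tanh(W["h"] @ v_reset + b["h"])     # candidate hidden state
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde       # combine old state and candidate
    return h_t

# toy dimensions: input size 4, hidden size 3
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 7)) for k in "rzh"}
b = {k: np.zeros(3) for k in "rzh"}
h = gru_step(rng.standard_normal(4), np.zeros(3), W, b)
```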
### ELMo
ELMo is a pre-training model based on bi-directional LSTMs. In earlier word2vec work, each word corresponds to only one word vector, which is not suitable for polysemous words with complex semantic information. In ELMo, therefore, the pre-trained model is no longer a simple word-to-vector mapping. When using ELMo, a sentence or a paragraph is fed into the model, and the model infers the word vector for each word based on its context. The formulas of the ELMo model are as follows.
\[p\left(t_{1},t_{2},\ldots,t_{N}\right)=\prod_{k=1}^{N}p\left(t_{k}\mid t_{1}, t_{2},\ldots,t_{k-1}\right) \tag{7}\]
\[p\left(t_{1},t_{2},\ldots,t_{N}\right)=\prod_{k=1}^{N}p\left(t_{k}\mid t_{k+1},t_{k+2},\ldots,t_{N}\right) \tag{8}\]
\[\begin{split}\sum_{k=1}^{N}\left(\log p\left(t_{k}\mid t_{1}, \ldots,t_{k-1};\Theta_{x},\widetilde{\Theta}_{LSTM},\Theta_{s}\right)\right. \\ \left.+\log p\left(t_{k}\mid t_{k+1},\ldots,t_{N};\Theta_{x}, \widetilde{\Theta}_{LSTM},\Theta_{s}\right)\right)\end{split} \tag{9}\]
Equation 7 computes the objective function to be learned by the forward LSTM language model, while Equation 8 computes the objective function for the backward LSTM language model. The Bi-directional LSTM language model
Figure 4: The cell of LSTM and GRU
Figure 5: The model of ELMo
used by ELMo combines the forward and backward formulas and maximizes the forward and backward log-likelihoods, as shown in Equation 9. ELMo's model, shown in Figure 5, has two improvements over the original LSTM: the first is the use of a multilayer LSTM, and the second is the addition of a backward language model. The backward language model is similar to the forward one, except that the following context is given to predict the preceding words. Once ELMo completes the pre-training task, it can be used for other NLP tasks, or the bi-directional LSTM can be fine-tuned to improve the task representation. This is a transfer learning method that can be used for NLP text classification tasks.
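As a small illustration, the sketch below evaluates the joint objective of Equation 9 given per-token probabilities produced by a forward and a backward LSTM language model; the probability arrays are made-up placeholders.

```
import numpy as np

def bilm_objective(p_forward, p_backward):
    """Sum of forward and backward log-likelihoods over a token sequence (Eq. 9).

    p_forward[k]  = p(t_k | t_1, ..., t_{k-1})   from the forward LSTM LM
    p_backward[k] = p(t_k | t_{k+1}, ..., t_N)   from the backward LSTM LM
    """
    p_forward = np.asarray(p_forward)
    p_backward = np.asarray(p_backward)
    return np.sum(np.log(p_forward)) + np.sum(np.log(p_backward))

# toy usage with made-up per-token probabilities for a 4-token sentence
print(bilm_objective([0.2, 0.5, 0.1, 0.4], [0.3, 0.4, 0.2, 0.5]))
```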
### Transformer
The Transformer architecture, introduced in "Attention Is All You Need" [10], has recently attracted great interest among researchers; it is based entirely on attention and has proven effective in language, vision, and reinforcement learning. In natural language processing in particular, the Transformer has gone beyond RNNs and achieved state-of-the-art results on several NLP tasks. The most important part of the Transformer is the self-attention mechanism, which can be viewed as a graph-like inductive bias that connects all the tokens in a sequence through association-based pooling operations. On top of self-attention, the authors use a multi-head attention mechanism, a position-wise feed-forward network, layer normalization modules, and residual connections. The input to the Transformer is typically a tensor of shape (batch size, sequence length). The input first passes through an embedding layer that converts each one-hot token representation into a d-dimensional embedding, and then positional encodings are added to this tensor. The formulas of the Transformer are as follows.
\[X_{A}=\text{ LayerNorm(MultiheadSelfAttention }(X))+X \tag{10}\]
\[X_{B}=\text{ Layer Norm(PositionFFN }(X_{A}))+X_{A} \tag{11}\]
\[\text{ Attention }(Q,K,V)=\operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}} }\right)V \tag{12}\]
where Q, K, and V are linear transformations applied along the temporal dimension of the input sequence, and \(d_{k}\) is the dimension of the vectors in the multi-head attention model. The scores are multiplied by 1/\(\sqrt{d_{k}}\) to counteract the fact that, when \(d_{k}\) is large, the dot products grow in magnitude and push the softmax function into regions with very small gradients. With these components, the Transformer achieves state-of-the-art results on sequence-to-sequence translation tasks. For text classification, using the Transformer encoder together with a multilayer perceptron (MLP) classification head also achieves good results.
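The following is a minimal NumPy sketch of the scaled dot-product attention of Equation 12; shapes and data are illustrative.

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V   (Eq. 12)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # attention distribution over positions
    return weights @ V

# toy usage: 5 tokens, d_k = 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)   # shape (5, 8)
```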
### Generative Pre-Training (GPT)
OpenAI proposed the Generative Pre-Training (GPT) model, which addresses language understanding tasks with a semi-supervised approach. The goal of GPT is to learn a generic language representation, which can then be fine-tuned
Figure 6: The model of Transformer
for use in many downstream tasks, such as natural language processing tasks like text classification. The unsupervised stage maximizes the likelihood of a language model; hence the Transformer decoder is used in the paper. Unlike the Transformer encoder, the masked multi-head attention in the decoder uses a mask so that the model only attends to previous tokens during pre-training. Thus, the multi-layer structure of GPT applies multi-head self-attention to the input text plus positional information, followed by a position-wise feed-forward network, and the output is a distribution over words.
Since GPT uses a unidirectional Transformer, the model can only see the preceding words. During training, positional encodings are added to the word vectors of the N words of a sentence and fed into the Transformer described above, and the N outputs predict the next word at each position. The formulas of the model are as follows.
\[L_{1}(\mathcal{U})=\sum\log P\left(u_{i}\mid u_{i-k},\dots,u_{i-1};\Theta\right) \tag{13}\]
\[h_{0}=UW_{e}+W_{p} \tag{14}\]
\[h_{l}=\text{transformer\_block}\left(h_{l-1}\right)\quad\forall l\in[1,n] \tag{15}\]
\[P(u)=\operatorname{Softmax}\left(h_{n}W_{e}^{T}\right) \tag{16}\]
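The sketch below mirrors Equations 13-16 with a heavily simplified, single-head stand-in for the Transformer decoder block (masked self-attention with a residual connection only); it is meant to illustrate the causal masking and the weight-tied output softmax, not the full GPT architecture.

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(h):
    """Simplified single-head decoder block: masked self-attention + residual."""
    n, d = h.shape
    scores = h @ h.T / np.sqrt(d)
    mask = np.triu(np.ones((n, n)), k=1).astype(bool)   # hide future positions
    scores[mask] = -1e9
    return softmax(scores) @ h + h

def gpt_forward(tokens, W_e, W_p, n_layers=2):
    h = W_e[tokens] + W_p[: len(tokens)]                # Eq. 14: h_0 = U W_e + W_p
    for _ in range(n_layers):                           # Eq. 15: h_l = block(h_{l-1})
        h = causal_self_attention(h)
    return softmax(h @ W_e.T)                           # Eq. 16: softmax(h_n W_e^T)

vocab, d = 100, 16
rng = np.random.default_rng(0)
W_e = rng.standard_normal((vocab, d)) * 0.1
W_p = rng.standard_normal((32, d)) * 0.1
probs = gpt_forward(np.array([1, 5, 7]), W_e, W_p)      # next-word distribution per position
```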
### Bidirectional Encoder Representations from Transformers (BERT)
BERT is very similar to GPT in that it is a two-stage Transformer-based training model, divided into Pre-Training and Fine-Tuning stages. The parameters in this model are fine-tuned to adapt it to different downstream tasks. However, GPT uses a unidirectional Transformer, while BERT uses a bidirectional Transformer, which means no Mask operation is required. In addition, BERT uses the Encoder in the Transformer model, while GPT uses the Decoder in the Transformer model, so the pre-training methods are different for the two models. In addition, BERT uses the Masked Language Model (MLM) pre-training method and the Next Sentence Prediction (NSP) pre-training method, which can be trained at the same time.
In order to distinguish between two sentences, BERT adds a Segment Embedding, learned during pre-training, in addition to the Positional Encoding. In this way, BERT's input is the sum of a word vector, a position vector, and a segment vector. In addition, the two sentences are separated from each other using <SEP> tags. The embedding is shown in Figure 8.
Figure 8: The Embedding of Bert
Figure 7: The model of GPT
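A small sketch of how BERT's input representation sums the token, segment, and position embeddings for a sentence pair; the embedding tables and token ids below are illustrative placeholders, not values from an actual BERT checkpoint.

```
import numpy as np

def bert_input_embeddings(token_ids, segment_ids, tok_table, seg_table, pos_table):
    """BERT input = token embedding + segment embedding + position embedding."""
    positions = np.arange(len(token_ids))
    return tok_table[token_ids] + seg_table[segment_ids] + pos_table[positions]

rng = np.random.default_rng(0)
tok_table = rng.standard_normal((30522, 16))   # vocabulary embeddings
seg_table = rng.standard_normal((2, 16))       # sentence A / sentence B
pos_table = rng.standard_normal((512, 16))     # position embeddings

# [CLS] sent-A tokens [SEP] sent-B tokens [SEP]  (ids are illustrative)
token_ids   = np.array([101, 7592, 2088, 102, 2129, 2024, 2017, 102])
segment_ids = np.array([0,   0,    0,    0,   1,    1,    1,    1])
emb = bert_input_embeddings(token_ids, segment_ids, tok_table, seg_table, pos_table)
```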
BERT's Fine-Tuning phase is not much different from GPT's. Because of the bidirectional Transformer, the auxiliary training objective used by GPT in the Fine-Tuning phase, i.e., the language model, is abandoned. In addition, the output vector used for classification prediction is changed from the position of the last word in GPT to the position of the <CLS> token at the beginning of the sentence.
### XLNet
XLNet combines the ideas of the two pre-trained models, BERT and GPT, to present a state-of-the-art deep learning model for natural language processing. As discussed above, GPT is a typical autoregressive language model, which has the disadvantage of not being able to use the preceding and following context at the same time. BERT, on the other hand, is an autoencoder language model: it randomly masks out a portion of the words in the input sequence, and one of the main tasks of pre-training is to predict the masked words from their context. XLNet improves on BERT for two reasons. First, the pre-training stage introduces [Mask] markers to mask off some words, while the fine-tuning stage never sees such markers, so the two stages have inconsistent usage patterns, which may lead to some performance loss. Second, BERT assumes that the words masked out of a sentence are conditionally independent of each other, whereas in reality the masked words can be related; XLNet takes this into account.
Building on the autoregressive model, XLNet introduces the Permutation Language Model training objective in the pre-training phase. Suppose the sequence is x1, x2, x3, x4, the word to be predicted is x3, and the preceding context (ContextBefore) is x1, x2. To also take the following context (ContextAfter) into account, a random permutation is applied: for example, the permuted sequence x4, x2, x3, x1 is obtained and fed into the model. In this way, XLNet can take both the preceding and following content into account. This is implemented through the mask operation in the Transformer attention; we refer to the literature for the detailed implementation. The main idea of the attention mask is illustrated in Figure 10 and sketched below.
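A simplified sketch of the permutation-based attention mask: given a factorization order, each token may attend only to tokens that appear no later than itself in that order. XLNet additionally uses a separate query stream that excludes the token itself, which is omitted here for brevity.

```
import numpy as np

def permutation_mask(order):
    """mask[i, j] = True if token i may attend to token j under the given
    factorization order (here: j appears no later than i in the permutation)."""
    n = len(order)
    rank = np.empty(n, dtype=int)
    rank[np.array(order)] = np.arange(n)       # position of each token in the order
    mask = rank[:, None] >= rank[None, :]      # attend to itself and earlier tokens
    return mask

# sequence x1..x4 (indices 0..3) with factorization order x4, x2, x3, x1
print(permutation_mask([3, 1, 2, 0]).astype(int))
```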
### GPT-2
GPT-2 is an enhanced version of GPT, with the following improvements. First, GPT-2 collects a larger and more extensive dataset, and the quality of this dataset is ensured by retaining only pages that
Figure 10: The mask attention of XLNet
Figure 9: The fine-tune of Bert
have high-quality content. Second, GPT-2 increases the number of stacked Transformer layers to 48, the hidden layer dimension to 1600, and the number of parameters to 1.5 billion. Third, GPT-2 increases the vocabulary to 50257, the maximum context size from 512 to 1024, and the batch size from 512 to 1024. In addition, a layer normalization is added after the final self-attention block, the initialization of the residual layers is changed, and so on.
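For reference, the settings listed above can be collected into an illustrative configuration dictionary; the values are those reported in this section, not an official GPT-2 release file.

```
# illustrative GPT-2 (largest model) settings as listed above
gpt2_config = {
    "n_layers": 48,          # Transformer blocks in the stack
    "hidden_size": 1600,     # hidden layer dimension
    "n_params": 1.5e9,       # roughly 1.5 billion parameters
    "vocab_size": 50257,
    "max_context": 1024,     # increased from 512
    "batch_size": 1024,      # increased from 512 (as stated above)
}
```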
## 5 Conclusion
In natural language processing, deep-learning-based text classification is a very important research direction. With the development of the Internet and smartphones, accurately classifying text and analyzing its content has become a frontier of natural language processing, and the rapid progress of deep learning has largely replaced traditional machine learning methods. In this paper, we first introduced feature extraction methods for text classification, such as word2vec and GloVe, which infer representations from central words and their contextual words. We then briefly introduced some feature reduction methods applied to text classification. Our focus was on deep-learning-based text classification models, in particular the two-stage pre-trained models, which consist of a pre-training process followed by a fine-tuning process. Today, these deep-learning-based pre-trained models are the mainstay of natural language processing tasks; after fine-tuning, they have yielded excellent results in the field of text classification.
| In recent years, with the rapid development of information on the Internet, the number of complex texts and documents has grown exponentially, so a deep understanding of deep learning methods is required, and deep learning methods have become increasingly important for classifying text accurately with deep learning techniques. Text classification is the task of automatically assigning documents to multiple predefined categories based on their content and subject. Therefore, the main purpose of text classification is to enable users to extract information from text resources and to combine processes such as retrieval, classification, and machine learning techniques to distinguish different categories. Many new deep learning techniques in natural language processing have already achieved excellent results. The success of these learning algorithms depends on their ability to capture complex models and nonlinear relationships in the data. However, finding the appropriate structure, architecture, and methods for text classification is a research
2309.10240 | DProvDB: Differentially Private Query Processing with Multi-Analyst
Provenance | Recent years have witnessed the adoption of differential privacy (DP) in
practical database systems like PINQ, FLEX, and PrivateSQL. Such systems allow
data analysts to query sensitive data while providing a rigorous and provable
privacy guarantee. However, the existing design of these systems does not
distinguish data analysts of different privilege levels or trust levels. This
design can have an unfair apportion of the privacy budget among the data
analyst if treating them as a single entity, or waste the privacy budget if
considering them as non-colluding parties and answering their queries
independently. In this paper, we propose DProvDB, a fine-grained privacy
provenance framework for the multi-analyst scenario that tracks the privacy
loss to each single data analyst. Under this framework, when given a fixed
privacy budget, we build algorithms that maximize the number of queries that
could be answered accurately and apportion the privacy budget according to the
privilege levels of the data analysts. | Shufan Zhang, Xi He | 2023-09-19T01:42:39 | http://arxiv.org/abs/2309.10240v1 | # DProvDB: Differentially Private Query Processing with Multi-Analyst Provenance+
###### Abstract.
Recent years have witnessed the adoption of differential privacy (DP) in practical database systems like PINQ, FLEX, and PrivateSQL. Such systems allow data analysts to query sensitive data while providing a rigorous and provable privacy guarantee. However, the existing design of these systems does not distinguish data analysts of different privilege levels or trust levels. This design can have an unfair apportion of the privacy budget among the data analyst if treating them as a single entity, or waste the privacy budget if considering them as non-colluding parties and answering their queries independently. In this paper, we propose DProvDB, a fine-grained privacy provenance framework for the multi-analyst scenario that tracks the privacy loss to each single data analyst. Under this framework, when given a fixed privacy budget, we build algorithms that maximize the number of queries that could be answered accurately and apportion the privacy budget according to the privilege levels of the data analysts.
## 1. Introduction
With the growing attention on data privacy and the development of privacy regulations like GDPR (Zheng et al., 2017), companies with sensitive data need to share their data without compromising the privacy of data contributors. Differential privacy (DP) (Krishnan et al., 2017) has been considered a promising standard for this setting. Recent years have witnessed the adoption of DP in practical systems for data management and online query processing, such as PINQ (Zheng et al., 2017), FLEX (Zheng et al., 2017), PrivateSQL (Zheng et al., 2017), GoogleDP (Beng et al., 2017), and Chorus (Chorus, 2017). In systems of this kind, data curators or system providers set up a finite system-wide privacy budget to bound the overall extent of information disclosure. An incoming query consumes some privacy budget, and the system stops processing new queries once the budget has been fully depleted. Thus, the privacy budget is a crucial resource to manage in such a query processing system.
In practice, multiple data analysts can be interested in the same data, and they have different privilege/trust levels in accessing the data. For instance, tech companies need to query their users' data for internal applications like anomaly detection. They also consider inviting external researchers with low privilege/trust levels to access the same sensitive data for study. Existing query processing systems with DP guarantees regard these data analysts as a unified entity and do not provide tools to distinguish them or track their respective privacy loss. This leads to a few potential problems. First, a low-privilege external data analyst who asks queries first can consume more privacy budget than an internal one, if the system does not interfere with the sequence of queries. Second, if the system naively tracks and answers each analyst's queries independently of the others, it can waste the privacy budget when two data analysts ask similar queries.
The aforementioned challenges to private data management and analytics stem mainly from the fact that these systems are "_stateless_": none of the existing DP query-processing systems records the individual budget limits or the historical queries asked by the data analysts. That is, they do not keep the **metadata** about _where the query comes from, how the query is computed, and how many times each result is produced_, which corresponds to **provenance information** in database research (Chorus et al., 2017; Krishnan et al., 2017). As one can see, without privacy provenance, the query answering process in the multi-analyst use case can be unfair or wasteful in budget allocation.
To tackle these challenges, we propose a "stateful" DP query processing system DProvDB, which enables a novel privacy provenance framework designed for the multi-analyst setting. Following the existing work (Zheng et al., 2017), DProvDB answers queries based on private synopses (i.e., materialized results for views) of data. Instead of recording all the query metadata, we propose a more succinct data structure -- a privacy provenance table, that enforces only necessary privacy tracing as per each data analyst and per view. The privacy provenance table is associated with privacy constraints so that constraint-violating queries will be rejected. Making use of this privacy provenance framework, DProvDB can maintain global (viz., as per view) and local (viz., as per analyst) DP synopses and update them dynamically according to data analysts' requests.
DProvDB is supported by a new principled method, called the _additive Gaussian approach_, to manage DP synopses. The additive Gaussian approach leverages a DP mechanism that adds correlated Gaussian noise to mediate unnecessary budget consumption across data analysts and over time. This approach first creates a global DP synopsis for a view query; then, from this global synopsis, it provides the necessary local DP synopsis to data analysts who are interested in this view by adding more Gaussian noise. In this way, DProvDB is tuned to answer as many queries from different data analysts as accurately as possible. Even when all the analysts collude, the privacy loss will be bounded by the budget used for the global synopsis. Adding to its merits, we notice that the provenance tracking in DProvDB can help achieve a notion called proportional fairness. We believe most, if not all, existing DP query processing systems can benefit from integrating our multi-analyst privacy provenance framework -- DProvDB can be regarded as a middle-ground approach between purely interactive DP systems and those based solely on synopses, and we both provably and empirically show that DProvDB can significantly improve system utility and fairness for multi-analyst DP query processing.
The contributions of this paper are the following:
* We propose a multi-analyst DP model where mechanisms satisfying this DP provide discrepant answers to analysts with different privilege levels. Under this setting, we ask research questions about tight privacy analysis, budget allocation, and fair query processing. (Section 3)
* We propose a privacy provenance framework that compactly traces historical queries and privacy consumption as per analyst and as per view. With this framework, the administrator is able to enforce privacy constraints, enabling dynamic budget allocation and fair query processing. (Section 4)
* We design new accuracy-aware DP mechanisms that leverage the provenance data to manage synopses and inject correlated noise to achieve tight collusion bounds over time in the multi-analyst setting. The proposed mechanisms can be seamlessly added to the algorithmic toolbox for DP systems. (Section 5)
* We implement DProvDB1 as a new multi-analyst query processing interface and integrate it into an existing DP query system. We empirically evaluate DProvDB, and the experimental results show that our system is efficient and effective compared to baseline systems. (Section 6)
Footnote 1: The system code is available at [https://github.com/DProvDB/DProvDB](https://github.com/DProvDB/DProvDB).
**Paper Roadmap.** The remainder of this paper is outlined as follows. Section 2 introduces the necessary notation and background knowledge on databases and DP. Our multi-analyst DP query processing research problems are formulated in Section 3, and a high-level overview of our proposed system is given in Section 4. Section 5 describes the details of our DP mechanisms and system modules. In Section 6, we present the implementation details and an empirical evaluation of our system against the baseline solutions. Section 7 discusses extensions to the compromisation model and other strawman designs. Section 8 reviews the related literature, and we conclude this work in Section 9.
## 2. Preliminaries
Let \(\mathcal{D}\) denote the domain of databases and \(D\) be a database instance. A relation \(R\in D\) consists of a set of attributes, \(attr(R)=\{a_{1},\ldots,a_{j}\}\). We denote the domain of an attribute \(a_{j}\) by \(Dom(a_{j})\), while \(|Dom(a_{j})|\) denotes the domain size of that attribute. We next introduce and summarize the related definitions of differential privacy.
Definition 1 (Differential Privacy (Henderson, 1993)).: _We say that a randomized algorithm \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{O}\) satisfies \((\epsilon,\delta)\)-differential privacy (DP), if for any two neighbouring databases \((D,D^{\prime})\) that differ in only 1 tuple, and \(O\subseteq\mathcal{O}\), we have_
\[\Pr[\mathcal{M}(D)\in O]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in O]+\delta.\]
DP enjoys many useful properties, for example, post-processing and sequential composition (K
\(i\in[m]\), and all \(O_{i}\subseteq\mathcal{O}_{i}\), we have_
\[\Pr[\mathcal{M}(D)\in O_{i}]\leq e^{\epsilon_{i}}\Pr[\mathcal{M}(D^{\prime})\in O _{i}]+\delta_{i},\]
_where \(O_{i}\) are the outputs released to the \(i\)th analyst._
The multi-analyst DP variant supports the composition across different algorithms, as indicated by the following theorem.
**Theorem 3.1** (Multi-Analyst DP Composition).: _Given two randomized mechanisms \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), where \(\mathcal{M}_{1}:\mathcal{D}\rightarrow(O_{1},\ldots,O_{m})\) satisfies \([(A_{1},\epsilon_{1},\delta_{1}),...,(A_{m},\epsilon_{m},\delta_{m})]\)-multi-analyst-DP, and \(\mathcal{M}_{2}:\mathcal{D}\rightarrow(O_{1}^{\prime},\ldots,O_{m}^{\prime})\) satisfies \([(A_{1},\epsilon_{1}^{\prime},\delta_{1}^{\prime}),...,(A_{m},\epsilon_{m}^{ \prime},\delta_{m}^{\prime})]\)-multi-analyst-DP, then the mechanism \(g(\mathcal{M}_{1},\mathcal{M}_{2})\) gives the \([(A_{1},\epsilon_{1}+\epsilon_{1}^{\prime},\delta_{1}+\delta_{1}^{\prime}),...,(A_{m},\epsilon_{m}+\epsilon_{m}^{\prime},\delta_{m}+\delta_{m}^{\prime})]\)-multi-analyst-DP guarantee._
Unlike prior DP work for multiple data analysts, our setup considers data analysts who are obliged under laws/regulations not to share their privacy budgets or query responses with each other. We provide a detailed comparison with other work in Section 8.
Under our new multi-analyst DP framework, several natural but less well-understood research questions (RQs) arise, together with problem setups of potential interest.
**RQ 1: worst-case privacy analysis across analysts.** Under this multi-analyst DP framework, if all data analysts collude or are compromised by an adversary, how can we design algorithms to account for the privacy loss to the colluding analysts? When this happens, we can obtain the following trivial lower and upper bounds for the standard DP measure.
**Theorem 3.2** (Compromisation Lower/Upper Bound, Trivial).: _Given a mechanism \(\mathcal{M}\) that satisfies \([(A_{1},\epsilon_{1},\delta_{1}),...,(A_{m},\epsilon_{m},\delta_{m})]\)-multi-analyst-DP, when all data analysts collude, its DP loss is (i) lower bounded by \((\max\epsilon_{i},\max\delta_{i})\), where \((\epsilon_{i},\delta_{i})\) is the privacy loss to the \(i\)-th analyst, and (ii) trivially upper bounded by \((\sum\epsilon_{i},\sum\delta_{i})\)._
The lower bound indicates the least amount of information that has to be released (to the analysts), while the upper bound simply follows from sequential composition. Obviously, the trivial upper bound does not match the lower bound, raising the question of how to design multi-analyst DP mechanisms that close the gap. Closing this gap means that _even if these data analysts break the law and collude, the overall privacy loss of the multi-analyst DP mechanism is still minimized._ In this paper, we design algorithms that achieve the lower bound, as shown in Section 5.
**RQ 2: dynamic budget allocation across views.** The DP query processing system should impose constraints on the total privacy loss by all the analysts (in the worst case) and the privacy loss per analyst. When handling incoming queries, prior work either dynamically allocates the privacy budget based on the budget request per query (Sandhi, 2017, 2018) or query accuracy requirements (Sandhi, 2017), or predefines a set of static DP views that can handle the incoming queries (Sandhi, 2017).
The dynamic budget per query approach can deplete the privacy budget quickly as each query is handled with a fresh new privacy budget. The static views spend the budget in advance among them to handle unlimited queries, but they may fail to meet the accuracy requirements of some future queries. Therefore, in our work, we consider the view approach but assign budgets dynamically to the views based on the incoming queries so that more queries can be answered with their accuracy requirements. Specifically, we would like to use the histogram view, which queries the number of tuples in a database for each possible value of a set of attributes. The answer to a view is called a synopsis. We consider a set of views that can answer all incoming queries.
**Definition 6** (Query Answerability (Sandhi, 2017)).: _For a query \(q\) over the database \(D\), if there exists a query \(q^{\prime}\) over the histogram view \(V\) such that \(q(D)=q^{\prime}(V(D))\), we say that \(q\) is answerable over \(V\)._
**Example 1**.: Consider two queries \(q_{1}\) and \(q_{2}\) over a database for employees in Figure 1. They are answerable over the \(V_{1}\), a 3-way marginal contingency table over attributes (age, gender, education), via their respective transformed queries \(\hat{q}_{1}\) and \(\hat{q}_{2}\). \(\Box\)
Given a set of views, we would like to design algorithms that can dynamically allocate privacy budgets to them and update their corresponding DP synopses over time. We show how these algorithms maximize the queries that can be handled accurately in Section 5. Since we can dynamically allocate budget to views, our system can also add new views over time. We discuss this possibility in Section 5.3.
**RQ 3: fair query answering among data analysts.** A fair system expects data analysts with higher privacy privileges to receive more accurate answers or larger privacy budgets than ones with lower privacy privileges. However, existing DP systems make no distinctions among data analysts. Hence, it is possible that a low-privilege external analyst who asks queries first consumes all the privacy budget and receives more accurate query answers, leaving no privacy budgets for high-privilege internal data analysts. It is also impossible to force data analysts to ask queries in a certain order. In this context, we would like the system to set up the (available and consumed) privacy budgets for data analysts according to their privacy privilege level. In particular, we define privacy privilege levels as an integer in the range of 1 to 10, where a higher number represents a higher privilege level. We also define a fairness notion inspired by the literature on resource allocation (Bahdan et al., 2017; Bahdan et al., 2017; Bahdan et al., 2017).
**Definition 7** (Proportional Fairness).: _Consider a DP system handling a sequence of queries \(Q\) from multiple data analysts with a mechanism \(\mathcal{M}\), where each data analyst \(A_{i}\) is associated with a privilege level \(l_{i}\). We say the mechanism \(\mathcal{M}\) satisfies proportional fairness, if \(\forall A_{i},A_{j}\ (i\neq j),l_{i}\leq l_{j}\), we have_
\[\frac{Err_{i}(M,A_{i},Q)}{\mu(l_{i})}\leq\frac{Err_{j}(M,A_{j},Q)}{\mu(l_{j})},\]
_where \(Err_{i}(M,A_{i},Q)\) denotes the analyst \(A_{i}\)'s privacy budget consumption and \(\mu(\cdot)\) is some linear function._
This fairness notion suggests the quality of query answers to data analysts, denoted by \(Err_{i}(M,A_{i},Q)\) is proportional to their privilege levels, denoted by a linear function \(\mu(\cdot)\) of their privilege levels. We first consider the privacy budget per data analyst as the quality function, as a smaller error to the query answer is expected with a larger privacy budget. We show in Section 5.3 how to set up the system to achieve fairness when the analysts ask a sufficient number of queries, which means they finish consuming their assigned privacy budget.
## 4. System Overview
In this section, we outline the key design principles of DProvDB and briefly describe the modules of the system.
### Key Design Principles
To support the multi-analyst use case and to answer the aforementioned research questions, we identify four principles and propose a system, DProvDB, that follows these principles.
**Principle 1: fine-grained privacy provenance.** The query processing system should be able to track the privacy budget allocated per each data analyst and per each view in a fine-grained way. The system should additionally enable a mechanism to compose privacy loss across data analysts and the queries they ask.
**Principle 2: view-based privacy management.** The queries are answered based on DP views or synopses. Compared to directly answering a query from the database \(D\), view-based query answering can answer more private queries (Srivastava et al., 2016), but it assumes the accessibility of a pre-known query workload. In our system, the view is the minimum data object that we keep track of its privacy loss, and the views can be updated dynamically if higher data utility is required. The privacy budgets spent on different views during the updating process depend on the incoming queries.
**Principle 3: dual query submission mode.** Besides allowing data analysts to submit a budget with their query, the system enables an accuracy-aware mode. With this mode, data analysts can submit the query with their desired accuracy levels regarding the expected squared error. The dual mode system supports data analysts from domain experts, who can take full advantage of the budgets, to DP novices, who only care about the accuracy bounds of the query.
**Principle 4: maximum query answering.** The system should be tuned to answer as many queries accurately as possible without violating the privacy constraint specified by the administrator as per data analyst and per view based on their privilege levels.
### Privacy Provenance Table
To meet the first two principles, we propose a privacy provenance table for DProvDB, inspired by the access matrix model in access control literature (Krishna et al., 2017), to track the privacy loss per analyst and per view, and further bound the privacy loss. Particularly, in our model, the state of the overall privacy loss of the system is defined as a triplet \((\mathcal{A},\mathcal{V},\mathcal{P})\), where \(\mathcal{A}\) denotes the set of data analysts and \(\mathcal{V}\) represents the list of query-views maintained by the system. We denote by \(\mathcal{P}\) the privacy provenance table, defined as follows.
**Definition 8** (Privacy Provenance Table).: _The privacy provenance table \(\mathcal{P}\) consists of (i) a provenance matrix \(P\) that tracks the privacy loss of a view in \(\mathcal{V}\) to each data analyst in \(\mathcal{A}\), where each entry of the matrix \(P[A_{i},V_{j}]\) records the current cumulative privacy loss \(S^{A_{i}}_{V_{j}}\) on view \(V_{j}\) to analyst \(A_{i}\); (ii) a set of row/column/table constraints, \(\Psi\): a row constraint for the \(i\)-th row of \(P\), denoted by \(\psi_{A_{i}}\), refers to the allowed maximum privacy loss to a data analyst \(A_{i}\in\mathcal{A}\) (according to his/her privilege level); a column constraint for the \(j\)-th column, denoted by \(\psi_{V_{j}}\), refers to the allowed maximum privacy loss to a specific view \(V_{j}\); the table constraint over \(P\), denoted by \(\psi_{P}\), specifies the overall privacy loss allowed for the protected database._
The privacy constraints and the provenance matrix are correlated. In particular, the row/column constraints cannot exceed the overall table constraint, and each entry of the matrix cannot exceed row/column constraints. The correlations, such as the composition of the privacy constraints of all views or all analysts, depend on the DP mechanisms supported by the system. We provide the details of DP mechanisms and the respective correlations in privacy provenance table in Section 5.
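A minimal Python sketch of the privacy provenance table of Definition 8: a provenance matrix plus row, column, and table constraints, with a check performed before a query is answered. For simplicity it composes privacy loss by summing \(\epsilon\)'s (basic sequential composition); the class and method names are illustrative, not the system's actual API.

```
import numpy as np

class PrivacyProvenanceTable:
    """Tracks cumulative privacy loss per (analyst, view) with constraints."""
    def __init__(self, analysts, views, row_limits, col_limits, table_limit):
        self.analysts, self.views = list(analysts), list(views)
        self.P = np.zeros((len(analysts), len(views)))   # provenance matrix
        self.row_limits = row_limits                     # psi_{A_i}, per analyst
        self.col_limits = col_limits                     # psi_{V_j}, per view
        self.table_limit = table_limit                   # psi_P, whole database

    def check(self, analyst, view, eps):
        """Would spending eps on (analyst, view) violate any constraint?"""
        i, j = self.analysts.index(analyst), self.views.index(view)
        ok_row = self.P[i].sum() + eps <= self.row_limits[analyst]
        ok_col = self.P[:, j].sum() + eps <= self.col_limits[view]
        ok_tab = self.P.sum() + eps <= self.table_limit
        return ok_row and ok_col and ok_tab

    def record(self, analyst, view, eps):
        i, j = self.analysts.index(analyst), self.views.index(view)
        self.P[i, j] += eps

table = PrivacyProvenanceTable(["Alice", "Bob"], ["V1", "V2"],
                               {"Alice": 1.0, "Bob": 0.5},
                               {"V1": 1.5, "V2": 1.5}, table_limit=2.0)
if table.check("Bob", "V1", 0.3):
    table.record("Bob", "V1", 0.3)
```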
**Example 2**.: Figure 1 gives an example of the privacy provenance table for \(n\) views and \(m\) data analysts. When DProvDB receives query \(q_{1}\) from Bob, it plans to use view \(V_{1}\) to answer it. DProvDB first retrieves the previous cumulative cost of \(V_{1}\) to Bob from the matrix, \(P[Bob,V_{1}]\), and then computes the new cumulative cost \(S^{Bob}_{V_{1}}\) for \(V_{1}\) to Bob as if it answers \(q_{1}\) using \(V_{1}\). If the new cost \(S^{Bob}_{V_{1}}\) is smaller than Bob's privacy constraint \(\psi_{Bob}\), the view constraint \(\psi_{V_{1}}\), and the table constraint \(\psi_{P}\), DProvDB will answer \(q_{1}\) and update \(P[Bob,V_{1}]\) to \(S^{Bob}_{V_{1}}\); otherwise, \(q_{1}\) will be rejected. \(\Box\)
Due to the privacy constraints imposed by the privacy provenance table, queries can be rejected when the cumulative privacy cost exceeds the constraints. DProvDB needs to design DP mechanisms that utilize the privacy budget well to answer more queries. Hence, we formulate the _maximum query answering problem_ based on the privacy provenance table.
Given a privacy provenance table \((\mathcal{A},\mathcal{V},\mathcal{P})\), at each time, a data analyst \(A_{i}\in\mathcal{A}\) submits the query with a utility requirement \((q_{i},v_{i})\), where the transformed \(\hat{q}_{i}\in\mathcal{V}\), how can we design a system to answer as many queries as possible without violating the row/column/table privacy constraints in \(P\) while meeting the utility requirement per query?
Figure 1. Illustration of the Histogram View, the Query Transformation and the Privacy Provenance Table: (1) histogram view \(V_{1}\) over age, gender, and education is built on the database snapshot; (2) forthcoming queries \(q_{1}\) and \(q_{2}\) are transformed into linearly answerable queries \(\hat{q}_{1}\) and \(\hat{q}_{2}\) over \(V_{1}\); (3) analysts (\(A_{1}\), \(A_{2}\) with low, \(A_{3}\) with high privilege) are recorded in the provenance table – the privacy loss for each view to each analyst is tracked for real-time queries.
Hence, we would like to build an efficient system that satisfies the aforementioned design principles and enables the privacy provenance table, and to develop algorithms that solve the maximum query answering problem in DProvDB. We next outline the system modules in DProvDB, and then provide the detailed algorithm design in Section 5.
### System Modules
The DProvDB system works as a middleware between data analysts and existing DP DBMS systems (such as PINQ, Chorus, and PrivateSQL) to provide add-on functionalities, including fine-grained privacy tracking, view/synopsis management, and privacy-accuracy translation. We briefly summarize the high-level ideas of the modules below.
**Privacy Provenance Tracking.** DProvDB maintains the privacy provenance table for each registered analyst and each generated view, as introduced in Section 4.2. Constraint checking is enabled based on this provenance tracking to decide whether to reject an analyst's query or not. We further build DP mechanisms to maintain and update the DP synopses and the privacy provenance table.
**Dual Query Submission Mode.** DProvDB provides two query submission modes to data analysts. _Privacy-oriented mode_ (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019): queries are submitted with a pre-apportioned privacy budget, i.e., \((A_{i},q_{i},\{\epsilon_{i},\delta_{i}\})\). _Accuracy-oriented mode_ (Han et al., 2019; Wang et al., 2019): analysts can write the query with a desired accuracy bound, i.e., queries are in the form \((A_{i},q_{i},v_{i})\). We illustrate our algorithm with the accuracy-oriented mode.
**Algorithm Overview.** Algorithm 1 summarizes how DProvDB uses the DP synopses to answer incoming queries. At the system setup phase (lines 1-3), the administrator initializes the privacy provenance table by setting the matrix entries to 0 and the row/column/table constraints \(\Psi\). The system initializes empty synopses for each view. The data analyst specifies a query \(q_{i}\) with its desired utility requirement \(v_{i}\) (line 5). Once the system receives the request, it selects the suitable view and mechanism to answer this query (lines 6-7) and uses the function privacyTranslate() to find the minimum privacy budget \(\epsilon_{i}\) for \(V\) to meet the utility requirement of \(q_{i}\) (line 8). Then, DProvDB checks if answering \(q_{i}\) with budget \(\epsilon_{i}\) would violate the privacy constraints \(\Psi\) (line 9). If this constraint check passes, we run the mechanism to obtain a noisy synopsis (line 10). DProvDB uses this synopsis to answer query \(q_{i}\) and returns the answer to the data analyst (line 11). If the constraint check fails, DProvDB rejects the query (line 13). We show concrete DP mechanisms with their corresponding interfaces in the next section.
**Remark.** For simplicity, we drop \(\delta\) and focus on \(\epsilon\) as privacy loss/budget in privacy composition, but \(\delta\)'s of DP synopses are similarly composited as stated in Theorem 3.1. For the accuracy-privacy translation, we consider a fixed given small delta and aim to find the smallest possible epsilon to achieve the accuracy specification of the data analyst.
```
1  Set δ in the system
2  Function run(P, A_i, V, ε_i):
3      Generate a synopsis V^{ε_i}_{A_i} from view V
4      Update privacy provenance table P[A_i, V] ← P[A_i, V] + ε_i
5      return r_i ← V^{ε_i}_{A_i}
6  end
7  Function privacyTranslate(q_i, v_i, V, p):
8      Set u = ψ_P, l = 0
9      v ← calculateVariance(q_i, v_i, V)
10     ε = binarySearch(l, u, testAccuracy(·, v, Δq_i, δ), p)
11     return ε
12 end
13 Function constraintCheck(P, A_i, V_j, ε_i, Ψ):
14     if (P.composite(ε) + ε_i ≤ Ψ.ψ_P) ∧ (P.composite(ε, axis=Row) + ε_i ≤ Ψ.ψ_{A_i}) ∧ (P.composite(ε, axis=Column) + ε_i ≤ Ψ.ψ_{V_j}) then
15         return True/Pass
16 end
```
**Algorithm 2** Vanilla Approach
## 5. DP Algorithm Design
In this section, we first describe a vanilla DP mechanism that can instantiate the system interface but cannot maximize the number of queries being answered. Then we propose an additive Gaussian mechanism that leverages the correlated noise in query answering to improve the utility of the vanilla mechanism. Without loss of generality, we assume the data analysts do not submit the same query with decreased accuracy requirement (as they would be only interested in a more accurate answer).
### Vanilla Approach
The vanilla approach is based on the Gaussian mechanism (applied to both the basic Gaussian mechanism (Han et al., 2017) and the analytic Gaussian mechanism (Blei et al., 2017)). We describe how the system modules are instantiated with the vanilla approach.
#### 5.1.1. Accuracy-Privacy Translation
This module translates the user-specified utility requirement into the minimum privacy budget (Algorithm 2: lines 7-11). Note that instead of perturbing the query result of \(q_{i}\) directly, we generate a noisy DP synopsis and use it to answer a query (usually involving adding up a number of noisy counts from the synopsis). Hence, we need to translate the accuracy bound \(v_{i}\) specified by the data analyst over the query to the corresponding accuracy bound \(v\) for the synopsis before searching for the minimal privacy budget (Algorithm 2: line 9). Here, \(v\) represents the variance of the noise added to each count of the histogram synopsis. Next, we search for the minimal privacy budget \(\epsilon\) that results in a noisy synopsis with noise variance not more than \(v\), based on the following analytic Gaussian translation.
**Definition 9** (Analytic Gaussian Translation).: _Given a query \(q:\mathcal{D}\rightarrow\mathbb{R}^{d}\) and an expected squared error bound \(v\) for this query, the minimum privacy budget for the analytic Gaussian mechanism should satisfy \(\Phi_{\mathcal{N}}\left(\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)-e^{\epsilon}\Phi_{\mathcal{N}}\left(-\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)\leq\delta\). That is, given \(\Delta q,\delta,v\), we solve the following optimization problem to find the minimal \(\epsilon\)._
\[\min_{\epsilon\in(0,\psi_{\mathcal{D}}]}\epsilon\quad\text{s.t.}\quad\Phi_{\mathcal{N}}\left(\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)-e^{\epsilon}\Phi_{\mathcal{N}}\left(-\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)\leq\delta \tag{1}\]
Finding a closed-form solution to the problem above is not easy. However, we observe that the LHS of the constraint in Equation (1) is a monotonic function of \(\epsilon\). Thus, we use binary search (Algorithm 2: line 10) to look for the smallest possible value for \(\epsilon\). For each tested value of \(\epsilon\), we compute its analytic Gaussian variance (Bartos et al., 2016), denoted by \(v^{\prime}\). If \(v^{\prime}>v\), then this value is invalid and we search for a bigger epsilon value; otherwise, we search for a smaller one. We stop at an epsilon value with a variance \(v^{\prime}\leq v\) and a distance \(p\) from the last tested invalid epsilon value. We have the following guarantee for this output.
**Proposition 5.1** (Correctness of Translation).: _Given a query \((q_{i},v_{i})\) and the view \(V\) for answering \(q_{i}\), the translation function (Algorithm 2, privacyTranslation) outputs a privacy budget \(\epsilon\). The query \(q_{i}\) can then be answered with expected square error \(v_{q}\) over the updated synopsis \(V_{A}^{\epsilon}\) such that: i) meets the accuracy requirement \(v_{q}\leq v_{i}\), and ii) \(\epsilon-\epsilon^{*}\leq p\), where \(\epsilon^{*}\) is the minimal privacy budget to meet the accuracy requirement for Algorithm 2 (RUN function)._
Proof Sketch.: First, line 9 derives the per-bin accuracy requirement based on the submitted per-query accuracy, and plugs it into the search condition (Equation (1)). Note that our DP mechanism and the accuracy requirement are data-independent. As long as the searching condition holds, the noise added to the query answer in run satisfies \(v_{q}\leq v_{i}\). Second, the stopping condition of the search algorithm guarantees 1) there is a solution, 2) the searching range is reduced to \(\leq p\). Thus we have \(\epsilon-\epsilon^{*}\leq p\).
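A sketch of the accuracy-to-privacy translation: binary-search the smallest \(\epsilon\) whose analytic Gaussian condition (Equation (1)) holds at the target noise scale. Here we assume the per-bin standard deviation is \(\sqrt{v}\) for a target variance \(v\); the function and parameter names are illustrative, and scipy's normal CDF stands in for \(\Phi_{\mathcal{N}}\).

```
import numpy as np
from scipy.stats import norm

def analytic_gaussian_delta(sigma, eps, sensitivity):
    """delta achieved by a Gaussian mechanism with scale sigma at privacy eps
    (analytic Gaussian condition, cf. Equation (1))."""
    a = sensitivity / (2 * sigma) - eps * sigma / sensitivity
    b = -sensitivity / (2 * sigma) - eps * sigma / sensitivity
    return norm.cdf(a) - np.exp(eps) * norm.cdf(b)

def privacy_translate(target_var, sensitivity, delta, eps_max, tol=1e-4):
    """Smallest eps in (0, eps_max] whose Gaussian noise variance <= target_var."""
    sigma = np.sqrt(target_var)          # assumed: per-bin noise scale from the variance bound
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if analytic_gaussian_delta(sigma, mid, sensitivity) <= delta:
            hi = mid                     # condition holds: try a smaller eps
        else:
            lo = mid
    return hi

eps = privacy_translate(target_var=4.0, sensitivity=1.0, delta=1e-5, eps_max=5.0)
```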
#### 5.1.2. Provenance Constraint Checking
As mentioned, the administrator can specify privacy constraints in the privacy provenance table. \(\mathsf{DProvDB}\) decides whether to _reject or answer a query_ using the provenance matrix \(P\) and the privacy constraints \(\Psi\) in the privacy provenance table, as indicated in Algorithm 2: lines 13-15 (the function constraintCheck). This function checks whether any of the three types of constraints would be violated if the current query were issued. The composite function in this constraint-checking algorithm can be the basic sequential composition or a tighter privacy composition given by Renyi-DP (Renyi and Renyi, 2016) or zCDP (Renyi and Renyi, 2016; Renyi and Renyi, 2016). We suggest using advanced composition for accounting the privacy loss over time, but not for checking constraints, because the size of the provenance table \(n\times m\) is too small for this composition to yield a tight bound.
#### 5.1.3. Putting Components All Together
The vanilla approach is aligned with existing DP query systems in the sense that it adds independent noise to the result of each query. Hence, it can be quickly integrated into these systems to provide privacy provenance and accuracy-aware features with little overhead. Algorithm 2: 2-5 (the function run) outlines how the vanilla method runs. It first generates the DP synopsis \(V_{A_{i}}^{\epsilon_{i}}\) using analytic Gaussian mechanism for the chosen view \(V\) and updates the corresponding entry \(P[A_{i},V]\) in the privacy provenance table by compositing the consumed privacy loss \((\epsilon_{i},\delta_{i})\) on the query (depending on the specific composition method used). We defer the analysis for the accuracy and privacy properties of the vanilla mechanism to Section 5.4.
### Additive Gaussian Approach
While ideas of using correlated Gaussian noise have been exploited (Zhu et al., 2017), we adopt similar statistical properties into an additive Gaussian DP mechanism, a primitive to build our additive Gaussian approach for synopses maintenance. Then, we describe how \(\mathsf{DProvOB}\) generates and updates the (local and global) DP synopses with this algorithm across analysts and over time.
#### 5.2.1. Additive Gaussian Mechanism
The additive Gaussian mechanism (additive GM or aGM) modifies the standard Gaussian mechanism, based on the nice statistical property of the Gaussian distribution \(-\) the sum of i.i.d. normal random variables is still normally distributed. We outline this primitive mechanism in Algorithm 3. This primitive takes a query \(q\), a database instance \(D\), a set of privacy budgets \(\mathcal{B}\) corresponding to the set of data analysts \(\mathcal{A}\) as input, and this primitive outputs a noisy query result to each data analyst \(A_{i}\), which consumes the corresponding privacy budget \((\epsilon_{i},\delta)\). Its key idea is only to execute the query (to get the true answer on the database) once, and cumulatively inject noises to previous noisy answers, when multiple data analysts ask the same query. In particular, we sort the privacy budget set specified by the analysts. Starting from the largest budget, we add noise w.r.t. the Gaussian variance \(\sigma_{i}^{2}\) calculated from the query sensitivity \(\Delta q\) and this budget \((\epsilon_{i},\delta)\). For the rest of the budgets in the set, we calculate the Gaussian variance \(\sigma_{j}^{2}\) in the same approach but add noise w.r.t \(\sigma_{j}^{2}-\sigma_{i}^{2}\) to the previous noisy answer. The algorithm then returns the noisy query answer to each data analyst. The privacy guarantee of this primitive is stated as follows.
**Theorem 5.2**.: _Given a database \(D\), a set of privacy budgets \(\mathcal{B}\coloneqq(\epsilon_{1},\delta),(\epsilon_{2},\delta),\ldots,(\epsilon_{n},\delta)\) and a query \(q\), the additive Gaussian mechanism (Algorithm 3) that returns a set of noisy answers \(r_{1},r_{2},\ldots,r_{n}\) to each data analyst \(A_{i}\) satisfies \([(A_{1},\epsilon_{1},\delta),...,(A_{n},\epsilon_{n},\delta)]\)-multi-analyst-DP and \((\max\{\epsilon_{1},\ldots,\epsilon_{n}\},\delta)\)-DP._
Proof Sketch.: To each data analyst \(A_{i}\), the DP mechanism is equivalent to the standard Gaussian mechanism with a proper
variance that satisfies \((\epsilon_{i},\delta)\)-DP. Since the data is accessed only once, the \((\max\{\epsilon_{1},\ldots,\epsilon_{n}\},\delta)\)-DP guarantee follows from post-processing.
**Discussion on \(\delta\)**. If the \(\delta\) is not a fixed parameter in the system, it could happen in privacy budget \((\epsilon_{i},\delta_{i})\) that \(\epsilon_{i}=\max\mathcal{E}\) but \(\delta_{i}=\min\mathcal{D}\). Algorithm 3 can be simply modified to handle this by not sorting \(\mathcal{B}\) based on descending \(\epsilon\)'s (line 4) but according to the ascending order of the calculated \(\sigma\) (line 6).
```
Input:  Analysts 𝒜 = A_1, ..., A_n; a query q; database instance D;
        a set of privacy budgets ℬ := {ℰ, 𝒟} = (ε_1, δ), (ε_2, δ), ..., (ε_n, δ).
Output: A set of noisy answers r_1, r_2, ..., r_n.
1  Function AdditiveGM(𝒜, ℬ, q, D):
2      r ← queryExec(q, D)                        ▷ Obtain the true query answer.
3      Δq ← sensCalc(q)                           ▷ Sensitivity calculation.
4      ℬ' ← sort(ℬ, ε_i)                          ▷ Sort ℬ in descending order of ε.
5      (ε_i, δ) ← pop(ℬ')                         ▷ Pop the first element.
6      σ_i ← analyticGM(ε_i, δ, Δq)
7      r_i ← r + η_i,  η_i ~ N(0, σ_i²)           ▷ Add Gaussian noise.
8      while ℬ' ≠ ∅ do
9          (ε_j, δ) ← pop(ℬ')
10         σ_j ← analyticGM(ε_j, δ, Δq)
11         r_j ← r_i + η_j,  η_j ~ N(0, σ_j² − σ_i²)
12     end while
13     return ℛ := {r_i | i ∈ [n]}
```
**Algorithm 3** Additive Gaussian Noise Calibration
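A NumPy sketch of Algorithm 3: the query is executed once, the analyst with the largest budget receives the least-noisy answer, and every other analyst's answer is derived from it by adding only the extra noise with variance \(\sigma_{j}^{2}-\sigma_{i}^{2}\). For brevity it uses the classical Gaussian calibration \(\sigma=\Delta q\sqrt{2\ln(1.25/\delta)}/\epsilon\) as a stand-in for the analytic calibration used in the paper; names are illustrative.

```
import numpy as np

def gaussian_sigma(eps, delta, sensitivity):
    """Classical Gaussian-mechanism calibration (stand-in for analyticGM)."""
    return sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps

def additive_gaussian(true_answer, budgets, delta, sensitivity, rng=None):
    """budgets: {analyst: eps}.  Returns {analyst: noisy answer} (cf. Algorithm 3)."""
    rng = rng or np.random.default_rng()
    order = sorted(budgets, key=budgets.get, reverse=True)   # descending eps
    answers = {}
    first = order[0]
    sigma_i = gaussian_sigma(budgets[first], delta, sensitivity)
    r_i = true_answer + rng.normal(0, sigma_i)               # least-noisy answer
    answers[first] = r_i
    for analyst in order[1:]:
        sigma_j = gaussian_sigma(budgets[analyst], delta, sensitivity)
        extra = np.sqrt(sigma_j ** 2 - sigma_i ** 2)         # only the extra noise
        answers[analyst] = r_i + rng.normal(0, extra)
    return answers

out = additive_gaussian(42.0, {"Alice": 0.5, "Bob": 0.3}, delta=1e-5, sensitivity=1.0)
```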
#### 5.2.2. Synopses Management
We introduce the concept of global and local DP synopses and then discuss the updating process in our additive GM. A DP synopsis (or synopsis for short) is a noisy answer to a (histogram) view over a database instance. We first use the privacy-oriented mode to explain the synopses management for clarity, and then elaborate the accuracy-oriented mode in the accuracy-privacy translation module (Section 5.2.3).
**Global and Local DP Synopses.** To solve the maximum query answering problem, for each view \(V\in\mathcal{V}\), \(\textsc{DProvDB}\) maintains a _global DP synopsis_ with a cost of \((\epsilon,\delta)\), denoted by \(V^{\epsilon,\delta}(D)\) or \(V^{\epsilon}\), where \(D\) is the database instance. For simplicity, we drop \(\delta\) (considering the same value throughout) and \(D\) from the notation. For this view, \(\textsc{DProvDB}\) also maintains a _local DP synopsis_ for each analyst \(A_{i}\in\mathcal{A}\), denoted by \(V^{\epsilon^{\prime}}_{A_{i}}\), where the local synopsis is always generated from the global synopsis \(V^{\epsilon}\) of the view \(V\) by adding more noise. Hence, we would like to ensure \(\epsilon\geq\epsilon^{\prime}\). This local DP synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) will be used to answer the queries asked by the data analyst \(A_{i}\).
The process of updating synopses consists of two parts. The first part is to update the local synopses based on the global synopses. The second part is to update the global synopses by relaxing the privacy guarantee, in order to answer a query with a higher accuracy requirement. We discuss the details below.
**Generating Local Synopses from Global Synopses.** We leverage our additive GM primitive to release a local DP synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) from a given global synopsis \(V^{\epsilon}\), where \(V^{\epsilon}\) is generated by a Gaussian mechanism. Given the privacy guarantee \(\epsilon\) (and \(\delta\)) and the sensitivity of the view, the Gaussian mechanism can calculate a proper variance \(\sigma^{2}\) for adding noise and ensuring DP. The additive GM calculates \(\sigma^{2}\) and \(\sigma^{\prime 2}\) based on \(\epsilon\) and \(\epsilon^{\prime}\) respectively, and then generates the local synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) by injecting independent noise drawn from \(\mathcal{N}(0,\sigma^{\prime 2}-\sigma^{2})\) to the global synopsis \(V^{\epsilon}\). As the global synopsis is hidden from all the analysts, the privacy loss to the analyst \(A_{i}\) is \(\epsilon^{\prime}\). Even if all the analysts collude, the maximum privacy loss is bounded by the budget spent on the global synopsis.
**Example 3**.: Alice and Bob are asking queries to \(\textsc{DProvDB}\). Alice asks the first query \(q_{1}\) (which is answerable on \(V_{1}\)) with _budget requirement_ \(\epsilon_{V_{1},Alice}=0.5\). \(\textsc{DProvDB}\) generates a global synopsis \(V^{0.5}\) for \(V\) with budget \(0.5\) and then generates a local synopsis \(V^{0.5}_{Alice}\) from the global synopsis \(V^{0.5}\) for Alice. Bob next asks query \(q_{2}\) (which is also answerable on \(V_{1}\)) with budget \(\epsilon_{V_{1},Bob}=0.3\). Since the budget \(0.3<0.5\), we use the additive GM to generate a local synopsis \(V^{0.3}_{Bob}\) from the global synopsis \(V^{0.5}\) for Bob and return the query answer based on the local synopsis \(V^{0.3}_{Bob}\). This example follows "Case " in Fig. 2.
**Updating Global Synopses by Combining Views.** When the global DP synopsis \(V^{\epsilon}\) is not sufficiently accurate to handle a local synopsis request at privacy budget \(\epsilon_{t}\), \(\textsc{DProvDB}\) spends additional privacy budget \(\Delta\epsilon\) to update the global DP synopsis to \(V^{\epsilon+\Delta\epsilon}\), where \(\Delta\epsilon=\epsilon_{t}-\epsilon\). We still consider Gaussian mechanism, which generates an intermediate DP synopsis \(V^{\Delta\epsilon}\) with a budget \(\Delta\epsilon\). Then we combine the previous synopses with this intermediate synopsis into an updated one. The key insight of the combination is to properly involve the fresh noisy synopses by assigning each synopsis with a weight proportional to the inverse of its noise variance, which gives the smallest expected square error based on UMVUE (Sutskever et al., 2017; Wang et al., 2018). That is, for the \(t\)-th release, we combine these two synopses:
\[V^{\epsilon_{t}}=(1-w_{t})V^{\epsilon_{t-1}}+w_{t}V^{\Delta\epsilon}. \tag{2}\]
The resulting expected squared error of \(V^{\epsilon_{t}}\) is \(v_{t}=(1-w_{t})^{2}v_{t-1}+w_{t}^{2}v_{\Delta}\), where \(v_{t-1}\) is the noise variance of the view \(V^{\epsilon_{t-1}}\) and \(v_{\Delta}\) is that of \(V^{\Delta\epsilon}\). To minimize the resulting error, \(w_{t}=\frac{v_{t-1}}{v_{\Delta}+v_{t-1}}\).
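A minimal sketch of the combination in Equation (2), assuming the system tracks the noise variance of each synopsis; the inverse-variance weight follows the UMVUE argument above. Function and variable names are hypothetical.

```python
import numpy as np

def combine_synopses(syn_prev, var_prev, syn_delta, var_delta):
    """Combine the previous global synopsis with a fresh delta synopsis
    (Equation 2), using the inverse-variance weight that minimizes the
    expected squared error of the result."""
    w_t = var_prev / (var_prev + var_delta)
    combined = (1.0 - w_t) * syn_prev + w_t * syn_delta
    combined_var = (1.0 - w_t) ** 2 * var_prev + w_t ** 2 * var_delta
    return combined, combined_var

# Two noisy copies of the same two-bin view, with noise variances 4.0 and 9.0.
syn_a, syn_b = np.array([101.5, 79.2]), np.array([97.8, 83.1])
combined, var = combine_synopses(syn_a, 4.0, syn_b, 9.0)   # var ~= 2.77 < min(4, 9)
```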
**Example 4**.: At the next time stamp, Bob asks a query \(q_{1}\) with budget \(\epsilon_{V_{1},Bob}=0.7\). Clearly the current global synopsis \(V^{0.5}\) is not sufficient to answer this query because \(0.7>0.5\). Then the system needs to update \(V^{0.5}\) and this is done by: 1) first generating a fresh global synopsis \(V^{0.2}\) using analytic GM from \(V(D)\); 2) then combining it with \(V^{0.5}\) to form \(V^{0.7}\) using Equation (2). This example is illustrated as steps 2a and 2b of "Case " in Fig. 2.
**Lemma 5.3** (Correctness of View Combination).: _Releasing the combined DP synopsis in the \(t\)-th update is \((\epsilon_{t-1}+\Delta\epsilon,\delta_{t-1}+\Delta\delta)\)-DP._
Proof Sketch.: The \(t\)-th update combines an \((\epsilon_{t-1},\delta_{t-1})\)-DP synopsis and a fresh synopsis that is \((\Delta\epsilon,\Delta\delta)\)-DP. By sequential composition, the combined synopsis is \((\epsilon_{t-1}+\Delta\epsilon,\delta_{t-1}+\Delta\delta)\)-DP.
The view combination is not _frictionless_. Although the combined synopsis \(V^{\epsilon+\Delta\epsilon}\) achieves \((\epsilon+\Delta\epsilon,\delta+\Delta\delta)\)-DP, if we had spent the whole privacy budget on generating a single synopsis all at once, that one-shot synopsis would have the same privacy guarantee but a smaller expected error than \(V^{\epsilon+\Delta\epsilon}\). We can show that sequentially combining and releasing synopses over time is optimal among all possible linear combinations of the synopses; however, designing a frictionless updating algorithm for Gaussian mechanisms is non-trivial in its own right, which remains future work.
**Theorem 5.4** (Optimality of Linear View Combination).: _Given a sequence of views \(V^{\epsilon_{1}},V^{\Delta\epsilon_{2}},\ldots,V^{\Delta\epsilon_{t}}\), the expected squared error of our \(t\)-th release is less than or equal to that of releasing \(w_{1}V^{\epsilon_{1}}+\sum_{i=2}^{t}w_{i}V^{\Delta\epsilon_{i}}\) for all \(\{w_{i}\mid i=1,\ldots,t\}\) s.t. \(\sum_{i}w_{i}=1\)._
Intuitively, this theorem is proved by a reduction to the best unbiased estimator and re-normalizing weights based on induction.
**Updating Local Synopses and Accounting Privacy.** When a local DP synopsis is not sufficiently accurate to handle a query, but the budget request for this query \(\epsilon_{i}\) is still smaller than or equal to the budget \(\epsilon_{t}\) for the global synopsis of \(V\), \(\mathtt{DProvDB}\) generates a new local synopsis \(V^{\epsilon_{i}}_{A_{i}}\) from \(V^{\epsilon_{t}}\) using additive GM. The analyst \(A_{i}\) is able to combine query answers into a more accurate one, but the privacy cost for this analyst on view \(V\) is bounded by \(\min(\epsilon_{t},P[A_{i},V]+\epsilon_{i})\), and \(P[A_{i},V]\) is updated to this value.
**Example 5**.: Analysts' queries are always answered on local synopses. To answer Bob's query \((q_{1},\epsilon_{V_{1},Bob}=0.7)\), \(\mathtt{DProvDB}\) uses additive GM to generate a fresh local synopsis \(V^{0.7}_{Bob}\) from \(V^{0.7}\) and returns the answer to Bob. Alice asks another query \((q_{1},\epsilon_{V_{1},Alice}=0.6)\). \(\mathtt{DProvDB}\) updates \(V^{0.5}_{Alice}\) by generating \(V^{0.6}_{Alice}\) from \(V^{0.7}\). Both analysts' privacy loss on \(V\) will be accounted as \(0.7\). This example complements the illustration of step 2c of "Case " in Fig. 2.
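The accounting rule above can be sketched as a small update on the provenance-table entry \(P[A_{i},V]\); the dictionary-based table and the helper name below are hypothetical, and the example numbers mirror Example 5.

```python
def account_local_update(prov, analyst, view, eps_request, eps_global):
    """Record an analyst's privacy loss on a view after a local-synopsis release.

    The loss is capped by the budget spent on the global synopsis, since every
    local synopsis is derived from it: min(eps_global, P[A_i, V] + eps_request).
    """
    spent = prov.get((analyst, view), 0.0)
    prov[(analyst, view)] = min(eps_global, spent + eps_request)
    return prov[(analyst, view)]

# Reproducing Example 5: both analysts end up accounted 0.7 on the view.
prov_table = {("Alice", "V1"): 0.5, ("Bob", "V1"): 0.3}
account_local_update(prov_table, "Bob", "V1", 0.7, eps_global=0.7)    # -> 0.7
account_local_update(prov_table, "Alice", "V1", 0.6, eps_global=0.7)  # -> 0.7
```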
#### 5.2.3. Accuracy-Privacy Translation
The accuracy translation algorithm should account for the friction introduced when combining global synopses. We propose an accuracy-privacy translation paradigm (Algorithm 4: line 12) that incorporates this consideration. The translator module takes as input the query \(q_{i}\), the utility requirement \(v_{i}\), the view \(V\) for answering the query, and additionally the current global synopsis \(V^{\epsilon}\) (we simplify the interface in Algorithm 1), and outputs the corresponding budget \(\epsilon_{i}\) for the run function (omitting \(\delta\), which takes the same value throughout).
As the first query release for a view does not involve the frictional updating issue, we separate the translation into two phases, where the first query release directly follows the analytic Gaussian translation of our vanilla approach. For the second phase, given a global DP synopsis \(V^{\epsilon}\) at hand (with _tracked_ expected error \(v^{\prime}\)) for a specific query-view, when a new query is submitted by a data analyst with required expected error \(v_{i}<v^{\prime}\), we solve an optimization problem to find the Gaussian variance of the fresh DP synopsis. We first calculate the Gaussian variance \(v^{\prime}\) of the current DP synopsis (line 13) and then solve the following optimization problem (line 14).
\[\operatorname*{arg\,max}_{\sigma}\;v_{i}=w^{2}v^{\prime}+(1-w)^{2}v_{t}\;\; \text{s.t.}\;w\in[0,1] \tag{3}\]
The solution gives us the minimal error variance \(v_{t}=\sigma^{2}\) (line 15). By translating \(\sigma^{2}\) into the privacy budget using the standard analytic Gaussian translation technique (Def. 9), we can get the minimum privacy budget that achieves the required accuracy guarantee (line 16). Note that when the requested accuracy \(v_{i}>v^{\prime}\), the solution to this optimization problem is \(w=0\), which automatically degrades to the vanilla translation.
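A numerical sketch of this second-phase translation: given the tracked error \(v^{\prime}\) of the current global synopsis and a requested error \(v_{i}\), it searches over the combination weight \(w\) for the largest admissible fresh-synopsis variance and then converts that variance into a budget. The closed-form classical Gaussian translation in the last step is a stand-in assumption; \(\textsc{DProvDB}\) uses the analytic Gaussian translation of Def. 9, and all names here are hypothetical.

```python
import numpy as np

def translate_with_friction(v_prime, v_i, delta=1e-9, sensitivity=1.0):
    """Find the largest fresh-synopsis variance v_t such that some combination
    w*V_old + (1-w)*V_fresh meets the requested error v_i, then convert v_t
    into a budget (classical Gaussian translation as a stand-in)."""
    if v_i >= v_prime:
        v_t_best = v_i                        # w = 0: degrade to vanilla translation
    else:
        # v_i = w^2 v' + (1-w)^2 v_t  =>  v_t(w) = (v_i - w^2 v') / (1-w)^2
        ws = np.linspace(0.0, np.sqrt(v_i / v_prime) * 0.999, 10_000)
        v_t_best = float(np.max((v_i - ws**2 * v_prime) / (1.0 - ws) ** 2))
    eps = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / np.sqrt(v_t_best)
    return eps, v_t_best

eps_needed, v_fresh = translate_with_friction(v_prime=25.0, v_i=9.0)
# v_fresh ~= 14.1 > 9: reusing the old synopsis lets the fresh one be noisier,
# so the translated budget is smaller than the vanilla (w = 0) translation.
```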
**Theorem 5.5**.: _Given a query \((q_{i},v_{i})\) and the view \(V\) for answering \(q_{i}\), the translation function (Algorithm 4, privacyTranslation) outputs a privacy budget \(\epsilon\). The query \(q_{i}\) can then be answered with expected square error \(v_{q}\) over the updated synopsis \(V^{\epsilon}_{A}\) such that: i) meets the accuracy requirement \(v_{q}\leq v_{i}\), and ii) \(\epsilon-\epsilon^{*}\leq p\), where \(\epsilon^{*}\) is the minimal privacy budget to meet the accuracy requirement for Algorithm 4 (run function)._
Proof Sketch.: The privacyTranslation in additive Gaussian approach calls the translation function in the vanilla approach as subroutine (Algorithm 4: 15). The correctness of the additive Gaussian privacyTranslation depends on inputting the correct expected square error, which is calculated based on the accuracy requirement \(v_{i}\) while considering the frictions, into the subroutine. The calculation of the expected squared error with an optimization solver has been discussed and analyzed above.
#### 5.2.4. Provenance Constraint Checking
The provenance constraint checking for the additive Gaussian approach (line 18) is similar to the module for the vanilla approach. We would like to highlight three differences. 1) Due to the use of the additive Gaussian mechanism, the composition across analysts on the same view is bounded as tightly as \(\max P[A_{i},V],\forall A_{i}\in\mathcal{A}\). Therefore, we check the view constraint by taking the max over the column retrieved by the index of view \(V_{j}\). 2) To check the table constraint, we compose a vector where each element is the maximum recorded privacy budget over every column/view. 3) The new cumulative cost is no longer \(\epsilon_{i}\), but \(\epsilon^{\prime}=\min(\epsilon,P[A_{i},V]+\epsilon_{i})-P[A_{i},V]\).

Figure 2. Illustration of the Additive Gaussian Approach
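A sketch of the three checks described in this subsection, over a provenance table stored as an analysts \(\times\) views matrix of recorded budgets. The matrix layout, the omission of the analyst-level (row) check, and the use of a simple sum to compose the per-view maxima for the table constraint are assumptions for illustration.

```python
import numpy as np

def check_and_cost(P, i, j, eps_req, eps_global, psi_view, psi_table):
    """Return the incremental cost charged to analyst i on view j if the request
    passes the view and table checks, else None."""
    # 3) incremental cumulative cost, capped by the global synopsis budget
    delta_cost = min(eps_global, P[i, j] + eps_req) - P[i, j]
    new_entry = P[i, j] + delta_cost
    # 1) view (column) constraint: additive GM composes across analysts as a max
    view_ok = max(P[:, j].max(), new_entry) <= psi_view[j]
    # 2) table constraint: compose the vector of per-view maxima (sum assumed here)
    col_max = P.max(axis=0)
    col_max = np.where(np.arange(P.shape[1]) == j, np.maximum(col_max, new_entry), col_max)
    table_ok = col_max.sum() <= psi_table
    return delta_cost if (view_ok and table_ok) else None

P = np.array([[0.5, 0.0], [0.3, 0.2]])              # rows: Alice, Bob; cols: V1, V2
check_and_cost(P, i=1, j=0, eps_req=0.7, eps_global=0.7,
               psi_view=[1.0, 1.0], psi_table=2.0)   # -> 0.4  (i.e. 0.7 - 0.3)
```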
**Theorem 5.6**.: _Given the same sequence of queries \(Q\), at least 2 data analysts in the system, and the same system setup, the total number of queries answered by the additive Gaussian approach is always greater than or equal to that answered by the vanilla approach._
Proof Sketch.: \(\mathsf{DProvDB}\) processes incoming queries with DP synopsis (vanilla approach) or local synopsis (additive Gaussian approach). For each query \(q\) in \(Q\), if it can be answered (w.r.t the accuracy requirement) with cached synopsis, both approaches will process it in the same manner; otherwise, \(\mathsf{DProvDB}\) needs to update the synopses. Comparing the cost of synopses update in both methods, \(\min(\epsilon,P[A_{i},V]+\epsilon_{i})-P[A_{i},V]\leq\epsilon_{i}\) always holds. Therefore, given the same privacy constraints, if a synopsis can be generated to answer query \(q\) with vanilla approach, the additive Gaussian approach must be able to update a global synopsis and generate a local synopsis to answer this query, which proves the theorem.
We note that vanilla approach generates independent synopses for different data analysts, while in the additive Gaussian approach, we only update global synopses which saves privacy budgets when different analysts ask similar queries. We empirically show the benefit in terms of answering more queries with the additive Gaussian approach in Section 6.
#### 5.2.5. Putting Components All Together
The additive Gaussian approach is presented in Algorithm 4: lines 2-10 (the run function). At each time stamp, the system receives the query \(q\) from the analyst \(A_{i}\) and selects the view on which this query is answerable. If the translated budget \(\epsilon_{i}\) is greater than the budget allocated to the global synopsis of that view (line 3), we generate a delta synopsis (lines 4-5) and update the global synopsis (line 6). Otherwise, we can use additive GM to generate the local synopsis based on the (updated) global synopsis (lines 7-9). We update the provenance table with the consumed budget \(\epsilon_{i}\) (line 10) and answer the query to the analyst based on the local synopsis (line 11).
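Putting the pieces together, the following condensed sketch mirrors the control flow of the run function on a single view, with constraint checking omitted and the classical Gaussian calibration standing in for the analytic GM. The dictionary-based state and all names are hypothetical rather than the actual Scala implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_for(eps, delta=1e-9, sens=1.0):
    # classical Gaussian calibration (DProvDB uses the tighter analytic GM)
    return sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def run(state, prov, analyst, view, eps_req):
    """One step of the additive Gaussian approach, without constraint checking."""
    v = state[view]
    if eps_req > v["eps"]:                                   # lines 3-6: update global
        d_eps = eps_req - v["eps"]
        var_d = sigma_for(d_eps) ** 2
        fresh = v["true"] + rng.normal(0.0, np.sqrt(var_d), v["true"].shape)
        w = v["var"] / (v["var"] + var_d)                    # inverse-variance weight
        v["syn"] = (1 - w) * v["syn"] + w * fresh
        v["var"] = (1 - w) ** 2 * v["var"] + w ** 2 * var_d
        v["eps"] = eps_req
    # lines 7-9: additive GM local synopsis; the extra noise is clamped at zero when
    # friction already makes the global synopsis noisier than an eps_req release.
    extra = np.sqrt(max(sigma_for(eps_req) ** 2 - v["var"], 0.0))
    local = v["syn"] + rng.normal(0.0, extra, v["syn"].shape)
    prov[(analyst, view)] = min(v["eps"], prov.get((analyst, view), 0.0) + eps_req)
    return local                                             # line 11: answer from local

# Bootstrap one view with a first global synopsis at eps = 0.5, then serve Bob at 0.7.
true = np.array([120.0, 80.0, 45.0])
s0 = sigma_for(0.5)
state = {"V1": {"true": true, "syn": true + rng.normal(0.0, s0, 3), "var": s0**2, "eps": 0.5}}
prov = {}
answer_bob = run(state, prov, "Bob", "V1", eps_req=0.7)      # prov[("Bob", "V1")] == 0.7
```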
#### 5.2.6. Discussion on Combining Local Synopses
We may also update a local synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) (upon a request \(\epsilon_{i}\)) by first generating an intermediate local synopsis \(V^{\Delta\epsilon}_{A_{i}}\) from the global synopsis \(V^{\epsilon_{t}}\) using additive GM, where \(\Delta\epsilon=\epsilon_{i}-\epsilon^{\prime}\), and then combining \(V^{\Delta\epsilon}_{A_{i}}\) with the previous local synopsis in the same way as for the global synopses, which leads to a new local synopsis \(V^{\epsilon^{\prime}+\Delta\epsilon}_{A_{i}}\).
Example 6.: To answer Bob's query \((q_{1},\epsilon_{V,Bob}=0.7)\), we can use additive GM to generate a fresh local synopsis \(V^{0.4}_{Bob}\) from \(V^{0.7}\), and combine \(V^{0.4}_{Bob}\) with the existing \(V^{0.3}_{Bob}\) to get \(V^{0.7}_{Bob}\).
However, unlike combining global synopses, these local synopses share correlated noise. We must solve a different optimization problem to find an unbiased estimator with minimum error. For example, given the last combined global synopsis (and its weights) \(V^{\prime}=w_{t-1}V^{\epsilon_{t-1}}+w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}\), if we know \(V^{\epsilon_{t-1}}_{A}\) is a fresh local synopsis generated from \(V^{\epsilon_{t-1}}\), we consider using the weights \(k_{t-1},k_{t}\) for local synopsis combination:

\[V^{\prime}_{A} =k_{t-1}V^{\epsilon_{t-1}}_{A}+k_{t}V^{\Delta\epsilon}_{A}\] \[=k_{t-1}(V^{\epsilon_{t-1}}+\eta^{t-1}_{A})+k_{t}(w_{t-1}V^{\epsilon_{t-1}}+w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}+\eta^{t}_{A})\] \[=(k_{t-1}+k_{t}w_{t-1})V^{\epsilon_{t-1}}+k_{t}w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}+k_{t-1}\eta^{t-1}_{A}+k_{t}\eta^{t}_{A},\]
where \(\eta^{t-1}_{A}\) and \(\eta^{t}_{A}\) are the noise terms added to the local synopses in additive GM with variances \(\sigma^{2}_{t-1}\) and \(\sigma^{2}_{t}\). We can find the adjusted weights that minimize the expected error of \(V^{\prime}_{A}\), \(v_{A,t}=(k_{t-1}+k_{t}w_{t-1})^{2}v_{t-1}+k_{t}^{2}w_{t}^{2}v_{\Delta}+k_{t-1}^{2}\sigma_{t-1}^{2}+k_{t}^{2}\sigma_{t}^{2}\), subject to \(k_{t-1}+k_{t}w_{t-1}+k_{t}w_{t}=1\), using an optimization solver. Allowing the optimal combination of local synopses tightens the cumulative privacy cost of \(A_{i}\) on \(V\), i.e., the cost can fall below \(\min(\epsilon_{t},P[A_{i},V]+\epsilon_{i})\). However, if the existing local synopsis \(V^{\epsilon_{t-1}}_{A}\) is itself a combined synopsis from a previous time stamp, the correct variance calculation requires a nested analysis of _from where the local synopses are generated and with what weights the global/local synopses are combined_. This requires too many parameters for \(\mathsf{DProvDB}\) to keep track of and poses additional challenges for accuracy translation, since solving an optimization problem with many weights is hard, if not intractable. For practical reasons, \(\mathsf{DProvDB}\) adopts the approach described in Algorithm 4.
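The constrained minimization above can be handed to an off-the-shelf solver; the sketch below does so for the two weights \((k_{t-1},k_{t})\) with toy variances, which are assumptions for illustration.

```python
from scipy.optimize import minimize_scalar

def optimal_local_weights(v_prev, v_delta, s2_prev, s2_new, w_prev, w_new):
    """Pick (k_{t-1}, k_t) minimizing the error of the combined local synopsis,
    subject to unbiasedness k_{t-1} + k_t*(w_prev + w_new) = 1 (correlated noise)."""
    def variance(k_t):
        k_prev = 1.0 - k_t * (w_prev + w_new)
        return ((k_prev + k_t * w_prev) ** 2 * v_prev    # shared old global noise
                + (k_t * w_new) ** 2 * v_delta           # fresh global noise
                + k_prev ** 2 * s2_prev                  # old local (additive GM) noise
                + k_t ** 2 * s2_new)                     # new local (additive GM) noise
    res = minimize_scalar(variance, bounds=(0.0, 1.0 / (w_prev + w_new)), method="bounded")
    k_t = res.x
    return 1.0 - k_t * (w_prev + w_new), k_t, res.fun

k_prev, k_t, err = optimal_local_weights(v_prev=4.0, v_delta=9.0,
                                         s2_prev=2.0, s2_new=1.0,
                                         w_prev=0.6, w_new=0.4)
```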
### Configuring Provenance Table
In existing systems, the administrator is only responsible for setting a single parameter specifying the overall privacy budget. DProvDB, however, requires the administrator to configure more parameters.
#### 5.3.1. Setting Analyst Constraints for Proportional Fairness
We first discuss guidelines to establish per-analyst constraints in DProvDB, by proposing two specifications that achieve proportional fairness.
Definition 10 (Constraints for Vanilla Approach).: _For the vanilla approach, we propose to specify each analyst's constraint by proportionally normalizing the analysts' privilege levels. That is, for the table-level constraint \(\psi_{P}\) and analyst \(A_{i}\) with privilege \(l_{i}\), \(\psi_{A_{i}}=\frac{l_{i}}{\sum_{j\in[n]}l_{j}}\psi_{P}\)._
Note that this proposed specification is not optimal for the additive Gaussian approach, since the maximum utilized budget will then be constrained by \(\max\psi_{A_{i}}<\psi_{P}\) when more than \(1\) analyst is using the system. Instead, we propose the following specification.
Definition 11 (Constraints for Additive Gaussian).: _For the additive Gaussian approach, each analyst \(A_{i}\)'s constraint can be set to \(\frac{l_{i}}{l_{max}}\psi_{P}\), where \(l_{max}\) denotes the maximum privilege in the system._
**Comparing Two Specifications.** We compare the two analyst-constraint specifications (Def. 10 and 11) with experiments in Section 6.2. Besides their empirical performance, the vanilla constraint specification (Def. 10) requires all data analysts to be registered in the system before the provenance table is set up. The additive Gaussian approach (with specification in Def. 11), however, allows for the inclusion of new data analysts at a later time.
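Both specifications are one-line computations; the sketch below evaluates them for two analysts with privileges 1 and 4 (the default in our experiments). Function names are hypothetical.

```python
def vanilla_constraints(privileges, psi_table):
    """Def. 10: per-analyst budgets proportional to privilege, normalized by the sum."""
    total = sum(privileges.values())
    return {a: l / total * psi_table for a, l in privileges.items()}

def additive_gm_constraints(privileges, psi_table):
    """Def. 11: per-analyst budgets scaled by privilege relative to the maximum."""
    l_max = max(privileges.values())
    return {a: l / l_max * psi_table for a, l in privileges.items()}

privileges = {"Alice": 1, "Bob": 4}
vanilla_constraints(privileges, psi_table=6.4)      # {'Alice': 1.28, 'Bob': 5.12}
additive_gm_constraints(privileges, psi_table=6.4)  # {'Alice': 1.6,  'Bob': 6.4}
```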
#### 5.3.2. Setting View Constraints for Dynamic Budget Allocation
Existing privacy budget allocators (Srivastava et al., 2017) adopt a static splitting strategy such that the per-view privacy budget is equal or proportional to the view sensitivity. DProvDB subsumes theirs by, similarly, setting the view constraints to be equal or proportional to the sensitivity of each view, i.e., \(\{\psi_{V_{j}}|V_{j}\in\mathcal{V}\}=\{\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\}\ \forall V_{j}\in\mathcal{V}\), where \(\hat{\Delta}_{V_{j}}\) is the upper bound of the sensitivity of the view \(V_{j}\). We therefore propose the following water-filling view constraint specification for a better view budget allocation.
Definition 12 (Water-filling View Constraint Setting).: _The table constraint \(\psi_{P}\) has been set up as a constant (i.e., the overall privacy budget) in the system. The administrator simply sets all view constraints the same as the table constraint, \(\psi_{V_{j}}\coloneqq\psi_{P},\forall V_{j}\in\mathcal{V}\)._
With the water-filling constraint specification, the provenance constraint checking solely depends on the table constraint and the analyst constraints. The overall privacy budget is then dynamically allocated to the views based on analysts' queries. Compared to existing budget allocation methods on views, our water-filling specification based on the privacy provenance table reflects the actual accuracy demands on different views. It thus avoids wasting privacy budget and provides better utility: i) DProvDB spends less budget on unpopular views, whose consumed budget is less than \(\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\); ii) DProvDB can answer queries whose translated privacy budget \(\epsilon>\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\). In addition, the water-filling specification allows DProvDB to add views over time.
### Privacy and Fairness Guarantees
Theorem 5.7 (System Privacy Guarantee).: _Given the privacy provenance table and its constraint specifications, \(\Psi=\{\psi_{A_{i}}|A_{i}\in\mathcal{A}\}\cup\{\psi_{V_{j}}|V_{j}\in\mathcal{V}\}\cup\{\psi_{P}\}\), both mechanisms for DProvDB ensure \([\ldots,(A_{i},\psi_{A_{i}},\delta),\ldots]\)-multi-analyst-DP; they also ensure \(\min(\psi_{V_{j}},\psi_{P})\)-DP for view \(V_{j}\in\mathcal{V}\) and overall \(\psi_{P}\)-DP if all the data analysts collude._
With the provenance table, DProvDB can achieve proportional fairness when analysts submit a sufficient number of queries, as stated in the following theorem. Both proofs are deferred to the appendices.
Theorem 5.8 (System Fairness Guarantee).: _Given the privacy provenance table \(P\) and the described approaches of setting analyst constraints, both mechanisms achieve proportional fairness, when the data analysts finish consuming their assigned privacy budget._
## 6. Experimental Evaluation
In this section, we compare DProvDB with baseline systems and conduct an ablation study for a better understanding of the different components of DProvDB. Our goal is to show that DProvDB can improve existing multi-analyst query answering systems in terms of the number of queries answered and fairness.
### Experiment Setup
We implement DProvDB in Scala with PostgreSQL as the database system, and deploy it as a middleware between Chorus (Kalalouts et al., 2018) and multiple analysts. Our implementation follows existing practice (Kalouts et al., 2018) and sets the \(\delta\) parameter to the same small value (e.g., 1e-9) for all queries. The \(\delta\) parameters in the privacy constraints (column/row/table) of the privacy provenance table are capped by the inverse of the dataset size.
#### 6.1.1. Baselines
Since we build on top of Chorus (Kalouts et al., 2018), we develop a number of baseline approaches for the multi-analyst framework using Chorus as well for fair comparisons.
* _Chorus_(Kalouts et al., 2018): This baseline is the plain Chorus, which uses GM and makes no distinction between data analysts and uses no views. The system only sets the overall privacy budget.
* DProvDB _minus Cached Views (ChorusP):_ This baseline enables privacy provenance tracking for each data analyst but does not store any synopses. We use Def. 10 to set up analyst constraints and Def. 12 for view constraints.
* DProvDB _minus Additive GM (Vanilla):_ We equip Chorus with the privacy provenance table and the cached views, but use our vanilla approach to update and manage the provenance table and the views. The privacy provenance table is configured the same as in ChorusP.
* _Simulating PrivateSQL (Srivastava et al., 2017):_ We simulate PrivateSQL by generating the static DP synopses at first. The budget allocated to each view is proportional to the view sensitivities (Srivastava et al., 2017). Incoming queries that cannot be answered accurately with these synopses will be rejected.
#### 6.1.2. Datasets and Use Cases
We use the Adult dataset (Kalouts et al., 2018) (a demographic dataset with 15 attributes and 45,224 rows) and the TPC-H dataset (a synthetic dataset of 1GB) (Kalouts et al., 2018) for experiments. We consider the following use cases.
* _Randomized range queries (RRQ)_: We randomly generate 4,000 range queries _per analyst_, each with one attribute randomly selected with bias. Each query has the range specification \([s,s+o]\) where \(s\) and the offset \(o\) are drawn from a normal distribution. We design two types of query sequences from the data analysts: a) **round-robin**, where the analysts take turns to ask queries; b) **random**, where a data analyst is randomly selected each time.
* _Breadth-first search (BFS) tasks_: Each data analyst explores a dataset by traversing a decomposition tree of the cross product over the selected attributes, aiming to find the (sub-)regions with underrepresented records. That is, the data analyst traverses the domain and terminates only if the returned noisy count is within a specified threshold range.
To answer the queries, we generate one histogram view on each attribute. We use two analysts with privileges 1 and 4 by default setting and also study the effect of involving more analysts.
#### 6.1.3. Evaluation Metrics
We use four metrics for evaluation.
* _Number of queries being answered_: We report the number of queries that can be answered by the system when no more queries can be answered as the utility metric to the system.
* _Cumulative privacy budget_: For BFS tasks that have fixed workloads, it is possible that the budget is not used up when the tasks are complete. Therefore, we report the total cumulative budget consumed by all data analysts when the tasks end.
* _Normalized discounted cumulative fairness gain (nDCFG)_: We coin an empirical fairness measure here. First, we introduce the DCFG measure for a mechanism \(\mathcal{M}\) as \(\mathrm{DCFG}_{\mathcal{M}}=\sum_{i=1}^{n}\frac{|Q_{A_{i}}|}{\log_{2}(\frac{1}{l_{i}}+1)}\), where \(l_{i}\) is the privilege level of analyst \(A_{i}\) and \(|Q_{A_{i}}|\) is the total number of queries of \(A_{i}\) that are answered. Then nDCFG is DCFG normalized by the total number of queries answered (see the sketch after this list).
* _Runtime_: We measure the run time in milliseconds.
We repeat each experiment 4 times using different random seeds.
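A small sketch of the DCFG/nDCFG computation as defined above, with the privilege-based discount \(\log_{2}(1/l_{i}+1)\); the analyst counts used in the example are made up.

```python
import math

def dcfg(answered, privileges):
    """Discounted cumulative fairness gain: sum_i |Q_{A_i}| / log2(1/l_i + 1)."""
    return sum(answered[a] / math.log2(1.0 / privileges[a] + 1.0) for a in answered)

def ndcfg(answered, privileges):
    """DCFG normalized by the total number of queries answered."""
    total = sum(answered.values())
    return dcfg(answered, privileges) / total if total else 0.0

ndcfg({"Alice": 20, "Bob": 80}, {"Alice": 1, "Bob": 4})
```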
### Empirical Results
We first run experiments with the default settings of DProvDB and the baseline systems for comparison, and then study the effect of modifying the components of DProvDB.
#### 6.2.1. End-to-end Comparison
This comparison uses analyst constraints set in line with Def. 11 for DProvDB, and with Def. 10 for the vanilla approach. We summarize our findings below.
**Results of RRQ task.** We present Fig. 3 for this experiment. We fix the entire query workload and run the query processing on the five mechanisms. DProvDB outperforms all competing systems, in both the round-robin and random use cases. Chorus and ChorusP can answer very few queries because their privacy budgets are depleted quickly. Interestingly, our simulated PrivateSQL can answer a number of queries comparable to the vanilla approach when \(\epsilon=6.4\), but answers a limited number of queries under higher privacy regimes. This is intuitive: if one statically pre-allocates the privacy budget fairly to each view when the overall budget is limited, then every synopsis can hardly answer queries accurately. One can also notice that the fairness score of ChorusP is significantly higher than that of Chorus, meaning the enforcement of the privacy provenance table helps achieve fairness.
**Results of BFS task.** An end-to-end comparison of the systems on the BFS tasks is depicted in Fig. 4. Both DProvDB and vanilla can complete the query workload with near-constant budget consumption, while Chorus(P) spends privacy budget linearly with the workload size. DProvDB saves even more budget compared to the vanilla approach, based on the results on the TPC-H dataset.
**Runtime performance.** Table 1 summarizes the runtime of DProvDB and all baselines. DProvDB and sPrivateSQL use a rather large amount of time to set up views; however, answering large queries based on views saves more time on average compared to Chorus-based mechanisms. Recall that aGM requires solving optimization problems, but empirical results show the incurred time overhead is less than 2 ms per query.

Figure 3. End-to-end Comparison (_RRQ task_, over _Adult_ dataset), from left to right: a) utility v.s. overall budget, round-robin; b) utility v.s. overall budget, randomized; c) fairness against baselines, round-robin; d) fairness against baselines, randomized.

Figure 4. End-to-end Comparison (BFS task): Cumulative privacy budget consumption v.s. workload indices. Left: over _Adult_ dataset; Right: over _TPC-H_ dataset.
We draw the same conclusion for our RRQ experiments and runtime performance test on the other dataset. We defer the results to the appendices.
#### 6.2.2. Component Comparison
We consider three components in DProvDB to evaluate separately.
**Cached Synopses.** Given the same overall budget, mechanisms leveraging cached synopses outperform those without caches in terms of utility when the size of the query workload increases. This phenomenon can be observed for all budget settings, as shown in Fig. 5. Due to the use of cached synopses, DProvDB can potentially answer more queries if the incoming queries hit the caches. Thus, the number of queries being answered would increase with the increasing size of the workload, given the fixed overall budget.
**Additive GM v.s. Vanilla.** Given the same overall budget, additive GM outperforms the vanilla mechanism on utility. The utility gap increases with the number of data analysts present in the system. The empirical results are presented in Fig. 6. When there are 2 analysts in the system, additive GM only gains a marginal advantage over the vanilla approach; when the number of data analysts increases to 6, additive GM can answer around 2-4x more queries than vanilla. We also compare different settings of analyst constraints across different numbers of analysts and different overall system budgets (with 2 analysts). It turns out that additive GM with the setting in Def. 11 (DProvDB-l_max) is the best one, outperforming the setting from Def. 10 (DProvDB-l_sum, Vanilla-l_sum) for all epsilons, with \(\sim\)4x more queries being answered when #analysts=6.
**Constraint Configuration.** If we allow an expansion parameter \(\tau\geq 1\) when setting the analyst constraints (i.e., overselling privacy budgets relative to the constraint defined in Def. 11), we can obtain higher system utility by sacrificing fairness, as shown in Fig. 7. With 2 data analysts, the total number of queries answered by additive GM increases slightly as we set a larger analyst constraint expansion rate. Under a low privacy regime (viz., \(\epsilon=3.2\)), this utility increases by more than 15% when comparing \(\tau=1.9\) with \(\tau=1.3\); on the other hand, the fairness score decreases by around 10%.
This result can be interpreted from a basic economic view: we argue that, as a system-wide public resource, the privacy budget can become an _idle resource_ when some of the data analysts stop asking queries. Thus, the _hard privacy constraints_ enforced in the system can leave some portion of the privacy budget unusable. Given the constraint expansion, we allow DProvDB to trade off fairness against utility, while the overall privacy is still guaranteed by the table constraint.
**Varying \(\delta\) parameter.** In this experiment, we fix the overall privacy constraint \(\epsilon=6.4\) and vary the per-query \(\delta\) parameter. We use the BFS workload as in our end-to-end experiment and the results are shown in Fig. 8. While varying a small \(\delta\) parameter does not much affect the number of queries being answered, observe that with increasing \(\delta\), DProvDB can answer slightly more queries. This is because, to achieve the same accuracy requirement, the translation module outputs a smaller \(\epsilon\) when \(\delta\) is bigger, which consumes the privacy budget more slowly. Note that the overall privacy constraint on \(\delta\) should be set no larger than the inverse of the dataset size. Setting an unreasonably large per-query \(\delta\) can limit the total number of queries being answered.

Table 1. Runtime Performance Comparison over Different Mechanisms on the TPC-H Dataset (running on a Linux server with 64 GB DDR4 RAM and an AMD Ryzen 5 3600 CPU; times in milliseconds)

| Systems | Setup Time | Running Time | No. of Queries | Per Query Perf |
| --- | --- | --- | --- | --- |
| DProvDB | 20386.65 ms | 297.30 ms | 86.0 | 3.46 ms |
| Vanilla | 20388.65 ms | 118.04 ms | 86.0 | 1.37 ms |
| sPrivateSQL | 20388.65 ms | 166.51 ms | 86.0 | 1.94 ms |
| Chorus | N/A | 7380.47 ms | 62.0 | 119.04 ms |
| ChorusP | N/A | 7478.69 ms | 62.0 | 120.62 ms |

Figure 5. Component Comparison (_RRQ_ task, over _Adult_ dataset): Enabling Cached Synopses. Utility v.s. the size of query workload (round-robin). From left to right: \(\epsilon=\{0.4,0.8,1.6,3.2,6.4\}\).

Figure 6. Component Comparison (_RRQ_ task, over _Adult_ dataset): Additive GM v.s. Vanilla. Left: utility v.s. #analysts, round-robin; Right: utility v.s. overall budgets, round-robin.
**Other experiments.** We also run experiments to evaluate \(\mathsf{DProvDB}\) on a data-dependent utility metric, i.e., relative error (Kumar et al., 2017), and empirically validate the correctness of the accuracy-privacy translation module. In particular, we performed experiments to show that the noise variance \(v_{q}\) of the query answer (derived from the variance of the noisy synopsis), according to the translated privacy budget, is always no more than the accuracy requirement \(v_{i}\) submitted by the data analyst. As shown in Fig. 9 (a), the difference between the two values, \(v_{q}-v_{i}\), is always less than 0, and very small for a given BFS query workload (where the accuracy requirement is set to be above \(v_{i}>10000\)).
Furthermore, we consider the following data-dependent utility, namely relative error (Kumar et al., 2017), which is defined as
\[\text{Relative Error}=\frac{|\text{True Answer}-\text{Noisy Answer}|}{\text{ max}\{\text{True Answer},c\}},\]
where \(c\) is a specified constant to avoid undefined values when the true answer is 0.
Note that \(\mathsf{DProvDB}\) does not specifically support analysts submitting queries with _data-dependent_ accuracy requirements. The translated query answer can have a large relative error if the true answer is small or close to zero. We thereby merely use this utility metric to empirically evaluate the answers of a BFS query workload, as a complementary result [Fig. 9 (b)] to the paper. \(\mathsf{DProvDB}\) and the Vanilla approach have a larger relative error than Chorus and ChorusP because they answer more queries than the Chorus-based methods, many of which have a comparatively small true answer and hence incur a large relative error.
## 7. Discussion
In this section, we discuss a weaker threat model of the multi-analyst corruption assumption, with which additional utility gain is possible. We also discuss other strawman solutions toward a multi-analyst DP system.
### Relaxation of Multi-analyst Threat Model
So far we have studied that all data analysts can collude. A more practical setting is, perhaps, considering a subset of data analysts that are compromised by the adversary. This setting is common to see in the multi-party computation research community (Kumar et al., 2017) (_a.k.a_ active security (Kumar et al., 2017; Kumar et al., 2017)), where \(t\) out of \(n\) participating parties are assumed to be corrupted.
Definition 13 ((\(t,n\))-compromised Multi-analyst Setting).: _We say a multi-analyst setting is \((t,n)\)-compromised, if there exist \(n\) data analysts where at most \(t\) of them are **malicious** (meaning they can collude in submitting queries and sharing the answers)._
The \((t,n)\)-compromised multi-analyst setting makes weaker assumptions about the attackers. Under this setting, the privacy loss is upper bounded by \((\sum_{t}\epsilon_{i},\sum_{t}\delta_{i})\), i.e., the summation over the \(t\) largest privacy budgets. However, we cannot do better than the current \(\mathsf{DProvDB}\) algorithms with this weaker setting under a worst-case privacy assumption.
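The collusion bound under this weaker threat model is simply the sum of the \(t\) largest per-analyst budgets; a tiny sketch (with toy budgets):

```python
def tn_compromised_bound(budgets, t):
    """Worst-case privacy loss when at most t analysts collude: sum of t largest eps_i."""
    return sum(sorted(budgets, reverse=True)[:t])

tn_compromised_bound([0.7, 0.5, 0.3, 1.2], t=2)   # -> 1.9
```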
Theorem 7.1 (Hardness on Worst-case Privacy).: _Given a mechanism \(\mathcal{M}\) which is \([\ldots,(A_{i},\epsilon_{i},\delta_{i}),\ldots]\)-multi-analyst-DP, the worst-case privacy loss under \((t,n)\)-compromised multi-analyst DP is lower bounded by \((\max\epsilon_{i},\max\delta_{i})\), which is the same as under the all-compromisation setting._

Figure 7. Component Comparison (_RRQ_ task, over _Adult_ dataset): Constraint Configuration. First row: utility v.s. constraint settings. Second row: fairness v.s. constraint settings. Left: round-robin. Right: randomized.

Figure 8. #queries being answered vs. varying delta parameter (BFS task, _Adult_). Left: round-robin, Right: randomized.

Figure 9. (a) The cumulative average of \(v_{q}-v_{i}\) for a BFS query workload (on _Adult_ dataset), where \(v_{i}\) represents the submitted accuracy requirement and \(v_{q}\) denotes the noise variance of the query answer. (b) Relative error of processing the BFS query workload (on _Adult_ dataset) among different mechanisms.
Proof Sketch.: Under the worst-case assumption, the analyst with the largest privacy budget \((\max\epsilon_{i},\max\delta_{i})\) is sampled within the \(t\) compromised data analysts. Then it is natural to see that the lower bound of the privacy loss is \((\max\epsilon_{i},\max\delta_{i})\).
At first glance, the relaxation of the \((t,n)\)-compromisation does not provide us with better bounds. A second look, however, suggests the additional trust we put in the data analyst averages the privacy loss in compromisation cases. Therefore, with this relaxed privacy assumption, it is possible to design mechanisms to achieve better utility using policy-driven privacy.
**Policies for multi-analyst**. Blowfish privacy (Kendal, 2017) specifies different levels of protection over sensitive information in the curated database. In the spirit of Blowfish, we can use policies to specify different levels of trust in data analysts using DProvDB.
Definition 14 ((\(t,n\))-Analysts Corruption Graph).: _Given \(n\) data analysts and assuming \((t,n)\)-compromised setting, we say an undirected graph \(G=(V,E)\) is a \((t,n)\)-analysts corruption graph if,_
* _Each node in the vertex set_ \(v_{i}\in V\) _represents an analyst_ \(A_{i}\)_;_
* _An edge is presented in the edge set_ \(\mathsf{e}(v_{i},v_{j})\in E\) _if data analysts_ \(A_{i}\) _and_ \(A_{j}\) _can collude;_
* _Every connected component in_ \(G\) _has less than_ \(t\) _nodes._
The corruption graph models the prior belief of the policy designer (or DB administrator) to the system users. Groups of analysts are believed to be not compromised if they are in the disjoint sets of the corruption graph. Based on this corruption graph, we can specify the analysts' constraints as assigning the overall privacy budget \(\psi_{P}\) to each connected component.
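A sketch of how such a corruption graph could drive budget assignment: every connected component receives the table budget \(\psi_{P}\), split within the component by privilege. The within-component split and the use of networkx are assumptions for illustration, not part of the definition above.

```python
import networkx as nx

def component_budgets(edges, analysts, privileges, psi_table):
    """Assign psi_table to every connected component of the corruption graph,
    splitting it inside each component proportionally to privilege."""
    G = nx.Graph()
    G.add_nodes_from(analysts)
    G.add_edges_from(edges)                  # an edge means the two analysts may collude
    budgets = {}
    for comp in nx.connected_components(G):
        total_priv = sum(privileges[a] for a in comp)
        for a in comp:
            budgets[a] = privileges[a] / total_priv * psi_table
    return budgets

component_budgets(edges=[("A1", "A2")], analysts=["A1", "A2", "A3"],
                  privileges={"A1": 1, "A2": 4, "A3": 2}, psi_table=6.4)
```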
Theorem 7.2.: _There exist mechanisms with \((t,n)\)-multi-analyst DP that perform at least as well as with multi-analyst DP._
Proof.: Given a \((t,n)\)-analysts corruption graph \(G\), we show a naive budget/constraint that makes \((t,n)\)-multi-analyst DP degrade to multi-analyst DP. Ignoring the graph structure, we split the overall privacy budget \(\psi_{P}\) and assign a portion to each node proportional to the privilege weight on each node.
Let \(k\) be the number of disjoint connected components in \(G\). Then we have at most \(k\cdot\psi_{P}\) privacy budget to assign to this graph. Clearly, the mechanisms with \((t,n)\)-multi-analyst DP achieve \((k\cdot\psi_{P})\)-DP. When \(n>t\), we have more than one connected component, meaning the overall privacy budget we could spend is more than that in the all-compromisation scenario.
### Comparison with Strawman Solutions
One may argue for various alternative solutions to the multi-analyst query processing problem, as opposed to our proposed system. We examine these possible alternatives and compare them with DProvDB.
**Strawman #1: Sharing Synthetic Data.** Recall that DProvDB generates global and local synopses, to which different levels of noise are added with additive GM, and answers analysts' queries based on the local synopses. We show that our proposed algorithm is optimal in using privacy budgets when all data analysts collude. One possible alternative is to simply release the global synopses (or synthetic data generated using all privacy budgets) to every data analyst, which can also achieve optimality in the all-compromisation setting. We note that this solution is \(\min(\forall p_{i},(\max\epsilon_{i},\max\delta_{i}))\)-DP (the same as the overall DP guarantee provided by DProvDB); however, it does not achieve the notion of _multi-analyst DP_ (all data analysts get the same output).
**Strawman #2: Pre-calculating Seeded Caches.** To avoid the cost of running the algorithm in an online manner, one may consider splitting the privacy budget equally into small portions and using additive GM to pre-compute all the global and local synopses. This solution stores all synopses, and future queries can be answered directly by finding the appropriate synopsis. If the space cost is too large, one may alternatively store only the random seeds from which each synopsis can be quickly regenerated.
This scheme arguably achieves the desired properties of DProvDB, if one ignores the storage overhead incurred by the pre-computed synopses or seeds. However, for an online query processing system, it is usually unpredictable what queries and accuracy requirements the data analysts will submit. This solution focuses on doing most of the calculations offline, which may, first, lose precision in translating accuracy to privacy, leading to a trade-off between precision and processing time for privacy translation. We show in experiments that the translation only happens when a query does not hit the existing cache (which is not too often), and the per-query translation processing time is negligible. Second, to pre-compute all the synopses, one needs to pre-allocate privacy budgets to the synopses. We have also shown empirically that this approach achieves less utility than DProvDB.
## 8. Related Work
Enabling DP in query processing is an important line of research in database systems (Kendal, 2017). Starting from the first end-to-end interactive DP query system (Kendal, 2017), existing work has been focused on generalizing to a larger class of database operators (Bauer et al., 2016; Bauer et al., 2016; Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017), building a programmable toolbox for experts (Kendal, 2017; Bauer et al., 2017), or providing a user-friendly accuracy-aware interface for data exploration (Kendal, 2017; Bauer et al., 2017). Another line of research investigated means of saving privacy budgets in online query answering, including those based on cached views (Bauer et al., 2016; Bauer et al., 2017) and the others based on injecting correlated noise to query results (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017). Privacy provenance (on protected data) is studied in the personalized DP framework (Kendal, 2017) as a means to enforce varying protection on database records, which is dual to our system. The multi-analyst scenario is also studied in DP literature (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017; Bauer et al., 2017). Our multi-analyst DP work focuses on the online setting, which differs from the offline setting, where the entire query workload is known in advance (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017). The closest related work (Bauer et al., 2017) considers an online setup for multi-analyst DP, but the problem setting differs. The multi-analyst DP in (Bauer et al., 2017) assumes the data analysts have the incentive to share their budgets to improve the utility of their query answers, but our multi-analyst DP considers data analysts who are obliged under laws/regulations should not share their budget/query responses to each other (e.g., internal data analysts should not share their results with an external third party). Our mechanism ensures that (i) even if these data
analysts break the law and collude, the overall privacy loss is still minimized; (ii) if they do not collude, each of the analysts \(A_{i}\) has a privacy loss bounded by \(\epsilon_{i}\) (c.f. our multi-analyst DP, Definition 5). However, (Friedman et al., 2017) releases more information than the specified \(\epsilon_{i}\) for analyst \(A_{i}\) (as (Friedman et al., 2017) guarantees DP). Some other DP systems, e.g. Sage (Sage, 2018) and its follow-ups (Sage et al., 2018; Sage et al., 2019; Sage et al., 2020), involve multiple applications or end-users, and they care about the budget consumption (Sage et al., 2018; Sage et al., 2019) or fairness constraints (Sage et al., 2019) in such scenarios. Their design objective is orthogonal to ours -- they would like to avoid running out of privacy budget and maximize utility through batched execution over a growing database.
The idea of adding correlated Gaussian noise has been exploited in existing work (Bao et al., 2021; Sage et al., 2020). However, they all solve a simpler version of our problem. Li et al. (Li et al., 2022) design algorithms that release the perturbed results to different users _once_, and Bao et al. (Bao et al., 2021) study the sequential data collection of _one user_. When putting the two dimensions together, _understudied_ questions arise that are not considered by existing work, such as how to properly answer a query for an analyst when the answer to the same query with a lower accuracy requirement has already been released to another analyst. Therefore, we propose the provenance-based additive Gaussian approach (Section 5.2) to solve these challenges, which is not merely injecting correlated noise into a sequence of queries.
## 9. Conclusion and Future Work
We study how the query meta-data or provenance information can assist query processing in multi-analyst DP settings. We developed a privacy provenance framework for answering online queries, which tracks the privacy loss to each data analyst in the system using a provenance table. Based on the privacy provenance table, we proposed DP mechanisms that leverage the provenance information for noise calibration and built DProvDB to maximize the number of queries that can be answered. DProvDB can serve as a middle-ware to provide a multi-analyst interface for existing DP query answering systems. We implemented DProvDB and our evaluation shows that DProvDB can significantly improve the number of queries that could be answered and fairness, compared to baseline systems.
While as an initial work, this paper considers a relatively restricted but popular setting in DP literature, we believe our work may open a new research direction of using provenance information for multi-analyst DP query processing. We thereby discuss our ongoing work on DProvDB and envision some potential thrusts in this research area.
* **Tight privacy analysis.** In the future, we would like to tighten the privacy analysis when composing the privacy loss of the local synopses generated from correlated global synopses.
* **Optimal processing for highly-sensitive queries.** While currently DProvDB can be extended to answer these queries naively (by a truncated and fixed sensitivity bound), instance optimal processing of these queries (Han et al., 2019) requires data-dependent algorithms, which is not supported by our current approaches. Our ongoing work includes enabling DProvDB with the ability to optimally answer these queries.
* **System utility optimization.** We can also optimize the system utility further by considering a more careful design of the structure of the cached synopses (Bao et al., 2021; Sage et al., 2020), e.g. cumulative histogram views, making use of the sparsity in the data itself (Sage et al., 2020), or using data-dependent views (Sage et al., 2020).
* **Other DP settings.** DProvDB considers minimizing the collusion among analysts over time in an online system. The current design enforces approximate DP due to the nature of the Gaussian mechanism. Our ongoing work extends DProvDB to use Renyi DP or zCDP for privacy composition. Future work can also consider other noise distributions, e.g., Skellam (Skelam, 2018), to support different DP variants; other utility metrics, e.g., confidence intervals (Sage et al., 2019) or relative errors, for accuracy-privacy translation; or other application domains, e.g., location privacy (Sage et al., 2019; Sage et al., 2020).
* for example, the privacy budget consumed by a lower-privileged analyst during delegation is accounted to the analyst who grants this delegation.
###### Acknowledgements.
This work was supported by NSERC through a Discovery Grant. We would like to thank the anonymous reviewers for their detailed comments which helped to improve the paper during the revision process. We also thank Runchao Jiang, Semih Salihooglu, Florian Kerschbaum, Jiayi Chen, Shuran Zheng for helpful conversations or feedback at the early stage of this project.
As a recent trend, differential privacy (DP) has been adopted in practical database systems such as PINQ, FLEX, and PrivateSQL. These systems allow data analysts to query sensitive data while providing rigorous, provable privacy guarantees. However, the design of these systems does not distinguish data analysts of different privilege or trust levels. This design can lead to an unfair apportioning of the privacy budget when the analysts are treated as a single entity, or can waste privacy budget when they are treated as mutually non-colluding parties. This paper proposes DProvDB, a fine-grained privacy provenance framework for the multi-analyst scenario, which tracks the privacy loss to each individual data analyst. Based on this framework,
2309.00053 | The first comprehensive study of a giant nebula around a radio-quiet
quasar in the $z < 1$ Universe | We present the first comprehensive study of a giant, $\approx \! \! 70$
kpc-scale nebula around a radio-quiet quasar at $z<1$. The analysis is based on
deep integral field spectroscopy with MUSE of the field of HE$\,$0238$-$1904, a
luminous quasar at $z=0.6282$. The nebula emits strongly in $\mathrm{[O \,
II]}$, $\rm H \beta$, and $\mathrm{[O \, III]}$, and the quasar resides in an
unusually overdense environment for a radio-quiet system. The environment
likely consists of two groups which may be merging, and in total have an
estimated dynamical mass of $M_{\rm dyn}\approx 4\times 10^{13}$ to $10^{14}\
{\rm M_\odot}$. The nebula exhibits largely quiescent kinematics and irregular
morphology. The nebula may arise primarily through interaction-related
stripping of circumgalactic and interstellar medium (CGM/ISM) of group members,
with some potential contributions from quasar outflows. The simultaneous
presence of the giant nebula and a radio-quiet quasar in a rich environment
suggests a correlation between such circum-quasar nebulae and environmental
effects. This possibility can be tested with larger samples. The upper limits
on the electron number density implied by the $\mathrm{[O \, II]}$ doublet
ratio range from $\log(n_{\rm e, \, [O \, II]} / \mathrm{cm^{-3}}) < 1.2$ to
$2.8$. However, assuming a constant quasar luminosity and negligible projection
effects, the densities implied from the measured line ratios between different
ions (e.g., $\mathrm{[O\,II]}$, $\mathrm{[O\,III]}$, and $\mathrm{[Ne\,V]}$)
and photoionization simulations are often $10{-}400$ times larger. This large
discrepancy can be explained by quasar variability on a timescale of $\approx
10^4{-}10^5$ years. | Zhuoqi Will Liu, Sean D. Johnson, Jennifer I-Hsiu Li, Gwen C. Rudie, Joop Schaye, Hsiao-Wen Chen, Jarle Brinchmann, Sebastiano Cantalupo, Mandy C. Chen, Wolfram Kollatschny, Michael V. Maseda, Nishant Mishra, Sowgat Muzahid | 2023-08-31T18:00:23 | http://arxiv.org/abs/2309.00053v3 | # The first comprehensive study of a giant nebula around a radio-quiet quasar in the \(z<1\) Universe
###### Abstract
We present the first comprehensive study of a giant, \(\approx\)70 kpc-scale nebula around a radio-quiet quasar at \(z<1\). The analysis is based on deep integral field spectroscopy with MUSE of the field of HE 0238\(-\)1904, a luminous quasar at \(z=0.6282\). The nebula emits strongly in [O II], H\(\beta\), and [O III], and the quasar resides in an unusually overdense environment for a radio-quiet system. The environment likely consists of two groups which may be merging, and in total have an estimated dynamical mass of \(M_{\rm dyn}\approx 4\times 10^{13}\) to \(10^{14}\) M\({}_{\odot}\). The nebula exhibits largely quiescent kinematics and irregular morphology. The nebula may arise primarily through interaction-related stripping of circumgalactic and interstellar medium (CGM/ISM) of group members, with some potential contributions from quasar outflows. The simultaneous presence of the giant nebula and a radio-quiet quasar in a rich environment suggests a correlation between such circum-quasar nebulae and environmental effects. This possibility can be tested with larger samples. The upper limits on the electron number density implied by the [O II] doublet ratio range from \(\log(n_{e,\rm[O\,II]}/\rm cm^{-3})<1.2\) to 2.8. However, assuming a constant quasar luminosity and negligible projection effects, the densities implied from the measured line ratios between different ions (e.g., [O II], [O III], and [Ne V]) and photoionization simulations are often 10\(-\)400 times larger. This large discrepancy can be explained by quasar variability on a timescale of \(\approx 10^{4}-10^{5}\) years.
keywords: quasars: supermassive black holes - galaxies: groups - intergalactic medium
## 1 Introduction
Galaxy evolution is a complex process that involves gas inflows and outflows thought to control star formation and black hole growth (for a review, see Naab & Ostriker, 2017). Observations of interstellar medium (ISM) gas masses and star formation rates suggest that massive star-forming galaxies have an ISM depletion timescale much smaller than the age of the Universe at \(z<3\)(Kennicutt & Evans, 2012; Tacconi et al., 2013). This can be explained if galaxies accrete gas from external sources to maintain their star-forming activity and black hole growth (though see Leitner & Kravtsov, 2011). At the same time, the ISM of galaxies can lose gas through various processes including stellar (for a review, see Zhang, 2018) and AGN feedback (for a review, see Fabian, 2012), ram pressure stripping (e.g., Hester, 2006), and tidal interactions with neighboring galaxies (e.g., Marasco et al., 2016). Therefore, observations of the physical conditions, kinematics, and distribution of gas around galaxies can provide insights into the mechanisms governing galaxy formation and evolution. For these reasons, observations of the gaseous cosmic ecosystems of galaxies were highlighted as a key long-term priority by the 2020 Decadal Survey for Astronomy and Astrophysics (National Academies of Sciences, 2021).
The properties of gas flows around galaxies, including their morphology and kinematics, can be directly traced by observations of giant gas nebulae with state-of-the-art wide-field integral field spectrographs (IFSs) such as the Multi-Unit Spectroscopic Explorer (MUSE; Bacon et al., 2010) and the Keck Cosmic Web Imager (KCWI; Martin et al., 2010). At \(z>2\), systematic IFS surveys around radio-quiet quasars discovered ubiquitous giant H I Ly\(\alpha\) nebulae (e.g., Cantalupo et al., 2014; Borisova et al., 2016; Cai et al., 2019; O'Sullivan et al.,
2020; Fossati et al., 2021; Mackenzie et al., 2021). More recently, a study of the ionization states of one of these nebulae found that the gas has a surprisingly large density for halo-scale emission or a very broad density distribution (Cantalupo et al., 2019). However, due to redshifting of optical emission lines into the infrared, surface brightness dimming, and the faintness of galaxies at high redshift, more fully characterizing these \(z>2\) nebulae is time-consuming even with large space- or ground-based telescopes (though see Langen et al., 2023).
At low redshift, on the other hand, non-resonant emission lines such as [O II], H\(\beta\), and [O III] are available at optical wavelengths, and collecting galaxy spectra is less expensive. The power of IFSs enabled the discoveries of giant nebulae around starburst galaxies, galaxy groups, and quasars (e.g., Epinat et al., 2018; Boselli et al., 2019; Chen et al., 2019; Rupke et al., 2019; Zabl et al., 2021; Burchett et al., 2021; Leclercq et al., 2022; Dutta et al., 2023), arising from outflows, interactions, and filamentary accretion. These low redshift nebulae provide an opportunity to study the physical conditions and the processes that may produce giant nebulae at higher redshift. Most published studies of giant nebulae around \(z<1\) quasars have focused on radio-loud systems (Johnson et al., 2018; Helton et al., 2021; Johnson et al., 2022), which represent a small fraction of the general quasar population (e.g., Kellermann et al., 1989). Furthermore, clustering measurements indicate that radio-loud quasars typically reside in massive galaxy groups with halo masses of \(M\sim 10^{13}\) M\({}_{\odot}\) while the halo masses of more common radio-quiet systems are approximately five times lower on average (e.g., Shen et al., 2009). This mass miss-match and the possibility of radio jet feedback make the comparison between low-redshift giant nebulae around radio-loud quasars and high-redshift radio-quiet ones difficult.
Recently, Chen et al. (2023) demonstrated the existence of giant nebulae around two radio-quiet quasars as part of a study focused on turbulence using the observed velocity structure function. In this paper, we present the first comprehensive characterization of a giant nebula and associated galaxy environment around a radio-quiet quasar at \(z<1\), HE 0238\(-\)1904. Recently, this nebula was independently discovered and reported by Zhao & Wang (2023). However, our interpretation of the system differs substantially from the one presented by Zhao & Wang (2023) due to adoption of a significantly different quasar systemic redshift. In particular, Zhao & Wang (2023) adopted a Mg II emission-based redshift of \(z=0.631\) from the Hamburg/ESO Survey of bright Quasars (Wisotzki et al., 2000). On the other hand, we adopt a redshift estimate of \(z=0.6282\) based on the [O II] emission-line centroid measured in the spectrum of the quasar extracted from the same MUSE dataset used to measure the kinematics of the giant nebula. The paper is organized as follows: In Section 2, we discuss the observations, data reduction, and processing. In Section 3, we describe our measurements and investigate the group environment and giant nebula properties. In Section 4, we investigate the origin of the nebula and the physical conditions of the gas. In Section 5, we summarize our findings and discuss their implications.
Throughout the paper, we adopt a flat \(\Lambda\) cosmology with \(\Omega_{\rm m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\). All magnitudes are given in the AB system unless otherwise stated.
## 2 Observations and Data
The \(z\approx 0.63\) quasar HE 0238\(-\)1904 has high-quality archival UV _HST_ absorption spectra used to study the CGM of the Milky Way (Zheng et al., 2019; Bish et al., 2021) and distant galaxies (Muzahid et al., 2018; Lehner et al., 2018) in addition to a highly ionized, fast outflow from the quasar itself (Muzahid et al., 2012; Arav et al., 2013). To identify faint foreground galaxies in the quasar field, we observed it with MUSE as part of the Quasar-field Blind Emitter Survey (MUSE-QuBES; Muzahid et al., 2020; Dutta et al., 2023) on the Very Large Telescope (VLT; PI: J. Schaye, PID: 094.A-0131(B) & 096.A-0222(A)). MUSE is an integral-field spectrograph on the UT4 VLT with a field of view (FoV) of \(1^{\prime}\times 1^{\prime}\) and a spaxel size of \(0.2^{\prime\prime}\) in wide-field mode (WFM). MUSE covers the spectral range between 4750 A to 9350 A and a resolution of \(R\sim 3000\). The MUSE observations are centered near the quasar sightline, and we obtained eleven exposures collected between November 18th, 2014 and February 2nd, 2016 with a total exposure time of 8.75 hr with median seeing full-width-at-half-maximum (FWHM) conditions of \(0.7^{\prime\prime}\). At the redshift of HE 0238\(-\)1904, the MUSE FoV corresponds to a projected size of \(\approx 400\) proper kpc (pkpc) on a side, and the spectral coverage includes emission lines such as [O II], H\(\beta\), and [O III]. These emission lines enable sensitive studies of any ionized nebulae and galaxies in the quasar's environment.
To ensure robustness of results, we analyzed the MUSE data reduced through three independent pipelines including CubEx (Cantalupo et al., 2019), the MUSE GTO team pipeline (Weilbacher et al., 2014), and the ESO reduction pipeline (Weilbacher et al., 2012) and found consistent results with all three. All three pipelines include bias subtraction, flat fielding, wavelength calibration, geometric calibration, sky subtraction, flux calibration, and stacking of exposures. For the ESO reductions, we obtained the final, stacked datacube from the ESO Science Archive and performed additional post-processed sky subtraction with the Zurich Atmosphere Purge package (ZAP; Soto et al., 2016). For simplicity, we converted the air wavelengths delivered by the three pipelines to vacuum.
To enable more sensitive and higher angular resolution photometric measurements of galaxies in the quasar field, we also obtained an image from the Advanced Camera for Surveys (ACS) on the _Hubble Space Telescope (HST)_ with the F814W filter (PI: L. Straka, PID: 14660) with a total exposure time of 2182 seconds split between four dithered exposures. We obtained the reduced, stacked image from the Barbara A. Mikulski Archive for Space Telescopes (MAST). In addition, to measure the UV luminosity of the quasar, we obtained the archival UV spectrum from the Cosmic Origins Spectrograph (COS; Green et al., 2012) from MAST. The spectrum consists of a total exposure time of 14400 seconds and 7496 seconds in the G130M and G160M gratings, respectively (PI: J. Green and S. Penton, PID: 11541 and 12505). We reduced and coadded the COS spectrum following procedures outlined in Johnson et al. (2015) and Chen et al. (2020).
### Quasar Light Subtraction
HE 0238\(-\)1904 has a Gaia (Gaia Collaboration et al., 2018) \(G\)-band magnitude of \(m_{G}=15.2\), and this brightness combined with the broad wings of the MUSE point spread function (PSF) causes contamination of nearby galaxy spectra with quasar light. This contamination includes both continuum and line emission due to the unresolved narrow-line region in the nucleus. To study faint extended emission, we removed the contamination by performing quasar light subtraction as described in Helton et al. (2021). In summary, our method of quasar light subtraction does not rely on PSF measurements. Instead, it uses spectral information and the fact that quasars and galaxies have different spectral energy distributions (see also Rupke et al., 2017; Chen et al., 2023).
In ground-based observations, the Earth's atmosphere scatters bluer photons more than redder ones so that the PSF is wider at bluer wavelengths. The differential scattering makes the spectral slope observed in a spaxel depend on the angular separation from the quasar, with steeper (shallower) slopes further from (closer to) the quasar centroid. To account for this, we used a two-component non-negative matrix factorization (NMF; Blanton & Roweis, 2007; Ren et al., 2018) of the quasar light, with one component having a shallow slope and a second having a steep slope. Adding a third or fourth NMF component did not noticeably improve the results. In general, the spectrum for each spaxel near the quasar has some light from the quasar and potentially nearby galaxies as well. To subtract quasar light while avoiding subtraction of galaxy light, we fit each spaxel with a linear combination of the two quasar non-negative components and the first two Sloan Digital Sky Survey-Baryon Oscillation Spectroscopic Survey (SDSS-BOSS) galaxy eigenspectra (Bolton et al., 2012) and then subtracted the quasar component of the model. Unlike with some other systems (e.g., Johnson et al., 2018), the host of HE 0238\(-\)1904 does not exhibit bright, extended starlight, so the contribution inferred by the galaxy model was not significant.
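To make the decomposition concrete, the following is a minimal per-spaxel sketch in Python. The datacube, the two quasar NMF components, and the redshifted galaxy eigenspectra (all assumed to be resampled onto the MUSE wavelength grid) are placeholders, and constraining the galaxy coefficients to be non-negative is a simplification of the procedure described above.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed inputs (illustrative names):
#   cube       : (n_wave, ny, nx) MUSE datacube
#   quasar_nmf : (n_wave, 2) two NMF components of the quasar light
#   galaxy_eig : (n_wave, 2) first two SDSS-BOSS galaxy eigenspectra,
#                redshifted and resampled onto the MUSE wavelength grid
def subtract_quasar_light(cube, quasar_nmf, galaxy_eig):
    n_wave, ny, nx = cube.shape
    basis = np.hstack([quasar_nmf, galaxy_eig])       # (n_wave, 4)
    residual = np.empty_like(cube)
    for j in range(ny):
        for i in range(nx):
            spec = cube[:, j, i]
            good = np.isfinite(spec)
            # Non-negative least squares keeps the quasar decomposition physical;
            # the galaxy terms could instead be fit with ordinary least squares.
            coeffs, _ = nnls(basis[good], spec[good])
            quasar_model = quasar_nmf @ coeffs[:2]
            residual[:, j, i] = spec - quasar_model    # galaxy + nebular light remain
    return residual
```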
## 3 Measurements and Environment
### Quasar Properties
HE 0238\(-\)1904 is a luminous, radio-quiet quasar (Veron-Cetty & Veron, 2006; Arav et al., 2013). To ensure self-consistent measurements of the quasar properties, we estimated its redshift, luminosity, and black hole mass using the MUSE spectrum extracted via **MPDAF** (Bacon et al., 2016) with an \(r=3\arcsec\) aperture. To measure the systemic redshift of the quasar, we fit the [O II]\(\lambda\lambda 3727,3729\) doublet with a Gaussian profile following Hewett & Wild (2010) and found \(z=0.6282\pm 0.0002\), where the uncertainty represents the scatter between the [O II] centroid and stellar absorption lines of SDSS quasars at similar redshift. This redshift is \(\approx 500\) km s\({}^{-1}\) from a previously reported Mg II-based estimate from Wisotzki et al. (2000). Even so, a more recent Mg II-based redshift of \(z=0.628\) from Monroe et al. (2016) confirms our [O II]-based redshift estimate. In general, quasar redshifts measured from the [O II] doublet are more accurate than those measured from broad lines like Mg II, as we argue in Section 4.1.
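A minimal sketch of this redshift measurement is given below, assuming a sky-subtracted quasar spectrum around the observed doublet. The vacuum rest wavelengths and starting guesses are illustrative, and the two Gaussian components share a single redshift and velocity width.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5
OII_REST = np.array([3727.09, 3729.88])   # vacuum rest wavelengths (Angstrom)

def oii_doublet(wave, z, sigma_v, f3727, f3729, cont):
    """Two Gaussians sharing a single redshift and velocity width."""
    model = np.full_like(wave, cont, dtype=float)
    for rest, flux in zip(OII_REST, (f3727, f3729)):
        center = rest * (1.0 + z)
        sigma_aa = center * sigma_v / C_KMS
        model += flux * np.exp(-0.5 * ((wave - center) / sigma_aa) ** 2) \
                 / (sigma_aa * np.sqrt(2.0 * np.pi))
    return model

def fit_oii_redshift(wave, flux, err, z_guess=0.628):
    """wave, flux, err: spectrum of the quasar around the observed [O II] doublet."""
    p0 = [z_guess, 100.0, 1.0, 1.0, 0.0]
    popt, pcov = curve_fit(oii_doublet, wave, flux, p0=p0, sigma=err,
                           absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])    # best-fit redshift and its uncertainty
```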
In addition, we estimated the bolometric luminosity and the black hole mass of HE 0238\(-\)1904 by fitting the extracted MUSE spectrum with the Python QSO fitting code (PyQSOFit; Guo et al., 2019). PyQSOFit fits a quasar's spectrum with a combination of a power-law continuum, an Fe II template, and sets of Gaussian line profiles for both the broad and narrow lines. We modelled the H\(\beta\) and [O III] spectral region with the continuum components, three Gaussian profiles for the broad H\(\beta\), and two for the narrow H\(\beta\) and [O III]. From the fit, we computed a monochromatic luminosity at 5100 Å of \(\lambda L_{5100}\approx 1.6\times 10^{46}\) erg s\({}^{-1}\) and a bolometric luminosity of \(L_{\rm bol}\approx 1.7\times 10^{47}\) erg s\({}^{-1}\) using the bolometric correction factor from Richards et al. (2006). Finally, we inferred a black hole mass of \(M_{\rm BH}\approx 10^{9.8}\) M\({}_{\odot}\) using the single-epoch virial theorem-based approach from Vestergaard & Peterson (2006). Following Kormendy & Ho (2013), this black hole mass corresponds to a stellar mass of \(M_{\star}\approx 10^{12.0}\) M\({}_{\odot}\) for the host galaxy, but we caution this stellar mass may be significantly overestimated due to uncertainty in single-epoch virial theorem-based black hole masses and observed scatter in the black hole mass-stellar mass relation. For example, if the true black hole mass is \(1\sigma\) below the mean single-epoch virial theorem estimate, and the stellar mass is \(1\sigma\) below the estimate from the black hole mass-stellar mass relation, the inferred stellar mass would be \(M_{\star}\approx 10^{11.4}\) M\({}_{\odot}\). Furthermore, the single-epoch virial theorem-based relation used here is not calibrated for quasars as luminous as HE 0238\(-\)1904, which may drive a disk wind, erroneously inflating the black hole mass estimate. The fitted quasar spectrum is shown in Figure 1.
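For reference, the short calculation below reproduces the order of magnitude of these numbers using the Vestergaard & Peterson (2006) H\(\beta\) calibration. The broad H\(\beta\) FWHM is a placeholder (it is not quoted above), and the bolometric correction factor is an assumed round number of order ten.

```python
import numpy as np

lam_L5100 = 1.6e46       # erg/s, continuum luminosity from the PyQSOFit fit (Section 3.1)
bc_5100 = 10.0           # assumed bolometric correction factor (order of magnitude)
fwhm_hbeta = 8000.0      # km/s, placeholder value for the broad H-beta FWHM

L_bol = bc_5100 * lam_L5100

# Vestergaard & Peterson (2006) single-epoch H-beta calibration:
# log(M_BH/Msun) = log{ [FWHM/1000 km/s]^2 [lam_L5100 / 1e44 erg/s]^0.5 } + 6.91
log_mbh = np.log10((fwhm_hbeta / 1e3) ** 2 * (lam_L5100 / 1e44) ** 0.5) + 6.91

print(f"L_bol ~ {L_bol:.1e} erg/s, log(M_BH/Msun) ~ {log_mbh:.1f}")
```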
### Galaxy Measurements and Properties
To study the environment of HE 0238\(-\)1904, we conducted a galaxy survey by first identifying all continuum sources in MUSE and the ACS+F814W image. We identified continuum sources by running Source Extractor (SE; Bertin & Arnouts, 1996) on a median MUSE white light image and the _HST_ image separately. To ensure completeness, we also added sources based on visual inspection.
Figure 1: MUSE spectrum of HE 0238\(-\)1904 overplotted with best-fit models. The MUSE spectrum is shown as a solid black line, the power-law continuum model is shown as a dashed purple line, and the iron template model is shown using a solid blue line. The bottom left inset panel shows the [O II] line emission with the best-fit continuum+line model shown in red. The top right inset panel shows the H\(\beta\) and [O III] emission with the best-fit shown in red. We measured the systemic redshift of the quasar from the [O II] doublet, and inferred the black hole mass from the H\(\beta\) broad component and the continuum luminosity at 5100Å as described in detail in Section 3.1.
Typically, sources are missing from MUSE due to biased background estimation caused by bright objects in the field or due to blending. Based on the background sky standard deviation and source counts in the ACS+F814W image, the imaging catalog is complete for objects brighter than \(m_{\rm F814W}\approx 26-27\), depending on angular size.
For each identified object, we extracted a MUSE spectrum with MPDAF with a circular aperture of \(r=0.7^{\prime\prime}\), which is roughly the size of the MUSE seeing FWHM. The choice of this modest aperture may result in some wavelength dependent aperture losses but helps increase S/N for redshift estimation. We then fit each spectrum as a linear combination of SDSS galaxy eigenspectra as described in Helton et al. (2021) to measure the source redshift. In summary, we computed the best-fit linear combination on a grid from \(z=0\) to \(z=1\) with a step size of \(\Delta z=0.0001\) and recorded the goodness-of-fit statistic (\(\chi^{2}\)) over the entire grid. We adopted the redshift with the minimum global \(\chi^{2}\) as our initial solution. We then visually inspected each best-fit model to ensure robustness and assigned the redshift quality. For galaxies with both emission and absorption lines, we masked out strong emission lines and measured the redshift based on stellar absorption features when possible to avoid a potential bias in redshift from large-scale nebulae in the field (which may not be closely associated with the galaxies in question). Finally,
Figure 2: _HST_ ACS+F814W image of the field of HE 0238-1904. The full image has a FoV of \(1.5^{\prime}\times 1.5^{\prime}\). The larger dashed box shows the \(1^{\prime}\times 1^{\prime}\) MUSE FoV. The smaller dashed box marks the \(30^{\prime\prime}\times 30^{\prime\prime}\) region displayed in Figure 4. The LOS velocities of galaxies relative to the quasar are denoted with outlining colors and the corresponding colorbar is shown on the bottom left. The histogram in the bottom right inset panel shows the velocity distribution of galaxies where galaxies in both orange and purple outlined regions are plotted separately. We note that the orange and purple regions and corresponding histograms are only for visualization. The two-Gaussian fitting of the velocity distribution does not rely on any spatial information. Galaxies in the quasar host environment are labeled with black circles and labeled by their IDs. The approximate stellar mass weighted group center is marked with a white asterisk while the weighted centers of the richer, redshifted group and less rich, blueshifted group are marked with red and blue asterisks, respectively. Based on spatial distribution and kinematics, HE 0238\(-\)1904 resides in a massive, rich environment potentially consisting of two galaxy groups which may be merging.
we classified our confidence in the redshift measurements based on the number of the detected spectral features. All of the galaxies in the quasar environment have two or more spectral features except for G11 and G18. According to Helton et al. (2021), the uncertainty in galaxy redshifts measured in MUSE spectra with these techniques is \(\sigma\approx 20\,\mathrm{km\,s^{-1}}\). Comparing the continuum source catalog and the corresponding redshift measurements, the redshift survey is approximately 100% complete for sources brighter than \(m_{\mathrm{F814W}}\approx 24\) and approximately 95% complete for those brighter than \(m_{\mathrm{F814W}}\approx 25\). For comparison, an \(L_{*}\) galaxy at \(z\approx 0.6\) has \(m_{\mathrm{F814W}}\approx 20.6\) assuming the luminosity function from Faber et al. (2007). The high completeness of the galaxy survey at faint magnitudes enables us to study the origins of nebulae, even if they arise from interactions involving relatively faint dwarf galaxies.
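A schematic version of this redshift grid search is sketched below, assuming the SDSS galaxy eigenspectra have been loaded onto a common rest-frame wavelength grid; the array names are illustrative.

```python
import numpy as np

def redshift_grid_search(wave, flux, err, eigenspectra, rest_wave,
                         z_min=0.0, z_max=1.0, dz=1e-4):
    """Fit each trial redshift with a linear combination of eigenspectra and
    return the redshift that minimizes chi-squared.

    eigenspectra : (n_templates, n_rest) array sampled on rest_wave.
    """
    z_grid = np.arange(z_min, z_max + dz, dz)
    chi2 = np.full(z_grid.size, np.inf)
    w = 1.0 / err
    for k, z in enumerate(z_grid):
        # Shift templates to the observed frame and resample onto the data grid
        templates = np.array([np.interp(wave, rest_wave * (1.0 + z), t,
                                        left=np.nan, right=np.nan)
                              for t in eigenspectra])
        good = np.all(np.isfinite(templates), axis=0) & np.isfinite(flux)
        A = (templates[:, good] * w[good]).T          # weighted design matrix
        b = flux[good] * w[good]
        coeffs, res, *_ = np.linalg.lstsq(A, b, rcond=None)
        model = coeffs @ templates[:, good]
        chi2[k] = np.sum(((flux[good] - model) / err[good]) ** 2)
    return z_grid[np.argmin(chi2)], z_grid, chi2
```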
To examine properties of the quasar host environment, we identified candidate group members based on their LOS velocities relative to the quasar (\(\Delta v=v-v_{\mathrm{QSO}}\)). In particular, we selected galaxies with \(|\Delta v|<2000\)\(\mathrm{km\,s^{-1}}\). We inferred the physical properties of the selected galaxies with Bagpipes (Carnall et al., 2018, 2019). Bagpipes performs stellar population synthesis (SPS) with a stellar evolution model from Bruzual & Charlot (2003), an initial mass function from Kroupa (2001), and the Bayesian inference package MultiNest (Feroz et al., 2009, 2019; Buchner et al., 2014). We fit both spectroscopic and photometric data simultaneously with Bagpipes. Many of the galaxies in our sample only have one photometric datapoint available, necessitating the use of the spectra to further inform the stellar population synthesis. In our fitting procedure, we assumed an exponential star formation history with an e-folding time scale of \(0.01<\tau/\mathrm{Gyr}<8.00\), solar stellar metallicity, and a dust attenuation model from Calzetti et al. (2000) with \(0<A_V/\mathrm{mag}<2\). The choice of exponentially declining star formation histories enables more direct comparison with surveys such as MUSE-Wide (Urrutia et al., 2019) and the MUSE Ultra DEEP Field (Fossati et al., 2019). We introduced a second-order multiplicative polynomial to reconcile potential artificial differences between the SEDs measured from the photometry and the spectra. This polynomial accounts for systematic uncertainty in the MUSE flux due to wavelength-dependent aperture losses and uncertainty in the flux calibration (Weilbacher et al., 2020). We also used Bagpipes spectrum noise scaling to allow the relative weighting of the photometry and spectrum to be a nuisance parameter. We note that the results are not sensitive to this scaling in our case (see Carnall et al., 2019); a minimal configuration sketch is given at the end of this subsection. In addition to the ACS+F814W photometry, we also included \(grizY\) photometric data from the Dark Energy Survey (DES; Abbott et al., 2021) available for 16 galaxies. The resulting stellar mass estimates and dust attenuation \(A_V\) values are reported in Table 1. The stellar masses have associated systematic uncertainties of \(\approx 0.2\) dex. Galaxies close to the quasar (G1-G7) are contaminated by the quasar light, and we used the quasar-light subtracted spectra for Bagpipes fitting when possible. Galaxies G1, G3, G11, G13, G18, G20, and G31 do not have a stellar mass estimate because their continua are too faint or are too badly contaminated by the quasar continuum. To further characterize these galaxies, we also report
Figure 3: MUSE galaxy spectra with the best-fit spectral models. The MUSE spectrum is shown by a solid black line. The uncertainty is shown by a solid grey line. The best-fit model used for redshift measurement is shown by a solid red line.
the 4000 Å break strength (D4000; Gallazzi et al., 2005) and rest-frame \(B\)-band absolute magnitude with \(K\)-corrections calculated using templates from Coleman et al. (1980) chosen based on the strength of the 4000 Å break. The IDs, galaxy coordinates (R.A., Decl.), redshifts, ACS+F814W apparent magnitudes, absolute \(B\)-band magnitudes, adopted \(K\)-correction templates (S0, Scd, or Irregular), and D4000 measurements are reported in Table 1, along with the angular distances, projected distances, and LOS velocity differences from the quasar sightline. The locations of these galaxies are shown in Figure 2 and several example MUSE spectra are overplotted with their best-fit PCA spectral models in Figure 3. An interactive view of the galaxy environment and spectra is available online1.
Footnote 1: [http://zhuoqiliu.com/HE0238-1904.html](http://zhuoqiliu.com/HE0238-1904.html)
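Returning to the SED fitting described above, a minimal sketch of the corresponding Bagpipes configuration is given below, following the fit_instructions conventions in the Bagpipes documentation. The object ID, filter list, and load_data routine are placeholders, and the exact keys for the calibration polynomial and noise scaling blocks may differ between Bagpipes versions.

```python
import bagpipes as pipes

def load_data(object_id):
    """Placeholder: should return the MUSE spectrum (N x 3 array of wavelength,
    flux, flux error) and the matched photometry for this object."""
    raise NotImplementedError

# Exponentially declining star formation history (Section 3.2)
exp_sfh = {"age": (0.1, 8.0),            # Gyr
           "tau": (0.01, 8.0),           # e-folding time scale, Gyr
           "massformed": (6.0, 13.0),    # log10(M*/Msun)
           "metallicity": 1.0}           # fixed at solar

dust = {"type": "Calzetti", "Av": (0.0, 2.0)}   # mag

fit_instructions = {"redshift": 0.6282,          # fix to each galaxy's measured redshift
                    "exponential": exp_sfh,
                    "dust": dust}
# The second-order multiplicative calibration polynomial and the spectral noise
# scaling described above are configured through additional "calib" and "noise"
# blocks; see the Bagpipes documentation for the exact keys.

# With real data files in place, the fit would be run as:
# galaxy = pipes.galaxy("G8", load_data, filt_list=["ACS_F814W", "DES_g"],
#                       spectrum_exists=True, photometry_exists=True)
# fit = pipes.fit(galaxy, fit_instructions)
# fit.fit(verbose=False)
```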
### The Galactic Environment
In the MUSE field of HE 0238\(-\)1904 we identified 35 galaxies, including the quasar host, with LOS velocities within \(|\Delta v|<2000\) km s\({}^{-1}\) of the quasar systemic velocity, which is sufficient to encompass most members of even massive galaxy clusters. Figure 2 shows a \(1.5\arcmin\times 1.5\arcmin\) FoV image from the ACS+F814W observations of the field where
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline ID & R.Aa & Decl.b & \(z\)c & \(m_{\rm F814W}\)d & \(M_{B}\)e & \(K\)-correction & D4000 & \(A_{V}\) & \(\log{(M_{*}/{\rm M}_{\odot})}\)f & \(\Delta\rho\)f & \(d\)h & \(\Delta\nu\)f \\ & (J2000) & (J2000) & & (AB) & (AB) & template & (mag) & & (′′) & (pkpc) & (km s\({}^{-1}\)) \\ \hline Host & 02:40:32.58 & \(-\)18:51:54. & 0.6282 &... &... &... &... &... &... & 0.0 & 0.0 & 0 \\ G1 & 02:40:32.63 & \(-\)18:51:55. & 0.6278 & 24.3 & \(-\)17.5 & S0 & 1.26 \(\pm\) 0.57 &... & 9.3 & 4.4 & 30.4 & -76 \\ G2 & 02:40:32.73 & \(-\)18:51:47. & 0.6270 & 23.3 & \(-\)18.5 & S0 & 1.56 \(\pm\) 0.08 & 0.1 & 9.5 & 4.8 & 32.7 & -224 \\ G3b & 02:40:32.74 & \(-\)18:51:55. & 0.6280 & 23.8 & \(-\)18.3 & Irr &... &... &... & 9.6 & 5.0 & 34.3 & -40 \\ G4 & 02:40:32.57 & \(-\)18:51:56. & 0.6284 & 24.9 & \(-\)17.3 & Irr & 1.05 \(\pm\) 0.07 & 0.2 & 8.3 & 5.4 & 36.7 & +34 \\ G5 & 02:40:32.71 & \(-\)18:51:57.0 & 0.6280 & 25.2 & \(-\)17.0 & Irr & 0.64 \(\pm\) 0.08 & 0.1 & 7.4 & 5.9 & 40.1 & -40 \\ G6 & 02:40:32.96 & \(-\)18:51:54. & 0.6295 & 22.4 & \(-\)19.4 & S0 & 1.35 \(\pm\) 0.02 & 0.1 & 10.1 & 6.1 & 41.5 & +237 \\ G7 & 02:40:33.04 & \(-\)18:51:53. & 0.6275 & 23.8 & \(-\)18.0 & S0 & 1.30 \(\pm\) 0.04 & 0.0 & 9.3 & 6.9 & 46.9 & -132 \\ G8 & 02:40:32.21 & \(-\)18:51:58. & 0.6284 & 21.8 & \(-\)20.0 & S0 & 1.62 \(\pm\) 0.02 & 0.2 & 10.4 & 9.1 & 61.9 & +34 \\ G9 & 02:40:33.44 & \(-\)18:51:50.7 & 0.6330 & 23.8 & \(-\)18.1 & S0 & 1.49 \(\pm\) 0.05 & 0.2 & 9.7 & 12.2 & 82.2 & +882 \\ G10 & 02:40:33.53 & \(-\)18:51:48. & 0.6323 & 20.0 & \(-\)21.9 & S0 & 1.71 \(\pm\) 0.01 & 0.8 & 11.5 & 13.8 & 94.3 & +753 \\ G11 & 02:40:32.37 & \(-\)18:51:37.6 & 0.6302 &... &... &... &... &... &... &... & 14.1 & 96.3 & +360 \\ G12 & 02:40:32.00 & \(-\)18:51:39. & 0.6297 & 21.4 & \(-\)20.4 & S0 & 1.64 \(\pm\) 0.02 & 0.2 & 10.6 & 14.1 & 96.5 & +274 \\ G13 & 02:40:32.28 & \(-\)18:52:04.9 & 0.6272 &... &... &... &... &... &... & 14.2 & 97.0 & -187 \\ G14 & 02:40:33.17 & \(-\)18:51:37.9 & 0.6310 & 22.6 & \(-\)19.2 & S0 & 1.37 \(\pm\) 0.03 & 0.7 & 10.0 & 15.8 & 108.0 & +513 \\ G15 & 02:40:33.62 & \(-\)18:51:43.2 & 0.6253 & 24.8 & \(-\)17.0 & S0 & 1.99 \(\pm\) 0.22 & 0.4 & 9.0 & 16.8 & 115.0 & -537 \\ G16 & 02:40:31.85 & \(-\)18:52:05. & 0.6279 & 23.8 & \(-\)18.0 & S0 & 1.98 \(\pm\) 0.16 & 1.1 & 9.5 & 17.5 & 119.8 & -58 \\ G17 & 02:40:33.75 & \(-\)18:51:45. & 0.6332 & 22.7 & \(-\)19.1 & S0 & 1.57 \(\pm\) 0.03 & 0.6 & 10.1 & 17.6 & 120.3 & +919 \\ G18 & 02:40:33.53 & \(-\)18:51:39.6 & 0.6332 &... &... &... &... &... &... & 17.9 & 121.9 & +922 \\ G19 & 02:40:33.69 & \(-\)18:52:00.1 & 0.6358 & 22.2 & \(-\)19.7 & S0 & 1.60 \(\pm\) 0.02 & 0.4 & 10.3 & 18.0 & 122.9 & +1398 \\ G20 & 02:40:31.97 & \(-\)18:52:07.9 & 0.6271 &... &... &... &... &... &... & 18.8 & 128.1 & -205 \\ G21 & 02:40:33.48 & \(-\)18:51:36.9 & 0.6341 & 22.1 & \(-\)19.7 & S0 & 1.26 \(\pm\) 0.02 & 1.4 & 10.3 & 19.3 & 131.8 & +1084 \\ G22 & 02:40:31.34 & \(-\)18:52:02.5 & 0.6268 & 23.0 & \(-\)18.9 & S0 & 1.66 \(\pm\) 0.05 & 0.5 & 10.1 & 20.9 & 142.8 & -261 \\ G23 & 02:40:33.76 & \(-
we marked the quasar with a grey star and labelled galaxies with circles as well as their ID. The color of the circle represents the LOS velocity of each galaxy relative to the quasar. Additionally, we display the \(1^{\prime}\times 1^{\prime}\) MUSE FoV, and a smaller \(30^{\prime\prime}\times 30^{\prime\prime}\) region which is the focus of later figures in this work.
Among the 35 galaxies in the environment of HE 0238\(-\)1904, four (two) exhibit stellar masses of \(\log(M_{*}/\mathrm{M}_{\odot})>10.5\) (\(>11\)) (excluding the quasar), indicating a significant overdensity and likely a massive group. To further characterize the environment, we show the distribution of galaxies' LOS velocities relative to the quasar (\(\Delta v=v-v_{\mathrm{QSO}}\)) in the bottom right panel of Figure 2. The LOS velocity distribution peaks around \(-100\,\mathrm{km\,s^{-1}}\) but exhibits a non-Gaussian tail toward higher velocity of \(+100\,\mathrm{km\,s^{-1}}\) to \(+1400\,\mathrm{km\,s^{-1}}\). There is a clear trend between LOS velocity and location on the sky visible in Figure 2 with galaxies with \(\Delta v>0\,\mathrm{km\,s^{-1}}\) largely falling North East of the quasar and those with \(\Delta v<0\,\mathrm{km\,s^{-1}}\) falling near the quasar or South West of it. To better visualize the location\(-\)velocity trend, we divided the field into two regions, one NE of the quasar and one SW of it. The NE (SW) one is marked by an orange (purple) trapezoid in Figure 2. We also show the LOS velocity distribution of the galaxies in each trapezoidal region by the corresponding histograms in the inset panel in Figure 2. The peak and the tail in the histogram correspond closely to these two regions respectively. The non-Gaussian LOS velocity distribution and correlation with spatial location suggests that the overdensity near the quasar host may consist of two distinct, but possibly interacting, galaxy groups.
To quantify the velocity dispersions of these two potential groups, we fit two Gaussians to the entire LOS velocity distribution. This results in one narrow, blueshifted Gaussian and one broader, redshifted one. The blueshifted Gaussian has a mean LOS velocity of \(\Delta v_{\mathrm{group}}=-99\pm 25\,\mathrm{km\,s^{-1}}\) and a 1D velocity dispersion of \(\sigma_{\mathrm{group}}=92\pm 50\,\mathrm{km\,s^{-1}}\) and includes \(\approx 35\%\) of the galaxies near HE 0238\(-\)1904. The redshifted Gaussian has \(\Delta v_{\mathrm{group}}=629\pm 140\,\mathrm{km\,s^{-1}}\) and \(\sigma_{\mathrm{group}}=506\pm 90\,\mathrm{km\,s^{-1}}\) and includes \(\approx 65\%\) of the galaxies. In both cases, the uncertainty estimates are based on bootstrap resampling. While the Gaussian fitting did not include any spatial information, the two Gaussians closely match the purple and orange velocity histograms formed from a spatial separation (see Figure 2). These fitting results suggest that the environment around the quasar includes one massive group at \(\Delta v_{\mathrm{group}}\approx 600\,\mathrm{km\,s^{-1}}\) and one less massive group closer to the quasar velocity. Assuming each group is virialized, we estimate dynamical masses of \(M_{\mathrm{dyn}}\sim 9.8\times 10^{13}\,\mathrm{M}_{\odot}\) and \(M_{\mathrm{dyn}}\sim 5.7\times 10^{11}\,\mathrm{M}_{\odot}\) (Munari et al., 2013) for the richer, redshifted group and less rich, blueshifted group, respectively. To place a lower limit on the mass estimate, we fit a single Gaussian to galaxies with \(\Delta v>200\,\mathrm{km\,s^{-1}}\). We found a velocity dispersion of \(\approx 400\,\mathrm{km\,s^{-1}}\), corresponding to a mass of \(M_{\mathrm{dyn}}\sim 3.8\times 10^{13}\,\mathrm{M}_{\odot}\). The mass range of \(M_{\mathrm{dyn}}\approx 4\times 10^{13}-10^{14}\,\mathrm{M}_{\odot}\) is consistent with a massive group or a modest-mass cluster. However, we caution that the assumption that the groups are virialized introduces additional uncertainty given the complex environment. Finally, in Figure 2, we show the stellar mass weighted group center as a white asterisk, and membership-weighted (\(\frac{P_{\mathrm{blue/red}}}{P_{\mathrm{blue}}+P_{\mathrm{red}}}\)) centers as red and blue asterisks for the richer, redshifted group and less rich, blueshifted group respectively.
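A sketch of this decomposition is given below, assuming a simple two-component Gaussian mixture and bootstrap resampling for the uncertainties; the mixture fitter and its settings are illustrative rather than the exact procedure used here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_two_groups(velocities, n_boot=1000, seed=12345):
    """Fit a two-component Gaussian mixture to galaxy LOS velocities (km/s)
    and bootstrap the component means and dispersions."""
    rng = np.random.default_rng(seed)
    v = np.asarray(velocities, dtype=float).reshape(-1, 1)

    def fit_once(sample):
        gmm = GaussianMixture(n_components=2, n_init=10).fit(sample)
        means = gmm.means_.ravel()
        sigmas = np.sqrt(gmm.covariances_.ravel())
        order = np.argsort(means)                 # blueshifted component first
        return means[order], sigmas[order], gmm.weights_[order]

    means, sigmas, weights = fit_once(v)
    boot = np.array([np.concatenate(fit_once(v[rng.integers(0, len(v), len(v))])[:2])
                     for _ in range(n_boot)])
    return means, sigmas, weights, boot.std(axis=0)   # bootstrap uncertainties
```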
To test the expectation that dynamically more massive groups will contain more massive galaxies, we investigate the most massive galaxies in each group. G8 and G22 are the most massive galaxies in the less rich, blueshifted group, with stellar masses of \(\log(M_{*}/\mathrm{M}_{\odot})=10.4\) and 10.1, respectively. On the other hand, the richer, redshifted group includes two massive elliptical galaxies, G10 and G34, with \(\log(M_{*}/\mathrm{M}_{\odot})=11.5\) and 11.2, respectively. Furthermore, the richer, redshifted group contains a massive disc galaxy, G33, with \(\log(M_{*}/\mathrm{M}_{\odot})=10.8\). This is consistent with HE 0238\(-\)1904 residing in an overdense region likely made of two groups with the redshifted one being richer and more massive. However, the quasar redshift falls between the centroids of the two groups, indicating that it could arise in either or truly be located between them. Despite the large uncertainty in the stellar mass of the quasar host galaxy (see Section 3.1), the large black hole mass suggests it is a massive galaxy, possibly the largest in the overdensity around HE 0238\(-\)1904. It is therefore more probable that HE 0238\(-\)1904 resides in the richer, redshifted group. Nonetheless, we cannot completely rule out the possibility that HE 0238\(-\)1904 originates from the less rich, blueshifted group. In either case, the dynamically rich and likely unrelaxed environment could result in galaxy interactions that can produce giant nebulae via ram pressure and tidal stripping.
### Nebular Environment
Due to ionizing radiation from the accretion disk, wide-field IFS observations of quasar fields often find large nebulae (Johnson et al., in prep). To search for nebulae around HE 0238\(-\)1904, we conducted continuum subtraction of the datacube locally for the [O II], H\(\beta\), and [O III] emission lines around the quasar. For continuum fitting near each of the three lines, we masked the spectral region within \(\pm 500\)\(-\)1000 km s\({}^{-1}\) of the expected observed wavelength at the quasar's redshift. We fine-tuned the masked region individually for each of the three lines to avoid skyline contamination and to account for the larger width of the [O II] doublet. For each spaxel in the masked datacube, we then fit a third-order polynomial to the continuum regions around each line and subtracted the best-fit model to complete the continuum subtraction.
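A per-spaxel sketch of this continuum subtraction is given below for a single emission line; the masking half-width and fitting window are illustrative fixed values rather than the line-by-line tuned regions described above.

```python
import numpy as np

def subtract_continuum(wave, cube, line_wave_obs, mask_kms=750.0, order=3):
    """Subtract a low-order continuum around one emission line, spaxel by spaxel.

    wave          : (n_wave,) wavelength array of the MUSE cube (Angstrom)
    cube          : (n_wave, ny, nx) flux datacube
    line_wave_obs : expected observed wavelength of the line at the quasar redshift
    mask_kms      : half-width of the masked velocity window around the line
    """
    c_kms = 2.998e5
    dlam = line_wave_obs * mask_kms / c_kms
    in_line = np.abs(wave - line_wave_obs) < dlam
    # Fit only a window around the line, excluding the masked line region itself
    window = np.abs(wave - line_wave_obs) < 5.0 * dlam
    fit_pix = window & ~in_line

    ny, nx = cube.shape[1:]
    cont_sub = np.full(cube.shape, np.nan)
    for j in range(ny):
        for i in range(nx):
            spec = cube[:, j, i]
            good = fit_pix & np.isfinite(spec)
            if good.sum() < order + 2:
                continue
            coeffs = np.polyfit(wave[good], spec[good], order)
            cont_sub[window, j, i] = spec[window] - np.polyval(coeffs, wave[window])
    return cont_sub, window
```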
This continuum-subtracted MUSE datacube enabled the discovery of a giant ionized nebula in [O II], H\(\beta\), and [O III] around HE 0238\(-\)1904 with a total area of \(\approx 5000\) kpc\({}^{2}\) which is visualized in Figure 4. This nebula surrounds the quasar with projected radii of \(d\approx 30\) to 50 kpc and with LOS velocities of \(\Delta v\approx-250\) to \(+250\) km s\({}^{-1}\) from the quasar. The nebula is more extended to the South East and the South West of the quasar. The South East extension of the nebula is spatially coincident with galaxies G1, G3, G4, and G5. Additionally, the tail extending South West of the quasar is distinct from but approximately in the direction of G8.
To examine the nebula and any relationship with galaxies in the quasar environment, we show [O II] and [O III] emission contours over the HST image in panel (a) of Figure 4. We also display a nebular LOS velocity map in panel (b) and a [O III]\(/\)[O II] line ratio map in panel (c). We constructed these two maps by jointly fitting Gaussian line profiles to the continuum-subtracted [O II], H\(\beta\), and [O III] datacubes. Instead of fitting the spectrum of each individual spaxel, we averaged over circular apertures of \(r=1^{\prime\prime}\) to enhance S/N. We chose this aperture radius based on experimentation to visualize even faint parts of the nebula. These two maps provide an opportunity to study the spatial dependence of the kinematics and the ionization state of the gas. In addition, we show three panels of narrowband images generated from the continuum subtracted datacubes for each of [O II] and [O III] in velocity ranges of \(-300\) to \(-100\) km s\({}^{-1}\), \(-100\) to \(+100\) km s\({}^{-1}\), and \(+100\) to \(+300\) km s\({}^{-1}\) in panel (d)-(f) and (g)-(i) respectively.
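The narrow-band maps can be approximated from the continuum-subtracted cube as sketched below; the MUSE pixel scale and the velocity limits match the values quoted above, while the smoothing applied in Figure 4 is omitted.

```python
import numpy as np

def narrowband_map(wave, cont_sub_cube, line_wave_obs, v_lo, v_hi,
                   pix_scale_arcsec=0.2):
    """Sum a continuum-subtracted cube over a velocity interval (km/s) around a
    line to form a surface brightness map (flux per arcsec^2)."""
    c_kms = 2.998e5
    velocity = (wave - line_wave_obs) / line_wave_obs * c_kms
    in_band = (velocity >= v_lo) & (velocity <= v_hi)
    # Integrate the flux density over wavelength within the velocity band
    dlam = np.gradient(wave)
    flux_map = np.nansum(cont_sub_cube[in_band] * dlam[in_band, None, None], axis=0)
    return flux_map / pix_scale_arcsec**2     # e.g. erg/s/cm^2/arcsec^2

# Example: an [O II] map over -100 to +100 km/s would be
# narrowband_map(wave, oii_cube, 3728.5 * (1 + 0.6282), -100.0, 100.0)
```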
The nebula exhibits an irregular morphology but with a spatial trend in kinematics. In particular, the region North of the quasar is redshifted relative to the quasar and has a LOS velocity of \(\Delta v=
Figure 4: Visualizations of the nebula discovered around HE 0238\(-\)1904. Panel (a): HST ACS+F814W image of the field. Galaxies are circled in black and labelled with their IDs. Panel (b): map of the nebular LOS velocity relative to the quasar systemic velocity. Galaxies are circled in black and colored with their velocities. Panel (c): map of nebular photoionization shown as the line ratio [O III] \(\lambda 5008\)/[O II] \(\lambda\lambda 3727+3729\). Panels (d)-(f) and (g)-(i): narrow-band [O II] and [O III] surface brightness maps extracted from the MUSE datacube over the velocity intervals labelled in each panel. The inset panel in Panel (h) shows a zoomed, unsmoothed map around G3 and G5 to emphasize the possible existence of a tidal tail. These maps are overlaid with [O II] and [O III] surface brightness contours at levels of 0.08 and \(0.3\times 10^{-17}\) erg cm\({}^{-2}\) s\({}^{-1}\) arcsec\({}^{-2}\). The contours shown in panel (e) and panel (h) are overlaid on the HST image in blue and red respectively. We note that surface brightness maps and contours are smoothed with 3-pixel kernels. A version of this figure with the region circles marked in every velocity panel is available online1.
\(0-250\,\mathrm{km\,s^{-1}}\). The region South of the quasar including the tail to the West is mainly blueshifted relative to the quasar but with a small redshifted region in the most Southern points. This southern region is spatially coincident and potentially kinematically coincident with G1, G3, G4 and G5. However, the continua of these galaxies are too faint to measure stellar absorption-based redshifts. This raises the possibility that their nebular spectra may be contaminated by the surrounding nebulae, resulting in a biased redshift measurement. In the case of G3 and G4, the line width of the nebular emission near the galaxies is significantly narrower than the more extended emission from nearby parts of the giant nebula, indicating that the galaxy line emission likely arises in the ISM of the two dwarfs.
The nebula also shows a spatial trend in the ionization-state-sensitive [O III]\(/\)[O II] line ratio. The majority of the nebula is [O II] dominated but the region South East of the quasar has greater [O III] emission, particularly, at a few [O III] knots near G1, G3 and G5. The knots near G3 and G5 have the highest surface brightness in the nebula. Furthermore, the bright region extending to the South of the brightest knot near G3 is reminiscent of a tidal tail.
To better explore the properties of the nebula, we selected several representative regions in it and extracted their full spectra to infer physical conditions from both strong ([O II], H\(\beta\), and [O III]) and weak lines ([Ne V]\(\lambda 3427\), H\(\delta\), H\(\gamma\), [O III]\(\lambda 4364\), and He II\(\lambda 4687\)2). We picked the locations of these regions to cover a wide range in line ratios, surface brightness, and projected locations relative to the quasar. These regions are shown in panel (g) of Figure 4 and labelled with letters and numbers where S# refers to regions with higher surface brightness for which we used an extraction radius of \(0.7\arcsec\) while B# labels low surface brightness regions which required a larger extraction radius (\(>1\arcsec\)) to achieve sufficient S/N.
Footnote 2: Other weak lines such as [Ne III]\(\lambda 3869\), He I\(\lambda 3889\) & H\(\epsilon\) are covered by MUSE but we do not use them in this work because of contaminating sky lines or blending with other lines.
To measure the emission properties for each region, we jointly fit the strong and weak emission lines described above with Gaussian profiles using LMFIT (Newville et al., 2014). For each region, all fitted lines share the same redshift and velocity width, but line fluxes are free parameters except for cases with line ratios set by atomic physics (e.g., [O III]\(\lambda 4960\) and [O III]\(\lambda 5008\)). In most cases, a single set of Gaussians is enough to describe the emission line profiles, except for S3, S4, and B4, which require a second set of Gaussians to account for broader (\(\sigma\approx 100\)-\(170\,\mathrm{km\,s^{-1}}\)) emission wings. Such emission wings are often seen around luminous quasars due to quasar-driven outflows (Heckman et al., 1981; Liu et al., 2013a,b), but the wings on S3, S4, and B4 may also be due to projection effects. We summarize the measurements for these regions, including their distances from the quasar, extraction radii, line fluxes, LOS velocities, and 1-D velocity dispersions, in Table 2. We display strong and weak line spectra as well as their best-fit models in Figures 5 and 6, respectively, for a representative subset of the regions.
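A simplified sketch of this joint fit with LMFIT is given below for the stronger lines only: all lines share one redshift and velocity width, and the [O III] doublet ratio is tied through a parameter expression. The rest wavelengths and bounds are illustrative, and the second Gaussian component used for S3, S4, and B4 is omitted.

```python
import numpy as np
from lmfit import Parameters, minimize

C_KMS = 2.998e5
# Vacuum rest wavelengths (Angstrom)
LINES = {"oii3727": 3727.09, "oii3729": 3729.88, "hbeta": 4862.68,
         "oiii4960": 4960.30, "oiii5008": 5008.24}

def line_model(params, wave):
    out = np.zeros_like(wave, dtype=float)
    z, sigma_v = params["z"].value, params["sigma_v"].value
    for name, rest in LINES.items():
        center = rest * (1.0 + z)
        sig = center * sigma_v / C_KMS
        out += params[f"f_{name}"].value * \
               np.exp(-0.5 * ((wave - center) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return out

def residual(params, wave, flux, err):
    return (flux - line_model(params, wave)) / err

params = Parameters()
params.add("z", value=0.6282, min=0.625, max=0.632)       # shared redshift
params.add("sigma_v", value=60.0, min=10.0, max=500.0)    # shared velocity width
for name in LINES:
    params.add(f"f_{name}", value=1.0, min=0.0)
# Tie the [O III] doublet ratio set by atomic physics
params["f_oiii4960"].set(expr="f_oiii5008 / 2.98")

# For an extracted region spectrum (wave, flux, err), the fit would be run as:
# result = minimize(residual, params, args=(wave, flux, err))
```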
## 4 Discussion
As discussed in Section 3.3, the environment of HE 0238\(-\)1904 is overdense and includes a massive galaxy group or cluster. Based on clustering studies, this environment is richer than those of most radio-quiet systems, but consistent with expectations for radio-loud ones. This demonstrates that radio-quiet systems like HE 0238\(-\)1904 are diverse in terms of their host environment. Nevertheless, the lack of detected radio emission and the amorphous morphology of the nebula suggest that it is not jet-related. Considering that most published giant nebulae at \(z<1\) are in rich environments, the presence of giant nebulae might be correlated with group properties. A larger sample of quasars with wide IFS observations is required to investigate this possibility.
Alternatively, such a rich environment can be explained by variable radio quasars. Quasars are capable of changing from radio-quiet to radio-loud or vice versa. Nyland et al. (2020) found 26 sources showing radio variability over timescales of decades from the SDSS DR14
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline ID & Distancea & Extraction & [O II] & H\(\beta\) & [O III] & [Ne V] & [O III] & He II & \(\Delta v\)b & \(\sigma\)c \\ & (kpc) & radius & \(\lambda\lambda 3727+3729\) & \(\lambda 5008\) & \(\lambda 3346\) & \(\lambda 4364\) & \(\lambda 4687\) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) \\ & & (\(\arcsec\)) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & \\ & & s\({}^{-1}\)\({}^{-2})\) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & \\ \hline S1 & 45 & 0.7 & \(1.73\pm 0.05\) & \(0.69\pm 0.06\) & \(9.17\pm 0.05\) & \(0.15\pm 0.03\) & \(0.21\pm 0.02\) & \(<0.21\) & \(-11\pm 3\) & \(62\pm 4\) \\ S2 & 36 & 0.7 & \(3.55\pm 0.08\) & \(1.14\pm 0.14\) & \(23.48\pm 0.10\) & \(0.37\pm 0.05\) & \(0.40\pm 0.04\) & \(0.35\pm 0.11\) & \(-55\pm 3\) & \(43\pm 4\) \\ S3 & 25 & 0.7 & \(<0.30\) & \(<0.27\) & \(6.27\pm 0.22\) & \(<0.15\) & \(<0.09\) & \(<0.18\) & \(-107\pm 3\) & \(61\pm 4\) \\ S3\({}_{\rm uing}\) & 25 & 0.7 & \(2.90\pm 0.10\) & \(0.73\pm 0.09\) & \(2.44\pm 0.22\) & \(<0.18\) & \(<0.12\) & \(<0.21\) & \(-14\pm 9\) & \(104\pm 5\) \\ S4 & 17 & 0.7 & \(1.34\pm 0.18\) & \(0.28\pm 0.08\) & \(3.39\pm 0.10\) & \(<0.09\) & \(<0.15\) & \(-114\pm 3\) & \(45\pm 4\) \\ S4\({}_{\rm uing}\) & 17 & 0.7 & \(4.17\pm 0.20\) & \(0.52\pm 0.09\) & \(3.14\pm 0.12\) & \(<0.27\) & \(<0.15\) & \(<0.27\) & \(+12\pm 8\) & \(169\pm 6\) \\ S5 & 9 & 0.7 & \(5.96\pm 0.28\) & \(0.77\pm 0.26\) & \(2.51\pm 0.22\) & \(<0.84\) & \(<0.51\) & \(<0.78\) & \(+8\pm 11\) & \(140\pm 11\) \\ S6 & 20 & 0.7 & \(5.04\pm 0.07\) & \(1.47\pm 0.12\) & \(14.03\pm 0.07\) & \(0.15\pm 0.05\) & \(0.22\pm 0.04\) & \(0.34\pm 0.09\) & \(-62\pm 3\) & \(68\pm 4\) \\ S7 & 29 & 0.7 & \(0.99\pm 0.04\) & \(0.18\pm 0.06\) & \(0.63\pm 0.04\) & \(<0.09\) & \(<0.06\) & \(<0.18\) & \(-72\pm 8\) & \(111\pm 8\) \\ S8 & 18 & 0.7 & \(2.33\pm 0.04\) & \(0.52\pm 0.06\) & \(1.98\pm 0.04\) & \(<0.09\) & \(<0.06\) & \(<0.15\) & \(-119\pm 4\) & \(89\pm 4\) \\ S9 & 11 & 0.7 & \(3.71\pm 0.16\) & \(1.10\pm 0.15\) & \(2.56\pm 0.13\) & \(<0.45\) & \(<0.27\) & \(<0.39\) & \(+173\pm 7\) & \(110\pm 7\) \\ S10 & 15 & 0.7 & \(1.96\pm 0.05\) & \(0.47\pm 0.05\) & \(1.58\pm 0.04\) & \(<0.12\) & \(<0.09\) & \(<0.15\) & \(+58\pm 4\) & \(79\pm 5\) \\ B1 & 49 & 1.4 & \(1.14\pm 0.08\) & \(0.89\pm 0.12\) & \(2.21\pm 0.0
Figure 5: Examples of nebular spectra (stronger lines) and best-fit spectral models for multiple regions. The locations of these regions are shown as circles and labelled by their IDs in Figure 4. The extracted spectrum is shown as solid black lines and the error array is shown as grey lines. The best-fit models are shown as red solid lines. In most nebular regions, we detected strong emission lines such as [O II], H\(\beta\), and [O III].
Figure 6: Examples of nebular spectra (fainter lines) and best-fit spectral models for multiple regions. The locations of these regions are shown as circles and labelled by their IDs in Figure 4. The plotting style is as described in Figure 5. Only in the most luminous nebular regions, we detected weak emission lines such as [Ne V]\(\lambda\)3427, H\(\delta\), H\(\gamma\), [O III]\(\lambda\)4364, and He II\(\lambda\)4687.
quasar catalog (Paris et al., 2018) and the Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) R90 quasar catalog (Assef et al., 2018). These sources, once considered radio-quiet quasars, now meet the criteria for radio-loud ones. This implies that the probability of any particular radio-quiet quasar becoming radio-loud on the light-crossing timescale of the nebula is approximately 1%. However, the presence of a massive group and nebula means that HE 0238\(-\)1904 is not a representative quasar and so may be more likely to transition to radio-loud relatively soon. On the other hand, the possibility that HE 0238\(-\)1904 was previously radio-loud and is now radio-quiet is harder to address since such transitions are not well studied.
In the following subsections, we discuss insights into the physical origins and state of the giant nebula, including analyses of density- and ionization-state-sensitive diagnostic emission lines. Several of these analyses require priors on the dust content and density of the gas. To investigate dust content, we estimate Balmer line ratios, and find H\(\delta\)/H\(\gamma\) ratios of \(\approx 0.55\). These ratios are consistent with Case B recombination (Osterbrock & Ferland, 2006) in the absence of dust. To obtain density estimates, we infer the emission measure of the nebula from the surface brightness of H\(\beta\) following Chen et al. (2019). Assuming H\(\alpha\)/H\(\beta\approx 3\), a clumping factor of 1, and a length scale of 30 pkpc, we found an electron density of \(\log(n_{\rm e}/{\rm cm}^{-3})\approx-1\). However, this density estimate has a large uncertainty and is effectively a lower limit due to the assumption of a unity clumping factor.
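A simplified, H\(\beta\)-based version of this estimate is sketched below. The Case B effective recombination coefficient is the standard \(10^4\) K value, the surface brightness used in the example is an illustrative number rather than a measurement from the maps, and the clumping factor and path length enter exactly as stated above.

```python
import numpy as np

def ne_from_hbeta_sb(sb_hbeta, z=0.6282, length_kpc=30.0, clumping=1.0):
    """Rough electron density from an H-beta surface brightness
    (erg/s/cm^2/arcsec^2), assuming Case B at ~1e4 K and a uniform slab."""
    arcsec2_to_sr = 2.3504e-11      # steradian per arcsec^2
    h_nu_hbeta = 4.09e-12           # erg per H-beta photon
    alpha_hbeta = 3.03e-14          # Case B effective recomb. coefficient, cm^3/s
    # Emission measure (integral of n_e * n_p along the sightline, cm^-5),
    # corrected for (1+z)^4 surface brightness dimming
    em = 4.0 * np.pi * (1.0 + z) ** 4 * (sb_hbeta / arcsec2_to_sr) \
         / (h_nu_hbeta * alpha_hbeta)
    length_cm = length_kpc * 3.086e21
    return np.sqrt(em / (clumping * length_cm))    # n_e in cm^-3

# An illustrative surface brightness of ~3e-17 cgs/arcsec^2 gives
# ne_from_hbeta_sb(3e-17) ~ 0.1 cm^-3, i.e. log(n_e) ~ -1.
```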
### Origin of the Nebular Gas
Giant nebulae can be produced via ram pressure and tidal stripping, AGN and stellar feedback, or filamentary accretion. The nebula around HE 0238\(-\)1904 is unlikely to arise from a jet-driven outflow given the fact that the quasar is radio-quiet and exhibits no detectable radio jet. While S3 and S4 exhibit broad emission wings, most regions are well characterized by a single Gaussian profile with narrow velocity dispersion (\(\sigma<120\) km s\({}^{-1}\); see Table 2). These quiescent kinematics are inconsistent with the broad velocity dispersion expected from radio-quiet AGN and stellar feedback (Liu et al., 2013; Rupke et al., 2019). In addition, the morphology is inconsistent with expectations for filamentary accretion (Johnson et al., 2022). On the other hand, the nebula is spatially and kinematically coincident with likely interacting galaxies in the field of HE 0238\(-\)1904, suggesting that stripping from interactions is likely responsible for most of the nebula with possible subdominant contributions from outflows.
The nebula spatially surrounds the Host, G1, G3, G4, and G5, and extends to the South West of the quasar to a projected distance of \(d\sim 70\) pkpc. This spatial coincidence suggests that the nebula likely arises from interaction-related stripping. The dwarf galaxies G3 and G5 show a possible tidal-tail-like structure as shown in panels (e) and (h) of Figure 4, suggesting that this part of the nebula might have been created by tidal stripping. In addition to this, the emission maps on larger scales resemble a head-tail morphology with the head around the quasar and with the tail extending to the South West of the quasar. Head-tail morphologies are commonly seen in nebulae originating from ram-pressure-stripped ISM (e.g., Poggianti et al., 2016; Boselli et al., 2019; Chen et al., 2019). Interestingly, while the nebula exhibits a head-tail morphology, it does not exhibit multiple filaments like some "jellyfish" galaxies observed in the optical line emission. Instead, it resembles the smoother emission profile sometimes seen in ram-pressure debris observed in H I 21-cm emission (Hess et al., 2017). There are two plausible explanations for ram pressure stripping in the environment of HE 0238\(-\)1904. First, the nebula may arise from stripping of the quasar host's ISM and CGM if it is falling into the richer, redshifted group and passing through the associated hot halo. Second, dwarf galaxies may have travelled through the hot halo of the massive group from West to East, leaving their ram-pressure-stripped ISM and CGM behind along their paths.
The discovery of a giant nebula requires both the presence of gas and its positioning within quasar's ionization cone. However, due to projection effects, the relative position between the quasar and the nebula remains uncertain. The two previously mentioned hypotheses provide potential frameworks. (1) If the gas results from stripping of the quasar host's ISM, the nebula is likely to surround the quasar. In this case, it will naturally be illuminated by the quasar. Alternatively (2) if the nebula arises from the stripped CGM/ISM of other galaxies in the overdensity, the gas will be widely distributed throughout the groups and more distant from the quasar. Only a fraction of this gas might coincidentally fall within the quasar's ionization cone, consistent with the large opening angle suggested by Trainor and Steidel (2013); Borisova et al. (2016); Schmidt et al. (2018); den Brok et al. (2020).
To distinguish between these scenarios, we show the surface brightness profiles of [O II] and [O III] made with Photutils (Bradley, 2023) in Figure 7. The profile of [O II] declines smoothly as a function of radius, and plateaus at \(\approx 50\) pkpc. In contrast, the [O III] profile exhibits a shallower drop due to the bright knots seen in the narrow-band images. The plateau in the [O II] profile corresponds to the head-tail morphology of the nebula, and the bright knots hint at a dwarf-related origin for part of the nebula. Collectively, the [O II] and [O III] profiles suggest a complex scenario. The centroids of the narrow-band [O II] and [O III] surface brightness maps are 10 and 19 pkpc away from the quasar respectively, an alignment to within 15% of the size of the nebula. This coincidence could be explained if the gas surrounds the quasar or if the quasar's ionization cone is fairly well centered on our LOS. However, the significant contributions of individual dwarf galaxies to the [O III] surface brightness profile underscore the challenge in precisely determining the nebula's position
Figure 7: Emission-line surface brightness profiles for the nebula around HE 0238\(-\)1904. The [O II] and [O III] profiles are extracted over a velocity interval of \(-600\) to \(600\) km s\({}^{-1}\), and are circularly averaged at different distances from the quasar centroid. The profile of [O II] declines smoothly as a function of radius, while the [O III] profile exhibits a shallower drop due to the bright knots seen in the narrow-band images.
relative to the quasar. Consequently, it is plausible that both scenarios (1) and (2) contribute to the nebula.
The giant nebula around HE 0238\(-\)1904 was independently discovered and reported by Zhao & Wang (2023). They attributed the gas to a superbubble driven by the quasar based on an apparent large velocity shift between the nebula and the quasar redshift, as well as broad line widths reported near the quasar. However, the large velocity shift is due to the reliance on an older, Mg II-based redshift of \(z=0.631\), which is \(\approx+500\) km s\({}^{-1}\) from our [O II]-based redshift of \(z=0.6282\). Rather than relying on a redshift estimate from the literature, we measured the quasar redshift and kinematics of the giant nebula from the same MUSE dataset to avoid any systematic uncertainty due to wavelength calibration errors. Moreover, quasar redshifts based on [O II] are generally more accurate than those measured from Mg II due to the narrowness of the line and lack of blueshifted wings on [O II]. In particular, quasar redshifts measured from [O II] trace the underlying quasar host redshifts measured in stellar absorption to within \(\approx\pm 20\) km s\({}^{-1}\) (Hewett & Wild, 2010). Finally, our redshift estimate of \(z=0.6282\) is more consistent with the centroid of the broad H\(\beta\) line, aligns with the peak of the quasar's [O III] emission line, and matches a more recent Mg II-based redshift of \(z=0.628\) from the UV-bright Quasar Survey (Monroe et al., 2016). Furthermore, we measured significantly narrower line widths near the quasar. This is likely due to our removal of [O III] and [O II] emission from the unresolved narrow-line emission region of the quasar, while Zhao & Wang (2023) only removed emission from the broad-line region. In summary, the modest velocity shifts and largely narrow emission line widths are consistent with much of the gas originating from interactions with more minor possible contributions from an outflow. When using the updated quasar redshift and quasar-light subtracted datacube, we find no evidence for a fast, quasar driven superbubble in the system.
### Physical Conditions of the Emitting Gas
Previous studies of giant nebulae have attributed the ionization of the gas to ionizing photons from AGN, shocks, and young stellar populations (e.g., Johnson et al., 2018; Rupke et al., 2019; Chen et al., 2019; Helton et al., 2021; Zhang et al., 2023). The presence of the quasar suggests the source of ionization is AGN-related. To study the physical conditions of the gas, we measured the density- and temperature-sensitive [O II]\(\lambda 3729/\)[O II]\(\lambda 3727\) and [O III]\(\lambda 4364/\)[O III]\(\lambda 5008\) line ratios as well as ionization-state-sensitive strong and weak line ratios in each region. These line ratio measurements are reported in Table 2, and the [O III]/[O II] map is shown in panel (c) of Figure 4. We discuss these measurements and their implications in the following three subsections.
#### 4.2.1 Direct Density and Temperature Estimates
With spectral coverage of [O II]\(\lambda 3727\), [O II]\(\lambda 3729\), [O III]\(\lambda 4364\), and [O III]\(\lambda 5008\), we can directly measure electron density (\(n_{\rm e}\)) and temperature (\(T_{\rm e}\)), as discussed in Osterbrock & Ferland (2006). The [O II] doublet is a good density estimator because the difference in excitation energy between these two upper states is small so that the relative population in the two states is determined by electron density and is insensitive to temperature. In contrast, the [O III] doublet upper states have a larger excitation energy difference, making the populations of these states mainly sensitive to electron temperature and insensitive to electron density. Electron number densities from the [O II] doublet are reasonable proxies for the overall densities of ionized nebulae because H and O share similar ionization energies of \(13.6\) eV.
To translate line ratios into physical conditions, we used Pyneb (Luridiana et al., 2015) which predicts the [O II] and [O III] line ratios at a given density and temperature by solving the detailed balance equation for an \(n\)-level atom. We fit the measured line ratios with Pyneb models by performing Markov chain Monte Carlo (MCMC) analysis with emcee(Foreman-Mackey et al., 2013), and inferred physical conditions from the resulting posteriors. We report the densities in Table 3, though we omit measurements in cases where the S/N or broad line width results in poorly constrained conditions.
For all regions where the [O II] doublet is resolved, the line ratio is in the low density limit except S6. We therefore report 95% upper limits in density for all but S6. The inferred electron number density upper limits range from \(1.2<\log(n_{\rm e,[O\,{\rm II}]}/{\rm cm}^{-3})<2.8\), with a median of \(\log(n_{\rm e,[O\,{\rm II}]}/{\rm cm}^{-3})<1.6\). These density upper limits are consistent with gas arising from ionized ISM (Draine, 2011) or CGM. We detected [O III]\(\lambda 4364\) in only three luminous regions, S1, S2, and S6. The inferred temperatures for S1, S2, and S6 are \(\log(T/{\rm K})\approx 4.2\), 4.2, and 4.1 respectively.
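The direct estimates can be reproduced with PyNeb as sketched below; the input line ratios are illustrative, PyNeb labels the transitions by their air wavelengths (3726/3729 and 4363/5007), and the full analysis wraps these calls in an emcee sampler over the measured ratios and their uncertainties.

```python
import pyneb as pn

O2 = pn.Atom("O", 2)
O3 = pn.Atom("O", 3)

# Illustrative line ratios (see Table 2 for the measured fluxes):
ratio_oii = 1.3       # [O II] 3729 / 3727 intensity ratio
ratio_oiii = 0.01     # [O III] 4364 / 5008 intensity ratio

# Density from the [O II] doublet at an assumed temperature of ~1.5e4 K
ne = O2.getTemDen(ratio_oii, tem=1.5e4, wave1=3729, wave2=3726)

# Temperature from the [O III] auroral-to-nebular ratio at that density
te = O3.getTemDen(ratio_oiii, den=ne, wave1=4363, wave2=5007)

print(f"n_e ~ {ne:.1f} cm^-3, T_e ~ {te:.0f} K")
```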
#### 4.2.2 Indirect Density Estimates from Photoionization Simulations
Under the assumption that the nebula is ionized by the quasar, its ionization states are set by the luminosity of the quasar, density of the gas, and distance from the quasar, with secondary effects from metallicity and ionizing spectral shape. With an estimate of the quasar's luminosity and assuming projection effects are negligible, the density structure of the gas can be inferred from measured line ratios (see Cantalupo et al., 2019). Studies of high redshift quasar nebulae found ionization states can only be explained by a density of \(\log(n_{\rm H}/{\rm cm}^{-3})\approx 1.9\), significantly higher than expected CGM/IGM densities, or alternatively by a broad density distribution (see Cantalupo et al., 2019). At low redshift, this kind of scenario can be further explored with insight from rest-optical lines to compare ionization-based densities with more direct density estimates from the [O II] doublet.
To infer the physical conditions from the line ratios in Table 2, we ran photoionization simulations for each region with Cloudy version C17.03 (Ferland et al., 2017). We modelled the quasar's radiation field using a power law (\(I\propto\nu^{\alpha}\)) between 0.37 and 73.5 Ryd, with \(\alpha\) between \(-1.8<\alpha<0\) following Groves et al. (2004) but extending to a higher \(\alpha\). We set the modeled quasar luminosity at 1 Ryd using direct measurement of the monochromatic UV luminosity from COS. For the gas, we adopted single density and single metallicity models, with density of \(-2<\log(n_{\rm H}/{\rm cm}^{-3})<4.6\) and metallicity of \(-1.5<\log(Z/Z_{\odot})<0.5\). We chose this metallicity range to cover the characteristic metallicities of the cool CGM around massive elliptical galaxies (Zahedy et al., 2019) but extended it to higher metallicity in case some gas has ISM origins. Due to limited ion coverage, metallicity and \(\alpha\) are degenerate in some cases, so we treated them as nuisance parameters and focused on inferred densities. We note that there is relatively little degeneracy between density and metallicity except at high metallicities of \(\log(Z/Z_{\odot})>0.2\) when increased cooling from metal lines begins to substantially change the equilibrium temperature.
For each region, we conducted these models in grids with a step of 0.2 dex in density and metallicity, and 0.2 in \(\alpha\). We then interpolated these models with the RegularGridInterpolator function from scipy.interpolate(Virtanen et al., 2020) within these ranges after checking for convergence. Finally, we ran emcee to estimate
posteriors given the measured line ratios and uncertainties. We verified the quality of the fits by comparing the posteriors of the model line ratios with the measured line ratios using violin plots shown in Figure 9. The violin plots verify that the ionization-state-sensitive line ratios (shown in the middle panels) are consistent with the measured line ratios. The best-fit \(\alpha\) values for most regions are within \(-1.0<\alpha<-0.6\), somewhat greater than ones given in Groves et al. (2004). Inferred metallicities for S1, S2, and S6, with He II and [Ne V] detections, are well-constrained to be \(-0.2<\log(Z/Z_{\odot})<0.2\). The densities inferred from these photoionization simulations range from \(\log(n_{\rm H,Cloudy}/{\rm cm}^{-3})=1.6\) to 4.2 and are reported in the right column of Table 3, though we stress that these densities neglect potential quasar variability and projection effects.
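A schematic of the interpolation-plus-MCMC step is given below. The grid axes follow the ranges quoted above, while the ratio cube, observed ratios, and uncertainties are placeholders standing in for the Cloudy outputs and the measurements of Table 2.

```python
import numpy as np
import emcee
from scipy.interpolate import RegularGridInterpolator

# Axes of the Cloudy model grid (Section 4.2.2)
log_n = np.arange(-2.0, 4.61, 0.2)
log_z = np.arange(-1.5, 0.51, 0.2)
alpha = np.arange(-1.8, 0.01, 0.2)
# Placeholder cube of predicted log line ratios, one slice per diagnostic ratio
grid_ratios = np.zeros((3, log_n.size, log_z.size, alpha.size))

interps = [RegularGridInterpolator((log_n, log_z, alpha), g) for g in grid_ratios]

obs = np.array([0.7, -0.5, -1.2])        # observed log ratios (illustrative)
obs_err = np.array([0.05, 0.10, 0.15])

def log_prob(theta):
    n, z, a = theta
    if not (log_n[0] <= n <= log_n[-1] and log_z[0] <= z <= log_z[-1]
            and alpha[0] <= a <= alpha[-1]):
        return -np.inf                    # flat priors within the grid
    model = np.array([f(theta)[0] for f in interps])
    return -0.5 * np.sum(((obs - model) / obs_err) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([1.0, 0.0, -1.0]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)   # posterior draws
```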
#### 4.2.3 Comparison of the Density Estimates
Previous photoionization-based estimates of the density of quasar nebulae at high redshift found unexpectedly high densities, close to or exceeding typical densities for the ISM, despite being measured on CGM/IGM scales (Cantalupo et al., 2019). The ionization-sensitive line ratios of the nebula around HE 0238\(-\)1904 also imply high photoionization-based densities of \(1.6<\log(n_{\rm H,\ Cloudy}/{\rm cm}^{-3})<4.2\). However, the more direct [O II]-based densities are inconsistent with and significantly smaller than the photoionization-based densities for most regions, as shown in Table 3. To better demonstrate this inconsistency, Figure 9 shows both the measured line ratios and the posteriors inferred from the photoionization models for S2, S6, and S9. The ionization-state-sensitive line ratios are consistent with the model posteriors for all three regions, while the [O II] line ratios are highly discrepant for S6 and S9. The right panel of each subfigure shows the density posteriors from both direct and indirect density estimates.
As shown in Table 3, we found that all regions with photoionization-based density estimates except S1, S2, B1, and B3 have a large (1\(-\)2 dex) discrepancy when compared to the [O II] doublet-based densities. In the most extreme case, S5, the two density estimates are off by 2.6 dex or a factor of 400. In principle, the inferred density mismatch could be explained by a non-uniform density distribution if the [O II] arises from less dense gas than the other emission lines. To test whether a more complicated density structure could explain the density mis-match, we modeled the emitting gas as a multi-phase system consisting of one low density component and one high density component with the relative contribution of each treated as an additional free parameter. This model successfully reproduces the observed emission-line ratios, and the density inferred for the high density component matches the single-phase model results. Furthermore, the posteriors of the two-component model indicate that the high density component dominates the [O II] emission. Therefore, a two-phase model cannot explain the density discrepancy between the direct [O II]-based density measurements and the ionization-state-based density estimates.
To test if a broad, continuous density distribution can explain the discrepancy, we modelled the emitting gas with a log-normal density distribution (see Cantalupo et al., 2019). A log-normal distribution is defined as
\[\mathrm{PDF}(n)\,\mathrm{d}n=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{[\ln(n)-\ln(\mu)]^{2}}{2\sigma^{2}}\right]\mathrm{d}\ln(n) \tag{1}\]
where \(\sigma\) is the dispersion and \(\mu\) is the mean density. We started by calculating emission-line emissivities in an extended Cloudy model grid, similar to the ones discussed in Section 4.2.2. We then computed the predicted line ratios for a log-normal density distribution by interpolating the Cloudy models and integrating over the PDF. Our results show that a log-normal distribution with a large \(\sigma\) can reproduce the ionization-sensitive line ratios, but the log-normal models predict that the [O II] emission arises from dense gas, resulting in [O II] line ratios of \(\log(\lambda 3729/\lambda 3727)=-0.4\) to \(-0.1\), inconsistent with the observed ratios of \(\log(\lambda 3729/\lambda 3727)>0.1\). Therefore, a broad density distribution is unlikely to reconcile the density discrepancy.
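A sketch of the integration over Equation 1 is given below; the emissivity arrays are assumed to come from the interpolated Cloudy grid, and the weighting convention (per \(\mathrm{d}\ln n\)) follows the equation above.

```python
import numpy as np

def lognormal_line_ratio(eps_a, eps_b, log_n_grid, mu, sigma):
    """Predicted flux ratio of two lines for a log-normal density PDF.

    eps_a, eps_b : emissivities of the two lines tabulated on log_n_grid
                   (e.g. interpolated from the Cloudy grid of Section 4.2.2)
    mu, sigma    : mean density and dispersion of the log-normal (Equation 1)
    """
    ln_n = np.log(10.0) * np.asarray(log_n_grid)   # natural log of density
    pdf = np.exp(-0.5 * ((ln_n - np.log(mu)) / sigma) ** 2) \
          / (np.sqrt(2.0 * np.pi) * sigma)
    # Integrate the emissivity-weighted PDF over d ln(n) for each line
    flux_a = np.trapz(pdf * eps_a, ln_n)
    flux_b = np.trapz(pdf * eps_b, ln_n)
    return flux_a / flux_b
```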
Alternatively, projection effects can also result in disagreement
| ID | \(\log(n_{\rm e,[O\,II]}/{\rm cm}^{-3})\) | \(\log(n_{\rm H,Cloudy}/{\rm cm}^{-3})\) | \(\log(U_{\rm Cloudy})\) |
| --- | --- | --- | --- |
| S1 | \(<1.6\) | \(1.6_{-0.1}^{+0.1}\) | \(-2.2_{-0.1}^{+0.1}\) |
| S2 | \(<1.7\) | \(1.7_{-0.1}^{+0.1}\) | \(-2.1_{-0.1}^{+0.1}\) |
| S3 | — | — | — |
| S4 | — | — | — |
| S5 | \(<1.6\) | \(4.2_{-0.3}^{+0.2}\) | \(-3.0_{-0.3}^{+0.2}\) |
| S6 | \(1.8_{-0.1}^{+0.1}\) | \(2.7_{-0.1}^{+0.1}\) | \(-2.5_{-0.1}^{+0.1}\) |
| S7 | \(<1.9\) | \(3.0_{-0.3}^{+0.3}\) | \(-3.2_{-0.3}^{+0.3}\) |
| S8 | \(<1.3\) | \(3.5_{-0.2}^{+0.2}\) | \(-3.3_{-0.2}^{+0.2}\) |
| S9 | \(<2.3\) | \(4.1_{-0.3}^{+0.2}\) | \(-3.5_{-0.3}^{+0.2}\) |
| S10 | \(<1.4\) | \(3.6_{-0.2}^{+0.2}\) | \(-3.3_{-0.2}^{+0.2}\) |
| B1 | \(<2.8\) | \(2.1_{-0.2}^{+0.1}\) | \(-2.7_{-0.2}^{+0.1}\) |
| B2 | \(<1.2\) | \(2.9_{-0.3}^{+0.1}\) | \(-3.4_{-0.3}^{+0.1}\) |
| B3 | \(<2.5\) | \(1.9_{-0.2}^{+0.1}\) | \(-2.8_{-0.2}^{+0.1}\) |
| B4 | — | — | — |

Table 3: Summary of nebula regions in the Field of HE 0238\(-\)1904.
between the two density estimates. However, assuming that the gas is randomly and approximately spherically distributed around the quasar, the projected distance is unlikely to be much smaller than the radial distance between the quasar and the nebula. For example, because the ionization-based density scales as the inverse square of the distance from the quasar at fixed quasar luminosity and ionization state, producing a factor of 400 mismatch in density requires the radial distance to be 20 times larger than the projected distance. While such projection effects are possible in principle, the required contrived geometry is unlikely.
In principle, the density discrepancy could be explained if the nebula is not directly ionized by the quasar because obscuring dust or translucent clouds block its light from reaching this gas. Filtering the quasar's radiation through dust would soften the incident ionizing radiation field. However, the best-fit \(\alpha\) values from our photoionization analysis suggest a hard ionizing spectrum for almost all regions. The hard inferred ionizing spectrum is inconsistent with expectations for a quasar SED filtered through dust clouds.
Alternatively, translucent clouds of moderate optical thickness to ionizing photons can also filter the quasar's radiation. Depending on their density and physical size, these clouds could produce distinct line ratios as a function of depth into the cloud (Liu et al., 2013). Typically, the outer parts of such a cloud produce no significant [O II] or [O III] emission because oxygen is highly ionized. However, H\(\beta\) is a recombination line, so a non-negligible fraction of the H\(\beta\) emission arises from the outer parts of the cloud that do not emit in [O II] or [O III]. As a result, translucent regions are expected to show stronger H\(\beta\) emission relative to [O II] and [O III]. Yet, none of the nebular regions shows such a low \(\rm[O\,III]/H\beta\) ratio. If translucent clouds exist around HE 0238\(-\)1904, they must therefore be blended with optically thick clouds, owing to seeing conditions and projection effects. The presence of unresolved translucent clouds could be investigated by observing the nebula with higher spatial resolution instruments such as NIRSpec on JWST or with adaptive optics from the ground. Nevertheless, while translucent clouds may help reconcile the density discrepancy in some cases, clouds of moderate optical depth can only absorb a modest portion of the quasar's radiation, so they are unlikely to explain the largest density discrepancies.
On the other hand, the ionization of the nebulae could be due to young stellar populations (Morisset et al., 2015) or fast shocks (Allen et al., 2008). However, there is no evidence of extended star formation in rest-frame \(u\)-band images of the system formed from the MUSE datacube. To investigate the possibility of fast shocks, we show two emission-line diagnostic diagrams overlaid with shock models in a grid of shock velocity and magnetic field strength in Figure 8. Producing the observed [O III]/[O II] and [Ne V]/[O II]\({}^{3}\) ratios requires shock velocities of \(v_{\rm shock}>250\rm\,km\,s^{-1}\) (Allen et al., 2008). These shock velocities are greater than the LOS velocities and velocity dispersions of the nebula at nearly all locations, even after accounting for projection effects. For example, some regions (S1 and S2) would require shock velocities exceeding \(1000\rm\,km\,s^{-1}\), and most regions (S3, S4, S6, S8, S10, B1, B2, B3, and B4) would require \(>300\rm-400\,km\,s^{-1}\), making them unlikely to be ionized by shocks. On the other hand, while the observed line ratios of S5, S7, and S9 favor AGN photoionization, large uncertainties in their H\(\beta\) fluxes can accommodate shocks with velocities as low as \(200\rm\,km\,s^{-1}\), which would alleviate the density discrepancy in these three regions. However, for most regions, the shock velocity required to reproduce the observed line ratios exceeds the velocities observed in the system, so shocks are unlikely to explain the density discrepancy in most cases.
Footnote 3: We note that [Ne V]/[Ne III], which would be a better shock tracer, cannot be used because [Ne III]\(\lambda\)3869 is severely contaminated by skylines.
Perhaps more likely, the difference in the density estimates could be due to quasar variability (Richstone & Oke, 1977). Quasar variability is directly observed on timescales of decades (Stone et al., 2022). Observations of "changing-look" AGN, light echoes, and quasar proximity zones suggest that the average episodic lifetime of quasars may range from \(10^{4}\) to \(10^{7}\) years and that AGN episodes may be highly clustered (e.g., Schirber et al., 2004; Goncalves et al., 2008; Kirkman & Tytler, 2008; Trainor & Steidel, 2013; Syphers & Shull, 2014; Schawinski et al., 2015; Comerford et al., 2017; Schmidt et al., 2018; Shen, 2021). Therefore, each region of the nebula around HE 0238\(-\)1904 may experience a drastically different radiation field from the quasar, depending on the light-travel time. For example, S5 and S6 are at projected distances of 10 and 20 kpc from the quasar, respectively, and their line ratios can be explained if the quasar was, respectively, 400 and 10 times less luminous than currently observed. In contrast, S1 and S2 are at a projected distance of \(\approx 40\) kpc from the quasar, and their properties can be explained if they received ionizing radiation consistent with the current luminosity of the quasar. We confirmed that quasar variability can explain both the ionization state and the [O II] ratio by re-running the Cloudy models and MCMC analysis after significantly decreasing the quasar luminosity.
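As a rough, order-of-magnitude illustration (our own estimate, assuming straight-line light travel), the delay associated with these separations is

\[t_{\rm lt}\simeq\frac{d}{c}\approx 3.3\times10^{4}\,{\rm yr}\left(\frac{d}{10\,{\rm kpc}}\right),\]

so regions at \(d\approx 10\) to 40 kpc see the quasar as it was a few \(\times 10^{4}\) to \(\sim 10^{5}\) years ago (longer after deprojection), comparable to the variability timescales invoked here.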
## 5 Summary and Conclusions
In this paper, we presented the first comprehensive analysis of a giant nebula around a radio-quiet quasar at \(z<1\), based on MUSE observations of the field of HE 0238\(-\)1904. The wide FoV, high spatial sampling, and wide wavelength coverage enabled us to investigate the origin and physical conditions of the group and gaseous environment through a spatially resolved analysis of the morphologies, kinematics, and nebular photoionization properties. Our findings can be summarized as follows.
1. We found that HE 0238\(-\)1904 resides in an overdense environment containing two potentially merging galaxy groups based on spatial distribution and kinematics. This includes a less rich, blueshifted group with 12 galaxies and a richer, redshifted group with 22 galaxies. Assuming the more massive group is virialized, its dynamical mass is \(M_{\rm dyn}\sim 4\times 10^{13}\)-\(10^{14}\)\(\rm\,M_{\odot}\). Such a massive, rich environment is unusual for a radio-quiet quasar, which typically resides in a halo with a mass of \(\sim 3\times 10^{12}\)\(\rm\,M_{\odot}\)(Shen et al., 2009).
2. We identified a giant nebula covering a projected area of \(\approx 5000\) kpc\({}^{2}\) around HE 0238\(-\)1904, emitting strongly in [O II], H\(\beta\), and [O III]. The nebula has an irregular morphology and a spatial trend in kinematics: the region north of the quasar is redshifted, while the region south of the quasar is mainly blueshifted relative to the quasar. The southern region is spatially coincident with four dwarf galaxies.
3. The coincidence with nearby galaxies suggests that the nebula arises from stripped ISM or CGM, which is consistent with its morphology and largely narrow LOS velocity dispersion. In addition, the nebula shows a head-tail morphology, with the head near the quasar and the tail extending toward the southwest of the quasar. The head-tail structure may originate from ram pressure if the quasar and the surrounding nebula are infalling toward the massive galaxy group to the northeast. However, we note that some small regions at \(d\approx 20\) kpc from the quasar have broader emission wings, perhaps suggesting an outflow origin.
4. To better characterize the physical conditions of the nebula, we measured the fluxes of both strong and weak emission lines. The inferred electron number density upper limits from the [O II] doublet range from \(\log(n_{\rm e,[O\,II]}/\rm cm^{-3})<1.2\) to \(<2.8\), with a median of \(\log(n_{\rm e,[O\,II]}/\rm cm^{-3})<1.6\). These density upper limits are consistent with an ISM or CGM origin. However, densities inferred from photoionization models are often inconsistent with the [O II]-based upper limits, reaching values up to 400 times higher.
5. The disagreement in the density estimates is unlikely to be due to density inhomogeneities, but it can be explained by quasar variability if the quasar varied significantly on timescales of \(10^{4}\) to \(10^{5}\) years. This finding suggests that long-term quasar variability should be taken into account when making ionization-based inferences about the physical conditions of giant nebulae around quasars.
The possibility of significant quasar variability on timescales of \(10^{4}\) to \(10^{5}\) years has implications far beyond accretion-disk physics in the central engine. In particular, significant fluctuations on these timescales can result in out-of-equilibrium conditions in the low-density circumgalactic medium because of the long recombination time of low-density gas (Oppenheimer & Schaye, 2013; Segers et al., 2017). Indeed, such AGN "flickering" may be responsible for the strong O VI absorption observed around Milky Way-like galaxies at low redshift (Oppenheimer et al., 2018). Recently commissioned and upcoming IFSs on large telescopes, such as LLAMAS (Furesz et al., 2020), IFUM (Mateo et al., 2022), Blue MUSE (Richard, 2019), and MIRMOS (Konidaris et al., 2020), will continue to drive discoveries of giant nebulae, which could be followed up with IFSs like HARMONI (Thatte et al., 2022) on future 30-meter-class telescopes, extending similar insights to higher redshifts and fainter systems.
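For context, a simple estimate of the relevant timescale (our own illustration, assuming case-B hydrogen recombination at \(T\approx 10^{4}\) K with \(\alpha_{\rm B}\approx 2.6\times10^{-13}\,{\rm cm^{3}\,s^{-1}}\)) is

\[t_{\rm rec}\simeq\frac{1}{n_{\rm e}\,\alpha_{\rm B}}\approx 1.2\times10^{5}\,{\rm yr}\left(\frac{n_{\rm e}}{1\,{\rm cm^{-3}}}\right)^{-1},\]

which exceeds \(10^{6}\) yr at typical CGM densities of \(n_{\rm e}\lesssim 0.1\,{\rm cm^{-3}}\), so low-density gas cannot remain in ionization equilibrium with a source that varies on timescales of \(10^{4}\) to \(10^{5}\) yr.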
## Acknowledgements
SDJ and ZQL acknowledge partial support from HST-GO-15280.009-A, HST-GO-15298.007-A, HST-GO-15655.018-A, and HST-GO-15935.021-A. JIL is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. SC gratefully acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme grant agreement No 864361. This paper is based on observations from the European Organization for Astronomical Research in the Southern Hemisphere under ESO (PI: J. Schaye, PID: 094.A-0131(B) & 096.A-0222(A)), and the NASA/ESA Hubble Space Telescope (PI: L. Straka, PID: 14660; PI: J. Green, 11541; PI: S. Penton, PID: 12505). Additionally, this paper made use of the NASA/IPAC Extragalactic Database, the NASA Astrophysics Data System, Astropy (Astropy Collaboration et al., 2022), Aplpy (Robitaille and Bressert, 2012), and Photutils (Bradley, 2023).
## Data availability
The data used in this paper are available from the ESO and HST data archives.